Dataset schema:
- paper_id: string (length 19-21)
- paper_title: string (length 8-170)
- paper_abstract: string (length 8-5.01k)
- paper_acceptance: string (18 classes)
- meta_review: string (length 29-10k)
- label: string (3 classes)
- review_ids: sequence
- review_writers: sequence
- review_contents: sequence
- review_ratings: sequence
- review_confidences: sequence
- review_reply_tos: sequence
nips_2022_C7jm6YgJaT
Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation
Data-free Knowledge Distillation (DFKD) has attracted attention recently thanks to its appealing capability of transferring knowledge from a teacher network to a student network without using training data. The main idea is to use a generator to synthesize data for training the student. As the generator gets updated, the distribution of synthetic data will change. Such distribution shift could be large if the generator and the student are trained adversarially, causing the student to forget the knowledge it acquired at the previous steps. To alleviate this problem, we propose a simple yet effective method called Momentum Adversarial Distillation (MAD) which maintains an exponential moving average (EMA) copy of the generator and uses synthetic samples from both the generator and the EMA generator to train the student. Since the EMA generator can be considered as an ensemble of the generator's old versions and often undergoes a smaller change in updates compared to the generator, training on its synthetic samples can help the student recall the past knowledge and prevent the student from adapting too quickly to the new updates of the generator. Our experiments on six benchmark datasets including big datasets like ImageNet and Places365 demonstrate the superior performance of MAD over competing methods for handling the large distribution shift problem. Our method also compares favorably to existing DFKD methods and even achieves state-of-the-art results in some cases.
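To make the mechanism described in the abstract concrete, below is a minimal PyTorch-style sketch of the two operations involved: maintaining an exponential moving average (EMA) copy of the generator and training the student on samples from both generators. All names, the weighting coefficient `lam`, and the decay `alpha` are illustrative assumptions, not values or code taken from the paper.

```python
import copy
import torch

def make_ema_copy(generator):
    # Frozen copy that will track a moving average of the generator's weights.
    ema_generator = copy.deepcopy(generator)
    for p in ema_generator.parameters():
        p.requires_grad_(False)
    return ema_generator

@torch.no_grad()
def ema_update(ema_generator, generator, alpha=0.999):
    # theta_ema <- alpha * theta_ema + (1 - alpha) * theta
    for p_ema, p in zip(ema_generator.parameters(), generator.parameters()):
        p_ema.mul_(alpha).add_(p, alpha=1.0 - alpha)

def student_loss(student, teacher, generator, ema_generator, z1, z2, kd_loss, lam=0.5):
    # Distill on samples from the current generator and from its EMA copy,
    # so the student keeps seeing a smoothed version of past distributions.
    x_cur = generator(z1).detach()
    x_ema = ema_generator(z2).detach()
    return kd_loss(student(x_cur), teacher(x_cur)) + lam * kd_loss(student(x_ema), teacher(x_ema))
```

Because the EMA copy changes slowly, the student's training distribution shifts more gradually, which is the stabilizing effect the abstract and the reviews below discuss.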
Accept
This paper trains a generator to produce synthetic data for knowledge distillation from a teacher model, thus allowing distillation without the need for the original training data. The reviewers generally liked the method as well as the presentation, and the discussion was mostly around clarification and better comparisons. This seemed to satisfy the reviewers who responded to the rebuttals (not all of them did), and from my reading the authors did a good job of responding to the concerns of already mostly positive reviews. The one negative review, which I felt was a little off the mark, was addressed well by the rebuttals, but the reviewer dropped out afterwards and did not back up their ongoing criticisms. I therefore recommend acceptance of this paper to NeurIPS. Overall, discussion was rather limited, but this could be because the reviewers didn't have any serious concerns from the start and the discussion was straightforward. I wish tq3D had contributed a little more, as it would have been nice to arrive at a consensus.
train
[ "6LEEWFV5bs", "SOK9QVMRv9X", "YSYi3mGiky", "0jzPA-9qdbb", "tWpGyBvgzgq", "YWcs83r-fR", "EdysfoRhBB", "319ltvrhjLR", "HSxeEAy-T0L", "6A1-fOKbeb-", "Lkb8GLW0_9-", "wI7W-lwtlP9", "e4qc2f4aDRG" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. We really appreciate your time and consideration.", " I have carefully read the responses and other reviewers' comments. My concerns have been properly addressed. I think this paper is a good complement to the field of data-free distillation, and have raised the score to 7.", " Thank you for your insightful comments, we would like to address your concerns in detail below:\n\n***[Weakness 1] + [Question 3]*** *“In Figure5, it is hard to conclude that the generated images by EMA Generator are better than the Generator. Could you use a metric to show how the difference for example the FID score?” + “Another concern is from Figure5, the EMA generator fails to improve the quality of generated images, which makes me concerned about the cause of the improvement of student performance.”*: ***[Answer]*** We would like to clarify that in Data Free Knowledge Distillation (DFKD), the performance of the student almost does not depend on the visual quality of generated samples as previous works empirically showed that the student can achieve very high accuracy of about 92-93% on CIFAR10 even when the generated images do not show any visually interpretable pattern of the original training data (Fig. 3 in [1], Fig. 3c in [2]). Moreover, the characteristic formula of adversarial DFKD (Eq. 2 in our paper) tends to force the generator $\\mathtt{G}$ to generate samples that look different from the training data so that $\\mathtt{G}$ could maximize the difference between the student and teacher. Thus, one cannot make any claim about the improvement of the student performance just by checking the visual quality of generated samples. In fact, a possible claim that we could make by visualizing synthetic samples in Figure 5 of our paper is that both the generator and the EMA generator can generate diverse samples and there is no sign of “mode collapse” (as presented in lines 260-263).\n\nWe note that some methods like DeepInversion [2] could synthesize images that look quite like training samples. This is mainly attributed to the heuristic losses used by these methods (e.g., DeepDream loss, pseudo-cross-entropy loss, BatchNorm moment matching loss) rather than the adversarial loss between the generator and student. In our paper, we hardly used these heuristic losses (except for the BatchNorm moment matching loss in case of small datasets) but merely the adversarial loss. Therefore, it is reasonable that synthetic samples from our generator $\\mathtt{G}$ do not look real. Similarly, synthetic samples from our EMA generator $\\mathtt{\\tilde{G}}$ also do not look real since $\\tilde{\\mathtt{G}}$ is the moving average of $\\mathtt{G}$ over time.\n\n***[Weakness 2]*** *“In numerical experiments, the results seem better than baselines. Results are mainly given the teacher and student are similar networks, how about the performance with diverse architectures (resnet as teacher, alexnet as student)?”*: ***[Answer]*** We thank Reviewer Lniz for this interesting question. In our paper, we did perform experiments with the teacher and the student being different networks. For example, we tested with the teacher/student being ResNet34/ResNet18 and WRN-40-2/WRN-16-2 on small datasets like CIFAR10, CIFAR100, and TinyImageNet. On large datasets like ImageNet, we set both the teacher and student to Alexnet to reduce the GPU memory and training time. 
We have run the setting that Reviewer Lniz suggested on ImageNet and have found that our method still significantly outperforms ABM, the baseline that does not use an EMA generator. Specifically, at training step 3000, our method achieves 8.32% accuracy while ABM only achieves 5.64%. This, again, validates the importance of the EMA generator in our method. \n\nIn addition, thanks to the suggestion of Reviewer Lniz, we observed an interesting phenomenon: the performance of the AlexNet student is much poorer when learning with the ResNet teacher than with the AlexNet teacher (8.32% compared to 40.66% for our method, and 5.64% compared to 34.52% for ABM at training step 3000). After investigation, we have found some possible reasons for this, listed below:\n* One possible reason is that the ResNet teacher is more complex than the AlexNet teacher, so the AlexNet student has more difficulty mimicking the behavior of the ResNet teacher.\n* Another possible reason is that the training settings that we use for the AlexNet teacher and AlexNet student scenario are not suitable for the ResNet teacher and AlexNet student scenario. This could be true since, during training, we observed that the distillation loss of the student on synthetic samples increases instead of decreasing. It suggests that the student requires more optimization steps (larger $n_{\\mathtt{S}}$). Other hyper-parameters may need to be adjusted as well.\n\nWe think this phenomenon should be investigated further. However, it does not change the fact that our method still performs better than the baseline that does not use an EMA generator.", " \n***[Question 1]*** *“Using EMA to stabilize the model training is a common technique in existing works. This paper applies it to the generator of images for data-free knowledge distillation. Thus the novelty of the proposed method is limited.”*: ***[Answer]*** Thank you for this insightful question. Your question is similar to Weakness 1 of Reviewer tq3D. We would like to provide our answer to your question below:\n\nWe agree with Reviewer Lniz that the idea of momentum update has been used in previous works. However, most of these works applied the EMA update to the classifier/encoder to solve the semi-/self-supervised learning problem, with Mean Teacher [3] and MoCo [4] being two notable examples. Compared to these works, our method is very different in terms of model design and motivation, as specified below:\n* In terms of model design, in Mean Teacher/MoCo, the EMA update is applied to the classifier/encoder. Meanwhile, in our method, the EMA update is applied to the generator instead.\n* In terms of motivation, Mean Teacher/MoCo uses the EMA classifier/encoder to facilitate semi-/self-supervised learning by forcing the outputs of the main classifier/encoder and the EMA classifier/encoder to be close. Meanwhile, we use the EMA generator to address the large distribution shift problem in adversarial DFKD, and we don't force the outputs of the generator and the EMA generator to be close but use samples from the EMA generator to train the student.\n\nWe think the above differences constitute the novelty of our method.\n\n***[Question 2]*** *“How about adding EMA to the student?”*: ***[Answer]*** Thank you for your interesting question. Your question is similar to Question 4 of Reviewer 5Fok. 
We would like to provide our answer to your question below:\n\nWe tried to use the EMA student instead of the EMA generator on ImageNet and found that it just slightly improves the performance by about 0.4%, which is very small compared to the 4% improvement when using the EMA generator. We hypothesize the reason is that using synthetic samples from the EMA generator helps the student recall old knowledge better than letting old knowledge decay slowly with the EMA student.\n\n**References**\n1. Data-Free Adversarial Distillation, Fang et al., arXiv 2019\n2. Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion, Yin et al., CVPR 2020\n3. Mean Teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results, Tarvainen and Valpola, NeurIPS 2017.\n4. Momentum Contrast for Unsupervised Visual Representation Learning, He et al., CVPR 2020", " Thank you for your insightful comments; we would like to address your concerns in detail below:\n\n***[Weakness 1] + [Question 1]*** *“The momentum updating method itself is not novel and has been used in many problems, such as semi-supervised learning, self-supervised learning, ....”* + *“Clarify the key contribution and novelty of the proposed method”*: ***[Answer]*** We thank Reviewer tq3D for this insightful comment. We agree with Reviewer tq3D that the idea of momentum update has been used in previous works, especially in the two famous models Mean Teacher and MoCo that Reviewer tq3D referred to. Our method is thus inevitably inspired by these models. However, there are still some notable differences between our method and these models that we would like to highlight here:\n* In terms of model design, in Mean Teacher/MoCo, the EMA update is applied to the classifier/encoder. Meanwhile, in our method, the EMA update is applied to the generator instead.\n* In terms of motivation, Mean Teacher/MoCo uses the EMA classifier/encoder to facilitate semi-/self-supervised learning by forcing the outputs of the main classifier/encoder and the EMA classifier/encoder to be close. Meanwhile, we use the EMA generator to address the large distribution shift problem in adversarial DFKD, and we don't force the outputs of the generator and the EMA generator to be close but use samples from the EMA generator to train the student.\n\nThe above differences in motivation and model design between our method and Mean Teacher/MoCo constitute the novelty of our method. The significance of our method lies in its simplicity and adaptability to other problems (e.g., generalized continual learning, source-free domain adaptation, etc.) and data types. This was discussed in the Conclusion section (lines 295-299) of our paper and was appreciated by most reviewers.\n\n***[Weakness 2] + [Question 3]*** *“It seems the performance of the proposed method is not competitive compared to other existing DFKD methods, as shown in Table 1. The comparison with other DFKD methods in Table 2 is also not sufficient.”*: ***[Answer]*** In Table 1, it is clear that our method outperforms most existing DFKD methods. Our method only performs worse than CMI and DFQ in some settings, which we hypothesize is due to their use of different losses and configurations for training. For example, CMI uses an additional pseudo-cross-entropy loss [1] and an inverse contrastive loss, which are absent in our method; DFQ uses the Variational Information Distillation (VID) loss [2] on intermediate features, while our losses are only computed on the output logits. 
We discussed these differences in our paper from line 219 to line 222.\n\nSince our main target is to show that the EMA generator can help mitigate the large distribution shift problem rather than to outperform current state-of-the-art methods, in Table 2 we only focused on the related baselines (ABM and DFKD-Mem) trained under the same settings as our method and didn't include other existing DFKD methods that do not support our main target. We explicitly stated this in our paper from line 223 to line 226. In addition, we note that all baselines in Table 1 have not been tested on large-scale datasets (ImageNet, Places365, Food101), so we have no reference numbers for their performance to put in Table 2.\n\n***[Weakness 3] + [Question 2]*** *“Provide more analyses and visualizations on how momentum update helps alleviate the distribution shift issue, not just the synthetic samples”*: ***[Answer]*** In our paper, we conducted several experiments to show that the EMA update helps alleviate the large distribution shift problem. We would like to list them below:\n\n* In Table 2, we showed that our method outperforms ABM, a baseline that does not use the EMA generator. \n* We conducted an ablation study on the weighting coefficient w.r.t. the EMA generator ($\\lambda_{1}$ in Eq. 4), where we showed that reducing this coefficient leads to a decrease in the student's performance. \n* In Section 5.2.3, we empirically verified that the EMA generator's updates actually lead to smaller distribution shifts than the generator's updates.\n\n**References**\n1. Data-Free Learning of Student Networks, Chen et al., ICCV 2019\n2. Variational Information Distillation for Knowledge Transfer, Ahn et al., CVPR 2019", " Thank you for your insightful comments; we would like to address your concerns in detail below:\n\n***[Concern]*** *“The only remaining concern is that the difference or superiority of the proposed method from some highly related works should be clarified. For example, MosaicKD [1] also studies the domain shift problem in knowledge distillation; the authors should make the differences from that work clearer or make some comparisons if possible.”*: ***[Answer]*** We thank Reviewer qSj6 for introducing this interesting work to us. We added a discussion about MosaicKD in the related work of our revised version. Here, we would like to summarize the differences between our method and MosaicKD:\n* *Difference in the problem setting*: Our method considers the setting in which no training data is provided to the student. Meanwhile, MosaicKD considers a less extreme scenario in which some unlabeled, out-of-distribution (OOD) data is given to the student for distillation.\n* *Difference in the motivation*: Our method attempts to mitigate the large distribution shift problem in adversarial DFKD, while MosaicKD tries to craft synthetic mosaic images using local patches extracted from the OOD data to facilitate knowledge transfer from the teacher to the student.\n* *Difference in the model design*: To achieve our goal, we propose to train the student with additional samples from the EMA generator. Meanwhile, MosaicKD uses a patch discriminator in addition to the student-teacher pair to guide the synthesis of the mosaic images.", " Thank you for your insightful comments; we would like to address your concerns in detail below:\n\n***[Weakness 1]*** *“The empirical evaluation is over a single run, so it is hard to ascertain the significance of the results. 
Adding error bars would definitely help here.”*: \n***[Answer]*** We agree with this comment from Reviewer 5Fok. Due to the large number of runs we had to do (e.g., for hyper-parameter search and the ablation study) and the long training time of each run (e.g., several days), it is costly to provide the standard deviation for our method and the baselines. Therefore, in our paper, we only showed the result of one run. However, during our experiments, we tried running some settings multiple times and observed that our method and related baselines usually have very small standard deviations across different runs. For example, the standard deviations of 3 different runs of our method on CIFAR100 and on ImageNet are just 0.18% and 0.32%, respectively. For ABM and ABM-Mem, the standard deviations are roughly the same. Meanwhile, our performance gains over ABM and DFKD-Mem are about 1.5-4% (Table 2), which makes the gains statistically significant. We note that in many related works (including those in Table 1) [1,2,3,4,5], the authors also did not show standard deviations, mainly because these values are small.\n\n***[Weakness 2]*** *“The claim of the student undergoing catastrophic forgetting is not sufficiently explored. While the experiments show that having a higher momentum (and weight for the EMA generator’s images) is better to an extent, a class-wise breakdown of where such models do better, and how this improvement is related to catastrophic forgetting could help”*: ***[Answer]*** We would like to clarify that the problem of the student suffering from catastrophic forgetting in adversarial DFKD has already been investigated in [5]. This problem also relates to the catastrophic forgetting problem of GANs, which was analyzed in [6]. We discussed these papers in detail in our related work (starting from line 165). In our paper, we aim at providing a solution to this problem rather than justifying it. We hypothesize that the reason for the student's forgetting is the large distribution shift of synthetic samples from the generator $\\mathtt{G}$. This is reasonable because the adversarial game between $\\mathtt{S}$ and $\\mathtt{G}$ forces $\\mathtt{G}$ to continuously change its generated samples (on which $\\mathtt{S}$ is trained) to maximize the difference between $\\mathtt{S}$ and $\\mathtt{T}$. Visualization of generated samples from an “uncond” generator in Fig. 10 in our Appendix also provides some evidence to support this hypothesis. Therefore, we introduce an EMA generator to mitigate the large distribution shift induced by the generator, which in turn will mitigate the forgetting problem of the student.\n\n***[Weakness 3]*** *“The method is not interpretable. This is a general shortcoming of such adversarial methods, and should not be taken as a very specific shortcoming of this work.”*: ***[Answer]*** We thank Reviewer 5Fok for giving a fair judgment about the interpretability of our method. Since, in adversarial DFKD, the generator is forced to synthesize samples that maximize the difference between the student and the teacher, these synthetic samples may not look like the original training samples. Thus, the visual quality of synthetic samples cannot be used to interpret the results. In our paper, we tried our best to make our method as clear as possible via extensive experiments. 
For example, besides the main results in Tables 1 and 2, in Section 5.2.3 we compared the change in the updates of $\\mathtt{G}$ and $\\mathtt{\\tilde{G}}$ to show that $\\tilde{\\mathtt{G}}$ can actually help mitigate the large distribution shift caused by $\\mathtt{G}$.\n\n***[Question 1]*** *“Can the authors provide more evidence around how EMA helps with the catastrophic forgetting of the student? An ablation comparing the training and test accuracy curves with and without EMA could help”*: ***[Answer]*** The comparison between our method and a variant that does not use an EMA generator was provided in Table 2 of our paper. In this table, ABM is the variant that does not use the EMA generator. A brief description of ABM was provided in lines 288-289 of our paper.\n\n***[Question 2]*** *“Can this trick of using an additional generator be plugged into other DFKD methods which rely on synthetic data? If so, does it always help, or are other methods of making training more stable better?”*: ***[Answer]*** We thank Reviewer 5Fok for this interesting question. We haven't tried to incorporate the EMA generator into other DFKD methods, so we cannot say for sure, but we think our idea of using an EMA generator can be favorably applied to existing DFKD methods, especially those that continuously seek new samples, as in adversarial DFKD.", " ***[Question 3]*** *“How would the performance change if the train and test set have some distribution shift within themselves? In that case, would EMA help?”*: ***[Answer]*** We would like to clarify that in Data-Free Knowledge Distillation (DFKD), the student is not exposed to any training data and has to use synthetic data from the generator for training. The distribution shift of synthetic data is almost inevitable in adversarial DFKD since the generator has to continuously generate new samples to maximize the difference between the student and teacher. For evaluation, it is standard to use the original test set of the teacher, which is usually drawn from the same distribution as that of the training set. One could consider out-of-distribution (OOD) test data, but we think this setting is more suitable for the OOD generalization problem rather than the DFKD problem.\n\n***[Question 4]*** *“Can EMA also be applied on the student to stabilize the generator's training?”*: ***[Answer]*** Thank you for this interesting question. We tried to use the EMA student instead of the EMA generator on ImageNet and found that it just slightly improves the performance by about 0.4%, which is very small compared to the 4% improvement when using the EMA generator. We hypothesize the reason is that using synthetic samples from the EMA generator helps the student recall old knowledge better than letting old knowledge decay slowly with the EMA student.\n\n***[Question 5]*** *“While EMA might mitigate catastrophic forgetting to some extent, it is also possible that there is a large shift within the EMA generator as well, leading to some amount of catastrophic forgetting. While this effect can be controlled by the alpha parameter of decay, an alternative could be to maintain an EMA of the EMA generator and so on. Do the authors have any insights around this?”*: ***[Answer]*** We think this is an interesting suggestion, but it could be costly in practice if we maintain multiple EMA generators. Therefore, we think it is better to stick with the simple yet efficient solution, which is using a momentum decay alpha to control the update of the EMA generator.\n\n**References**\n1. 
Data-Free Learning of Student Networks, Chen et al., ICCV 2019\n2. Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion, Yin et al., CVPR 2020\n3. Data-Free Network Quantization With Adversarial Knowledge Distillation, Choi et al., CVPR Workshop 2020\n4. Contrastive Model Inversion for Data-Free Knowledge Distillation, Fang et al., IJCAI 2021\n5. Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data, Binici et al., WACV 2022\n6. Generative Adversarial Network Training is a Continual Learning Problem, Liang et al., arXiv 2018", " We are glad that the reviewers share common positive views about different aspects of our work, saying that:\n* **our proposed method** is: *“better than memory store techniques with lower overhead”* (Reviewer 5Fok), *“reasonable”* (Reviewer qSj6), *“simple and seems effective”*, *“easy to follow for other research”* (Reviewer tq3D), and *“straightforward”* (Reviewer Lniz).\n* **our experiments** are: *“extensive”*, *“with a sensitivity analysis to various components of the approach”* (Reviewer 5Fok); *“well designed”*, *“well demonstrating the superiority of the proposed method”* (Reviewer qSj6), *“sufficient”* (Reviewer tq3D, Reviewer Lniz), and *“delivering the strong message that the proposed method is efficient”* (Reviewer Lniz). \n* **our presentation** is: *“well-written”* (Reviewer 5Fok, Reviewer tq3D, Reviewer Lniz). \n\nWe also thank the reviewers for other insightful comments. We have revised our paper according to your suggestions. This makes the line numbers in the revised version different from those in the original submission. Therefore, we refer the reviewers to our original submission for the line numbers in our responses. We would like to address your concerns in detail below:", " The paper tackles the problem of data-free knowledge distillation, wherein access to a (teacher) model trained on some data is assumed; however, the training data is not available. DFKD tries to train a student model using just the teacher model. The paper focuses on one method for DFKD, which trains a generator network to generate synthetic examples for distilling the knowledge of the teacher. The main contribution of the paper is to use the generator along with an exponential moving average of the generator’s weights for producing samples. The paper claims that this improves training stability, since the data distribution that the student network sees does not change abruptly. This helps limit catastrophic forgetting of the student network on data from generators at previous iterations. The paper has a thorough empirical evaluation of the proposed method on multiple image benchmarks. Strengths - \n1. The paper proposes a simple trick to prevent catastrophic forgetting which does better than memory store techniques with lower overhead. The compute overhead is also lower than for other DFKD methods.\n2. The paper does an extensive empirical evaluation of the method, along with a sensitivity analysis to various components of the approach.\n3. The related works section is very well written.\n\nWeaknesses - \n1. The empirical evaluation is over a single run, so it is hard to ascertain the significance of the results. Adding error bars would definitely help here.\n2. The claim of the student undergoing catastrophic forgetting is not sufficiently explored. 
While the experiments show that having a higher momentum (and weight for the EMA generator’s images) is better to an extent, a class-wise breakdown of where such models do better, and how this improvement is related to catastrophic forgetting, could help.\n3. The method is not interpretable. This is a general shortcoming of such adversarial methods, and should not be taken as a very specific shortcoming of this work.\n 1. Can the authors provide more evidence around how EMA helps with the catastrophic forgetting of the student? An ablation comparing the training and test accuracy curves with and without EMA could help\n\n2. Can this trick of using an additional generator be plugged into other DFKD methods which rely on synthetic data? If so, does it always help, or are other methods of making training more stable better?\n\n3. How would the performance change if the train and test set have some distribution shift within themselves? In that case, would EMA help?\n\n4. Can EMA also be applied on the student to stabilize the generator's training?\n\n5. While EMA might mitigate catastrophic forgetting to some extent, it is also possible that there is a large shift within the EMA generator as well, leading to some amount of catastrophic forgetting. While this effect can be controlled by the alpha parameter of decay, an alternative could be to maintain an EMA of the EMA generator and so on. Do the authors have any insights around this?\n Some of the limitations are listed in the weaknesses above. More analysis of the generators and their impact on the student would be appreciated. ", " This work focuses on Data-free Knowledge Distillation (DFKD). The main idea of existing DFKD is to use a generator to synthesize data for training the student. The authors propose Momentum Adversarial Distillation (MAD) to prevent the student from forgetting the knowledge it acquired at previous steps. Pros:\n\nMany existing data-free knowledge distillation methods adopt an adversarial training paradigm, where the generator is adversarially trained for distillation, making the data distribution different from earlier steps. This work focuses on the important yet understudied distribution shift problem in DFKD and proposes a reasonable method to solve the problem.\n\nThe paper is overall well written and easy to follow. The experiments are well designed and the results well demonstrate the superiority of the proposed method.\n\nCons:\n\nThe experiments of this work have already addressed most of my concerns about the paper. The only remaining concern is that the difference or superiority of the proposed method from some highly related works should be clarified. For example, MosaicKD [1] also studies the domain shift problem in knowledge distillation; the authors should make the differences from that work clearer or make some comparisons if possible.\n\n[1] Fang G, Bao Y, Song J, et al. Mosaicking to distill: Knowledge distillation from out-of-domain data[J]. Advances in Neural Information Processing Systems, 2021, 34: 11920-11932. Please see the weakness. Yes", " This paper shows that momentum adversarial distillation can help data-free knowledge distillation by solving the distribution shift problem. Extensive experiments are conducted on six datasets to demonstrate the effectiveness of the proposed method, including CIFAR10, CIFAR100, Tiny-ImageNet, ImageNet, Places365 and Food101. 
Strengths:\n\n-\tThe proposed method is simple and seems effective, and also easy to follow for other research.\n-\tThe paper is generally well-written, with sufficient experiments on six datasets.\n\nWeaknesses:\n\n-\tThe momentum updating method itself is not novel and has been used in many problems, such as semi-supervised learning [1], self-supervised learning [2], etc. I’m a little bit concerned about the novelty.\n\n\t[1] Tarvainen, Antti, and Harri Valpola. \"Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results.\" Advances in Neural Information Processing Systems 30 (2017).\n\n\t[2] He, Kaiming, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. \"Momentum contrast for unsupervised visual representation learning.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9729-9738. 2020.\n\n-\tIt seems the performance of the proposed method is not competitive compared to other existing DFKD methods, as shown in Table 1. The comparison with other DFKD methods in Table 2 is also not sufficient.\n\n-\tThere could be more analyses and visualizations to show how the momentum update helps alleviate the distribution shift issue, not just the synthetic samples.\n -\tClarify the key contribution and novelty of the proposed method, since the momentum updating method is not a new technique and has been widely studied.\n\n-\tProvide more explanations on how the momentum update handles the distribution shift issue in the proposed framework.\n\n-\tThe performance in the paper is not competitive and the comparison in the experiments is not sufficient.\n This paper has discussed the limitations and negative social impacts.", " In data-free knowledge distillation, a generator is used to synthesize data for training the student. This paper found that the distribution of synthetic data will change when the generator gets updated. Thus, this paper proposes a new method called Momentum Adversarial Distillation for data-free knowledge distillation. Experiments are done on six benchmark datasets to show the effectiveness of the proposed method. \n Strengths\n1. The presentation of this work is clear and well-written.\n2. The idea of the proposed method is straightforward.\n3. I appreciate the sufficient experiments on different datasets and the study on CIFAR100. They deliver the strong message that the proposed method is efficient.\n\nWeaknesses\n1. In Figure 5, it is hard to conclude that the generated images from the EMA generator are better than those from the generator. Could you use a metric, for example the FID score, to show the difference?\n\n2. In numerical experiments, the results seem better than the baselines. Results are mainly given when the teacher and student are similar networks; how about the performance with diverse architectures (ResNet as teacher, AlexNet as student)?\n\nAfter rebuttal:\nThe authors have addressed my concerns. I maintain my previous score after considering other reviewers' comments. Using EMA to stabilize the model training is a common technique in existing works. This paper applies it to the generator of images for data-free knowledge distillation. Thus the novelty of the proposed method is limited. Another concern is from Figure 5: the EMA generator fails to improve the quality of generated images, which makes me concerned about the cause of the improvement in student performance. How about adding EMA to the student? Yes." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "SOK9QVMRv9X", "YWcs83r-fR", "e4qc2f4aDRG", "e4qc2f4aDRG", "wI7W-lwtlP9", "Lkb8GLW0_9-", "6A1-fOKbeb-", "6A1-fOKbeb-", "nips_2022_C7jm6YgJaT", "nips_2022_C7jm6YgJaT", "nips_2022_C7jm6YgJaT", "nips_2022_C7jm6YgJaT", "nips_2022_C7jm6YgJaT" ]
nips_2022_Fx7oXUVEPW
A Simple and Provably Efficient Algorithm for Asynchronous Federated Contextual Linear Bandits
We study federated contextual linear bandits, where $M$ agents cooperate with each other to solve a global contextual linear bandit problem with the help of a central server. We consider the asynchronous setting, where all agents work independently and the communication between one agent and the server will not trigger other agents' communication. We propose a simple algorithm named FedLinUCB based on the principle of optimism. We prove that the regret of FedLinUCB is bounded by $\widetilde{\mathcal{O}}(d\sqrt{\sum_{m=1}^M T_m})$ and the communication complexity is $\widetilde{O}(dM^2)$, where $d$ is the dimension of the contextual vector and $T_m$ is the total number of interactions with the environment by agent $m$. To the best of our knowledge, this is the first provably efficient algorithm that allows fully asynchronous communication for federated linear bandits, while achieving the same regret guarantee as in the single-agent setting.
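The reviews below mention that FedLinUCB communicates based on a determinant-based criterion. As a rough illustration (not the paper's exact rule), such a trigger can be sketched as follows: an agent uploads its local data once its Gram matrix has grown enough since the last synchronization with the server. The function name and the role of `alpha` as the communication parameter are assumptions for the sketch.

```python
import numpy as np

def should_communicate(gram_local, gram_last_sync, alpha=1.0):
    # Communicate once log det of the local Gram matrix has grown
    # sufficiently relative to the Gram matrix at the last synchronization.
    _, logdet_new = np.linalg.slogdet(gram_local)
    _, logdet_old = np.linalg.slogdet(gram_last_sync)
    return logdet_new - logdet_old > np.log(1.0 + alpha)
```

A trigger of this shape lets each agent decide locally, without involving other agents, which matches the fully asynchronous protocol the abstract describes.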
Accept
The reviewers all recommend acceptance (to varying extents), and the AC also shares their opinion. Regarding the experiments, please make sure that the final version of the paper complies with the relevant parts of the checklist (e.g., report error bars). It could also be interesting to see an experiment where $\theta^*$ is non-uniform.
train
[ "qasRiAwKrO", "SZQUjpBqaFR", "BCA4HY2rw-y", "_49LEJsQerI", "HGbB5VY7UW", "43JCxjpxDy1", "esajHkAqYJH", "vk1hz3pwML3", "sVXfMNai5-b", "Zz9JJ1-iE4i", "GMdZswhy5xR" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nSince the deadline for the author-reviewer discussion phase is fast approaching, we would like to follow up with you to see if you have any further questions. \n\nIn our rebuttal, we have addressed all your questions. In particular, per your suggestion, we have added numerical experiments to corroborate our theory and compare our algorithm with several baselines. We are looking forward to your feedback. Thank you.\n\nBest, \n\nAuthors of Paper 4870\n", " We are glad that our responses and experiments have addressed your previous concerns.\n\nThank you for raising the score!\n", " First, we would like to thank all the reviewers for their careful reading and insightful comments and questions!\n\nIn complement to our response to the reviewers, we have uploaded a revised paper. Besides correcting some minor typos, some major changes are marked out by blue in the PDF, and we summarize the major updates as follows:\n1. We revise the writing of the introduction section to better motivate our work.\n2. We updated Table 1 for a clearer comparison among different algorithms. \n3. We have revised the writing of the comparison between our results and those of [Li and Wang, 2022] in Appendix A.1. This also corresponds to Reviewer BhjP’s comments on the comparison between our work and [Li and Wang, 2022].\n4. We have conducted numerical experiments to compare our algorithm with other baselines. The simulation results reported in Appendix A.3 corroborate our theory. This addresses the questions on the simulation of the proposed algorithm by Reviewer S7tZ and Reviewer BhjP. A brief summary of the experiment is provided below.\n\n**Summary of the experiments**:\n\n_Experiment setup_: In detail, we construct a linear bandit instance with dimension d=25 and true model parameter $\\theta^*=[1/\\sqrt{d},\\ldots,1/\\sqrt{d}]$. In each round $t \\in [T]$, an active agent $m_t$ is uniformly sampled from $M$ agents and the decision set $\\mathcal{D}_t$ consists of 25 different actions, where each action is uniformly sampled from $[-1/\\sqrt{d},1/\\sqrt{d}]^d$. After choosing the action, the active agent will receive the reward perturbed by a Gaussian noise (R=0.3). We run a simulation on the above linear bandit instance with a total number of rounds T=30000 (averaged over 20 runs) and the number of agents is set to be $M=15$ or $M=30$. We implement our FedLinUCB algorithm and compare its performance with Async-LinUCB [Li and Wang, 2022] and OFUL [Abbasi-Yadkori et al., 2011] with full communication (i.e., the active agent communicates with the server during each round). We tune the parameter $\\alpha$ for FedLinUCB and $\\gamma_U=\\gamma_D$ for Async-LinUCB to ensure a fair comparison. We found that $\\alpha=1$ for FedLinUCB and $\\gamma_U=\\gamma_D=5$ for Async-LinUCB lead to the best performance for both algorithms. 
\n\n\n_Results_: Here we report part of the simulation results with $M=15$ agents.\n\n_Cumulative regret_:\n\n| Algorithm \\ Rounds | 5000 | 10000 | 15000 | 20000 | 25000 | 30000 |\n|:-------------:|:---------:|:--------:|:--------:|:---------:|:------:|:--------:|\n| OFUL with full communication | 42.5 | 49.4 | 53.3 | 56.0 | 59.1 | 59.9 |\n| Async-LinUCB ($\\gamma_U=\\gamma_D=5$) | 79.7 | 92.0 | 99.6 | 105.0 | 109.2 | 112.7 |\n| FedLinUCB ($\\alpha=1$) | 67.8 | 76.9 | 82.1 | 85.9 | 89.2 | 92.1 |\n\n_Communication cost_:\n\n| Algorithm \\ Rounds | 5000 | 10000 | 15000 | 20000 | 25000 | 30000 |\n|:-------------:|:---------:|:--------:|:--------:|:---------:|:------:|:--------:|\n| OFUL with full communication | 10000 | 20000 | 30000 | 40000 | 50000 | 60000 |\n| Async-LinUCB ($\\gamma_U=\\gamma_D=5$) | 487.5 | 570.0 | 621.75 | 660.0 | 688.5 | 705.0 |\n| FedLinUCB ($\\alpha=1$) | 284.8 | 329.1 | 354.9 | 373.9 | 388.1 | 400.4 |\n\nThese simulation results suggest that our FedLinUCB algorithm significantly outperforms the Async-LinUCB algorithm proposed by [Li and Wang, 2022], as our algorithm achieves lower regret with lower communication cost. \n\n\n\nThank you,\n\nAuthors \n", " We would like to thank the reviewer for acknowledging the contributions and strengths of our paper and for the constructive suggestions and feedback. Below we provide clarifications and additional results in response to the questions and comments. \n\n**Q1**: The contribution of this work is not significant compared to [Li and Wang 2022, AISTATS].\n\n**A1**: We would like to emphasize that our contribution is very significant compared to [Li and Wang, 2022], which we clarify from the following three aspects:\n1. **Flexible communication protocol.** Specifically, the communication protocol of their proposed algorithm is more restricted than ours because, in their algorithm, uploading by one agent may trigger the other agents to download the latest data from the server. Even if those agents only wish to run the job locally and do not want to communicate, they will be forced to do so (i.e., communicate with the server to fetch the data, update their policy accordingly, and then run the job). Such a communication protocol could be problematic since it completely neglects the common real-world scenario where agents are offline or lose connection with the server, or where a client simply does not want to communicate for some reason. In comparison, in our Algorithm 1, the communication between the agent and the server (Lines 9 and 12) involves only the participating agent and is completely independent of other agents. This is clearly more flexible than the communication protocol in [Li and Wang, 2022] and does not suffer from the aforementioned issues. \n2. **Mild assumptions for analysis.** [Li and Wang, 2022] imposed strong assumptions on the contexts in their Assumption 1. First, the distribution of the context vector needs to have a fixed covariance matrix across all time steps, i.e., $E_{t-1} [x_{t,a} x_{t,a}^\\top ] = \\Sigma_c$ for all $t$. Second, they require a minimum eigenvalue condition, $\\Sigma_c \\succcurlyeq \\lambda_c I$ for some $\\lambda_c >0$. In fact, their assumption violates the standard setup of contextual linear bandits [Abbasi-Yadkori et al., 2011], where the context vectors can be arbitrary or even adversarial. In this sense, their algorithm is not an “authentic” contextual linear bandits algorithm. \n3. 
**Sound theoretical analysis.** More importantly, the proof in [Li and Wang, 2022] is flawed and hard to fix. In detail, in the asynchronous setting, we cannot define a fixed filtration as we do in the synchronous setting due to the asynchronous communication (as illustrated at the beginning of Section 6), which is caused by the fact that the data at the server do not have a fixed order. This further implies that in the asynchronous setting, the reward estimator based on the server-end data can be biased. Therefore, existing concentration results [Abbasi-Yadkori, 2011] cannot be directly applied, and hence the proof in [Li and Wang, 2022] is problematic. Please see Appendix A.1 for more details and for a counterexample. We have communicated with the authors of the paper about the flaws in their proof, and they acknowledged this flaw and were not able to give a sound fix.\n", " **Q2**: The originality of the paper is limited. In my opinion, the originality also does not reach the NeurIPS bar. The model is not new. The algorithmic technique is already broadly used (already in the same model as well). Only the analysis technique is novel, though it seems minor (since it only works in this specific federated contextual linear bandits' LinUCB algorithm).\n\n**A2**: We agree that the contextual linear bandit model and some algorithmic designs (the determinant-based criterion) are not new and have been studied in the existing literature. However, we would like to emphasize that the asynchronous federated linear bandits setting is new and of broader interest. As we have discussed in our paper, there are some existing efforts towards federated linear bandits. Yet, they are either limited to the synchronous setting [Wang et al., 2019; Dubey and Pentland, 2020; Huang et al., 2021] or not fully asynchronous and relying on strong assumptions [Li and Wang, 2022] (whose proof is flawed, as we discussed above). \n\nOur work provides the first asynchronous federated linear bandit algorithm that is simple, efficient, and provably correct. The analysis of our algorithm involves novel proof techniques: we first establish the local concentration of each agent’s data and then relate it to the global concentration of all data. These are critical to establishing the regret bound in the asynchronous setting. Also, we believe such proof techniques can be applied to other problems like asynchronous federated reinforcement learning.\n\n---\n\n**Q3**: Why don't you provide simulation results? If my guess is right, in numerical simulation, FedLinUCB would have almost the same performance as [Li and Wang 2022], if not a little worse. But FedLinUCB could still have a smaller communication time.\n\n**A3**: Thanks for the suggestion! We have run simulations on synthetic data to compare our algorithm with the algorithm proposed by [Li and Wang, 2022] (although their analysis is flawed, their algorithm is still a practical algorithm). The settings and results are reported in Appendix A.3 in the rebuttal revision, and a brief summary of the experiment results can be found in the summary of rebuttal revision.\n\n", " First, we thank the reviewer for the effort, the positive support of our paper, and the constructive feedback. Please see our response to the reviewer's question below.\n\n**Q**: There may be some benefits if inactive agents are allowed to download new information. 
Is it possible to close the gap between the communication upper and lower bounds by a personalized event-triggered downloading procedure?\n\n**A**: First of all, we would like to make the following clarification. When we say an agent is “inactive”, we mean the agent is offline, so it cannot upload or download data. If an agent can download data, we would consider it an “active” agent rather than an “inactive” agent in our setting.\n\nSo we guess your question is: what if an active agent would like to download data even though it does not upload data? Regarding this question, our algorithm can readily accommodate it if any agent wants to download new information from the server. However, since an individual agent does not know other agents' information or the data collected by the server unless it communicates with the server, it is not clear what criterion should be used to determine whether and when to download new information. In other words, the agent does not know if it would be significantly beneficial to download new information (which incurs communication costs). Therefore, currently, we are not sure whether a personalized downloading procedure can help close the gap in the communication cost. It is definitely an important open problem for future work.\n\nPlease let us know if we have misunderstood your question.\n\n", " We would like to thank the reviewer for acknowledging the contributions and strengths of our paper and for the thoughtful feedback. Below are our responses to the questions and comments.\n\n**Q1**: The proposed algorithm shows that at each time t, only one client exchanges parameters with the server. Is this setting reasonable? Does this make the collaboration less effective? Are there any practical applications that fit this setting?\n\n**A1**: We believe there is some misunderstanding about our setting. We apologize if we did not make it clear. We would like to clarify that here $t \\in [T]$ only denotes the index of the rounds, and it merely indicates the order of clients (i.e., agents) participating in the bandit problem, rather than the ‘real time’ of participation for the agents. In other words, if there is more than one client participating (e.g., exchanging parameters with the server) within a very short interval of time, there is still an order of occurrence among these participation events (i.e., even if the occurrence times of two close events differ only by milliseconds, there is an order), so the client participation will still happen in a sequential order that can be indexed by round $t$. \n\nIn addition, our algorithm can be equivalently rewritten to reflect application scenarios where a group of clients participates in the same round, as demonstrated by Algorithm 2 in Appendix A. Note that in Algorithm 2: we use round $k$ as the index instead of ‘time’ $t$ (line 2); we allow a group of participants in one round (line 3). The form of Algorithm 2 aligns with those of existing algorithms in, e.g., [1, 2].\n\nIn fact, our setting is more general than existing federated (distributed) linear bandits work [1, 2], because they require full participation of all the clients in each round (all $M$ clients need to be active in each round), but we allow partial participation (any subset of $M$ agents). 
Our setting is also more flexible than [3], because in our setting the communication between one client and the server will never trigger communication between the server and other clients, whereas in their setting [3] it will.\n\nSo our setting is actually very flexible and reasonable and fits practical applications very well. Our setting does not prevent collaboration among clients at all. \n\n_References_:\n\n[1]: Distributed bandit learning: Near-optimal regret with efficient communication, Yuanhao Wang, Jiachen Hu, Xiaoyu Chen, Liwei Wang, ICLR 2020.\n\n[2]: Differentially-private federated linear bandits, Abhimanyu Dubey, Alex Pentland, NeurIPS 2020.\n\n[3]: Asynchronous upper confidence bound algorithms for federated linear bandits, Chuanhao Li, Hongning Wang, AISTATS 2022.\n\n---\n\n**Q2**: In Algorithm 1, the parameters exchanged between the server and the client are $\\Sigma$ and $u$, which may lead to the privacy leakage of local data. I suggest the authors refer to the work [Dubey and Pentland, 2020] to address this issue.\n\n**A2**: Thanks for pointing this out; it is a great point! We think that, in principle, such privacy concerns can be mitigated by properly injecting noise into $\\Sigma$ and $u$ following [Dubey and Pentland, 2020]. Nevertheless, the main focus of our work is to develop a fully asynchronous federated linear bandits algorithm with provable guarantees, which itself is a significant contribution and can serve as a solid first step towards devising differentially private asynchronous federated bandit algorithms. We have added a discussion in the conclusion and future work section regarding this important issue.
Please see a brief summary of the experiment results in the summary of rebuttal revision, and also see Appendix A.3 in the rebuttal revision for the complete results.", " The manuscript proposes a federated contextual linear bandit algorithm in the asynchronous setting, where all agents work independently and the communication between one agent and the server will not trigger other agents’ communication. The proposed algorithm obtains near-optimal regret and low communication complexity. Strengths\n\n(a) This is the first asynchronous federated linear contextual bandit model considering the following features: (i) Each agent can decide whether or not to participate in each round; (ii) The communication between each agent and the server is asynchronous and totally independent of other agents. \n\n(b) Considering that the order of the interaction between the agent and the environment is not fixed, standard martingale-based concentration inequalities cannot be directly applied. This work applies a novel proof technique to solve this issue, which could be interesting. \n\n(c) The proposed algorithm achieves near-optimal regret, low communication complexity and low switching cost simultaneously.\n\nWeaknesses\n\n(a) The proposed algorithm shows that at each time $t$, only one client exchanges parameters with the server. It seems that this setting is not reasonable and could make the collaboration less effective. \n\n(b) In Algorithm 1, the parameters exchanged between the server and the client are $\\Sigma$ and $u$, which may lead to the privacy leakage of local data. \n\n(c) The motivation of this work is not very clear. The authors should pay more attention to this issue. \n\n(d) There is a lack of numerical experiments to support the theoretical findings. (a) The proposed algorithm shows that at each time $t$, only one client exchanges parameters with the server. Is this setting reasonable? Does this make the collaboration less effective? Are there any practical applications that fit this setting?\n\n(b) How about the performance of the proposed algorithm in dealing with some real-world applications?\n (a) In Algorithm 1, the parameters exchanged between the server and the client are $\\Sigma$ and $u$, which may lead to the privacy leakage of local data. I suggest the authors refer to the work [Dubey and Pentland, 2020] to address this issue. \n\n(b) It would be better to conduct some numerical experiments to support the merits of the proposed algorithm. ", " The paper considers the asynchronous federated contextual linear bandits problem, where each agent faces a single contextual linear bandits model and the parameters are the same. The learning objective is to minimize the cumulative regret of all agents. By a novel analysis, the paper proves that a simple algorithm called FedLinUCB enjoys the regret upper bound $O(d\\sqrt{MT})$, where d is the dimension of the parameter space, M is the number of agents, and T is the time horizon. The paper also gives a lower bound on the communication cost that scales as $\\Omega(dM)$. The primary novelty of the paper is to provide an analysis for fully asynchronous federated linear contextual bandits, where each agent can independently decide whether it communicates with the central server. 
The paper is well-written and sound.\n\nWhile the independence of agents' communication is new, the algorithm also prevents inactive agents from downloading new information from the server.\n As mentioned above, IMHO, there may be some benefits if inactive agents are allowed to download new information. Is it possible to close the gap between the communication cost upper and lower bounds by a personalized event-triggered downloading procedure?\n\n\n------------\nI thank the authors for the explanation. My concerns are addressed and I have decided to keep my score. Yes", " This paper proposes the FedLinUCB algorithm for asynchronous federated contextual linear bandits. The authors prove that the algorithm has a tight regret upper bound and low communication complexity. 1. Strengths\n\t1. Proposes a better communication protocol that fits the asynchronous setting;\n\t2. Provides a new method to derive LinUCB's upper bound in the asynchronous setting.\n\t3. The authors did a very good job of presenting their work. \n2. Weaknesses:\n\t1. My major concern lies in the significance. The strengths above are not major compared to [Li and Wang 2022, AISTATS], because FedLinUCB is a (very) minor modification of [Li and Wang 2022] in the communication protocol: given the same communication condition (determinant-based criterion), the difference is only in *how* the agent and server communicate (also see the last two rows of the paper's Table 1).\n\t2. Secondly, in my opinion, the originality also does not reach the NeurIPS bar. The model is not new. The algorithmic technique is already broadly used (already in the same model as well). Only the analysis technique is novel, though it seems minor (since it only works in this specific federated contextual linear bandits' LinUCB algorithm). \n\n------\nAfter rebuttal: I thank the authors for the detailed comparison with [Li and Wang, 2022] and the additional experiments. My concerns are addressed. I raised my score to weak accept. Why don't you provide simulation results? If my guess is right, in numerical simulation, FedLinUCB would have almost the same performance as [Li and Wang 2022], if not a little worse. But FedLinUCB could still have a smaller communication time. This part of the paper looks good for me. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3 ]
[ "sVXfMNai5-b", "GMdZswhy5xR", "nips_2022_Fx7oXUVEPW", "GMdZswhy5xR", "GMdZswhy5xR", "Zz9JJ1-iE4i", "sVXfMNai5-b", "sVXfMNai5-b", "nips_2022_Fx7oXUVEPW", "nips_2022_Fx7oXUVEPW", "nips_2022_Fx7oXUVEPW" ]
nips_2022_bg7d_2jWv6
On Divergence Measures for Bayesian Pseudocoresets
A Bayesian pseudocoreset is a small synthetic dataset for which the posterior over parameters approximates that of the original dataset. While promising, the scalability of Bayesian pseudocoresets is not yet validated in large-scale problems such as image classification with deep neural networks. On the other hand, dataset distillation methods similarly construct a small dataset such that the optimization with the synthetic dataset converges to a solution similar to optimization with full data. Although dataset distillation has been empirically verified in large-scale settings, the framework is restricted to point estimates, and their adaptation to Bayesian inference has not been explored. This paper casts two representative dataset distillation algorithms as approximations to methods for constructing pseudocoresets by minimizing specific divergence measures: reverse KL divergence and Wasserstein distance. Furthermore, we provide a unifying view of such divergence measures in Bayesian pseudocoreset construction. Finally, we propose a novel Bayesian pseudocoreset algorithm based on minimizing forward KL divergence. Our empirical results demonstrate that the pseudocoresets constructed from these methods reflect the true posterior even in large-scale Bayesian inference problems.
Accept
There was a consensus among reviewers that this paper should be accepted. The authors formulate dataset distillation methods as approximate Bayesian pseudocoreset procedures which appears to be a novel viewpoint. They further propose a new coreset procedure and show that it performs well. The paper appears to be well-written.
train
[ "vyfgk15uQSV", "iJ3R-OBEP4j", "jzVFAWa0z9I", "MGrBO8mlcpm", "Fs6Y5fvWuNo", "xgWgt-cLvwo", "g25Hk9zofNH", "h8QY3AMAkuO", "rRJboalgx30", "cdOoLvcJNtM", "sZWmVatGXi5", "U2JLGJvw_G6", "ddNdxSIEal", "k-FHeUUZ5qx", "hNvig7OzjC5", "zVt9vEI3sUu", "an6PijyU88E" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. We have included them in Figure 7. ", " Thanks for the new results.\nYou will need to include them to figure 7 to make the trend clear to readers\nBut the results looks great to me as they give an idea of how large the set is needed for a nearly perfect match.", " We appreciate your effort to review our paper and responses.\n\n**[Q1]** The paper relies on the reasonable Gaussian approximation. I expect the reasoning for the choice should be carefully elaborated in the revised paper. Also, the paper should make it clear that the equivalency between the Bayesian Pseudocoreset methods and data distillation methods does not necessarily rely on approximation. The approximation simplifies the analysis but probably is not the necessary condition for the intrinsic connection.\n- Thanks for your suggestion that can enhance the presentation of the paper. As you suggested, we added relevant discussions in the revision (line 127).\n---\n**[Q2]** The eq (12) in proposition 3.1 does not match eq (6) of the DC method as the latter relies on the trajectory of θ while proposition 3.1 only holds for near optimum θ. If so, the results on DC should be added for comparison.\n- As you mentioned and we pointed out in line 127, the relation in proposition 3.1 is only for near optimum. So DC and BPC-rKL are not exactly equivalent, but BPC-rKL with Gaussian approximation reduces to DC when the learning trajectory of theta reaches local optimum. As you suggested, we've additionally conducted experiments with DC and present partial results here, and the rest will be put into the paper as soon as completed.\n\n**<Dataset Condensation>**, CIFAR-10, HMC\n| | Acc | NLL |\n|--------|:-----------------:|:-----------------:|\n| ipc 1 | 0.2678$\\pm$0.0090 | 2.2790$\\pm$0.0573 |\n| ipc 10 | 0.3753$\\pm$0.0131 | 1.8489$\\pm$0.0211 | (edited) ", " I appreciate the detailed responses by the authors. I believe the significance of the work is more about its positioning in the literature and therefore I've updated my score. Here are my remaining concerns.\n\t\n- The paper relies on the reasonable Gaussian approximation. I expect the reasoning for the choice should be carefully elaborated in the revised paper. Also, the paper should make it clear that the equivalency between the Bayesian Pseudocoreset methods and data distillation methods does not necessarily rely on approximation. The approximation simplifies the analysis but probably is not the necessary condition for the intrinsic connection. \n- I might misunderstand here. The eq (12) in proposition 3.1 does not match eq (6) of the DC method as the latter relies on the trajectory of $\\theta$ while proposition 3.1 only holds for near optimum $\\theta$. If so, the results on DC should be added for comparison.\n- The equivalency between minimizing the divergence of the (sub)Gaussian distributions and minimizing the distance of empirical learning losses are typically expected. Even the analysis of proposition 1 doesn't provide significant insights into the connections.\n", " Thank you for your response. We present additional results with larger pseudocoreset sizes, where we used larger learning rates (20.0) for all configurations for faster convergence. 
For all three divergence measures, pseudocoresets of size larger than 60 could achieve divergence values less than 1, and pseudocoresets of size 100 could achieve near-zero divergence values.\n\n**<BPC-fKL>**\n| | 20 | 40 | 60 | 80 | 100 |\n|-------------|-------|------|------|-------|-------|\n| Reverse KL | 12.16 | 3.10 | 0.89 | 0.18 | 0.09 |\n| Forward KL | 4.09 | 1.65 | 0.62 | 0.15 | 0.09 |\n| Wasserstein | 0.16 | 0.03 | 0.01 | 0.002 | 0.001 |\n\n**<BPC-rKL>**\n| | 20 | 40 | 60 | 80 | 100 |\n|-------------|-------|------|-------|-------|------|\n| Reverse KL | 11.94 | 2.92 | 0.78 | 0.14 | 0.01 |\n| Forward KL | 4.04 | 1.58 | 0.55 | 0.12 | 0.01 |\n| Wasserstein | 0.15 | 0.03 | 0.008 | 0.001 | 0.00 |\n\n**<BPC-W>**\n| | 20 | 40 | 60 | 80 | 100 |\n|-------------|-------|------|-------|-------|------|\n| Reverse KL | 11.86 | 2.90 | 0.77 | 0.13 | 0.01 |\n| Forward KL | 4.03 | 1.57 | 0.55 | 0.11 | 0.01 |\n| Wasserstein | 0.15 | 0.03 | 0.008 | 0.001 | 0.00 | ", " Thanks for the extra experiment on synthetic data.\nIt looks like in the region of pseudocoreset sizes you explored, the divergences are far away from 0 but are still decreasing.\nHow many points do you need to get values close to 0, or have the curves in Figure 7 almost converged?", " Dear reviewers,\n\nWe sincerely appreciate your efforts in reviewing our paper, and your constructive comments. We have responded to your comments, faithfully reflected them in the revision, and provided the additional experimental results that you requested. Could you please go over our responses and the revision, since the end of the final discussion phase is approaching? Please let us know if there is anything else we need to clarify or provide.\n\nThanks, authors.", " We really appreciate all the reviewers for their constructive comments. Here is the summary of the revision. The edited parts are marked in blue in the file.\n\n* **More coreset baselines (Herding, K-center) and more uncertainty quantification metrics (ECE, Brier scores)** in Appendix C.3 as suggested by reviewers knYM, 6N6Z.\n* Experiments with **synthetic data** where the exact divergence values are tractable to compute in Appendix C.5 as suggested by reviewer Pvtm.\n* **Quantitative evaluation** of learned pseudocoreset images in Figure 2 as suggested by reviewer RC9c.\n* Assumptions are made **more explicit in Proposition 3.1** as suggested by reviewer RC9c.\n* Rewrote unclear paragraphs (especially in the part where we motivate our work in the introduction) with more explanations, and corrected the typos pointed out by the reviewers.\n* Moved section 5.3 to Appendix C.2 due to the page limit.", " We sincerely appreciate your constructive comments. We respond to the individual comments below:\n\n**[Q1]** The experiments might be improved by showing a setting where the posterior is tractable.\n* We conducted an **additional experiment on a synthetic multivariate Gaussian dataset.** We considered the setting of inferring the posterior distribution of the mean given observed samples. We first trained pseudocoresets with each of the suggested methods, then validated them through the exact divergences between the two posteriors, which are computable in this Gaussian setting. \n* As expected, all three methods work well in that all divergences are well reduced as training progresses. Also, the larger the pseudocoreset size, the smaller the divergences. 
We have included the exact divergences between pseudocoreset posteriors and true posteriors, both in terms of training steps and pseudocoreset sizes, in **Appendix C.5.**\n---\n**[Q2]** I wonder if the divergence from [1] is related to the contrastive divergence paragraph in section 3.3, or maybe a good choice of divergence in this framework.\n* Thanks for the pointer; we think that the variational contrastive divergence [1] can be a good candidate among tractable divergence measures whose gradients can be computed efficiently during pseudocoreset learning.\n---\n**[Q3]** Typos.\n* Thank you for finding the typos; we have corrected them.\n---\n## References\n[1] Ruiz, Francisco, and Michalis Titsias. "A contrastive divergence for combining variational inference and mcmc." In International Conference on Machine Learning, pp. 5537-5545. PMLR, 2019.", " We sincerely appreciate your constructive comments. We respond to the individual comments below:\n\n**[Q1]** The theoretical analysis is extremely rough. \n* We agree that the current theoretical analysis is rough, as you said (especially Proposition 3.1). Please note, however, that our construction is meant to **reveal connections** between the dataset distillation and Bayesian pseudocoreset literatures through the lens of such simple variational approximations. Note also that we proposed a novel Bayesian pseudocoreset algorithm (BPC-fKL) that is comparable to, or sometimes better than, BPC-W (MTT) with much less time and space complexity. Just as we proposed BPC-fKL within our framework, one can further propose more sophisticated variational approximations based on the rich theory of approximating posteriors with Gaussians, as you suggested, or even consider alternative divergence measures.\n---\n**[Q2]** There is not a rigorous evaluation of the quality of the posterior approximation.\n* We have added additional metrics to evaluate the uncertainty quantification aspects of the methods, **including ECE and Brier scores. Please refer to the revision (Appendix C.3 and Table 6).**\n\n\n**<HMC performances of pseudocoresets>**\n\n| | ECE | Brier score |\n|---------|:----------:|:-----------:|\n| Random | 0.1385 | 0.8595 |\n| BPC-rKL | **0.1183** | 0.7988 |\n| BPC-W | 0.1457 | 0.8030 |\n| BPC-fKL | 0.1538 | **0.7231** |\n\n---\n\n**[Q3]** Is there a general recipe for constructing Bayesian pseudocoresets using Gaussian approximations?\n* We thank you for your valuable question, which can potentially strengthen our paper. Generally, we find BPC-fKL to be comparable with BPC-W and better than BPC-rKL. Considering the heavy training cost of BPC-W due to the backpropagation through the unrolling step, BPC-fKL would be an attractive alternative to BPC-W when the **training resources are limited** or the target model has an extremely large number of parameters. \n* For the Gaussian approximations, we have tried constructing covariance matrices using empirical statistics computed from the sample trajectories (e.g., running SGHMC, collecting the posterior samples, and computing empirical covariance matrices), but this did not result in significant improvement over the vanilla Gaussian approximation with constant covariances. Designing a flexible yet tractable variational approximation would be an important future research direction under our framework. 
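To make the synthetic Gaussian check and the Gaussian-approximation discussion above concrete, here is a self-contained sketch of the closed-form divergences between Gaussian posteriors (forward/reverse KL and 2-Wasserstein), assuming conjugate inference of a Gaussian mean with known isotropic noise. The prior/noise scales are illustrative, and the raw subsample below is only a stand-in for a trained pseudocoreset.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) in closed form."""
    d = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                  + logdet1 - logdet0)

def gaussian_w2(mu0, cov0, mu1, cov1):
    """2-Wasserstein distance between two Gaussians."""
    s1 = sqrtm(cov1)
    cross = np.real(sqrtm(s1 @ cov0 @ s1))
    bures = np.trace(cov0 + cov1 - 2.0 * cross)
    return np.sqrt(np.sum((mu0 - mu1) ** 2) + max(bures, 0.0))

def mean_posterior(data, prior_var=1.0, noise_var=1.0):
    """Posterior over theta with prior N(0, prior_var * I) and
    likelihood x_i ~ N(theta, noise_var * I); conjugate, so exact."""
    n, d = data.shape
    prec = 1.0 / prior_var + n / noise_var
    mu = (data.sum(axis=0) / noise_var) / prec
    return mu, np.eye(d) / prec

rng = np.random.default_rng(0)
full = rng.normal(loc=1.0, size=(10_000, 2))   # "full dataset"
core = full[:20]                               # stand-in for a pseudocoreset
mu_f, cov_f = mean_posterior(full)
mu_c, cov_c = mean_posterior(core)
print("forward KL :", gaussian_kl(mu_f, cov_f, mu_c, cov_c))  # KL(true || coreset)
print("reverse KL :", gaussian_kl(mu_c, cov_c, mu_f, cov_f))
print("W2         :", gaussian_w2(mu_f, cov_f, mu_c, cov_c))
```

In the actual experiment the pseudocoreset is optimized to shrink these quantities, whereas the naive subsample here leaves them large; that gap is what the tables above track as the coreset size grows.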
\n---\n**[Q4]** Can you quantify the various sources of error in this approximation?\n* Beyond the obvious approximation error of the Gaussian distribution, in practice, when constructing the full-data posterior we use mini-batches sampled from the entire dataset. The gradient noise due to this mini-batch sampling also contributes to the approximation error. Also, most of the divergence measures are intractable, so we rely on stochastic gradients to minimize them. Although reparametrization tricks for the Gaussian sampling can reduce the variance of the stochastic gradient to some extent, there is still a gap between the true gradient of the divergence measures and the stochastic gradients.\n---\n**[Q5]** I think it may be worth discussing privacy in the societal impacts section.\n* Thank you for your valuable comment. We have added the positive aspect of pseudocoresets helping to increase data privacy.\n---\n**[Q6]** Typos and unclear paragraphs.\n* Thank you for finding the typos; we have corrected them and rewrote the paragraphs more clearly.\n", " We sincerely appreciate your constructive comments. We respond to the individual comments below:\n\n**[Q1]** The assumption of proposition 3.1 should be stated more clearly, i.e., that $\Vert \theta_t - \theta_{t-1} \Vert$ should be sufficiently small.\n* We have **updated the paper to state the assumption in Proposition 3.1 more explicitly**, regarding the magnitude of the parameter update and the fact that the approximation introduced here is applicable close to convergence. Please also note that this proposition only compares BPC-rKL with DC, and our analysis of BPC-W and BPC-fKL does not rely on this approximation.\n---\n**[Q2]** Results shown in figure 1 are not very illustrative due to their size. Could you pick some representatives and move the rest to the appendix? Could you demonstrate the differences quantitatively rather than qualitatively?\n* In the revised version, Fig 1 in the main text shows **fewer example images** for readability. We also added quantitative experiments on the learned images to supplement the qualitative observations about these images. **We took the Fourier transform of the learned synthetic images and plotted the log amplitude vs. frequency in Fig 2.** BPC-rKL has the most high-frequency noise, substantiating our previous qualitative observation that "BPC-rKL is the most noisy". Consistent with previous works [5,6], which discussed high-frequency noise interfering with training, the noisiest method (BPC-rKL) performs the worst.\n\n---\n**[Q3]** Could the performance gap between using the pseudocoreset and the entire dataset be indicated by the divergence?\n* Table 1 of the revised paper includes SGHMC performance on the **entire CIFAR10 dataset.** Using the same architecture, a 3-layer ConvNet, we got an accuracy of 0.7383$\pm$0.0052 and an NLL of 0.9387$\pm$0.0152. \n* To more directly verify whether the divergences we consider are a good measure of convergence to the true posterior, we added new experiments to Appendix C.4. We consider a **synthetic Gaussian dataset** where we can exactly compute the divergence with the true posterior. This experiment shows that the divergences between the true posterior and the pseudocoreset posterior decrease as training progresses and converge to a low value.\n---\n**[Q4]** Could you clarify the bias amplification on the downstream task mentioned in line 299? \n* Previous works [1,2,3] discuss the tendency of deep learning models to amplify unintended bias in datasets. 
Although their claim was not explicitly about pseudocoresets, we believe their reasoning also extends to pseudocoresets, which are similarly (synthetic) datasets. For clarity, we removed the sentence on amplification because it has not yet been studied specifically for pseudocoresets.\n---\n**[Q5]** Is there any reference or empirical results substantiating the claim that weights don't help with the performance?\n* We conducted experiments both with and without the learnable weights. However, since the results with weights were similar or worse in our setting, we opted to learn the pseudocoresets only. Here are some results of our experiments with weights. \n\n**<HMC performances of BPC-fKL with and without learnable weights>**\n\n| | acc | nll |\n|-------------------|:---------------------:|:---------------------:|\n| ipc=1 | 0.3354 $\pm$ 0.0066 | 2.0253 $\pm$ 0.0311 |\n| ipc=1 (+weights) | 0.2851 $\pm$ 0.0231 | 2.2860 $\pm$ 0.0636 |\n| ipc=10 | 0.4294 $\pm$ 0.0101 | 1.7292 $\pm$ 0.0248 |\n| ipc=10 (+weights) | 0.4209 $\pm$ 0.0107 | 1.7429 $\pm$ 0.0188 |\n| ipc=20 | 0.4910 $\pm$ 0.0088 | 1.6279 $\pm$ 0.0264 |\n| ipc=20 (+weights) | 0.4960 $\pm$ 0.0099 | 1.6164 $\pm$ 0.0258 |\n", " **[Q6]** Gaussian approximations should be carefully elaborated throughout the paper.\n* While we agree on this point, we believe that we can develop enhanced versions of the previous approaches (BPC-rKL, BPC-W) or the one we proposed (BPC-fKL) with more elaborate choices of variational posteriors, which is a natural future research direction. Still, as noted by reviewer 6N6Z, "There is a rich literature on Gaussian approximations to the posterior, including in particular Bernstein von Mises theorems"; that is, the Gaussian approximation often makes sense with properly chosen covariance matrices. Ours is a more naive version of such approaches, with the covariance matrices simply set as spherical matrices with constant variance values.\n---\n**[Q7]** The relation between BPC-W and MTT accredits mostly to the simplified setting using the gaussian approximation.\n* As you said, our reinterpretation of MTT as BPC-W hinges on the simplest possible Gaussian approximation; we believe this suggests a way of improving MTT by employing more sophisticated variational distributions, for instance, richer covariance matrices or even non-Gaussian variational posteriors. The key desideratum for those lines of work would be how to construct flexible yet tractable variational posteriors on the fly during training.\n---\n**[Q8]** In section 5, BPC-rKL and BPC-W are not exactly equivalent to existing methods.\n* **BPC-W is exactly equivalent to MTT**, and we used their code as a starting point for our implementation of the other methods. We used the name BPC-W for consistency throughout the paper, as our paper focuses on providing a unified view of pseudocoreset methods.\n* BPC-rKL corresponds to the Bayesian pseudocoreset [4] with the variational posteriors replaced by simple Gaussians rather than the Laplace approximation. This is mainly **due to the quadratic complexity of constructing Hessian matrices for the Laplace approximation** during training in a high-dimensional space.\n---\n**[Q9]** Typos.\n* Thank you for finding the typos; we have corrected them.\n---\n## References\n[1] Lisa Anne Hendricks, Kaylee Burns, Kate Saenko, Trevor Darrell, and Anna Rohrbach. "Women also snowboard: Overcoming bias in captioning models." In European Conference on Computer Vision, pages 793–811. 
Springer, 2018.\n\n[2] Pierre Stock and Moustapha Cisse. "Convnets and imagenet beyond accuracy: Understanding mistakes and uncovering biases." In Proceedings of the European Conference on Computer Vision (ECCV), September 2018.\n\n[3] Tianlu Wang, Jieyu Zhao, Mark Yatskar, Kai-Wei Chang, and Vicente Ordonez. "Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations." In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), October 2019.\n\n[4] D. Manousakas, Z. Xu, C. Mascolo, and T. Campbell. "Bayesian pseudocoresets." In Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020.\n\n[5] Shao, Rulin, et al. "On the adversarial robustness of vision transformers." arXiv preprint arXiv:2103.15670 (2021).\n\n[6] Park, Namuk, and Songkuk Kim. "Blurs behave like ensembles: Spatial smoothings to improve accuracy, uncertainty, and robustness." International Conference on Machine Learning. PMLR, 2022.\n", " We sincerely appreciate your constructive comments. We respond to the individual comments below:\n\n**[Q1]** Have you compared your method with another baseline? At least another coreset algorithm a bit more intelligent than random, such as weighted summation or using input point distances?\n\n* In the revised submission, **we added two more sophisticated coreset baselines: Herding [1] and K-center [2] (Appendix C.3, Table 6).** Herding gathers samples near the centers of the feature representations for each class, and K-center selects multiple center points such that the distance between each data point and its center is minimized while the distance between centers is maximized. We use a pre-trained ConvNet to obtain the feature representations for both methods. Similarly to how [3] reports that these baselines underperform pseudocoresets for SGD, our results also show that **pseudocoresets are better for Bayesian inference tasks.** Please refer to the revised paper for more results.\n\n\n**<HMC performances of coresets and pseudocoresets>**\n\n| | acc | nll |\n|----------|:-------------------:|:-------------------:|\n| Herding | 0.3000 $\pm$ 0.0067 | 2.0343 $\pm$ 0.0189 |\n| K-center | 0.1739 $\pm$ 0.0048 | 2.3934 $\pm$ 0.0132 |\n| BPC-rKL | 0.3334 $\pm$ 0.0064 | 1.9516 $\pm$ 0.0178 |\n| BPC-W | 0.3538 $\pm$ 0.0111 | 1.9369 $\pm$ 0.0158 |\n| **BPC-fKL** | **0.4361 $\pm$ 0.0080** | **1.7198 $\pm$ 0.0204** |\n\n---\n**[Q2]** The relationship between dataset distillation and Bayesian pseudocoresets needs to be more organized. It would be good to emphasize more the reasons they were able to make the Bayesian algorithm tractable from a divergence perspective.\n\n* Thanks for your valuable comment. As you suggested, **we revised the section in the introduction so that it better highlights the connection** between dataset distillation and Bayesian pseudocoresets, and how Bayesian pseudocoresets can be made scalable thanks to this connection. \n---\n**[Q3]** The ideas and proofs are too straightforward and the experiments are too preliminary.\n* Although we agree that the proof of Proposition 3.1 is quite straightforward, **our core contribution is in revealing connections** between existing dataset distillation and Bayesian pseudocoreset methods, not the proposition or theory themselves. 
Based on our findings, one can further develop both dataset distillation and Bayesian pseudocoresets, as we did in our paper by minimizing alternative divergence measures and adopting advances in dataset distillation to make Bayesian pseudocoreset methods more scalable. \n* To the best of our knowledge, our work is **the first to learn Bayesian pseudocoresets for real-world image classification datasets** with Bayesian neural networks, targeting SGHMC rather than vanilla SGD. We believe the experiments in the paper demonstrate the scalability of Bayesian pseudocoreset methods given our additional tricks, and we leave evaluation on ImageNet-scale data to future work. \n---\n## References\n[1] Max Welling. "Herding dynamical weights to learn." In Proceedings of the 26th Annual International Conference on Machine Learning, pp. 1121–1128. ACM, 2009.\n\n[2] G W Wolf. "Facility location: concepts, models, algorithms and case studies." 2011.\n\n[3] B. Zhao, K. R. Mopuri, and H. Bilen. "Dataset condensation with gradient matching." In International Conference on Learning Representations, 2021.\n", " The authors discuss Bayesian pseudocoreset methods that use reverse KL, KL, and Wasserstein metrics as special cases. They link these algorithms to data distillation as different posterior choices. Their main method is the Bayesian pseudocoreset algorithm that replaces reverse KL with a forward KL objective. They test these algorithms on the CIFAR-10 dataset against a random-coreset baseline. Strengths:\n\nFocusing on and unifying Bayesian coresets and distillation methods is a good direction and a positive influence on the community. The paper has good potential to be extended in future work.\n\nWeaknesses:\n\nThe baseline is a random coreset, which is a very weak baseline. However, given that they apply their algorithm to an image dataset, this might be acceptable, to be improved in future work.\n\nThe paper needs to be organized and motivated better. The authors might more clearly state the relations and related works for data distillation and Bayesian coreset methods, and then unify them. There are some optimality requirements for these relations; this could be organized better. It would be good to emphasize more the reasons they were able to make the Bayesian algorithm tractable from a divergence perspective.\n\nThe ideas and proofs are too straightforward and the experiments are too preliminary. \n\nIt would be good to do one more pass for proofreading. Have you compared your method with another baseline? At least another coreset algorithm a bit more intelligent than random, such as weighted summation or using input point distances? \n\n Yes", " The paper unifies Bayesian pseudocoresets and existing dataset distillation methods in certain settings. It investigates multiple choices of divergence measures and proposes an algorithm using the forward KL divergence to construct Bayesian pseudocoresets. - ***Strengths***:\n1. The direction of unifying existing data condensation and Bayesian pseudocoresets is well-motivated.\n2. The effectiveness of the proposed method is well illustrated.\n\t\n\t\n- ***Problems***: \n1. Proposition 3.1 relies on two factors: (1) $\Vert \theta_{t} - \theta_{t-1}\Vert$ is sufficiently small so that the expansions in eq 24 and eq 25 hold; (2) the Gaussian approximation. The second approximation might be commonly used in practice, yet this choice should be carefully elaborated on since it plays a key role throughout the paper. 
The first assumption actually limits the efficacy of the proposition to large $t$, when $\theta_t$ is close to a local optimum, instead of the whole trajectory as in Data Condensation (eq 6). Then the connection between BPC-rKL and the existing dataset distillation method is not as exact as claimed. \n2. In section 3.2, the discussed connection between BPC-W and MTT is mostly attributable to the simplified setting using the Gaussian approximation, as the distance between two Gaussians is minimized when the variance is shared and the distance between the means is minimized.\n3. Theory-wise, the paper mainly leverages the Gaussian approximation to unify Bayesian pseudocoresets with data distillation methods. The strong reliance on the Gaussian approximations without adequate elaboration downgrades its contribution.\n4. In section 5, since BPC-rKL and BPC-W are not exactly equivalent to existing methods, the dataset distillation methods and the original Bayesian pseudocoresets algorithm are expected as baselines but are missing, especially since the implementations of these algorithms are available.\n\n***Minor Issues***:\n1. Line 422, I think it should be $\approx$ rather than =.\n2. Line 419, the expectation seems to be over $\pi_x$ rather than $\pi_u$. 1. The assumption of proposition 3.1 should be stated more clearly, i.e., that $\Vert \theta_{t} - \theta_{t-1}\Vert$ should be sufficiently small.\n2. Results shown in figure 1 are not very illustrative due to their size. Could you pick some representatives and move the rest to the appendix? Could you demonstrate the differences quantitatively rather than qualitatively? \n3. Could the performance gap between using the pseudocoreset and the entire dataset be indicated by the divergence?\n4. Could you clarify the bias amplification on the downstream task mentioned in line 299? The Bayesian pseudocoreset algorithm [Dionysis et al. 2020] claims the weights w could potentially help with reducing the computation burden when few nonzero entries are present. Is there any reference or empirical result substantiating the claim that w doesn't help with the performance? The limitation is well discussed. The negative societal impact is not applicable.", " This paper reinterprets heuristic dataset distillation methods as approximate Bayesian pseudocoreset procedures, with different choices of divergence between the true posterior and the pseudocoreset approximation. Motivated by this analysis, they propose a new Bayesian coreset procedure based on an approximation to the forward KL divergence, and show empirically that it outperforms alternatives. This paper offers an insightful and unifying new view on heuristic dataset distillation techniques, which leads to improved practical methods. It is original (to my knowledge) and clearly written, and a significant contribution both to the dataset distillation and the Bayesian coresets literatures.\nIt suffers from two main weaknesses in terms of quality. First, the theoretical analysis is extremely rough, involving severe approximations (essentially treating the posterior as a Gaussian with zero variance, i.e. a delta function). There is a rich literature on Gaussian approximations to the posterior, including in particular Bernstein von Mises theorems. Moreover, SGD can be viewed as providing a Gaussian approximation to the Bayesian posterior (Mandt, Hoffman, Blei, Stochastic Gradient Descent as Approximate Bayesian Inference). 
It would be valuable to establish more general and rigorous results in terms of Gaussian posterior approximations, and then derive the heuristic approximations as limiting cases when the covariance goes to zero (i.e. when the number of datapoints goes to infinity). The second major weakness is the empirical analysis. Despite motivating the work by Bayesian inference, there is no rigorous evaluation of the quality of the posterior approximation (e.g. is the uncertainty correct), only evaluation of accuracy and log likelihood. Is there a general recipe for constructing Bayesian pseudocoresets using Gaussian approximations? In particular, can you enumerate the different choices of how the trajectory is managed, what divergence to pick, etc. Can you quantify the various sources of error in this approximation?\n\nSmall Suggestions:\nEquation 1 has a typo, the comma in the middle (which makes it look as if the prior pi_0 is defined to be the posterior). The same typo appears in Eqn. 2\n\nEquation 4: I believe u_m should not be bold, since it's one datapoint, and the gradient on the RHS should be with respect to u_m, not u, since the overall gradient should have dimensions the size of a single datapoint.\n\nLine 67, the index should run to capital N, not lowercase n. Might want to do the same in the contrastive divergence section, for consistent notation.\n\nLine 89 - Typo: Traning -> Training\n\nLine 180 - Check the grammar here and in the following lines (extends -> extend, etc.)\n\nLine 193 - In fact, [16] and other Bayesian coreset methods demonstrate applications to real data (some real data is low dimensional, believe it or not).\n\nLine 215 - this paragraph is very unclear to me.\n The authors are honest about some serious limitations of the method's performance. I think it may be worth discussing privacy in the societal impacts section; pseudocoresets may potentially help increase data privacy while enabling downstream analysis on the one hand, while on the other hand their privacy properties have not been thoroughly studied", " The paper studies learning Bayesian pseudocoresets, i.e., the task of learning a small set of "data" such that conditioning on it yields a posterior close to the posterior conditioned on the full dataset.\nIt establishes the framework in which two important ingredients are (1) the choice of divergence measure and (2) the choice of variational approximation.\nFollowing this, it first recasts/reinterprets two existing, successful dataset distillation methods as Bayesian pseudocoreset learning with reverse KL/Wasserstein distance and Gaussian approximations to the posterior.\nThen it proposes to use the forward KL, which in theory doesn't suffer from the mode-seeking behaviour of reverse KL.\nOn a few real-world datasets, the proposed choice of forward KL is better than existing choices of divergence measures in terms of NLL and accuracy. Pros\n- The paper studies an important problem of scaling Bayesian methods to large datasets by learning Bayesian pseudocoresets. This approach can benefit a wide range of Bayesian inference methods.\n- The paper establishes the framework of this problem, which is novel, and studies the relation to existing dataset distillation methods. 
The paper is significant as it (1) gives justification for some existing heuristics-based methods and (2) paves the way for further research in this direction.\n- The paper is clearly written with good quality.\n- Related works are discussed adequately.\n\nCons\n- The experiments might be improved a bit.\n - I'd like to see at least one synthetic dataset where the posterior is tractable, and see how the posterior each method learns differs from the true one.\n - Some classic Bayesian inference datasets could be used so that naive HMC results could be provided as a reference.\n\nWriting\n- Equation (1) is broken: I assume the $\pi_0$ in the middle should be removed. - There are some discussions regarding contrastive divergence in section 3.3. I wonder if the divergence from [1] is related there, or maybe a good choice of divergence in this framework.\n\n[1] Ruiz, Francisco, and Michalis Titsias. "A contrastive divergence for combining variational inference and mcmc." In International Conference on Machine Learning, pp. 5537-5545. PMLR, 2019. Limitations and potential negative societal impact are addressed." ]
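Editorial note: the last review above motivates the forward KL by the mode-seeking behaviour of reverse KL. A small numerical illustration of that asymmetry, fitting a single Gaussian to a bimodal target on a grid (all constants illustrative):

```python
import numpy as np

# Bimodal target and a grid search over single-Gaussian approximations.
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

p = 0.5 * gauss(x, -3.0, 0.7) + 0.5 * gauss(x, 3.0, 0.7)  # target density

def kl(a, b):
    # Numerical KL(a || b) on the grid; floor the denominator for stability.
    mask = a > 1e-12
    return np.sum(a[mask] * np.log(a[mask] / np.maximum(b[mask], 1e-300))) * dx

best_fwd, best_rev = None, None
for mu in np.linspace(-4, 4, 81):
    for sigma in np.linspace(0.3, 5.0, 48):
        q = gauss(x, mu, sigma)
        f = kl(p, q)  # forward KL(p || q): penalizes missing mass anywhere
        r = kl(q, p)  # reverse KL(q || p): penalizes putting mass where p is small
        if best_fwd is None or f < best_fwd[0]:
            best_fwd = (f, mu, sigma)
        if best_rev is None or r < best_rev[0]:
            best_rev = (r, mu, sigma)

print("forward-KL optimum (mu, sigma):", best_fwd[1:])  # broad, covers both modes
print("reverse-KL optimum (mu, sigma):", best_rev[1:])  # narrow, locks onto one mode
```

Running this, the forward-KL fit sits near mu = 0 with a large sigma (mode-covering), while the reverse-KL fit collapses onto one of the two modes (mode-seeking), which is the behaviour the review alludes to.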
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "iJ3R-OBEP4j", "Fs6Y5fvWuNo", "MGrBO8mlcpm", "U2JLGJvw_G6", "xgWgt-cLvwo", "rRJboalgx30", "nips_2022_bg7d_2jWv6", "nips_2022_bg7d_2jWv6", "an6PijyU88E", "zVt9vEI3sUu", "hNvig7OzjC5", "hNvig7OzjC5", "k-FHeUUZ5qx", "nips_2022_bg7d_2jWv6", "nips_2022_bg7d_2jWv6", "nips_2022_bg7d_2jWv6", "nips_2022_bg7d_2jWv6" ]
nips_2022_OMZG4vsKmm7
Domain Adaptation under Open Set Label Shift
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS), where the label distribution can change arbitrarily and a new class may arrive during deployment, but the class-conditional distributions $p(x|y)$ are domain-invariant. OSLS subsumes domain adaptation under label shift and Positive-Unlabeled (PU) learning. The learner's goals here are two-fold: (a) estimate the target label distribution, including the novel class; and (b) learn a target classifier. First, we establish the necessary and sufficient conditions for identifying these quantities. Second, motivated by advances in label shift and PU learning, we propose practical methods for both tasks that leverage black-box predictors. Unlike typical Open Set Domain Adaptation (OSDA) problems, which tend to be ill-posed and amenable only to heuristics, OSLS offers a well-posed problem amenable to more principled machinery. Experiments across numerous semi-synthetic benchmarks on vision, language, and medical datasets demonstrate that our methods consistently outperform OSDA baselines, achieving $10$--$25\%$ improvements in target domain accuracy. Finally, we analyze the proposed methods, establishing finite-sample convergence to the true label marginal and convergence to the optimal classifier for linear models in a Gaussian setup. Code is available at https://github.com/acmi-lab/Open-Set-Label-Shift.
Accept
The paper addresses an interesting domain adaptation question and proposes a novel and elegant solution supported by relevant theory. Although some issues have been raised, all reviewers agree that the paper is worth publishing, and we expect the authors to take into account the comments of the reviewers (e.g., discussing limitations of PULSE, checking positivity conditions...)
train
[ "OYfXrGIhrLh", "eC_-qIy3tHQ", "mhBEuO8mnNi", "JqKVglh1cj6", "Ix1AR9tAGUM", "fV_wcF2D0i", "_jhz1r9uyD", "aj9zuF1NMHf", "emBQqZQXugo", "Qi_RMEuoYxv", "SMNZ5P0Vg3f4", "rLUs7A2Ke9Z", "tphZ7Wen9MA", "CZjYFs7D1U", "YmMQwFyBxX8", "VLds9CgKWS", "8EymETiYcKF", "Zu4Nh3_JjY5" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi,\nThanks for the reply, your reply has covered most of my main concerns, and I will raise my assessment.", " Thanks again for your thoughtful review! Since the discussion window is closing, we just wanted to check in to see if our replies successfully addressed your primary concerns or if there is anything else that we can add in the remaining time that might improve the paper such that you might be moved to increase your score. Please let us know if there is anything else that we can do on our end. ", " Hi, since we haven't heard back and the window for discussion is coming to a close we just wanted to check in to see if our replies and improvements to the paper have successfully addressed your concerns or if there is anything else that we contribute in the remaining time to improve the draft to your satisfaction. ", " Hi, thanks again for your thoughtful review. We would just like to check in to see whether our replies and improvements to the manuscript have successfully addressed your key concerns. Can you please let us know if you are moved to improve your assessment or if there are any other concerns that we might address in the remaining time that could improve the paper to your satisfaction? Thanks!", " Thank you for the detailed response addressing many of my concerns! I am in agreement with these responses, and appreciate the work you have done to integrate these changes into the paper - I believe the paper is much clearer narratively for it. \n\nI have no other comments or questions for now, but will let you know if I do. \n", " **Does all the 5 tasks in the experiments satisfy \"strong positivity\"? Could the author include benchmarks not satisfying this condition?** \nThanks for this suggestion. We have now included experiments on an age-prediction task (UTK-Face) in Appendix F.9 in Table 6. We observe that the prevalence of the novel class as estimated with our PULSE framework is significantly closer to the true estimate. Additionally target classification performance of OSLS is similar to that of $k$PU both of which significantly improve over domain discriminator and source only baselines. \n\t\n| Method | Acc | MPE|\n|--------------|-------|------|\n| Source Only | 50.1| 0.11 |\n| Domain Disc. | 52.4 | 0.08 |\n| kPU | 56.7 | 0.11 |\n| PULSE (Ours) | 56.8 | 0.01 |\n\n\n\n\n**Minor issue, typos** \nThanks for catching the typos. We have fixed them all in the updated version. \n\n**The paper seems not to define what the metric novel prevalence estimation is.** \nFor novel class prevalence estimation, we report the absolute difference between the true and estimated prevalence of the novel class in the target (Lines 306-307). \nFor previously seen classes, we report the $\\ell_1$ difference between the estimated marginal and true label marginal among previously seen classes in the target. \n", " Thank you for your positive assessment and constructive feedback on our work. \n\n**However, I find the method section a bit obfuscated. One reason is that there is no headings or subsection title to remind the reader what is the focus of each paragraph... I would like to suggest the author organize section 6 and each of its subsections to have an overview-and-details structure.** \nThanks for your suggestion. As per your suggestions, we have re-structured and re-written Section 6 of the paper to improve the exposition. We added a high-level description of the algorithm with separate paragraph titles detailing the algorithm. 
\n\n**My main concern is that the paper's significance is constrained by the practicality of the proposed problem. The problem setting is a bit artificial to me. I am not sure how practical it is that we assume the target domain has exactly one novel class unseen. The experiment is kind of synthetic/semi-synthetic since the author chooses source and novel classes randomly.** \n\nWe would like to make several clarifications. First, in our work, the novel class can be a union of multiple classes. The goal of our work is to reject all of them into a separate class (i.e., class k+1), which can further trigger appropriate auditing responses. For example, if the prevalence of the novel class in the unlabeled target data is significant, the practitioner can choose to acquire labels for the identified examples from the novel class. \n \nSecond, despite its simplicity, the OSLS setting is of significant practical relevance. For example, biologists often have a labeled database with known cell types (source) and an unlabeled database of tissue cells (target) which may contain novel cell types [Cao et al 2019, Elkan and Noto 2008]. They can use the data to train a classifier model and then deploy the model to not only identify cell types in a target tissue but also to discover previously unknown cell types. The OSLS problem can also routinely appear in medical diagnosis, where diseases cause symptoms [Lipton et al. 2019]. Along with the changing prevalence of known diseases, novel diseases can appear over time. \n\nThis has been the motivation behind our choices of the BreakHis (tumor cell classification) and DermNet (skin disease classification) datasets. For example, DermNet is motivated by scenarios where doctors have labeled data of several skin diseases (source) and may wish to deploy a model trained on the source data on unlabeled data where, along with the changing prevalence of previously seen diseases, images with new diseases or no disease may appear. \n\nThird, similar to previous work in simulating open set domain adaptation problems, and due to the lack of benchmark datasets with previously unseen classes, we randomly divide the classes in the datasets we consider into seen and novel classes. By repeating experiments for different novel fractions and with different seeds, we hope to cover numerous diverse settings. \n\nOverall, instead of working on general domain adaptation problems which are mathematically ill-posed, our goal is to expand the umbrella of structured distribution shift settings, which allow for the development of principled machinery. As discussed in our work, the OSLS setting introduced in our work is strictly more general than the structured label shift and PU learning settings widely explored in the past literature [56, 45, 4, 1, 27, 77, 24, 22, 63, 36, 6, 7, 23, 21]. \n\n**Does OSLS apply to medical applications such as COVID diagnosis?** \n\nWe agree that the OSLS assumption is strong and may not hold exactly in practice. However, as discussed before, we believe that rigorously understanding these idealized settings is a fundamental building block toward more complex settings. Moreover, in many important problems (e.g., medical diagnosis during an epidemic), OSLS is nevertheless a useful model because the prevalence is likely to change faster than the conditional probabilities of symptoms given disease. \n\nStrong positivity may be satisfied if we aim to distinguish symptomatic COVID from flu and normal people (e.g. 
if fever is not common with some flu patients but common with a large population with symptomatic COVID, then it may give us a sub-domain specific to the flu). Depending on the task, strong positivity can be best understood by the practitioner. As discussed above, a more relevant example is distinguishing cell types in biology, where an unlabeled database of tissue cells (target) may contain novel cell types, and a labeled database with known cell types (source) can be used to formulate it as an OSLS problem. \n", " **How about results on general domain adaptation datasets like USPS, OFFICEHOME, DOMAINNET, etc.? Since the domain discrepancy the authors propose in the paper is mostly between the positive samples and the unlabeled samples ... I have a slight concern about whether this problem would degrade into open-set classification (since the domain discrepancy is limited) instead of open-set domain adaptation, as we can find in the experiments** \n\nWe think there may be some misunderstanding here. Domain adaptation problems arise when the source distribution $p_s(x,y)$ shifts to a target distribution $p_t(x,y)$. In a closed-set setting, the distribution shift can be induced by (i) covariate shift, where $p(x)$ shifts but p(y|x) remains invariant from source to target; or (ii) label shift, where $p(y)$ shifts but $p(x|y)$ remains invariant from source to target (refer to Kun Zhang et al. 2013 [78]). \n\nThe OSLS setting extends the label shift setting (the latter) to scenarios where, along with shifts in the label distribution among source classes, previously unseen classes may be observed. Note that even without the novel class, there is a distribution shift among previously seen classes due to the shifting prevalences of common classes (i.e., label shift). \n\nAdditionally, the ad-hoc simplification of directly distinguishing previously seen versus novel classes, without catering to the label shift among source classes, doesn't work in practice. We empirically demonstrate this by showing that simple domain-discriminator-based baselines perform poorly in the OSLS settings. \n\nTo summarize, in our work, we focus on label shift (changing label prevalence) instead of covariate shift (changing appearance) among previously seen classes. Our goal is to expand the umbrella of structured distribution shift settings, which allow for the development of principled machinery, instead of working on general domain adaptation problems which are mathematically ill-posed. As discussed in our work, the OSLS setting introduced in our work is strictly more general than the structured label shift and PU learning settings widely explored in the past literature [56, 45, 4, 1, 27, 77, 24, 22, 63, 36, 6, 7, 23, 21]. \n\nWe hope that our work provides solid ground for principled algorithmic developments to further expand the umbrella of structured shifts. In particular, one may extend the OSLS problem to more general settings for future work where, along with the shifting label distribution from source to target, p(x|y) can also deviate from source to target within some divergence constraint. As a first step, we performed preliminary experiments on the FMoW dataset from Koh et al. 2021 [3] (see common response). \n\n\n\n**In the paper, line 188, Section 5: why can the OSLS problem be reduced to k PU problems only when the strong positivity condition is satisfied? It seems most OSLS problems can all be reduced into k PU problems that way via equation 2.** \nYes, mathematically any OSLS problem can be thought of as k PU problems as per eq. (2), as sketched below. 
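To spell out the reduction just referenced, the following is a hedged reconstruction of the mixture identity behind eq. (2); the notation follows this response, and the paper's exact presentation may differ.

```latex
% Target marginal as a mixture of seen classes and the novel class
% (OSLS assumption: p_t(x | y = j) = p_s(x | y = j) for seen classes j):
p_t(x) \;=\; \sum_{j=1}^{k} p_t(y=j)\, p_s(x \mid y=j)
        \;+\; p_t(y=k{+}1)\, p_t(x \mid y=k{+}1).
% Fixing one seen class j and grouping everything else gives a PU instance:
p_t(x) \;=\; \alpha_j\, p_s(x \mid y=j) \;+\; (1-\alpha_j)\, p_t(x \mid y \neq j),
\qquad \alpha_j := p_t(y=j),
% so each alpha_j is a mixture-proportion-estimation problem with
% "positives" drawn from p_s(x | y = j) and "unlabeled" data from p_t(x).
```

Written this way, the decomposition itself requires nothing beyond the OSLS assumption; it is the identifiability of each $\alpha_j$ that brings in the extra condition, as the answer continues below.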
However, for the identifiability of each of these PU problems, we need the irreducibility assumption (Bekker and Davis, 2020). Put simply, for the individual PU problems defined for source classes $j \in Y_s$, we need the existence of a sub-domain $X_j$ such that we only observe examples of class $j$ in $X_j$. Collectively, the $X_j$ give us the $X_{sp}$ needed for strong positivity. We have clearly discussed this in Appendix A.1. \n", " **And the author should explain more about the connection between α and the novel target proportion, and why they are equal and can be derived this way (why not use the estimate from Algorithm 2 but Algorithm 4 instead in Algorithm 3? However, I found the resampling-based estimate should be derived from Algorithm 2 as described in line 805).** \n\nThanks for your suggestion. We have improved the exposition in Section 6 of the paper to clarify this confusion. \n\nWe use Algorithm 2 to only estimate the relative proportion of previously seen classes (i.e., all classes except the novel class). As mentioned in Section 5 (and shown in our experiments), using Algorithm 2 to directly estimate the proportion of the novel class as $\widehat p_t (y = k+1 ) = 1 - \sum_{j = 1}^k \widehat p_t( y =j) $ leads to significant under-estimation of the novel proportion due to error accumulation (because of the over-estimation bias of mixture proportion estimators in PU learning). \n\nHence, to estimate the proportion of the novel class we use Algorithm 4. In particular, after estimating the label shift among source classes with Algorithm 2 (i.e., the relative proportion $\frac{\widehat p_t(y = j)}{\sum_{i =1}^{k} \widehat p_t( y = i)}$ for all $j \in Y_s$), we construct the label-shift-corrected source distribution $p_s^\prime (x)$. We leverage this to obtain a single PU learning problem which we solve using Algorithm 3. This is based on the reduction presented in equation 3. We estimate the proportion of the novel class with Algorithm 4, and the corresponding guarantee is presented in Appendix D.3 (Theorem 5). \n\n**Both in algorithm 2 and algorithm 4, the z is ambiguous in the paper; are they discrete or continuous variables, and do we need to compute all of the values from 0 to 1 for estimation? Will that increase the computation burden?** \n\nWe have now clarified the notation Z used in Algorithms 2 and 4 in Appendix C. $Z$ simply denotes the output of the underlying classifier on input $X$. For estimating the proportions of different classes, instead of operating in the input space, we operate in the output space (push-forward) of the classifier. The $Z$'s capture the prediction probability and hence are continuous variables that take values in [0,1]. The computation overhead is negligible, as the time required to compute the $Z$'s is just the time required for a single forward pass with the underlying classifier.\n\n**Also, for the c in algorithm 2, line 6: do they have a connection with the threshold in Theorem 1, since the notation is all c?**\n\nYes, they are related. In fact, in the proof of our Theorem 3 (the formal statement of Theorem 1), we show that the $\hat c_j$ concentrate to $c^*_j$ (defined in Theorem 3). Intuitively, $c^*_j$ captures the threshold on [0,1] for the top bin such that the ratio of the fractions of positive and unlabeled points receiving scores above the threshold $c_j^*$ is minimized. With Algorithm 2, we aim to ideally estimate these $c_j^*$ for the source classifier to estimate the relative proportion of previously seen classes in the target (a simplified sketch of this threshold search follows below). 
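The following is a simplified sketch of the threshold search just described: a BBE-style upper-confidence-bound minimization over candidate thresholds $c$ on the push-forward scores. The penalty constants follow Garg et al. 2021b only loosely and should be treated as illustrative, not as the paper's exact Algorithm 2.

```python
import numpy as np

def bbe_mixture_proportion(pos_scores, unl_scores, delta=0.1, gamma=0.01):
    """Estimate the fraction of 'positives' in the unlabeled sample from
    classifier scores by scanning thresholds c and minimizing an upper
    confidence bound on q_u(c) / q_p(c) (BBE idea, sketched)."""
    pos_scores = np.asarray(pos_scores)
    unl_scores = np.asarray(unl_scores)
    n_p, n_u = len(pos_scores), len(unl_scores)
    slack = np.sqrt(np.log(4.0 / delta) / 2.0)

    best_ucb, best_est = np.inf, 1.0
    for c in np.linspace(0.0, 1.0, 101):
        q_p = np.mean(pos_scores >= c)   # positives scoring above c
        q_u = np.mean(unl_scores >= c)   # unlabeled points scoring above c
        if q_p <= 0:
            continue
        est = q_u / q_p
        # Penalize thresholds whose top bin holds too few points.
        ucb = est + (1.0 + gamma) * slack * (1.0 / np.sqrt(n_p)
                                             + 1.0 / np.sqrt(n_u)) / q_p
        if ucb < best_ucb:
            best_ucb, best_est = ucb, est
    return best_est

# Toy check: unlabeled data is 30% positive-like, 70% negative-like.
rng = np.random.default_rng(0)
pos = rng.beta(8, 2, size=5000)                      # scores of known positives
unl = np.concatenate([rng.beta(8, 2, size=1500),     # hidden positives
                      rng.beta(2, 8, size=3500)])    # hidden negatives
print(bbe_mixture_proportion(pos, unl))              # ~0.3 if the top bin is pure
```

The same search is what makes the learned threshold a quantity with $1/\sqrt{n}$-style guarantees rather than a tuned hyperparameter, which is the point made in the threshold discussion below.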
With Algorithm 4, we aim to estimate the analogous threshold for the prevalence of the novel class in the target. \n", " We thank the reviewer for their detailed comments and positive assessment. \n\n**Clarification on typos. The authors need to better organize the paper and improve the mathematical notation, e.g., in line 48 "the matrix submatrix" is not so clear.** \n\nWe have fixed the typos. Thanks for catching them. \n\n**For the extended BBE algorithm 2, which is the importance weight w? (c^j I assume); the author should be more clear.** \nAlgorithm 2 estimates the label shift among previously seen classes in the source, which is denoted by $\widehat p_t^\prime (y)$ in Algorithm 1, step 3, and in the output of Algorithm 2. We have clarified this in the main text. \n\n**As we can find from Theorem 1, the threshold would be very important, so the author should discuss more about the threshold, e.g., is it a hyperparameter? Or is there any trade-off between varying the threshold and the performance on target data?** \n\nPULSE builds on top of BBE, proposed in Garg et al. 2021b, for target marginal estimation. In a PU learning problem (OSLS with k=1), given positive and unlabeled data, BBE estimates the fraction of positives in the unlabeled data in the push-forward space of the classifier. In particular, instead of operating in the original input space, BBE maps the inputs to one-dimensional outputs (i.e., a score between zero and one), which is the predicted probability of an example being from the positive class. Based on your suggestion, we have now included an intuitive description of the BBE algorithm. \n\nThe efficacy of BBE relies on the existence of a threshold on the probability scores assigned by the classifier such that the examples mapped to a score greater than the threshold are *mostly* positive. This is referred to as the *top bin property*. If such a threshold exists, the BBE algorithm recovers (or learns) such a threshold with $1/\sqrt{n}$ guarantees, as shown in Theorems 3 and 5. In more general cases, when such a threshold doesn't exist, our guarantees capture the tradeoff due to the proportion of negative examples in the top bin (bias) versus the proportion of positives in the top bin (variance).\n\nTo elaborate, BBE minimizes the upper confidence bound in Step 6 of Algorithm 3 (or Step 3 of Algorithm 4), which provides us with the guarantee in Theorem 3 (an extension of Theorem 1 from Garg et al. 2021b). Importantly, this theorem shows that minimizing the upper confidence bounds allows us to **learn** the threshold. Note that this threshold is not a hyperparameter, and the tightness of recovering this threshold depends on the hyperparameters $\gamma$ and $\delta$. Varying the $\delta$ used for the high-probability bound could provide some trade-off between the tightness of the bound and the associated guarantee. However, as highlighted in Garg et al. 2021b, the trade-off is insignificant, and we use the same values as in Garg et al. 2021b. \n\n**I also wonder whether directly using the source classifier on the target in Algorithm 2, line 6, is guaranteed by this theorem?** \nIn general scenarios, our guarantees in Theorem 3 capture the tradeoff due to the proportion of negative examples in the top bin (bias) versus the proportion of positives in the top bin (variance). Additionally (as you suggested), we provide empirical evidence for the top bin property with a source classifier in Figure 2. 
We elaborate more on this experiment in Appendix D.1.\n\n**Since the PULSE framework is built on the strong positivity condition to guarantee target identifiability, I found no regularization for the data in this paper, so the author also needs to explain why their algorithm can achieve this strong positivity condition** \nWe think there may be some misunderstanding here. Strong positivity is an assumption on the data and not on the underlying algorithm. Put differently, strong positivity lays down the conditions that the underlying source and target data should satisfy *in order for it to even be possible (for any algorithm)* to learn the parameters of interest in the OSLS problem. \n\n**In Algorithm 2, the author should expand the loss function in line 6 and line 10 to make it clearer.**\nThanks for your suggestion. We have now made this clear in Appendix C. In Algorithm 3, the loss function can be any standard classification loss. As mentioned in Section 2 (Lines 124-128), we use the cross-entropy loss.\n", " **Limitations of PULSE** \n\nThanks for your suggestion to include the limitations of PULSE. We have added the limitations of PULSE in Appendix C.2. We will include them in the main paper in the final version. \n\n**It could also be useful to discuss the types of models or datasets that PULSE is expected to work on**\n\nSince PULSE inherits the limitations of CVIR and BBE, it is expected to work when the black-box classifiers (i.e., the source classifier and the domain-discriminator classifier) identify an almost pure top bin, i.e., they identify a threshold on the probability score such that examples that get a score higher than the threshold are mostly positive. While this property seems to be satisfied across different datasets spanning different modalities and applications (see the CIFAR-10 illustrations in Figure 2 in the appendix), a failure to identify an almost pure top bin can degrade the performance of BBE and, hence, of our PULSE framework.\n\n **the computational efficiency with regard to larger models** \n\nAcross our experiments, to adapt the source classifier to the target, we train the domain-discriminator classifier for the same number of epochs as we use to train the source classifier. Hence, as with typical unsupervised domain adaptation methods, the compute time required is approximately doubled. \n\n**the amount of labeled data of the target domain - for example, with no labels it would be difficult to approximate what the label shift is.** \n\nWe do not need labeled data from the target domain to estimate the label shift. We only need labeled data from the source and unlabeled data from the target to estimate the target marginal (i.e., the label shift among source classes and the prevalence of the novel class). Our rates in Theorem 1 (and Theorem 5) hint at the number of unlabeled samples from the target required to get good estimates. \n", " \nWe thank the reviewer for their positive and thoughtful feedback and for championing our paper. \n\n**A worthwhile addition in the related work section would be to make it more clear exactly how your setting differs from each of the main categories defined there**\nThanks for your suggestion. We have added a few lines highlighting the distinction. Here, we lay out the main differences. First, DA methods do not handle previously unseen classes in the target. 
Second, while OSDA methods handle previously unseen classes in the target, existing OSDA methods are heuristic in nature, addressing settings where the right answers seem intuitive but are not identified mathematically. Third, PU learning is a base case of OSLS, i.e., when k=1.\n \n**Further, it may be worth a few sentences relating why identifiability is useful and related to the remainder of the paper; Section 4 felt significantly out of place compared to the rest of the paper.**\nIdentifiability conditions, in particular sufficient conditions, are needed to understand the nature of the assumptions that our datasets (labeled source and unlabeled target) shall satisfy to allow us to tackle the OSLS problem. Additionally, the constructive solution for identifiability under sufficient conditions allows us to start developing practical algorithms to tackle the problem. \n\n**How robust is PULSE expected to be in general OSDA problems, not just label shift?** \nPer standard impossibility results [8], a single domain adaptation method cannot handle the different kinds of distribution shift problems that may arise. This is empirically corroborated in Sagawa et al. 2022 [57], where no single method provided consistent gains over the ERM baseline. \n\nThus, instead of working on general domain adaptation problems, which are mathematically ill-posed, our goal is to expand the umbrella of structured distribution shift settings, which allows for the development of principled machinery. As discussed in our work, the OSLS setting introduced in our work is strictly more general than the label shift and PU learning settings widely explored in the past literature [56, 45, 4, 1, 27, 77, 24, 22, 63, 36, 6, 7, 23, 21]. \n\nWe hope that our work provides solid ground for principled algorithmic developments to further expand the umbrella of structured shifts. In particular, for future work one may extend the OSLS problem to more general settings where, along with the shifting label distribution from source to target, p(x|y) can also deviate from source to target within some divergence constraint. \n\nAs a first step, we performed preliminary experiments on the FMoW dataset from Koh et al. 2021 [39] (see common response).", " We thank the reviewer for their constructive comments and positive assessment. We are glad that you find the OSLS problem setting interesting. \n\n**All experimental results are semi-synthesized and are on relatively small datasets, such as CIFAR-10, CIFAR-100. It is good to have results on some popularly used datasets for domain adaptation such as DomainNet.**\nAs per your suggestion and the suggestions of Reviewers Pd4h and dUdt, we have included additional experiments on the UTK Face dataset (age prediction from images) and the FMoW dataset (please see the common response above). Additionally, in our experiments we have large-scale datasets like Entity30 (a subset of ImageNet), BreakHis (tumor cell classification), and DermNet (skin disease classification). All three of these datasets are DomainNet-scale datasets (in terms of input size and the number of images per class). \n\nOur work is primarily focused on distribution shift settings where (i) p(x|y) remains invariant across common classes in source and target; (ii) novel classes can show up in the target. Due to the nature of the problem setting in OSLS, we do not include experiments on DomainNet across different domains (e.g. sketch, real images), which doesn’t allow exploration of structured shifts. 
Instead of working on general domain adaptation problems, which are mathematically ill-posed, our goal is to expand the umbrella of structured distribution shift settings, which allows for the development of principled machinery. As discussed in our work, the OSLS setting introduced in our work is strictly more general than the label shift and PU learning settings widely explored in the past literature [56, 45, 4, 1, 27, 77, 24, 22, 63, 36, 6, 7, 23, 21]. \n\nOur work takes the first step in tackling the OSLS problem for large-scale datasets with deep learning. We leave for future work investigations extending the OSLS problem to settings where, along with the shifting label distribution from source to target, p(x|y) can also deviate from source to target within some divergence constraints. As a first step, we performed preliminary experiments on the FMoW dataset from Koh et al. 2021 [3] (see common response).\n\n**The writing is not easy to follow, the references (e.g., Algorithm 2&3) switch from the main paper and supplementary materials without clear instruction.**\nThanks for the feedback. We have significantly updated the manuscript presentation in Section 6 with clear references to Algorithms 2 and 3 in the main paper. Algorithms 2 and 3 are built on top of the BBE and CVIR procedures proposed in previous work. Due to space constraints, we only include the description necessary to understand the PULSE framework, largely treating the BBE and CVIR procedures as black boxes. We have included a detailed description of these algorithms in Section C of the appendix. \n\n**The paper mentions that \"In future work, we hope to bridge the gap between the necessary and sufficient identifiability conditions.”**\nYes, as mentioned in the conclusion and future work, we hope to bridge the gap between the necessary and sufficient identifiability conditions. In our work, we illustrate the existing gap with examples (Examples 1 and 2 in App B.1) in a tabular setting. \n", " We would like to thank the reviewers for their detailed and thoughtful feedback. We are glad to see that all 4 reviewers recommend acceptance, with the reviewers recognizing the ingenuity in scoping the OSLS setup (R1, R2, R3, R4), the novelty of the PULSE framework (R1, R2, R3), and the significance of the theoretical and empirical results (R1, R2, R3, R4).\n\n1. Inspired by the feedback from the reviewers, we have significantly improved the exposition of the paper, specifically Section 6 of the paper. We have also updated the description of Algorithms 2, 3, and 4 in Appendix C of the paper. \n\n2. As per suggestions from the reviewers, we have also included results on two more datasets: \n\n - **Age prediction task. As per the suggestion of Reviewer dUdt, we have now included experiments on an age-prediction task (UTK-Face) in Appendix F.9 in Table 6 to simulate problems where strong positivity might not hold.** We observe that the prevalence of the novel class as estimated with our PULSE framework is significantly closer to the true value. Additionally, the target classification performance of PULSE is similar to that of $k$PU, both of which significantly improve over the domain discriminator and source-only baselines. \n\n| Method | Acc | MPE |\n|--------------|-------|------|\n| Source Only | 50.1 | 0.11 |\n| Domain Disc. | 52.4 | 0.08 |\n| kPU | **56.7** | 0.11 |\n| PULSE (Ours) | **56.8** | **0.01** |\n\n - **FMoW-Wilds. 
We include this experiment to present preliminary OSLS results on a real-world problem.** For the source domain, we use the training data, and for the unlabeled target domain, we use (i) the ID val dataset and (ii) the OOD val dataset. In the OOD setting, by default, going from the train data to OOD val presents a shift in the label distribution. To simulate the open set problem, we restricted the source classes to 46 and used the data from the remaining 16 classes as data from a novel class. Note that here, along with the changing prevalence of different classes from source to target, in the OOD case p(x|y) also varies slightly because the source and target data were collected over different (non-overlapping) years. Note that for the OOD case, we also compare with an oracle PULSE method where we have access to the true target marginal. The table below shows preliminary results: \n\n| Method | FMoW-Wilds (ID) Acc | FMoW-Wilds (ID) MPE | FMoW-Wilds (OOD) Acc | FMoW-Wilds (OOD) MPE |\n| ---- | ---------- | ---------- | ---------- | ---------- |\n| Source Only | 35.4 | - | 35.7 | - |\n| Domain Disc. | 45.2 | 0.22 | 31.9 | 0.6 |\n| kPU | 55.7 | 0.44 | 34.1 | 0.51 |\n| PULSE | **62.4** | 0.15 | 33.9 | 0.52 |\n| PULSE (Oracle target marginal) | - | - | 46.5 | - |\n\n We respond further to each reviewer’s individual concerns in the respective threads. \n", " This paper introduces the problem of domain adaptation under open set label shift (OSLS), where the class-conditional distributions p(x|y) are domain-invariant, and p(y) can change. Domain adaptation under label shift and positive-unlabeled (PU) learning can both be considered special cases of OSLS. This paper also provides the identifiability of OSLS, including a necessary condition (weak positivity) and sufficient conditions (strong positivity). Under the strong positivity condition, the OSLS problem can be broken into k PU problems. The PU learning algorithms cannot scale to datasets with a large number of classes because of error accumulation. This paper then proposes the PULSE framework to solve this problem by exploiting the joint structure of the problem with source-class re-sampling. Experiments across 7 semi-synthetic benchmarks show that the proposed PULSE consistently outperforms OSDA baselines. Strengths\n- Open Set Label Shift (OSLS) is an interesting problem, assuming p(x|y) is the same and p(y) is changing.\n- Theories on identifiability of the problem and convergence analysis\n- New algorithm PULSE to solve the problem\n- Good experimental results\n\nWeakness\n- All experimental results are semi-synthesized and are on relatively small datasets, such as CIFAR-10, CIFAR-100. 
It is good to have results on some popularly used datasets for domain adaptation, such as DomainNet.\n- Make the writing better\n\n - The benchmarks used in this paper are all semi-synthesized, and thus lack evidence from real-world applications\n- The paper mentions that \"In future work, we hope to bridge the gap between the necessary and sufficient identifiability conditions.\"\n", " This paper scopes the common Open Set Domain Adaptation problem to specifically consider Open Set Label Shift (OSLS) problems, where $p(x|y)$ is constant, but the class proportions may change between source and target, and there may be a new unseen class added in testing. In this setting, the goal is to identify instances of the unseen class, while also performing adequately on the previously seen classes. The paper proposes a new framework, PULSE, which combines classical Positive and Unlabeled Learning techniques and label reweighting techniques in order to tackle this new problem. The method is empirically shown to perform well in a variety of OSLS problems, and theoretical analysis is conducted to create sufficient conditions for identifiability of the OSLS problem. * **Originality**\n To the best of my knowledge, this is the first time I have seen the scoping of the open set domain adaptation problem. By focusing purely on label shift, the authors are able to create some novel theoretical results as well as a highly performant framework for tackling the OSLS problem. In particular, their framework combines two standard techniques to great effect; class reweighting to handle the label shift and PU techniques to handle the open set nature of the problem. \n* **Quality**\n The claims are well-substantiated both theoretically and empirically, and the results are impressive. The authors perform a detailed evaluation on existing open set domain adaptation methods as well as a slight ablation study by performing standard PU techniques without any label reweighting. \n* **Clarity**\n The paper is mostly well organized and written. A worthwhile addition in the related work section would be to make it more clear exactly how your setting differs from each of the main categories that are defined there. Further, it may be worth a few sentences relating why identifiability is useful and related to the remainder of the paper; Section 4 felt significantly out of place compared to the rest of the paper.\n* Some minor nitpicks\n * line 48 - matrix submatrix appears to be a typo. \n * line 257 should likely be a heading\n* **Significance**\n By scoping the OSDA problem to focus on label shift, the authors were able to show relatively significant gains compared to standard OSDA methods - this kind of scoping appears quite fruitful for other researchers to build off of. The theoretical contributions in the paper are strong evidence that their framework, PULSE, is able to perform quite well on a variety of scenarios, which leaves it as a valuable benchmark for future work. * How robust is PULSE expected to be in general OSDA problems, not just label shift? It would be worthwhile to have a further discussion on the limitations of PULSE with regard to the amount of labeled data of the target domain - for example, with no labels it would be difficult to approximate what the label shift is. It could also be useful to discuss the types of models or datasets that PULSE is expected to work on (e.g. 
does it perform worse as the number of classes increases, the computational efficiency with regard to larger models, etc.); and that it inherits limitations from its particular stages (e.g. any limitations of importance weighted label shift or CVIR/BBE). The goal of this paper is to solve open set domain adaptation under the label shift setting. The authors proposed a PU learning-based framework to first estimate the label shift and then classify the novel class. Moreover, the authors also gave sufficient and necessary conditions for open set label shift in order to make the target label marginal identifiable. The experimental results showed that the PULSE framework could achieve a great performance improvement. 1) The idea of combining PU problems with OSLS is interesting; the authors used an ingenious way to merge the PU problems into OSLS. The reduction from k-PU to a single PU is attractive (equation 3).\n\n2) The definitions and theorems are straightforward and easy to understand, and the mathematical proofs seem solid. \n\n3) The author used a two-stage method to separately estimate the label shift and to classify the novel class, which seems effective in solving both the domain adaptation and the novel class identification. The results of PULSE seem good.\n\n4) I have no doubts about the originality to the best of my knowledge, the quality of this paper is good, some more details need to be discussed for clarity as I stated later, and the contribution of this paper has some significance to the OSLS regime.\n 1) The authors need to better organize the paper and improve the mathematical notations, e.g. in line 48 \"the matrix submatrix\" is not so clear.\n\n\n2) For the extended BBE algorithm 2, which is the importance weight w? ($\hat{c}_j$ I assume), the author should be more clear.\n\n3) Since the key point in this paper is to estimate the target label marginal distribution using source data, this process relies heavily on Theorem 1 to guarantee the target domain estimation on the source domain. As we can find from Theorem 1, the threshold would be very important, so the author should discuss more about the threshold, e.g. is that a hyperparameter? Or is there any trade-off between varying the threshold and the performance on target data? I also wonder whether directly using the source classifier for the target in Algorithm 2 line 6 is guaranteed by this Theorem.\n\n4) Since the PULSE framework is built on the strong positivity condition to guarantee the target identifiability, I found no regularization for the data in this paper, so the author also needs to explain why their algorithm can achieve this strong positivity condition.\n\n5) In Algorithm 2, the author should expand the loss function in line 6 and line 10 to make it clearer. And the author should explain more about the connection between the $\alpha$ and the novel target, and why they are equal and can be derived this way (why not use the estimation from Algorithm 2 but use Algorithm 4 instead in Algorithm 3; however, I found the resample-based estimation should be derived from Algorithm 2 as described in line 805).\n\n6) Both in Algorithm 2 and Algorithm 4, the z is ambiguous in the paper: are they discrete or continuous variables, and do we need to compute all of the values from 0 to 1 for estimation? Will that increase the computation burden? Also, for the c in Algorithm 2 line 6, does it have a connection with the threshold in Theorem 1, since their notation is all c? 
\n\n7) How about the results from the general domain adaptation datasets like USPS, OFFICEHOME, DOMAINNET, etc.? The experiment setup is not convincing; the random choice for source and target in small datasets (like CIFAR) may make the source and target have only a limited domain discrepancy, which in turn limits the domain adaptation problems back into novel class detection problems under the PU setup. Since the domain discrepancy the author proposed in the paper is mostly between the positive samples and the unlabeled samples, and the unlabeled samples consist of a combination of both positive and negative samples as described in CVIR, I have a slight concern about whether this problem would degrade into open-set classification (since the domain discrepancy is limited) instead of open-set domain adaptation, as we can find in the experiments.\n\n8) In the paper, line 188 Section 5, why can the OSLS problem be reduced into k PU problems only when the strong positivity condition is satisfied? It seems that most OSLS problems can be reduced into k PU problems that way via equation 2. 1) This paper needs to be better organized.\n2) Some details need to be completed as described in the questions.\n3) The experiments in this paper are not convincing.\n", " This paper introduces domain adaptation under Open Set Label Shift (OSLS). Specifically, it assumes the target has one more novel class that was not previously seen in the source domain while allowing label shifts between source and target domains. This work provides theoretical findings of OSLS, specifically, the necessary condition \"weak positivity\" and two sufficient conditions \"strong positivity\" and \"separability\". The author further proposes a framework to solve OSLS, named PULSE, which combines techniques from both areas of (1) positive and unlabeled learning and (2) DA under label shift. The effectiveness of the proposed methods is demonstrated in language, image, and medical datasets. originality\n---\nThe paper focuses on a special case of open set domain adaptation where the target domain has one novel class that is not previously seen in the source domain. The broad concept of open set DA is not novel, but this special case is not yet well studied, and thus can be considered novel. The paper delivers an identifiability analysis of OSLS, which is novel. \n\nquality\n---\nThe paper provides solid theoretical analysis as well as extensive experiments on 5 datasets across multiple application domains. \n\n\nMinor issue, typos:\n\n1. Line 46: … the matrix submatrix… \n2. Line 67: double periods after \"(Sec.7)\" \n3. Line 154: \\frac{1}{p_t(y=k+1)} is missing in the closed form of p_t(x|y=k+1)\n4. The paper does not seem to define what the metric \"novel prevalence estimation\" is. \n\nclarity\n---\nThe paper has a lot of content and is pretty dense. The majority of the paper is well-written and clear. However, I find the method section a bit obfuscated. One reason is that there are no headings or subsection titles to remind the reader what the focus of each paragraph is. Another reason is that the author seems to jump to the details too quickly. The logic of the current text is kind of linear. I would like to suggest the author organize Section 6 and each of its subsections to have an overview-and-details structure. \n\nsignificance\n---\nThe paper does a good job on the specific problem it aims to solve. The analysis is solid and the empirical results are illustrative. 
My main concern is that the paper's significance is constrained by the practicality of the proposed problem. The problem setting is a bit artificial to me. I am not sure how practical it is that we assume the target domain has exactly one unseen novel class. The experiment is kind of synthetic/semi-synthetic since the author chooses source and novel classes randomly. 1. Does OSLS apply to medical applications such as COVID diagnosis?\n\nThe author motivates the OSLS problem by giving a motivating example in the medical domain: the disease proportions change seasonally and sometimes a new disease, like COVID-19, may appear. I would like the author to analyze whether OSLS is suitable for the disease diagnosis problem. For example, let us consider a DA problem where the source domain has two classes, normal people and flu patients, while the target domain will have one more class, COVID patients. How are your identifiability conditions satisfied in this application? Does the PULSE framework still apply?\n\n2. Do all 5 tasks in the experiments satisfy \"strong positivity\"? Could the author include benchmarks not satisfying this condition?\n\nThe author claims that \"we observe that techniques proposed under strong positivity were empirically stable and outperform other baselines developed under separability. This is intuitive for many benchmark datasets where for each class there exists a subdomain that only belongs to that class.\" First I want to ask, do all 5 benchmarks used in your experiments satisfy that there exists a subdomain that only belongs to that class? Second, I would like to point out that the property that samples belonging to different classes are disjoint holds for object recognition tasks; however, there are many other types of tasks that do not satisfy such a disjointness property. For example, in age prediction (classifying different age groups), it is unlikely that a portrait from the age group [20,30) would never appear in the age group [30,40) or [10,20), since the boundary between different classes is not perfectly clear here. Thus I think it is necessary to include other benchmarks that do not satisfy \"strong positivity\" for the sake of a more complete evaluation of the proposed method. \n This work does not raise potential negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "JqKVglh1cj6", "YmMQwFyBxX8", "Zu4Nh3_JjY5", "8EymETiYcKF", "SMNZ5P0Vg3f4", "_jhz1r9uyD", "Zu4Nh3_JjY5", "emBQqZQXugo", "Qi_RMEuoYxv", "8EymETiYcKF", "rLUs7A2Ke9Z", "VLds9CgKWS", "YmMQwFyBxX8", "nips_2022_OMZG4vsKmm7", "nips_2022_OMZG4vsKmm7", "nips_2022_OMZG4vsKmm7", "nips_2022_OMZG4vsKmm7", "nips_2022_OMZG4vsKmm7" ]
nips_2022_krV1UM7Uw1
Robust Bayesian Regression via Hard Thresholding
By combining robust regression and prior information, we develop an effective robust regression method that can resist adaptive adversarial attacks. Due to the widespread existence of noise and data corruption, it is necessary to recover the true regression parameters when a certain proportion of the response variables have been corrupted. Methods to overcome this problem often involve robust least-squares regression. However, few methods achieve good performance when dealing with severe adaptive adversarial attacks. Based on the combination of prior information and robust regression via hard thresholding, this paper proposes an algorithm that improves the breakdown point when facing adaptive adversarial attacks. Furthermore, to improve the robustness and reduce the estimation error caused by the inclusion of a prior, the idea of Bayesian reweighting is used to construct a more robust algorithm. We prove the theoretical convergence of proposed algorithms under mild conditions. Extensive experiments show that, under different dataset attacks, our algorithms achieve state-of-the-art results compared with other benchmark algorithms, demonstrating the robustness of the proposed approach.
Accept
The paper studies the problem of label-outlier robust regression with prior on the optimal parameter. The reviewers agree that the results are novel and significant. There is certainly a concern about the novelty of the method and about additional insights provided by the result. However, as the paper studies this relatively new problem and provides solid results for it, we recommend accepting it for publication.
test
[ "x_1Qhf8WL60", "gjCvuHnS8wP", "Umwh0mZu4e", "RwoDpda-x-g", "KqURNhZcGEC", "amVzaNH88-C", "yUZ71Qvw_5", "obPS0PGa756", "X5fZxRfRbUI" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response. I appreciate the authors provide additional experiments as well as more clarification in the revision. I would keep my original evaluation leaning toward acceptance. ", " Thank you for the further comment and for raising our score! We want to address our thoughts on the breakdown point as follows. \n\nFirstly, the explicit form of the breakdown point we have shown in the rebuttal is an approximate result when $\\xi$ is not large. Theorem 3 accurately describes the breakdown point of TRIP. If we do not add prior to TRIP, this method is actually CRR[1], \n and the breakdown point is the lowest in this case. From the perspective of Theorem 3, the breakdown point will increase monotonically as $\\xi$ increases and it could be as large as 0.3023. Actually, if $\\xi$ reaches 49, the breakdown point of TRIP will be 1 theoretically (although the solution at this time is basically determined by the prior). \n\nSecondly, the theoretical breakdown point is much smaller than that in the experiments and practical applications, because the proof needs lots of inequality scaling. TORRENT[2] has a breakdown point of 1/65 in the noiseless case, and has no theoretical guarantees in the noisy case. The theoretical breakdown point of CRR is 1/10000 in noisy case (although their proof of error bound is more rigorous than ours), but the practical effect is excellent. Under the same conditions, the breakdown point of our methods is better than CRR. Although from the perspective of Theorem 3, it seems that we need to add a relatively high weight on prior to reach a high breakdown point. However, this phenomenon is also due to the scaling of inequalities, and practical applications may only require a low weight on prior to achieve good results. \n", " Thank you for addressing my questions. \n\nIt appears that the theoretical breakdown point is much smaller than 1/65 $\\approx$ 0.0153, or 0.23 for reasonable values of $\\xi$, is that right? Given that the objective of the paper is to demonstrate a high breakdown point, this is a little unfortunate. Am I reading that wrong? Can the breakdown point be as large as 0.3023? \n\nBut it does look like in all the experiments the breakdown point is much higher than previous algorithms, so it is possible that this is an artefact of the proof. \n\nIt would have been nice to have tight results here (i.e. a noise model where a high breakdown point is achieved, and a theoretical upper bound that matches it). ", " We are thankful for the positive and constructive feedback, especially on checking the technique parts. In the following we briefly respond your question.\n\n**Question1**: It is better to conduct more experiments in more complicated settings. \n**Response**: Thanks for the suggestion, and more experiments have been conducted in more complicated settings. Specifically, in order to better demonstrate the performance of the proposed methods, we have added more experiments to compare with TORRENT algorithm, proposed by Bhatia et al. in 2015 [2]. TORRENT is a robust regression method which is based on a thresholding operator. CRR[1] is another thresholding operator based robust regression algorithm, which will perform better in noisy case compared with TORRENT. OAA and AAA mentioned in Section 5 are still used as attack methods, and the experiment is divided into two cases: with white noise $\\boldsymbol{\\epsilon}$ and without white noise. 
Under these two attacks, the performance of the TORRENT algorithm is very consistent with that of CRR[1] in both the noisy and noiseless settings. TORRENT performs slightly better than CRR in the absence of white noise. However, both CRR and TORRENT perform poorly under AAA. It can be seen that the TRIP and BRHT algorithms are very robust in all cases. The detailed experimental results are shown in **Appendix E**. \nFurthermore, in order to better illustrate the robustness of our methods, the leverage point attack (LPA) is also considered in the experiment, as shown in **Appendix E**. By attacking the high leverage points in the data, LPA can effectively corrupt the regression results. The experimental results show that CRR performs poorly under LPA, and usually collapses first among these methods. Rob-ULA[3] is a Bayesian descent method using an unadjusted Langevin algorithm (ULA). Rob-ULA has relatively better performance when the proportion of outliers is high, but there will be relatively large errors when the proportion of outliers is low. TORRENT is very robust under LPA, especially in the absence of white noise. However, if the data dimension is high and the sample size is small, TORRENT is more likely to collapse. The proposed TRIP and BRHT are still better than CRR, and will maintain a robust result even when there are many outliers. The estimation errors of BRHT are smaller than those of TRIP, which shows that BRHT is the most robust algorithm in this experiment. \n\n[1] Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, and Purushottam Kar. Consistent robust regression. Advances in Neural Information Processing Systems, 30, 2017. \n[2] Kush Bhatia, Prateek Jain, and Purushottam Kar. Robust regression via hard thresholding. Advances in Neural Information Processing Systems, 28, 2015. \n[3] Kush Bhatia, Yi-An Ma, Anca D Dragan, Peter L Bartlett, and Michael I Jordan. Bayesian robustness: A nonasymptotic viewpoint. arXiv preprint arXiv:1907.11826, 2019.", " We thank the reviewer for the constructive feedback. In the following paragraphs, we briefly respond to the questions; details can be found in the rebuttal revision. \n\n**Weakness 1**:\tGiven that AAA is assumed to be an adversary, more diverse attack methods could be considered - the current attack method of ADCA is designed to be quite close to TRIP, and it is likely that TRIP performs better than others under ADCA because it is just trained as an opposite of ADCA and may not generalize under other ways of attacks. \n**Response**: Thank you for the comment. Previous work rarely considers attacks in which all the information in the model is known. Most of the attacks considered are OAA or attacks that only rely on part of the model information. Therefore, we propose the ADCA algorithm in order to verify that our proposed methods TRIP and BRHT can also retain a certain robustness in the most complex cases. In order to better illustrate the robustness of our methods, the leverage point attack (LPA) is also considered in **Appendix E**. By attacking the high leverage points in the data, LPA can effectively corrupt the regression results. The detailed experimental results are shown in **Appendix E**, which reflect that TRIP and BRHT are also robust compared with other methods.\n\n**Weakness 2**:\tAlthough it is still novel to me that the paper aims for adversarial robustness, Bayesian approaches on weights and loss coefficients have been quite known approaches in general, so some readers may have concerns about the technical novelty of the method. 
\n**Response**: Thanks for the comment. We think the novelty lies in the combination of the Bayesian method and the hard thresholding operator, which yields better performance. The use of the hard thresholding operator in robust regression was first proposed by Bhatia[2] in 2015, and our paper may be the first in the published literature to combine the Bayesian method with the hard thresholding operator. The main purpose of our paper is to show that prior information can make regression more robust, and the Bayesian method is an effective tool for integrating prior information. This article is an attempt in this direction, and we demonstrate that this combination can greatly improve the effect of robust regression. This method could be potentially useful in practical application scenarios.\n\n**Questions 1&2**: (1) Section 3.1: Is the proposed hard-thresholding based method actually induced from assuming Gaussian weight prior, or they are just orthogonal, i.e., TRIP works without the prior? More explanation on the relationship between the prior assumption and TRIP may help. (2) For TRIP, I feel it is a bit unclear why one should assume the Gaussian prior for robustness against AAAs, but perhaps not for others? If not, the paper could empirically explore the effect of using different priors on weights. \n**Response**: Thanks for the suggestions. If we do not add the prior to TRIP, this method is actually CRR[1]. The Gaussian prior itself is not proposed from the aspect of robustness, but because the Gaussian prior and the regression likelihood are conjugate. This conjugacy makes it possible to reduce the amount of computation in the iterations of the TRIP and BRHT algorithms. For TRIP, an explicit iterative form can be derived to make its calculation more convenient, and the theoretical proof can also be presented in a relatively simple form. And for BRHT, the calculation steps can also be reduced during the iterations of the VBEM algorithm. Other priors may achieve similar results, but they will cause difficulties in calculation and reduce the efficiency of the algorithm. In future research, we will also consider whether we can introduce a more effective prior, which can enhance robustness and reduce the computation.\n\n**Question 3**:\tLine 115: More discussions could be made on how mild the assumed SSC and SSS conditions in both theoretical and empirical senses? \n**Response**: Thanks for the constructive suggestion. We add a theoretical description of these two conditions in **Appendix C.1**. These two conditions are common in robust regression methods based on hard thresholding operators, such as [1],[2]. Because these two conditions are mainly used in the theoretical proofs, we pay more attention to the theoretical bounds of these two conditions. Under the properties of these two conditions, we can prove that TRIP has a non-zero breakdown point when $\mathbf{x}_i$ are iid in $\mathcal{N} (0,\Sigma)$, which also shows that these two conditions are not so strict.\n\n[1] Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, and Purushottam Kar. Consistent robust regression. Advances in Neural Information Processing Systems, 30, 2017. \n[2] Kush Bhatia, Prateek Jain, and Purushottam Kar. Robust regression via hard thresholding. Advances in Neural Information Processing Systems, 28, 2015.", " We thank the reviewer for the constructive feedback. 
The suggestions and questions are briefly responded to below, and details can be found in the rebuttal revision. \n\n**Weakness 1**: I feel that the authors need to place the result in the context of the surrounding literature a little better... \n**Response**: Thanks for your suggestion. We have added the description of the relevant literature and adjusted the exposition of the other literature. The changes are made in **lines 53 to 59** of the rebuttal revision.\n\n\n**Weakness 2**: Additionally, further comparison with [2] would be helpful... \n**Response**: We add an experiment to compare with TORRENT[2] under OAA and AAA. The experiment is divided into two cases: with white noise $\boldsymbol{\epsilon}$ and without white noise. Under the attacks of OAA and AAA, the performance of TORRENT is similar to that of CRR[1]. Operational details and experimental results are shown in **Appendix E**.\n\n\n**Weakness 3**: Another paper to cite might be https://arxiv.org/abs/1809.08055... \n**Response**: This is a closely related paper and has been cited as **[10]**. Thanks for the suggestion.\n\n**Weakness 4**: Clearer motivation and a description of the algorithm would have been helpful... \n**Response**: We have added more explanations of TRIP and explained the differences between it and the previous methods. This part is described from **lines 154 to 161**. Here we mainly compare our methods with CRR. Since the idea of TRIP is very different from TORRENT, they are not compared.\n\n**Question 1**: It would be good to explicitly write down the breakdown point achieved in the case of a Gaussian... \n**Response**: Thanks for your inspiring advice. We re-order the theorems and add Theorem 3 in **Appendix C.2, line 490**. Theorem 3 provides the conditions that the breakdown point of the TRIP algorithm should satisfy, but this theorem is not very intuitive and cannot give the explicit form of the breakdown point. Therefore, following this theorem, we give an approximate expression for the breakdown point, which is based on a second-order Taylor expansion. \n>Suppose $\lim_{n\to \infty}\frac{\lambda_{min}(M)}{n}=\xi$, then when $\xi$ is not too large, the approximate expression of the breakdown point is $k^{*}\le k\le (0.3023-\sqrt{0.0887-0.0040\xi})n$. It can be seen that as the weight of the prior gradually increases, the breakdown point gradually rises as well.\n\n**Question 2**: I would be curious to see how this algorithm performs with the following noise model from [3]... \n**Response**: Thanks for your comment. The mentioned attack can actually be regarded as a leverage point attack, that is, attacking the high leverage points among all samples. This attack can effectively corrupt the regression results. We extend this attack to high dimensions to verify the robustness of the methods. This experiment is also divided into two cases: with and without white noise. Under the attack mentioned above, the TRIP and BRHT proposed in this paper are compared with CRR, TORRENT, and Rob-ULA. The analysis and experimental results of this attack are shown in **Appendix E**. The experimental results show that TORRENT indeed achieves good regression results when white noise is not considered. However, when the data dimension is high and the sample size is relatively small, TRIP and BRHT are more robust.\n\n**Question 3**: Could the authors please explain for my understanding, intuitively, why the existence of the prior allows one to solve the problem in the harder adversarial setting... 
\n**Response**: The difference between the proposed TRIP and the original CRR [1] is the form of the iteration step. The iteration step in both TRIP and CRR can be expressed uniformly as $HT_k(\mathbf{y}-X^T\mathbf{w}^t)$, but $\mathbf{w}^t=(XX^T+M)^{-1}[X(\mathbf{y}-\mathbf{b}^t)+M\mathbf{w}_0]$ in TRIP and $\mathbf{w}^t=(XX^T)^{-1}X(\mathbf{y}-\mathbf{b}^t)$ in CRR (an illustrative sketch of this update appears after the first review below). The $\mathbf{w}^t$ in CRR is just a simple least-squares estimate, while adding the prior in TRIP is equivalent to adding a quadratic regularization in each iteration. This quadratic regularization prevents the iterate from drifting too far from the prior mean, which also helps ensure the numerical stability of the solution. Thus, as long as the prior is not mis-specified too much, TRIP will be more likely to identify the uncorrupted points, and the final result of TRIP will be more robust than that of CRR. Compared with TRIP, BRHT improves robustness in each iteration. As a result, even if the prior weight is low, each iteration of BRHT can try to avoid the influence of outliers and find points that have not been corrupted within a larger search scope.\n\n[1] Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, and Purushottam Kar. Consistent robust regression. Advances in Neural Information Processing Systems, 30, 2017. \n[2] Kush Bhatia, Prateek Jain, and Purushottam Kar. Robust regression via hard thresholding. Advances in Neural Information Processing Systems, 28, 2015.", " The paper studies the problem of robust linear regression and demonstrates two algorithms which improve the breakdown point for this problem under adversarial attacks compared to prior work.\n\nThe main contribution is the observation that, with some prior information, even if it is far from the truth, it is possible to improve the breakdown point and find a consistent estimator for the problem. Prior work was able to find consistent estimators only in the setting where the corruption vector was chosen oblivious to the measurements. \n\nThe authors consider the Adaptive Adversarial Attacks (AAA) setting where the observations are given by $(X, y)$, where $X \in \mathbb{R}^{d \times n}$ and $y \in \mathbb{R}^n$. The observations satisfy $y = Xw^* + b^* + \epsilon$ where $w^*$ is the true regression coefficient, $\epsilon$ is the dense noise and $b^*$ is an arbitrary $k^*$-sparse vector. The goal is to recover $(\hat{w}, \hat{S})$ where $\hat{w}$ is close to $w^*$ and $\hat{S}$ is an estimate of the uncorrupted sample set. The authors assume that the variance of the noise $\epsilon$ can be controlled by the algorithm designer and be set to $\sigma$, or can be estimated independently to a high degree of accuracy. \n\nThe first algorithm (TRIP) assumes the prior on $w^*$ is $\mathcal N(w_0, \Sigma_0)$ for some $w_0$ and $\Sigma_0$ determined in advance. This essentially leads to the problem being transformed into a regularised least squares problem, which they solve by a hard-thresholding approach similar to [1]. The second algorithm is more complicated and assumes a prior also on the local weight assigned to specific samples and uses variational Bayesian expectation maximization to solve a reweighted probabilistic model for linear regression. \n Strengths: \n\nIt is qualitatively interesting that Bayesian updates can lead to consistent estimators in the AAA setting. I like the result. \n\n\nWeaknesses: \n\n1. I feel that the authors need to place the result in the context of the surrounding literature a little better. 
This paper considers the model where only the labels are corrupted (i.e. only $y$) while the result by Diakonikolas et al. [6] (mentioned on line 53) is for the setting where both the $X$ as well as $y$ values might be corrupted. \n\n2. Additionally, further comparison with [2] would be helpful, since the goal of this paper seems to be to achieve a good breakdown point for the AAA setting, which I feel [2] also addresses, as opposed to [1] which only deals with the oblivious setting (with the additional goal of being a consistent estimator, i.e. if the dense additive noise is mean 0, the goal here is for the estimator to achieve error going to 0 as the number of samples tends to infinity). \n\n3. Another paper to cite might be https://arxiv.org/abs/1809.08055, which attempts to solve the problem using L1-regression but it appears this is not a consistent estimator. \n\n4. I found the paper a little hard to understand. Clearer motivation and a description of the algorithm would have been helpful. I would also have liked there to be more intuition and a comparison of the difference in terms of the algorithms and updates from [1] and [2] that would help the reader better understand how the assumption of the prior helps. \n\n[1] Kush Bhatia, Prateek Jain, Parameswaran Kamalaruban, and Purushottam Kar. Consistent robust regression. Advances in Neural Information Processing Systems, 30, 2017.\n\n[2] Kush Bhatia, Prateek Jain, and Purushottam Kar. Robust regression via hard thresholding. Advances in Neural Information Processing Systems, 28, 2015.\n 1. It would be good to explicitly write down the breakdown point achieved in the case of a Gaussian. For instance [2] explicitly says that one lower bound for the breakdown point is 1/65. \n2. I would be curious to see how this algorithm performs with the following noise model from [3]: Samples $(x, y)$ where $x \\sim N(0, 1)$ and $y = 100 x$, where the adversary looks at the $\\epsilon$ fraction of the $x_i$ such that $|x_i|$ is maximized, and then sets the corresponding $y$ to 0. The plot of the error $\\| \\widehat w - w^* \\|$ vs $\\epsilon$ might shed light on the comparison between breakdown points in the case of a Gaussian. While I understand that the goal is to provide a consistent estimator, It might be clearer to see the performance of the algorithm and the breakdown point for an example with no additive error. \n3. Could the authors please explain for my understanding, intuitively, why the existence of the prior allows one to solve the problem in the harder adversarial setting? Somehow this has still escaped me. \n\nI will reconsider my score depending on the answers to these questions. \n\n-----\n\nUpdate: The authors have addressed all questions raised, and I have updated the score to reflect my new assessment. I still have some concern about the theoretical breakdown point, but the algorithms seem to perform well on noise models which previous algorithms fail on. This is a theoretical result with limited societal impact. ", " The paper works on robust least-squares regression of the form $\\mathbf{y} = X^{T}\\mathbf{w}^* + \\mathbf{b}^* + \\boldsymbol{\\epsilon}$, where $\\mathbf{b}^*$ represents a $k$-sparse adversarial perturbation. 
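As a concrete companion to the TRIP update described in the authors' response above, here is a minimal NumPy sketch of the iteration $\mathbf{w}^t=(XX^T+M)^{-1}[X(\mathbf{y}-\mathbf{b}^t)+M\mathbf{w}_0]$ followed by $\mathbf{b}^{t+1}=HT_k(\mathbf{y}-X^T\mathbf{w}^t)$. The helper names, toy data, and fixed iteration count are illustrative assumptions; the paper's actual algorithm and its stopping criteria should be consulted for details.

```python
import numpy as np

def hard_threshold(r, k):
    # Keep the k largest-magnitude entries of r; zero out the rest.
    b = np.zeros_like(r)
    idx = np.argsort(np.abs(r))[-k:]
    b[idx] = r[idx]
    return b

def trip(X, y, M, w0, k, n_iters=50):
    # X: d x n design matrix, y: responses (y = X^T w* + b* + eps),
    # M: prior weight matrix, w0: prior mean, k: corruption budget.
    b = np.zeros_like(y)
    A = np.linalg.inv(X @ X.T + M)  # reused across iterations
    for _ in range(n_iters):
        w = A @ (X @ (y - b) + M @ w0)       # prior-regularized LS step
        b = hard_threshold(y - X.T @ w, k)   # re-estimate corruptions
    return w

# Toy run: 10% of responses corrupted.
rng = np.random.default_rng(0)
d, n, k = 5, 200, 20
X = rng.standard_normal((d, n))
w_star = rng.standard_normal(d)
y = X.T @ w_star + 0.1 * rng.standard_normal(n)
y[:k] += 10.0
print(np.linalg.norm(trip(X, y, np.eye(d), np.zeros(d), k) - w_star))
```

Setting `M` to the zero matrix recovers the plain CRR-style update that the rebuttal contrasts with.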
Specifically, it proposes a Bayesian extension of the Hard Thresholding method previously designed against oblivious adversarial attacks (OAAs), to make it resistant against (a stronger) adaptive adversarial attacks (AAAs), that assumes an adversary having an access to $X$, $\\mathbf{w}^*$ and $\\boldsymbol{\\epsilon}$ before attacks. Two methods are proposed, namely TRIP and BRHT, that incorporate a Gaussian prior on weights $w$ and additionally on exponential weighting $r$, respectively. The paper provides both theoretical convergence analysis and empirical results showing that the proposed methods are more robust under AAAs while also improving under the prior setup of OAAs. **Strengths**\n\n- The paper is clearly-written, e.g., it clearly presents the problem formulation and methodologies.\n- The proposed methods are simple, and well-motivated.\n- The paper conducts empirical analysis to support the claims and reports improved results. In the experiments, the paper designs an adaptive attack scheme ADCA in a similar manner to TRIP but in the opposite direction in optimization.\n\n**Weaknesses** \n\n- Given that AAA is assumed to be an adversary, a more diverse attack methods could be considered - the current attack method of ADCA is designed to be quite close to TRIP, and it is likely that TRIP performs better than others under ADCA because it is just trained as an opposite of ADCA and may not generalize under other ways of attacks.\n- Although it is still novel to me that the paper aims for adversarial robustness, Bayesian approaches on weights and loss coefficients have been quite known approaches in general so some readers may concern on the technical novelty on the method. - Section 3.1: Is the proposed hard-thresholding based method actually induced from assuming Gaussian weight prior, or they are just orthogonal, i.e., TRIP works without the prior? More explanation on the relationship between the prior assumption and TRIP may help.\n- For TRIP, I feel it is a bit unclear why one should assume the Gaussian prior for robustness against AAAs, but perhaps not for others? If not, the paper could empirically explore the effect of using different priors on weights.\n- Line 115: More discussions could be made on how mild the assumed SSC and SSS conditions in both theoretical and empirical senses? The paper does not include discussions on potential negative societal impact.", " The authors study the robust least-squares regression (RLSR). The main contribution of this paper is to propose an algorithm that achieves strong results in terms of resisting adaptive adversarial attacks. A theoretical convergence analysis is provided. Extensive experiments have illustrated that their algorithms outperform SOTA methods in terms of both robustness and efficiency. Originality: The related works are adequately cited. The novelty of this paper is high. The results on robust least-squares regression in this paper, will certainly help us have a better understating of adversarial attacks and defenses from a theoretical way. I have checked the technique parts and find that the proofs are solid. I think this is a significant contribution to machine learning immunity. \n\nQuality: This paper is technically sound.\n\nClarity: This paper is clearly written and well organized. I find it easy to follow.\n\nSignificance: I think the results in this paper is significant, as explained above. It is better to conduct more experiments in more complicated settings. 
Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "KqURNhZcGEC", "Umwh0mZu4e", "amVzaNH88-C", "X5fZxRfRbUI", "obPS0PGa756", "yUZ71Qvw_5", "nips_2022_krV1UM7Uw1", "nips_2022_krV1UM7Uw1", "nips_2022_krV1UM7Uw1" ]
nips_2022__WHs1ruFKTD
A Closer Look at the Adversarial Robustness of Deep Equilibrium Models
Deep equilibrium models (DEQs) refrain from the traditional layer-stacking paradigm and turn to find the fixed point of a single layer. DEQs have achieved promising performance on different applications with featured memory efficiency. At the same time, the adversarial vulnerability of DEQs raises concerns. Several works propose to certify robustness for monotone DEQs. However, limited efforts are devoted to studying empirical robustness for general DEQs. To this end, we observe that an adversarially trained DEQ requires more forward steps to arrive at the equilibrium state, or even violates its fixed-point structure. Besides, the forward and backward tracks of DEQs are misaligned due to the black-box solvers. These facts cause gradient obfuscation when applying the ready-made attacks to evaluate or adversarially train DEQs. Given this, we develop approaches to estimate the intermediate gradients of DEQs and integrate them into the attacking pipelines. Our approaches facilitate fully white-box evaluations and lead to effective adversarial defense for DEQs. Extensive experiments on CIFAR-10 validate the adversarial robustness of DEQs competitive with deep networks of similar sizes.
Accept
This paper studies the empirical robustness of the general deep equilibrium model (DEQ) in the traditional white-box attack-defense setting. As the topic is under-explored in the literature, the authors first pointed out the challenges of training robust DEQs. Then, they developed a method to estimate the intermediate gradients of DEQs and integrate them into the adversarial attack pipelines. The authors did a good job to address the reviewers' concerns in the author-reviewer discussion phase, and at the end, all reviewers unanimously support the acceptance. Although AC sees some limitations, e.g., limited advantages of using robust DEQs over deep CNNs, scalability to large-scale datasets and training instability, AC thinks the merits of this paper outweigh them: this paper can be a useful guideline when researchers pursue the under-explored problem in the future. Hence, AC recommends acceptance.
val
[ "RnJ_h7Wj4kB", "0Pq6Um1exW_", "l_EoOvaR4nV", "rz-JbmKVI2S", "JVJwXUl7P0", "XLQ4olDxPB5", "RemF4D-O-qq", "_9SkVBEluG", "12xzldSXidd" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for updating the score! Your feedback really helped us improve our work. We will further revise our paper with the added experiments and discussions.", " Thank you for the extra experiments and explanations. The extra results solved my concern and I would raise my score.", " Dear Reviewers,\n\nThank you again for your valuable comments and suggestions, which are really helpful for us. We have posted responses to the detailed concerns.\n\nWe totally understand that this is a quite busy period, since the reviewers may be responding to the rebuttal of other assigned papers.\n\nWe deeply appreciate it if you can take some time to return further feedback on whether our responses solve your concerns. If there are any other comments, we will try our best to address them.\n\nBest,\n\nThe authors", " Thank you for the supportive review and kind suggestions. We have uploaded a revision of our paper.\n\n***Question 1: The adversarially-trained DEQs do not significantly outperform the traditional deep neural networks in terms of robustness***\n\nThe motivation of our paper is to comprehensively evaluate the adversarial robustness of DEQs. Intuitively, the fixed-point structure in DEQs can be viewed as a local attractor, which is expected to be more stable to small input perturbations compared to deep networks. However, as observed in our experiments, directly applying existing adversarial training frameworks (e.g., PGD-AT) on DEQs does not show significant advantages compared to conventional deep networks.\n\n\nOur empirical observations indicate that we should explore more advanced AT mechanisms for DEQs, in order to exploit their local attractor structures (similar to the feedback regulation in control theory). A potential way is to explicitly encourage closed-loop control during training, similar to the mechanism introduced in [1]. To this end, the gradient estimation method proposed in this paper would be one of the critical ingredients for solving the misalignment between the forward/backward pass of DEQs.\n\n***Reference:***\n\n[1] Towards Robust Neural Networks via Close-loop Control. ICLR 2021", " Thank you for your valuable review and suggestions. We have uploaded a revision of our paper.\n\n***Question 1: Paper logic and the attacks against vanilla-trained DEQs***\n\nWe appreciate the constructive suggestion and we train a vanilla DEQ on CIFAR-10 following the recipe in [1]. We use the ready-made PGD-10 to attack the vanilla-trained DEQ model. The clean accuracy (\\%) and robust accuracy (\\%) of each state $z_n$ are shown below:\n\n| State | z1 | z2 | z3 | z4 | z5 | z6 | z7 | z8 |\n|-------------|-------|-------|-------|-------|-------|-------|-------|-------|\n| Clean Acc. | 38.81 | 82.62 | 89.63 | 91.77 | 92.08 | 92.29 | 92.39 | 92.53 |\n| Robust Acc. | 2.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 | 0.00 |\n\nAs observed, the ready-made PGD-10 already has a dramatic effect in attacking all the states against vanilla-trained DEQs. \n\nWe proceed to apply the proposed attacks and defense strategies. We follow Section 5.2 to determine the optimal early exit as state z1. 
The results are shown below (\"SA\" stands for the attacks with \"simultaneous adjoint\" and \"UI\" stands for the attacks with \"unrolled intermediates\"; best viewed in the new Table 13):\n\n| Defense | Clean | SA | SA | SA | UI | UI | UI |\n|----------|-------|--------------|--------------|----------|----------|---------------|----------|\n| | | Final | Intermediate | Ensemble | Final | Intermediate | Ensemble |\n| Final | 92.53 | 8.90 | 11.45 | 3.69 | 0.00 | 0.00 | 0.00 |\n| Early | 38.81 | 6.08 | 4.54 | 3.42 | 2.00 | 2.94 | 1.31 |\n| Ensemble | 87.31 | 9.12 | 6.39 | 3.48 | 0.00 | 0.00 | 0.00 |\n\nWe can see that all the proposed attacks can defeat a vanilla-trained DEQ. As the white-box robustness of DEQs is assessed by the strongest defense under all attacks (minimum over all columns in a row, then maximum over the minimum of the rows), the white-box robustness of the vanilla DEQ is 1.31\% with a 38.81\% clean accuracy using the early-state defense. When using the final-state and the ensemble-state defense, the robustness is 0.00\%. We include detailed discussions in Lines 267~269 and Appendix E of the revised paper.\n\n***Question 2: The actual meaning of the terms \"exact-trained\" and \"unrolling-trained\", and their relationship with the two different intermediate gradient estimation techniques (without specific names) in Sections 4.1 & 4.2***\n\nThe term \"exact-trained\" means using the exact gradient (Eqs. (2) and (3)) in the DEQ adversarial training procedure (both to generate adversaries with $\frac{\partial L}{\partial x}$ in PGD-AT and to optimize for parameters with $\frac{\partial L}{\partial \theta}$), while the term \"unrolling-trained\" means using the unrolling-based phantom gradient (Eqs. (4), (5), and (6)) during the DEQ adversarial training procedure. These two terms refer to the two previous works for training DEQs. We clarify their definitions more clearly in Lines 202~204 of the revised paper.\n\nWe use the terms \"simultaneous adjoint\" and \"unrolled intermediates\" to refer to the two intermediate gradient estimation techniques in Section 4. These terms appeared in the headers of Tables 2 and 3; we further name these two methods explicitly in Section 4, Lines 142\~143 and Lines 168\~169 of the revised paper. As illustrated in Figure 1, these two techniques estimate the gradient w.r.t. some intermediate state ($z_n$, $1 \le n \le N$), while the exact/unrolling-based phantom gradient is defined w.r.t. the final state ($z^* = z_N$). The two intermediate gradient estimation techniques are integrated into the construction of the white-box adaptive attacks in Section 5.1.\n\n***Question 3: The discussion on the limitation of the paper***\n\nIndeed, (adversarially) training DEQs on large-scale datasets is considerably more challenging than training standard DL models. Nevertheless, recent progress [1,2] has shown that DEQ models can work well on large-scale vision tasks including ImageNet and Cityscapes. Given this, we could apply, e.g., FastAT [3] to DEQs, which preserves the scalability of our methods. 
In the updated version of the paper, we have added more discussions in Appendix I.\n\n***References:***\n\n[1] Stabilizing Equilibrium Models by Jacobian Regularization. ICML 2021\n\n[2] Multiscale Deep Equilibrium Models. NeurIPS 2020\n\n[3] Fast is Better than Free: Revisiting Adversarial Training. ICLR 2020", " Thank you for the supportive review and kind suggestions. We have uploaded a revision of our paper.\n\n***Question 1: A friendlier introduction to DEQs in Section 2***\n\nWe will certainly improve and polish the introduction to DEQs in the revised paper. Intuitively, DEQs propose to directly solve for the latent representation equilibrium $z^*$, which is defined by a fixed-point equation as $z^* = f_\\theta(z^*; x)$. Here $f_\\theta$ represents a single-layer transformation parametrized by $\\theta$. For DEQs, the forward pass is the iterative root-finding process searching for $z^*$, which is implemented via black-box solvers like Broyden's method; similarly, the backward pass differentiating through the loss function $L$ yields another fixed-point structure, which can be treated as the root-finding process for $u^*$ described as $u^* = \\left(\\frac{\\partial f_\\theta(z^*, x)}{\\partial z}\\right) u^* + \\frac{\\partial L(z^{*}, y)}{\\partial z}$ and solved by another black-box solver. Compared with deep networks, DEQs require only O(1) memory and have a broad range of applications from language modeling, image recognition, graph modeling to optical flow estimation.\n\n\n***Question 2: Complexity analysis and running time report for simultaneous adjoint in the forward pass***\n\nThe time complexity of approximating the Jacobian inverse in the simultaneous adjoint is equivalent to that in the original forward pass, since we directly **reuse** the approximated Jacobian inverse $B_n$ in the forward calculation of $z_n$ when calculating the adjoint state $u_n$. This is different from the original DEQ design, where the forward and the backward passes are decoupled by separate fixed-point solvers, and consequently different $B_n$'s need to be maintained separately.\n\nConcretely, compared with the original forward pass (Eqs. (7) and (8)), the simultaneous adjoint calculation augments it with Eqs. (9) and (10). The time complexity of Eq.(10) is equivalent to that of Eq.(7), while Eq. (9) calculates the residual of Eq.(3) as\n\n$$ u^* = \\left(\\frac{\\partial f_\\theta(z^*, x)}{\\partial z}\\right) u^* + \\frac{\\partial L(z^{*}, y)}{\\partial z}, $$\n\nat $u^* = u_n$ and $z^* = z_n$. As Eq. (3) defines the fixed-point equation of the backward pass in an original DEQ, the time complexity of Eq. (9) is equivalent to just the **residual evaluation** when solving for the exact gradient in an original DEQ.\n\nIn practice, the running time comparison using the adaptive PGD-10 attack with **(a)** the final adjoint state in \"simultaneous adjoint\" and **(b)** the unrolled final state in \"unrolled intermediates\" is\n\n| Type of attack | Time used per batch (ms) |\n|-------------------------|--------------------------|\n| Simultaneous Adjoint | 6,479 |\n| Unrolled Intermediates | 5,369 |\n\nWe can see that the computational burden introduced from Eq.(9) and Eq. (10) does not take the majority of the running time. 
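To illustrate the interleaving in code (a simplification: we use plain fixed-point iteration in place of Broyden's method and do not model the reused Jacobian-inverse estimate $B_n$; `grad_L(z)` is assumed to return $\\frac{\\partial L}{\\partial z}$ at the current iterate, and all names are ours):

```python
import torch

def forward_with_simultaneous_adjoint(f, x, z0, grad_L, n_iters=30):
    # Couple the forward fixed point z* = f(z, x) with the adjoint fixed
    # point u* = (df/dz)^T u* + dL/dz, updating the adjoint state u_n
    # from each forward iterate z_n instead of running a second solver.
    z, u = z0, torch.zeros_like(z0)
    for _ in range(n_iters):
        z = z.detach().requires_grad_(True)
        fz = f(z, x)                                         # forward residual evaluation
        jtu = torch.autograd.grad(fz, z, grad_outputs=u)[0]  # VJP: (df/dz)^T u
        u = (jtu + grad_L(z)).detach()                       # adjoint update
        z = fz                                               # forward update
    return z, u
```

Calling f once per iteration for both updates is what keeps the added cost to one residual evaluation plus one vector-Jacobian product, consistent with the running-time comparison above. 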
We have included these discussions in the added Appendix G.2 and referred to them in Line 259 in the updated main paper.\n\n***Question 3: Why AutoAttack's performance is worse than PGD10 for DEQs***\n\nAutoAttack is stronger than PGD-10 under the condition that the model gradients are accurately accessible (without gradient obfuscation), which is usually true for deep networks by automatic differentiation. However, as explained in Lines 277~289, the model gradients of DEQs at the intermediate states can only be estimated (because of the black-box fixed-point solvers used in DEQs) and are not completely accurate. Therefore, note that the APGD-CE / APGD-T methods in AutoAttack use adaptive step size, which facilitates the constructed perturbation to overfit the (inaccurate) gradients. In contrast, PGD-10 applies constant step size, which is usually less powerful against deep networks, but happens to act as a regularization in the case of DEQs, especially when the defense strategies leverage an output state (final, early, or ensemble) different from the state that the intermediate gradient is estimated at.", " This paper evaluated the robustness of the general deep equilibrium model (DEQ) in the traditional white-box attack-defense setting. The authors first pointed out the challenges of training robust DEQs. Then they developed a method to estimate the intermediate gradients of DEQs and integrate them into the adversarial attack pipelines. Strength:\n1. General DEQs' robustness is not well studied in the literature and this paper provides the first study in this area.\n2. The intermediate gradient estimation methods proposed are interesting, especially the first one inspired by the adjoint process in neural ODE models. I am wondering if such methods can be applied to attacks on the regular deep neural networks.\n\nWeakness:\n1. I found the intro to DEQs a little bit hard to follow. I spent a lot of time on the formulation of the problem. I think if the authors want to present this work in the adversarial robustness community, a friendlier intro in Section 2 may be useful. \n2. The time complexity of \"simultaneous adjoint in the forward pass\" proposed in 4.1 seems high, especially with a Jacobian inverse (even with low-rank approximation). Some complexity analysis and running time report may be useful. 1. In Table 3, I noticed that for many models AutoAttack's performance is worse than PGD10, which surprised me. I was wondering if this is something special for DEQs. More explanation on this is welcome. The authors adequately addressed the limitations and potential negative social impact of their work.", " This paper investigates the adversarial robustness of DEQs. In order to apply the standard attack algorithms (e.g. PGD attacks), the authors propose to estimate the gradient of intermediate states with two different approaches. The authors also propose techniques to improve the robustness of DEQs. ## Strengths\n\n* The paper proposes and investigates an interesting topic - the empirical robustness of DEQs. The authors identify the key problem of empirical adv attacks on general DEQs and propose methods for both attacks and defenses.\n\n* As far as I can tell, the proposed approach is correct and intuitively makes sense. The authors propose two techniques to approximate the gradient of intermediate states, based on which they can perform the adversarial attack and adv training algorithms. 
These proposed approaches are clear and correct.\n\n## Weaknesses\n\n* The attack on vanilla-trained DEQs is missing in the evaluation, which is a key part of the story. To me, the logic of the paper would be 1) authors propose an adv attack against DEQs; 2) authors utilize the attack to do PGD adversarial training. Although the authors show that the adv-trained models indeed have a good performance against the proposed attacks, they do not show that the proposed attack indeed works against vanilla models. Therefore, we cannot arrive at conclusions like \"adv-trained DEQs have better robustness than standard models\" without seeing the results on the vanilla-trained DEQs.\n\n* Some terms are not clearly defined and are confusing to readers. For example, no specific names are given to the two approaches to intermediate gradient estimation proposed in Sec. 4.1 and 4.2. I hypothesize that the terms \"exact-trained\" and \"unrolling-trained\" will sometimes refer to the two approaches respectively, but they are also used to refer to the two previous works for training DEQs.\n\n* How do you apply the two different intermediate gradient estimation techniques in Section 4.1 & 4.2 to your experiments? How do they relate to the previous two training approaches \"exact\" and \"unrolling\"?\n\n* What is the performance of your proposed attack against vanilla-trained DEQs? Which attack will have the better performance, regardless of how the model is trained (e.g. exact/unrolling)?\n\nI do not see the discussion of the limitation in the paper. As far as I can tell, I think the efficiency of adv-DEQs would be a concern compared with standard DL models. In addition, I doubt whether the approach can be applied to larger-scale tasks such as ImageNet.", " This paper studies the robustness issue in deep equilibrium models.\nSpecifically, the paper points out the misalignment between the forward and backward tracks of DEQs due to the black-box solver.\nTo address this issue, the paper proposes two gradient estimation methods:\nan adjoint process and unrolling the intermediate states.\nThose two proposed methods allow accurate robustness measurement and effective adversarial training.\n\nStrength:\nThe observation of misalignment between forward and backward passes in DEQs is a significant contribution to adversarial robustness research.\nThe proposed two gradient estimation methods are helpful for accurately evaluating the robustness of DEQs.\n\nWeakness:\nAdversarially trained DEQ models do not show significant advantages compared with conventional deep neural networks, as shown in Table 3. Their performances are comparable.\n\n Could the author elaborate on the advantages of using adversarially trained DEQ models over traditional deep convolutional networks? The limitations of this proposed method are discussed in the conclusion.\nThe stability of adversarial training is indeed one concern of DEQ models.\nMeanwhile, I think showing the advantages of using robust DEQs over deep CNNs is also not very clear at this point." ]
[ -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, 2, 3, 3 ]
[ "0Pq6Um1exW_", "JVJwXUl7P0", "nips_2022__WHs1ruFKTD", "12xzldSXidd", "_9SkVBEluG", "RemF4D-O-qq", "nips_2022__WHs1ruFKTD", "nips_2022__WHs1ruFKTD", "nips_2022__WHs1ruFKTD" ]
nips_2022_z9cpLkoSNNh
Continual learning: a feature extraction formalization, an efficient algorithm, and fundamental obstructions
Continual learning is an emerging paradigm in machine learning, wherein a model is exposed in an online fashion to data from multiple different distributions (i.e. environments), and is expected to adapt to the distribution change. Precisely, the goal is to perform well in the new environment, while simultaneously retaining the performance on the previous environments (i.e. avoid ``catastrophic forgetting''). While this setup has enjoyed a lot of attention in the applied community, there hasn't been theoretical work that even formalizes the desired guarantees. In this paper, we propose a framework for continual learning through the lens of feature extraction---namely, one in which features, as well as a classifier, are being trained with each environment. When the features are linear, we design an efficient gradient-based algorithm $\mathsf{DPGrad}$, which is guaranteed to perform well on the current environment, as well as avoid catastrophic forgetting. In the general case, when the features are non-linear, we show such an algorithm cannot exist, whether efficient or not.
Accept
This paper provides a theoretical analysis of continual learning when the learner is modeled as a featurizer followed by a linear head. The analysis provides theoretical guarantees on learnability when the featurizer is linear, and is learned using doubly projected gradient descent. The guarantee ensures good accuracy on all environments and resilience to catastrophic forgetting. For a nonlinear featurizer, it is shown that continual learning is not possible in general: there exist scenarios in which, even when good features exist, either catastrophic forgetting or poor performance has to occur. Reviewers raised questions about the implications of the theory for practical settings (s7RD, KsmC about how useful the results are in practice, cwgv on whether the current analysis based on quadratic activation functions can carry over to ReLU, and VAS6 about whether the analysis works for classification as well [instead of regression]). The authors responded to these. They highlighted how the algorithm in the linear setting provides an insight into Orthogonal Gradient Descent (OGD), which is a known algorithm for continual learning. The authors also explained the significance of the lower bounds for understanding what is fundamentally possible and impossible in continual learning. Moreover, the authors clarified that quadratic activation is not essential for the lower bound, and extended their proof to ReLU in the revised version of the submission. The authors also clarified that classification could be treated as a special case of regression, with target values being discrete, and hence the result is not limited to regression only (although I, the AC, add that for classification, L2 loss is less common, and by classification, we typically refer to loss functions like cross-entropy, but the results of the paper are interesting regardless). Final scores are all on the accept side, indicating reviewers have found the contributions strong enough for the submission to be published. In concordance, I think this paper provides very interesting insights about some fundamental aspects of learnability in continual settings.
train
[ "c92O3Bd4jrA", "SMuSSxXao_3", "tQ8FjTd3gR", "ecVgBXczXK-", "uN4gJfGLIVt", "vAbyVMaJtlP", "yMPcFkT4_r3", "FKXlwydIBaQ", "M6b2lgOlOf7", "iFTTaHv1ICc", "u7MIlPyZ7nm", "ecvhmwIHZEJ", "Fe1mcwnsYhD", "dQ6hJWqK5V", "JhCWB08j5Cm", "LEq_Cpvo1AL", "LZhUTfWnGQ_", "akGyodr4ndn" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarifications! Indeed OGD seems to have a massive forgetting. Looking at the results with a variety of task orders, I have a little concern about this trade-off of plasticity v/s learnability of new tasks, which is in general a big question in the CL community. This problem is in general alleviated to some extent in replay-based methods, however, I cannot use them as the baseline numbers while assessing the validity of the provided numbers since the setting is altogether different. Thanks for the experiments and for providing a theory for the linearly separable case. Lastly, I've decided to keep my rating (6). \n", " Thank you for the reply and comments! We address your questions below.\n\n>- Regarding the choice of h for the number of top eigenvectors used for the algorithm, I am suspecting this is decided based on the eigenspectrum. If so, any intuitive explanation of why the same h worked well for both the datasets? \n\nFor the experiments we provided results for, we choose the same h across all tasks on both datasets and its value is tuned on the Rotated MNIST. One intuitive explanation why the same h might work for both datasets (Permuted MNIST and Rotated MNIST) is that both dataset/tasks are variants of MNISTs (up to the application of the permutation/rotation), so the learned feature space could be quite similar.\n\n\n>- For the Rotated MNIST dataset, from Table 2 it seems that OGD outperforms every method from task 6 onwards (quite significantly for the later tasks) pointing out that DPGrad is not plastic about learning new knowledge at later stages. Is it possible for the authors to include metrics such as backward transfer to quantify forgetting, which is very common metric in Continual Learning literature? \n\nWe have added the backward transfer values in Appendix E, Table 3. Indeed, we agree that OGD seems to be more plastic than DPGrad, however at the expense of incurring a larger forgetting ratio (or negative backward transfer).\n\n\n>- For the Rotated MNIST dataset, how does the task ordering affect the performance? I am suspecting that only one task order is used in the reported results, but I am curious to see what happens if the results are averaged across task orders and also, different random initializations. Checklist 3c states that results don't report the error bars because of a large margin, but I think in classification experiments authors should add error b.\n\nThank you for the question! We have now added the maximum deviations and confidence bars for both the accuracies and the backward transfer, averaged over 5 runs randomized for ordering of the tasks and random seed. The updated tables (Tables 2,3,4) and figures (Figures 4,5) can be found in Appendix E. We are happy to average over more runs, but due to the time constraints of the rebuttal period, this is as much as we could run. We find all the quantities are relatively stable — especially the average accuracy. ", " Thanks for your quick reaction, and apologies for posting in the wrong thread.\n\nTo be more precise, I was thinking about settings where the drift is less abrupt between environments. It seems to me for example that the shift in objective studied in the non-linear case is not common in practice, at least not in classification settings that are the focus here, and I am therefore unclear how much the insights gained with this study would hold in more common environments. 
\n\nThis being said, I have seen in the answer to another reviewer that this goes beyond the scope of this work and is considered a potential future research direction. I therefore increased my score accordingly as mentioned in my previous comment. ", " Thank you as well! \n\nWith respect to \"more realistic settings\": are there some specific datasets or metrics you would like to see? The time until the end of the discussion period is very limited, but we can do our best to try to provide some results to the extent time permits. Similarly, if there are outstanding questions about the theory, we're happy to answer them. \n\n(P.S. This reply somehow ended up in the thread of reviewer s7RD. We're happy to continue here, or we can switch to your (cwgv) thread.) ", " I thank the authors for their clarifications. \n\nWhile certain aspects of the contributions are made clearer, it is still unclear to me how they relate to more realistic settings. \nI increased my score accordingly. ", " Dear Reviewer s7RD,\n\nthank you for putting in the time and effort to review our paper.\n\nWe hope our clarification resolved your concerns: in particular, we clarified how our results fit in a broader, ambitious program to understand the possibilities and limitations of different formalizations of continual learning; we also added a substantial number of new experimental results in Appendix E on more realistic datasets (Permuted and Rotated MNIST); finally, we clarified how our results qualitatively differ from [1] --- both in terms of the framework and the implications of the results. \n\nWe'd love to hear back. If we've sufficiently addressed your concern, could you please reevaluate our score? Thank you very much!\n\n", " Dear Reviewer cwgv, \n\nthank you for putting in the time and effort to review our paper. \n\nWe hope our clarification resolved your concerns: in particular, we clarified that the extractor and classifier are indeed trained together in our framework; we also added a version of the lower bound for ReLU activation to demonstrate the lower bound isn't sensitive to the activation function (Section C, Theorem C.2 in the updated draft); finally, we added substantially more experiments in Appendix E on more realistic datasets (Rotated and Permuted MNIST). \n\nWe'd love to hear back. If we've sufficiently addressed your concern, could you please reevaluate our score? Thank you very much!", " Thanks for providing a way to extend the work to the classification setting, and adding experiments for the common datasets. I have a few comments: \n\n1. Regarding the choice of *h* for the number of top eigenvectors used for the algorithm, I am suspecting this is decided based on the eigenspectrum. If so, any intuitive explanation of why the same h worked well for both the datasets? \n\n2. For the Rotated MNIST dataset, from Table 2 it seems that OGD outperforms every method from task 6 onwards (quite significantly for the later tasks) pointing out that DPGrad is not plastic about learning new knowledge at later stages. Is it possible for the authors to include metrics such as backward transfer to quantify forgetting, which is a very common metric in the Continual Learning literature? \n\n3. For the Rotated MNIST dataset, how does the task ordering affect the performance? I am suspecting that only one task order is used in the reported results, but I am curious to see what happens if the results are averaged across task orders and also, different random initializations. 
Checklist 3c states that results don't report the error bars because of a large margin, but I think in classification experiments authors should add error bars. \n\n", " After reading your answer I still have some concerns about the applicability of these results, but I appreciate the effort you made to provide more insights.", " Thank you for the valuable and encouraging comments! We address your major concerns now: \n\n\n>- “The authors generically refer to continual learning in several parts of the paper but, as they correctly point out, there are several different settings in continual learning and their findings do not apply to all of them. It is particularly important to be clear about it when providing negative results such as the lower bound in Sec. 5.”\n\nThanks for bringing this up—we tried to be very forceful about this point as well. We mentioned it one more time in the theorem statement of the lower bound to be even more explicit (e.g. see the new Theorem 2.12 in the updated draft).\n\n>- The definition of forgetting on L102 works because of the assumption that the learner will get a good accuracy on the current task (L103) but it would be wrong otherwise. Generally speaking, it would be more intuitive to see it expressed as a difference with the performance measured in the past on an observed task.\n\nThanks for pointing this out, we agree with it. Our results were mostly focused on the “realizable setting” — that is, where there is a ground-truth featurizer with corresponding linear predictors that achieves 0 loss. In this case, these two notions are equivalent—but we agree that when there’s model mismatch the above distinction would be valuable. \n\n>- Sec 5 provides a lower bound on the error but it is not clear how much of a practical impact that is going to have. It would be interesting if the authors could discuss which realistic assumptions on the environments could provide a more favorable situation.\n\nThanks for the suggestion! This is indeed one of the most interesting directions for further research: finding assumptions that make the problem tractable (i.e. circumvent known lower bounds), and designing algorithms under such assumptions. We can only speculate, but one reasonable assumption that would avoid our lower bound is some kind of “gradualness” in the change of the classifiers from one environment to the next (as opposed to an arbitrary drift). \n\n>- It is quite hard to understand the potential practical impact of the findings in the paper and the results in Sec. 6 do not help. It is clear that the main contribution of this work is theoretical and the aim is not to outperform some continual learning baselines, but it would be beneficial for the community to understand how big the gap between theory and practice is.\n\nAs part of the rebuttal, we provide empirical results for two algorithms inspired by our theory in Appendix E, added to our updated draft. We provide results on two common benchmarks, Permuted MNIST and Rotated MNIST, for two variants of our algorithm — one is a modification of DPGrad for *multi-class classification*, the other is a modification that allows *non-linear featurizers*. To adapt it to a multi-class classification problem, we view each task as having 10 linear predictors---one for each class. Recall the key idea of DPGrad is to perform (fine-grained) column/row projection for the gradient of the weight matrix, and the column/row space is increased by (at most) 1 after each task. 
In the multi-class case, we force the increase of the row/column space to be (at most) 10 dimensions per task. To adapt it to *non-linear representations*, we note that in the linear case, the column/row space increases by (at most) one dimension after each task and the newly added column/row is essentially the top eigenvector of the feature matrix as it’s close to rank 1. For non-linear features, there is no reason to hope the weight matrix is rank-one, but instead, we perform singular value decomposition (SVD) on the matrix and take the top-h eigenvectors and then add them to the column/row space for some h. \n\nDetailed numbers and figures can be found in Appendix E in the updated draft. In brief, both algorithms alleviate catastrophic forgetting and perform much better than vanilla SGD. Both outperform OGD, which is a strong baseline approach. Furthermore, the performance of both is much more stable than OGD and the accuracy remains at a high level across tasks. By contrast, OGD has large variance across tasks—it obtains high accuracy in recent tasks but much lower accuracy in early tasks (especially in Rotated MNIST).", " Thanks for the valuable comments. We are glad you thought the “theoretical contributions are strong”! We address your concerns below.\n\n>- While a downside of having a memory-based replay method is mentioned in line 48, they are currently SOTA. How does DPGrad change when CL is accompanied by experience replay? \n\nPart of the program we propose in this paper is that in order to understand what is **fundamentally possible and impossible** in different *settings of practical interest* for continual learning, one needs to formalize the intuitive considerations of the setting into a learning problem. \n\nThrough this lens, it’s not surprising that memory replay helps: it is a fundamentally more permissive setting, in which the algorithm is allowed to store examples from prior tasks. This formalization wouldn’t be appropriate, for example, in situations where memory or privacy is a concern. It would certainly be a very interesting research direction to explore fundamental tradeoffs in this setting as well (e.g. *how much* memory is needed to achieve a certain accuracy? Can a *differentially-private* algorithm achieve a certain accuracy?) \n\n>- The setting uses task descriptors and has a different linear classifier for different tasks ($v_i$). However, again from a practical point of view it is a big assumption that much previous CL work (albeit empirical) tends to avoid. Can it be easily proven/disproven that even in a linear feature case if one uses a single head (just one v_i for all the tasks) catastrophic forgetting will happen? Intuitively it seems it cannot prevent forgetting.\n\nYes, in our setup, it is very easy to prove that when only one head is used, catastrophic forgetting cannot be prevented.\n\n>- While the focus of this work is regression, what about the classification problems? Some of the previous work such as Gradient Episodic Memory (GEM) has the same formulation of no forgetting (assuming memory is a decent representation of task distribution) but uses local linear assumptions to provide a working algorithm. Can authors add some discussion on extending this to classification problems?\n\nThere is nothing in our approach that is inherently tied to regression. The formalism we consider (Definition 2.2) is in fact not specific to regression: the labels y can just as easily be discrete (and the loss can of course be changed as well). 
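To make the multi-class adaptation sketched earlier in this thread concrete, here is a rough illustration (the names, shapes, and even the direction of the projection are our own guesses for exposition, not the authors' code):

```python
import torch

def dpgrad_step(W, grad_W, U_col, V_row, lr=0.1):
    # One doubly-projected gradient step on the feature matrix W (d x r):
    # remove the gradient components acting on the protected column
    # subspace U_col (d x k) and row subspace V_row (r x m); both are
    # assumed to have orthonormal columns accumulated from past tasks.
    g = grad_W - U_col @ (U_col.T @ grad_W)   # column-space projection
    g = g - (g @ V_row) @ V_row.T             # row-space projection
    return W - lr * g

def grow_subspaces(W, U_col, V_row, h=10):
    # After finishing a task, add the top-h singular directions of W to
    # the protected subspaces (h=1 per task in the linear theory, h=10
    # for 10-way classification), then re-orthonormalize via QR.
    U, _, Vh = torch.linalg.svd(W, full_matrices=False)
    U_col, _ = torch.linalg.qr(torch.cat([U_col, U[:, :h]], dim=1))
    V_row, _ = torch.linalg.qr(torch.cat([V_row, Vh[:h].T], dim=1))
    return U_col, V_row
```

The only point of the sketch is that moving from regression to 10-way classification changes the per-task growth of the protected subspaces from one direction to one per class; the labels being discrete changes nothing structural. 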
We suspect our positive results can be extended to classification as well. \n\n>- Can the authors mention the following paper in the related work: “Reconciling meta-learning and continual learning with online mixtures of tasks”? Especially, their Section 6.1 has additional simulation-based datasets that can be used in this setting to improve the experiments section.\n\nThanks for bringing this up! We have added the reference to the related work. We ran some additional experiments on Permuted MNIST and Rotated MNIST (see the added Appendix E in the updated draft), but due to time constraints, we didn’t have the time to also run experiments on these datasets. We are happy to do it for the final version of the paper. \n", " Thanks for the valuable comments. We are glad you found our feature-extractor formalism “intuitive”, the analysis of the linear case “interesting” and the paper overall “well written” and “easy to follow”! We think most of the concerns come from a slight misunderstanding of our formalization, and hopefully our clarification below will assuage these worries. \n\n>- You asked: “it would be more interesting in practice to study the case where the feature extractor and the linear classifier are trained simultaneously, taking it closer to e.g. neural networks training. Can the current analysis be extended to cover this case?” \n\n**The extractor and classifier can indeed be simultaneously trained.** Typical algorithms — as well as our algorithm for the linear case — will indeed train them jointly using some form of (possibly regularized, or projected) gradient descent. The only restriction in our setting is that for *past* tasks, the weights of the linear classifier cannot be updated as the feature extractor is updated. The motivation behind this is that to do so, one would need to retrain on the past examples — which often we don’t have access to due to memory or privacy concerns. \n\n>- You asked: “The authors recognize the similarity between their proposed algorithm and the gradient projection family of methods that have been previously proposed. Can they state more clearly the differences and why their proposed method is a novel one”\n\nThe gradient projection family of methods (taking OGD for example) views the entire feature matrix as a vector (of dimension dr) and performs projection on it. Our algorithm, instead, projects twice — with respect to the column and row space, separately. The fine-grained projection is the novel part and it is crucial.\n\n\n>- You asked: “The experimental results are interesting as they mimic exactly the theorem assumptions. They are however very limited. 
It would be interesting to conduct some experiments with real data, even at a small scale (where a linear model can already give reasonable performance), to see if the observations still hold when the assumptions are partly relaxed.”\n\nAs part of the rebuttal, we provide empirical results for two algorithms inspired by our theory in Appendix E, added to our updated draft. We provide results on two common benchmarks, Permuted MNIST and Rotated MNIST, for two variants of our algorithm — one is a modification of DPGrad for *multi-class classification*, the other is a modification that allows *non-linear featurizers*. To adapt it to a multi-class classification problem, we view each task as having 10 linear predictors---one for each class. Recall the key idea of DPGrad is to perform (fine-grained) column/row projection for the gradient of the weight matrix, and the column/row space is increased by (at most) 1 after each task. In the multi-class case, we force the increase of the row/column space to be (at most) 10 dimensions per task. To adapt it to *non-linear representations*, we note that in the linear case, the column/row space increases by (at most) one dimension after each task and the newly added column/row is essentially the top eigenvector of the feature matrix as it’s close to rank 1. For non-linear features, there is no reason to hope the weight matrix is rank-one, but instead, we perform singular value decomposition (SVD) on the matrix and take the top-h eigenvectors and then add them to the column/row space for some h. \n\nDetailed numbers and figures can be found in Appendix E in the updated draft. In brief, both algorithms alleviate catastrophic forgetting and perform much better than vanilla SGD. Both outperform OGD, which is a strong baseline approach. Furthermore, the performance of both is much more stable than OGD and the accuracy remains at a high level across tasks. By contrast, OGD has large variance across tasks—it obtains high accuracy in recent tasks but much lower accuracy in early tasks (especially in Rotated MNIST).", " Thanks for the valuable comments. We are glad you found that our paper constitutes a “step in a very interesting direction” and that the contributions are “novel”. Your concerns seem to mostly revolve around the significance of the paper and how it provides a “step forward”, which we hope to address with this reply. \n\n**Relevance of insights and why the paper is an important step forward**: Thank you for this question. We think both the algorithm for the linear setting and the lower bound in the nonlinear setting offer useful insights, as well as directions for further work. \n\nThe algorithm in the *linear setting* provides an insight into projection-based methods like Orthogonal Gradient Descent (OGD) [Farajtabar et al., 2019] — and simultaneously shows that the way the projection should be chosen can be subtle (i.e. we need to project both on the left and right, i.e. the row and column space). It would certainly be very interesting to prove analogous results beyond linear featurizers—though we point out that understanding gradient-descent-like algorithms even for the basic, supervised learning setting for non-linear networks is essentially wide open except in some restricted settings like NTK (very wide networks). \n\nThe significance of the *lower bounds* is as part of a program to delineate what is **fundamentally possible and impossible** in different formalizations for continual learning. 
In the feature-based formalization we consider (which makes sense when memory is an issue, both for growing the model, and privacy for retaining a memory buffer of prior examples) — our lower bound shows that continual learning for general distributions and non-linear classifiers is impossible. Moreover, this is not due to computational constraints (i.e. NP-hardness) or statistical constraints (i.e. large sample complexity) — but a fundamental obstruction due to the online nature of the setting. Precisely, in some task, it’s possible that multiple pairs of (featurizer, classifier) work, but some featurizers will not be good for the subsequent tasks. \n\nAs part of the rebuttal, we provide empirical results for two algorithms inspired by our theory in Appendix E, added to our updated draft. We provide results on two common benchmarks, Permuted MNIST and Rotated MNIST for two variants of our algorithm — one is a modification of DPGrad for *multi-class classification*, the other is a modification that allows *non-linear featurizers*. To adapt it to a multi-class classification problem, we view each task as having 10 linear predictors---one for each class. Recall the key idea of DPGrad is to perform (fine-grained) column/row projection for the gradient of the weight matrix and the column/row space is increased by (at most) 1 after each task. In the multi-class case, we force the increase of the row/column space to be (at most) 10 dimensions per task. To adapt it to *non-linear representations*, we note that in the linear case, the column/row space increases by (at most) one dimension after each task and the newly added column/row is essentially the top eigenvector of the feature matrix as it’s close to rank 1. For non-linear features, there is no reason to hope the weight matrix is rank-one, but instead, we perform singular value decomposition (SVD) to the matrix and take the top-h eigenvectors and then add them to the column/row space for some h. \n\nDetailed numbers and figures can be found in Appendix E in the updated draft. In brief, both algorithms alleviate catastrophic forgetting and perform much better than vanilla SGD. Both outperform OGD, which is a strong baseline approach. Furthermore, the performance of both is much more stable than OGD and the accuracy remains at a high level across tasks. By contrast, OGD has large variance across tasks—it obtains high accuracy in recent tasks but much lower accuracy in early tasks (especially in Rotated MNIST).", " >- How would you compare your results in section 5 to the conclusions drawn in [1].\n\nThanks for bringing this up! The settings in the two papers are quite different, and the results are (even qualitatively) very different. In brief, the results in [1] have a substantially more “worst case” flavor than ours. For instance, given the generality of their formalization, the NP-hardness results are not surprising: optimization problems with arbitrary distributions/predictor classes are typically NP-hard in the worst case; by contrast, our results, as we mentioned above, show that there are fundamental obstructions due to the online nature of the setting — *not* just due to computational nor statistical constraints. Their memory lower bounds are based on a kind of “counting” argument and when translated to our setting, would just say that $\\Omega(dr)$ on the memory size is needed, which is not interesting since we already have dr parameters in the feature function. 
These differences stem from the fact that we consider a more fine-grained family of algorithms based on featurizers with a separate linear predictor (Definition 2.2). We’ve added some discussion to this effect in the updated draft, Appendix A. \n\n>- “On line 338, do you mean “r=3” instead of “n=3”?”\n\nGood catch! Indeed, we mean k=2, d=3, r=2. \n\n\n>- For instance in paragraph 28-34, the authors discuss only a subset of the CL ideas (a domain-incremental CL setting with regularization-based approaches) while claiming to summarise all of CL.\n\nThanks for pointing this out. Due to space constraints, we included a more detailed discussion of CL broadly in Appendix A, where we discuss other CL settings and approaches (e.g. regularization-based approaches, memory replay, dynamic architectures). We have moved some of this discussion back to the main body in the updated draft. \n", " The paper considers the incremental problem setting of Continual Learning (CL) and investigates the performance of methods with a multi-head architecture. This setup is seen through the lens of feature extraction, in which there is a shared network which extracts features from the input, and a task-specific linear classifier. The paper reviews two cases. In the first, the feature extractor is a linear mapping of the input. For this case, the authors present a gradient-based algorithm which is guaranteed to be able to learn to perform new tasks, while avoiding catastrophic forgetting. In the second case, the feature extractor is a nonlinear mapping of the input. The authors show that there is a CL sequence in which a CL algorithm cannot reliably achieve a high performance. I think that this paper constitutes a step in a very interesting direction, namely, exploring the limitations of different CL approaches. I also think that the contributions made in the paper are novel. \nHowever, I am unsure about the paper’s significance. Most of the presentation is dedicated to developing an algorithm which works well when the target feature map is linear. However, there is no discussion on how the insights gained from this are actually relevant to the non-linear setting. Therefore, I am left with the impression that this cannot be used to further the theoretical research in Continual Learning.\nI found the result for non-linear features to be interesting, however, there is a missing discussion on why this is an impactful result. What is more, I think that [1] already contains a similar result, while reached in a different way. Concretely, that if a CL algorithm does not store any data points of past tasks, it’s not going to achieve optimal performance (please correct me if I’m wrong).\nIn terms of clarity, I do not think that the current state of CL research is correctly depicted in the paper. For instance in paragraph 28-34, the authors discuss only a subset of the CL ideas (a domain-incremental CL setting with regularisation-based approaches) while claiming to summarise all of CL. 
Moreover, I think that the proof sketch in Section 5 needs another iteration in order to improve readability.\n\n[1] “Optimal Continual Learning has Perfect Memory and is NP-HARD” , Jeremias Knoblauch, Hisham Husain, Tom Diethe\n q1) What reasons would you give that this paper provides an important step forward?\nq2) How would you compare your results in section 5 to the conclusions drawn in [1]?\nq3) On line 338, do you mean “r=3” instead of “n=3”?\n I did not find a discussion on the limitations of the current findings.\n", " The authors consider defining a theoretical framework allowing to study the continual learning problem. They look at this problem from the point of view of feature extraction, where features are continually learned on a sequence of environments, followed by task specific linear classifiers. \nThey then provide an analysis of two cases: the case where the features are linear, and the case where they are not. In the first case, they provide an algorithm that is (under certain well defined assumptions) guaranteed to converge and that gives a learner that does equally well on the past and on the present tasks. In the second case, they construct a counterexample on which no continual learning can guarantee a low error with high enough probability. \nThe algorithm provided for the first case is validated by a simulation on synthetic data, and compared to vanilla stochastic gradient descent and to orthogonal gradient descent from which the proposed algorithm is heavily inspired. Strengths: \n* The paper is looking at filling a gap in the literature between theory and practice. It is looking at a problem that can be very useful for the community to improve our understanding of Continual Learning in general.\n* The paper is well written, with a clear structure and easy to follow. \n* Looking at the problem from a feature extractor point of view is quite intuitive for the supervised setting that is the focus here. \n* The analysis of the linear case is very detailed and clear, and the results in this case are sound and interesting. \n\nWeaknesses: \n* While the paper is well structured, it seems quite unbalanced between the linear case and the non linear one. It is focused mostly around the former, while the latter is more interesting in practice. The motivation of the authors is that the linear case provides intuition for the non-linear case. This is not generally true. From a neural network perspective for example, this can hold for the overparameterized setting, in which the input has a significantly lower dimension than the features. The analysis conducted in this paper consider the opposite setting with low dimensional features, and the insights derived from the linear case are likely to break for the non-linear case. \n* The conclusion for the non-linear case seems to stretch the argument more than it allows. The considered setting is so particular and little used in practice (especially for the supervised learning setting). In particular, the choice of the activation function and the changing objective between the environments are not used in practice. Moreover, the choice of learning the features and the linear classifier separately can highly impact the results. Showing that there is no proper continual learner with sufficiently low error in this setting seems insufficient to conclude that it is the case in general for the non-linear case. \n* The experimental results are interesting as they mimic exactly the theorem assumptions. They are however very limited. 
It would be interesting to conduct some experiments with real data, even at a small scale (where a linear model can already give reasonable performance), to see if the observations still hold when the assumptions are partly relaxed. I think the paper can be stronger by clarifying certain points. \n\n- The authors recognize the similarity between their proposed algorithm and the gradient projection family of methods that have been previously proposed. Can they state more clearly the differences and why their proposed method is a novel one? \n\n- While the feature extraction take on continual learning is interesting, it would be more interesting in practice to study the case where the feature extractor and the linear classifier are trained simultaneously, taking it closer to e.g. neural networks training. Can the current analysis be extended to cover this case? \n\n- The study of the non-linear case is based on a quadratic activation function and a polynomial target. Most of the neural networks used nowadays use ReLU activations or their variants, and encode piecewise linear functions. Can this fact change the analysis of the non-linear case, and do the results change in that case, especially when coupled with a joint training of the features and the linear classifiers? \n The paper is mostly theoretical, and a study of societal impact seems out of scope here. \nThe work can however be improved by bringing the non-linear case study closer to settings used in practice. This can lead to a much higher impact and improved understanding of continual learning. ", " In the recent past, there have been several advancements in the field of Continual Learning (CL), a big chunk of which is devoted to empirical research, such as the usage of replay/regularization-based methods. However, there has been much less attention to the theoretical aspects. This work considers continual learning in the regime of regression tasks, where the task descriptors are available. With the standard definition of catastrophic forgetting, the paper demonstrates an algorithm that provably avoids catastrophic forgetting, when the underlying data generating process is such that the features that lead to optimum solutions to regression can be written as a linear mapping of inputs. Under this setting, they propose an algorithm called DPGrad, which is based on a projected gradient descent algorithm. They further show using a special case that in a non-linear regime, it is not possible to prevent catastrophic forgetting. Finally, in a synthetic toy example, they show the efficacy of DPGrad against OGD and plain SGD. \n Strengths:\n\n- Theoretical contributions are strong, even for the simplest case, that is the linear model regime and squared loss, and proofs seem rigorous (although I’ve not completely verified them).\n\n\nWeakness:\n\nWhile it is primarily a theoretical work, most of the weaknesses of this work relate to limited applicability to the empirical setting. In the following, I will list down my questions:\n\n- In Algorithm 1 Line 13, V is defined as the span of v_i. However, in other places as well V is used, but maybe it is referring to the row space (see line 205). This is getting a bit confusing to me.\n- While a downside of having a memory-based replay method is mentioned in line 48, they are currently SOTA. How does DPGrad change when CL is accompanied by experience replay? \n- The setting uses task descriptors and has a different linear classifier for different tasks ($v_i$). 
However, again from a practical point of view it is a big assumption that much previous CL work (albeit empirical) tends to avoid. Can it be easily proven/disproven that, even in a linear feature case, if one uses a single head (just one v_i for all the tasks) catastrophic forgetting will happen? Intuitively it seems it cannot prevent forgetting. \n- While the focus of this work is regression, what about the classification problems? Some of the previous work such as Gradient Episodic Memory (GEM) has the same formulation of no forgetting (assuming memory is a decent representation of task distribution) but uses local linear assumptions to provide a working algorithm. Can authors add some discussion on extending this to classification problems? \n- Can the authors mention the following paper in the related work: “Reconciling meta-learning and continual learning with online mixtures of tasks”? Especially, their Section 6.1 has additional simulation-based datasets that can be used in this setting to improve the experiments section.\n\n\n As mentioned in the main review. As mentioned in the main review.", " The authors introduce DPGrad, an algorithm for task-incremental continual learning focused on the case where the feature extractors are assumed to be linear. They also provide a lower bound on the error for a specific setting with non-linear feature extractors.\nDPGrad is compared with SGD and OGD on a simulated environment. Strengths:\n* The paper rigorously defines the setting and assumptions before introducing the new solution.\n* The work in this paper tries to provide additional understanding of the theoretical aspects involved in the continual learning process, a direction which is heavily unexplored in the CL community due to the difficulty of the topic.\n\nWeaknesses:\n* The authors generically refer to continual learning in several parts of the paper but, as they correctly point out, there are several different settings in continual learning and their findings do not apply to all of them. It is particularly important to be clear about it when providing negative results such as the lower bound in Sec. 5.\n* The definition of forgetting on L102 works because of the assumption that the learner will get a good accuracy on the current task (L103) but it would be wrong otherwise. Generally speaking, it would be more intuitive to see it expressed as a difference with the performance measured in the past on an observed task.\n* It is quite hard to understand the potential practical impact of the findings in the paper and the results in Sec. 6 do not help. It is clear that the main contribution of this work is theoretical and the aim is not to outperform some continual learning baselines, but it would be beneficial for the community to understand how big the gap between theory and practice is.\n* Sec 5 provides a lower bound on the error but it is not clear how much of a practical impact that is going to have. It would be interesting if the authors could discuss which realistic assumptions on the environments could provide a more favorable situation.\n\n* It would be great if the authors could provide more empirical evidence of the performance of their algorithm (maybe on standard benchmarks)\n* See the other points in the paragraph above. --" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 3 ]
[ "SMuSSxXao_3", "FKXlwydIBaQ", "yMPcFkT4_r3", "uN4gJfGLIVt", "vAbyVMaJtlP", "JhCWB08j5Cm", "LEq_Cpvo1AL", "u7MIlPyZ7nm", "iFTTaHv1ICc", "akGyodr4ndn", "LZhUTfWnGQ_", "LEq_Cpvo1AL", "JhCWB08j5Cm", "JhCWB08j5Cm", "nips_2022_z9cpLkoSNNh", "nips_2022_z9cpLkoSNNh", "nips_2022_z9cpLkoSNNh", "nips_2022_z9cpLkoSNNh" ]
nips_2022_azBVn74t_2
DigGAN: Discriminator gradIent Gap Regularization for GAN Training with Limited Data
Generative adversarial nets (GANs) have been remarkably successful at learning to sample from distributions specified by a given dataset, particularly if the given dataset is reasonably large compared to its dimensionality. However, given limited data, classical GANs have struggled, and strategies like output-regularization, data-augmentation, use of pre-trained models and pruning have been shown to lead to improvements. Notably, the applicability of these strategies is often constrained to particular settings, e.g., the availability of a pretrained GAN, or they increase training time, e.g., when using pruning. In contrast, we propose a Discriminator gradIent Gap regularized GAN (DigGAN) formulation which can be added to any existing GAN. DigGAN augments existing GANs by encouraging a narrowing of the gap between the norm of the gradient of a discriminator's prediction w.r.t. real images and w.r.t. the generated samples. We observe this formulation to avoid bad attractors within the GAN loss landscape, and we find DigGAN to significantly improve the results of GAN training when limited data is available.
Accept
The paper proposes a regularizer for limited-data GAN training. All three reviewers thought the experiments were adequate to demonstrate the method's usefulness and the writing was clear. The paper's biggest weakness seems to be unconvincing conceptual intuition and lack of theoretical justification (pointed out by reviewers Mjjy and Z5nH). This is a borderline paper but I recommend acceptance.
train
[ "os_tWnqu1MY", "dH7bPl81DlG", "FMwd0tdiWPo", "Ldjvz55cI-l", "sD7k7UOWTQ", "3_LaHeI1eho", "S6cRpamMaIs", "KI0Y-gbshqL", "ykSCEjDybf9", "DuqSHFBsYBc" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate authors' effort on addressing my concerns. After checking the responses, as well as other reviewers' comments and authors feedback, my concerns have been well addressed. ", " Thank you for the detailed responses to my questions, and considering including the additional 2D experiments as parts of the supplementary. The main concern I had regarding the approaches to pairing data and how it affects the performance appear to be answered. It seems that there is some variability in performance based on how these pairs and picked and evaluated, depending also, on the kind of training conditions (say, data availability, etc).\n\nI think the authors could include a summary of these results as part of the experimental results and ablation study in Sec. 4, and consider including the detailed analysis as part of the Supplementary, as some of these questions might come up when others read the paper as well. The results and observations of the paper overall are novel and insightful, with applications to multiple GAN flavors, and I am still in favor of accepting the paper. I am raising my score from a 5 to a 6. ", " **Question: Theoretical explanation of the relation between DIG and the training failure.**\n\n=> Thanks for the kind explanation, but this does not resolve my concern (W1).\n\n**Question: FID scores in this paper do not match the results of previous papers [1, 2].**\n\n=> Thank you! This resolve my concern (W2). \n\n**Suggestion**: Refer to Figure 6 and Table 7 in the paper (https://arxiv.org/abs/2206.09479?context=cs) to compare DigGAN with DiffAugment, ADA, LeCam, and APA.\n\n**Question: Experiments using high-resolution images are missing, Comparision with Adaptive Discriminator Augmentation (ADA) is missing.**\n\n=> I compared DigGAN's results with values in DiffAugment paper (page 8). DigGAN seems useful when trying to train a GAN in situations where data is scarce.\n\n**Question: Which gradient (gradient of a discriminator's prediction w.r.t real or fake sample) makes the DIG value large?**\n\n=> I believe providing the source of a large DIG value makes the authors' paper more convincing. \n\n**Question: Missing discussion of limitations, and insufficient discussion of societal impact.**\n\n=> Good. Thank you.\n\nOverall, I am satisfied with the explanations provided by the authors. Some of the concerns I raised are not fully addressed, but I believe this paper has enough quality to be accepted at Neurips. I will raise the score from 4 to 6.\n\nCheer,\n\nReviewer Z5nH\n", " **Question: Theoretical explanation of the relation between DIG and the training failure.**\n\n**Answer:** We tried to analyze DigGAN from a theoretical perspective for quite a while actually. But a theoretical analysis of the studied loss is extremely challenging because of the way batching is performed. We could not find a compelling theoretical result that is meaningful, and we didn’t want to include derivations that don’t add insights.\n\n&nbsp;\n&nbsp;\n\n**Question: FID scores in this paper do not match the results of previous papers [1, 2].**\n\n**Answer:** Thanks for pointing this out. We use a different version of the code from [1, 2]. [1, 2] use the StudioGAN codebase for DiffAug, while we use the original codebase provided by the DiffAug authors (https://github.com/mit-han-lab/data-efficient-gans). Thus, compared to [1,2], our results are closer to [a]. 
Concretely, FIDs are 9.59, 21.58, and 39.78 for 100%, 20%, and 10% data availability in [a], and ours are 10.53, 21.38, and 36.35 for 100%, 20%, and 10% data availability. The difference in the case of 100% data availability is due to the randomness in training and evaluation. More importantly, our baseline results for 20% and 10% data availability are better than the ones reported in the baseline [a]. Since limited data is the main focus, we opted for the stronger baseline numbers.\n\n[a] S. Zhao, Z. Liu, J. Lin, J.-Y. Zhu, and S. Han. Differentiable augmentation for data-efficient GAN training. In Advances in Neural Information Processing Systems, 2020.\n\n&nbsp;\n&nbsp;\n\n**Question: Experiments using high-resolution images are missing.**\n\n**Answer:** We provide 256x256 resolution image generation results. Larger-scale experiments are beyond our compute budget.\nSpecifically, we conduct new low-shot generation experiments with StyleGAN-V2+ADA. We run experiments on the 100-shot Obama, 100-shot Grumpy Cat, and AnimalFace datasets (160 cats and 389 dogs) provided by [a]. All datasets have a 256×256 resolution. We use the maximum training length of 600k images for all experiments. We set the regularization weight=100. We provide the results below. Results show that DigGAN achieves consistent gains on all datasets.\n\n| | 100-shot Obama | 100-shot Grumpy Cat | AnimalFace Dog | AnimalFace Cat |\n|---------------------|----------------|---------------------|-----------------|----------------|\n| StyleGAN+ADA | 49.78 | 27.34 | 66.25 | 41.40 |\n| StyleGAN+ADA+DigGAN | **41.34** | **26.75** | **59.00** | **37.61** |\n\n&nbsp;\n&nbsp;\n\n**Question: Comparison with Adaptive Discriminator Augmentation (ADA) is missing.**\n\n**Answer:** Please see the table above for a comparison to ADA.\n\n&nbsp;\n&nbsp;\n\n**Question: Which gradient (the gradient of a discriminator's prediction w.r.t. the real or fake sample) makes the DIG value large?**\n\n**Answer:** We don't observe any specific gradient norm magnitude pattern, regardless of dataset and model structure. For instance, \n- The gradient norm for fake data is larger for SN-GAN+10%CIFAR10.\n- The gradient norm for real data is larger for BigGAN+20%CIFAR10/CIFAR100.\n- The gradient norm for real data is larger for BigGAN+10%CIFAR10/CIFAR100. \n- The gradient norm for fake data is larger for BigGAN+50% TinyImageNet.\n- The gradient norm for real data is larger for BigGAN+10% TinyImageNet.\n\n&nbsp;\n&nbsp;\n\n**Question: Missing discussion of limitations, and insufficient discussion of societal impact.**\n\n**Answer:** One limitation of our work: results still need to be improved for very scarce data (e.g., 10% Tiny ImageNet, FID=84.27 without DiffAug and 51.18 with DiffAug). We hope our work encourages more research in this direction.\n\nWe update the societal impact discussion as follows. This work can have both positive and negative effects. On the positive side, training GANs with limited real-world data is important for fields where it is expensive to collect large datasets. For instance, our research can make AI methods more powerful in rare disease diagnosis, antique authentication, etc. On the negative side, improved generative methods can be used to generate fake data and spread misinformation via DeepFakes.\n", " **Question: Missing and incorrect interpretation of related work. 
[a] Data-Efficient Instance Generation from Instance Discrimination; [b] Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data.**\n\n**Answer:** Thanks a lot for pointing out [a]. We will discuss [a] in the related work section. Our approach differs from [a]: we perform binary classification (i.e., real vs. fake) and propose a new regularizer that encourages the Discriminator gradIent Gap of a GAN to be small. In contrast, [a] requires the discriminator to do instance-level classification (i.e., distinguish every individual real and fake image instance as an independent category). \nFor [b], we understand it uses generated data as an “augmentation” for the real data (see the paper title, which mentions the word “augmentation”). Hence, we think it can be referred to as performing “augmentation”. We will clarify this in the related work section.\n\n&nbsp;\n&nbsp;\n\n**Question: Insufficient experiments on ADA and StyleGAN-v2, which are the main focus of prior work.**\n\n**Answer:** We don't think prior work mainly focuses on StyleGAN and unconditional image synthesis. E.g., [1, 2, 3] conduct the majority of their experiments with BigGAN and conditional image synthesis. \nTo provide additional evidence, we conduct new low-shot generation experiments with StyleGAN-V2+ADA. We run experiments on the 100-shot Obama, 100-shot Grumpy Cat, and AnimalFace datasets (160 cats and 389 dogs) provided by [3]. All datasets are at 256×256 resolution. We use the maximum training length of 600k images for all experiments. We set the regularization weight=100. We provide the results below. Results show that DigGAN achieves consistent gains on all datasets.\n\n| | 100-shot Obama | 100-shot Grumpy Cat | AnimalFace Dog | AnimalFace Cat |\n|---------------------|----------------|---------------------|-----------------|----------------|\n| StyleGAN+ADA | 49.78 | 27.34 | 66.25 | 41.40 |\n| StyleGAN+ADA+DigGAN | **41.34** | **26.75** | **59.00** | **37.61** |\n\n[1] H.-Y. Tseng, L. Jiang, C. Liu, M.-H. Yang, and W. Yang. Regularizing generative adversarial networks under limited data. In Proceedings of CVPR, 2021.\n\n[2] T. Chen, Y. Cheng, Z. Gan, J. Liu, and Z. Wang. Data-efficient GAN training beyond (just) augmentations: A lottery ticket perspective. In NeurIPS, 2021.\n\n[3] S. Zhao, Z. Liu, J. Lin, J.-Y. Zhu, and S. Han. Differentiable augmentation for data-efficient GAN training. In NeurIPS, 2020.\n\n&nbsp;\n&nbsp;\n\n**Question: Theoretical analysis in Sec. 3.3.**\n\n**Answer:** We tried to analyze DigGAN from a theoretical perspective for quite a while, actually. But a theoretical analysis of the studied loss is extremely challenging because of the way batching is performed. We could not find a compelling theoretical result that is meaningful, and we didn't want to include derivations that don't add insights.\n\n&nbsp;\n&nbsp;\n\n**Question: Insufficient experiments to validate the effectiveness of the proposed method.**\n\n**Answer:** We kindly disagree that the experiments are insufficient. We provide a significant number of experiments on both synthetic and real data to show the efficacy and efficiency of DigGAN. \n\n&nbsp;\n&nbsp;\n\n**Question: Missing discussion of limitations.**\n\n**Answer:** One limitation of our work: results still need to be improved for very scarce data (e.g., 10% Tiny ImageNet, FID=84.27 without DiffAug and 51.18 with DiffAug). 
We hope our work encourages more research in this direction.", " **Question: 2-D synthetic experiments**\n\n**Answer:** We use 1-D synthetic experiments to visualize as many details as possible. To answer the question, we conducted four new 2-D synthetic experiments. The results are provided below, and they are consistent with the results of the 1-D experiments in the paper. \nIn all four experiments, we consider the task of generating a 5-Gaussian mixture with modes evenly spaced on a circle of radius 1. The generator is a 4-layer fully connected network (FCN) with 8 neurons per layer and leaky ReLU activation. The discriminator is a 4-layer FCN with 128 neurons per layer and ReLU activation. The experiments follow the same logic as the 1-D experiments in the paper. All experiments are run for 30k iterations.\n\nFor each result, we provide 3 plots: 1. the real data and generated data distributions, 2. the discriminator's predictions sigmoid(D(x)) over the 2D space, 3. the dD/dx distribution over the 2D space.\n\nExperiment 1: vanilla GAN can get stuck in unregularized-local-attractors. We train a vanilla GAN from scratch with a random initialization D0 and G0. We get D1 and G1 at the end of training. D1 and G1 end up covering only two of the five clusters (a bad local attractor). [exp1 repository: https://anonymous.4open.science/r/DigGAN_rebuttal-A8B7/exp1]\n\nExperiment 2: vanilla GAN cannot escape unregularized-local-attractors. We verify that the state D1, G1 is a stable local attractor for a vanilla GAN, by testing if adding noise to the generated data from the beginning would lead to a successful escape. We experiment with three levels of noise variance: 0.1, 1, and 10. None of them help the vanilla GAN escape from the bad local attractor. [exp2 repository: https://anonymous.4open.science/r/DigGAN_rebuttal-A8B7/exp2]\n\nExperiment 3: DigGAN regularization helps to avoid unregularized-local-attractors. We train a DigGAN from the same starting point D0 and G0 used in Experiment 1. We observe that DigGAN ends up covering all 5 clusters at the end of training [exp3.1 repository: https://anonymous.4open.science/r/DigGAN_rebuttal-A8B7/exp3.1]. We also train DigGAN starting from D1 and G1 without adding noise. DigGAN also escapes from the bad local attractor and converges to a good global attractor in the end [exp3.2 repository: https://anonymous.4open.science/r/DigGAN_rebuttal-A8B7/exp3.2]. \n\nExperiment 4: DigGAN regularization helps to escape unregularized-local-attractors. Following the setting in our paper, we initialize with D2 and G2, which only cover 2 out of 5 clusters and for which | ||∂D/∂xR||_2 − ||∂D/∂xF||_2 | < 1e-2. Training a DigGAN from D2 and G2, we observe that DigGAN ends up escaping and covering all 5 clusters [exp4 repository: https://anonymous.4open.science/r/DigGAN_rebuttal-A8B7/exp4].\n\nTo sum up, the observations in these 2-D experiments are consistent with the observations in the 1-D experiments. We think 1-D experiments provide cleaner visualizations. We will add these 2-D experiments to the appendix.\n\n&nbsp;\n&nbsp;\n\n**The notation is confusing at times, but this is just a personal opinion**\n\nThanks a lot for the suggestion; we'll simplify.\n\n&nbsp;\n&nbsp;\n\n**Some proofreading would help.**\n\nThanks a lot, we'll fix it.\n\n&nbsp;\n&nbsp;\n\n**Question: How are real and fake samples drawn in each batch, randomly or from the same class? How does pairing of the real and fake data affect results?**\n\n**Answer:** Great observation. 
We draw real samples and fake samples randomly in pairs, and we studied the pairing effect at length before submission too: 1) To reduce the effect of a particular pair on the loss, we use the exponential moving average (EMA) (line 127), which ensures that the trained model isn’t significantly influenced by one pair. We observe EMA helps to significantly stabilize the training. 2) Other losses which we considered to mitigate this effect, e.g., difference of sample-averaged norms, didn’t yield results that were as good as random pairing.\n\nNote, for all experiments, we pair samples irrespective of their class, for both conditional GAN and unconditional GAN. We also run experiments where we draw real and fake samples from the same class for CIFAR-100. We provide the results below, which exhibit no significant difference. This is perhaps due to the use of EMA.\n\n| | 100% CIFAR-10 | 10% CIFAR-10 | 100% CIFAR-100 | 10% CIFAR-100 |\n|-------------------------|---------------|--------------|----------------|---------------|\n| random pairing (DigGAN) | **9.74** | 23.75 | **12.93** | **27.61** |\n| same-class pairing | 10.12 | **21.53** | 13.14 | 28.08 |\n\n&nbsp;\n&nbsp;", " **Question: How do permutations of the real and fake samples affect the penalty?**\n\n**Answer:** In our answer to the previous question, we studied class-conditional pairing and didn’t find a significant difference. \nIn addition, here, we show how permutations of the real and fake samples affect the results. Specifically, we pair ||∂D/∂xR|| and ||∂D/∂xF|| by sorting the norms within each batch instead of pairing randomly. We conduct the comparison experiments on CIFAR-10 and CIFAR-100 with the BigGAN structure. We use the default regularization weight. We observe that the “sorting norm permutation” works better with 100% data availability for both datasets. However, DigGAN works much better than the “sorting norm permutation” method with 10% data availability. We think the random pairing is important.\n\n| | 100% CIFAR-10 | 10% CIFAR-10 | 100% CIFAR-100 | 10% CIFAR-100 |\n|-----------------------------------|---------------|--------------|----------------|---------------|\n| random pairing (DigGAN) | 9.74 | **23.75** | 12.93 | **27.61** |\n| sorting norm permutation pairing | **9.09** | 27.64 | **12.86** | 47.75 |\n\n&nbsp;\n&nbsp;\n\n**Question: Have the authors considered batch-level sample averages of the gradient norms?**\n\n**Answer:** Great question. Indeed, before writing the paper we considered many gradient penalty variants. But none worked as well as the one reported in the paper. We provide regularization variants and results below. 
For each variant, we search for the weight \\lambda over the set {0.1, 1, 10, 100, 10^3, 10^4, 10^5}.\n\n| Regularization format | 100% CIFAR-100 FID | 10% CIFAR-100 FID |\n|---------------------------------------------------------------------------------------|--------------------|-------------------|\n| DigGAN: ( (\\|\\|∂D/∂xR\\|\\|_2 - \\|\\|∂D/∂xF\\|\\|_2) **2 ).mean() * \\lambda | 12.93 | **28.61** |\n| Variant 1: ( (\\|\\|∂D/∂xR - ∂D/∂xF\\|\\|_2) **2 ).mean() * \\lambda | **12.74** | 32.97 |\n| Variant 2: ( (\\|\\|∂D/∂xR\\|\\|_2^2).mean() - (\\|\\|∂D/∂xF\\|\\|_2^2).mean() )**2 * \\lambda | 32.97 | 32.97 |\n| Variant 3: ( (\\|\\|∂D/∂xR\\|\\|_2).mean() - (\\|\\|∂D/∂xF\\|\\|_2).mean() )**2 * \\lambda | 12.40 | 50.75 |", " The authors propose a novel approach to training GANs with limited data, one that is motivated by the gap in the norms of the gradients between the target and generated images. They demonstrate applications of the proposed regularizer on various BigGAN architectures with and without data augmentation. **Originality and Significance:** Limited data (LD) training with GANs has been gaining popularity in recent years, and the proposed regularization approach to training shows potential. The fact that it can be used in parallel with other schemes for LD training such as DiffAug is an advantage. The gradient gap in the discriminator is generally well analyzed and adequate experimental validations are provided. \n\n**Presentation and Clarity:** The paper is clear to read and the flow is consistent. \n - The synthetic experiments with Vanilla GAN consider a simplistic scenario — a similar experiment in 2D might have been more insightful, as the effect of different norms on the gradient vector (L1, L2) is clearer at least in 2D. Nevertheless, the given experiments help to give a clear intuition on the proposed method. \n - The notation is confusing at times. For example, the additional parentheses when indexing time $t$ seem unnecessary, but this is just a personal opinion.\n - Some proofreading would help, as there are a few typos here and there. L233 L146 Forth -> Fourth. L262 improvement to 32.56 -> improvement **by** 32.58, just to name a few. \n\n**Literature Survey:** Most of the relevant literature has been cited and the paper includes relevant discussions on related works. - **Loss Formulation:** I have a fundamental concern with the form of the loss $\\left( \\left\\| \\frac{\\partial D}{\\partial x_R} \\right\\| - \\left\\| \\frac{\\partial D}{\\partial x_F} \\right\\|\\right)$. It is unclear how the samples are drawn. If $x_R$ and $x_F$ are drawn randomly in pairs, then the particular pairing of the real and fake images will affect how this loss performs. While this issue might not be visible in the synthetic 1D experiment presented, on the image scale, this might have an effect. For example, are we making sure the pairs are drawn from the same class? In class-conditional GANs, it might be possible, but, in unconditional variants, this cannot be guaranteed, and there is a strong possibility that the manifold nature of each class might itself affect how these terms interact. \n - To address the above, maybe the authors could consider some ablation study of how, given the choice of the sample batch, permutations of the real and fake samples affect the penalty?\n - Alternatively, have the authors considered batch-level sample averages of the gradient norms? This would essentially bring the terms closer to the R1 and R2 penalties. 
—", " This paper proposes a new method for improving gan training under limited data. The key idea is to regularize the gradient norm of discriminator between real and fake samples. Authors have done empirical studies showing that this regularization can help avoid bad attractors within the GAN loss landscape. Authors have applied the proposed regularization on BigGAN, leading to improvement across multiple datasets. Strengths:\n+ The manuscript is easy to follow\n+ The empirical studies provided by the authors are important to understand the effectiveness of proposed regularization\n\nWeaknesses:\n- Missing related work, and incorrect interpretation of related work. Specifically, [b] ([17] in manuscript) is not doing data augmentation in my understanding. And [a] is also related but is missed in the manuscript. Since both [a] and [b] are not doing data augmentation, I believe they should be compared as well.\n\n[a] Data-Efficient Instance Generation from Instance Discrimination\n[b] Deceive d: Adaptive pseudo augmentation for gan training with limited data.\n\n- while at L30 authors claim \"data augmentation can enhance the results, but the benefit is limited with insufficient data (Tab. 3).\" In table 3 authors didn't evaluate sufficient data augmentation methods to support this claim.\n\n- in the experiments, authors only compared to DiffAug and R_LC, which is insufficient. Authors should also include ADA, etc.\n\n- while previous works targeting gan training with limited data mainly evaluate their methods on StyleGAN and unconditional image synthesis, authors mainly evaluate the proposed method on BigGAN and conditional image synthesis. It's better to stay consistent with previous methods to ensure a fair comparison. Or at least expand table 4.\n\n- It's great that authors provide empirical studies in Sec.3.3. It's better if authors include a theoretical analysis. \n\n\n\n\n Overall I think the idea proposed in this manuscript looks interesting and effective to some extent. But I think the experiments are insufficient to validate the effectiveness of the proposed method. Detailed questions are covered in paper weaknesses. Authors didn't include the discussion of limitations in the main manuscript. I think authors can briefly discuss in what cases the proposed method may fail or lead to marginal improvement.\n", " This paper introduces a new data-efficient training technique for GAN. The technique named Discriminator gradIent Gap (DIG) regularization aims to equalize the changes of discriminator’s judgments w.r.t its inputs (real and fake data) so that the discriminator can effectively learn representations for adversarial training without an imbalance issue. Experimental results demonstrate that GAN with the proposed DIG regularization exhibits better image generation results than the previous techniques (LeCam regularization and DiffAug). Strengths\n\n[S1] I think the proposed method is new to the GAN community. \n\n[S2] The empirical finding that the gap between the norms of the gradients of discriminator’s predictions increases when the GAN is trained with fewer data, seems to be very useful for future development of data-efficient training methods. \n\n[S3] The experiments in the paper show that the proposed DIG regularization is helpful for GAN training in data-hungry situations. 
To demonstrate the effectiveness of DIG regularization, the authors utilize various datasets (CIFAR10, CIFAR100, Tiny ImageNet, and CUB200), which means the proposed method can be used for real-world applications. \n\n[S4] The paper is well written and easy to follow. \n\nWeaknesses\n\n[W1] I think a theoretical explanation of the relation between DIG and the training failure is needed. Although experimental results show that applying DIG regularization can resolve the problem of getting stuck in local attractors, I am not sure why this is possible by imposing DIG regularization.\n\n[W2] FID scores in this paper do not match the results of previous papers [1, 2]. The FIDs of BigGAN on CIFAR10 in the papers [1, 2] are 8.57 and 8.08, respectively, while the FID of BigGAN in this paper is 10.53 (Table 1). Performance discrepancies occur for experiments across the paper, which significantly lowers the credibility of the paper.\n\n[W3] Experiments using high-resolution images are missing. Verifying the usefulness of DIG regularization using MetFaces, AFHQv2, or FFHQ will make the paper more convincing.\n\n[W4] Comparison with Adaptive Discriminator Augmentation (ADA) is missing. Since ADA is a standard method to train GANs with limited data, I think comparing DIG regularization with ADA is necessary. \n\n[1] Chen, T., Cheng, Y., Gan, Z., Liu, J., & Wang, Z. (2021). Ultra-Data-Efficient GAN Training: Drawing A Lottery Ticket First, Then Training It Toughly. ArXiv, abs/2103.00397.\nhttps://openreview.net/forum?id=BBVcs78PEDA\n\n[2] Kang, M., Shim, W., Cho, M., & Park, J. (2021). Rebooting ACGAN: Auxiliary Classifier GANs with Stable Training. NeurIPS. [Q1] Which gradient (the gradient of a discriminator's prediction w.r.t. the real or fake sample) makes the DIG value large? I guess the norm of the gradient w.r.t. the real sample might be very large, as the discriminator tends to memorize training samples and predicts samples other than the training samples to be fake regardless of the realism of the samples. I cannot find any section that explains the limitations of the paper. Also, the described negative societal impact is not sufficient. I hope the authors will expand the discussion of the pros and cons of the paper and its societal impact." ]
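To make the gradient-gap regularizer compared in the tables above concrete, here is a minimal PyTorch sketch of a DIG-style penalty. This is an editorial illustration reconstructed from the formulas quoted in this thread, not the authors' released code: the discriminator `D`, the input tensors, and the default weight `lam` are illustrative assumptions, and the EMA smoothing the authors mention is omitted for brevity.

```python
import torch

def dig_penalty(D, x_real, x_fake, lam=1e3):
    # Detached leaf copies so we can differentiate D w.r.t. its inputs.
    x_r = x_real.detach().requires_grad_(True)
    x_f = x_fake.detach().requires_grad_(True)
    # Summing D's outputs yields per-sample input gradients in one call
    # (assuming D scores each sample independently).
    g_r = torch.autograd.grad(D(x_r).sum(), x_r, create_graph=True)[0]
    g_f = torch.autograd.grad(D(x_f).sum(), x_f, create_graph=True)[0]
    n_r = g_r.flatten(1).norm(2, dim=1)  # per-sample ||dD/dx_R||_2
    n_f = g_f.flatten(1).norm(2, dim=1)  # per-sample ||dD/dx_F||_2
    # Squared gap over randomly matched real/fake pairs (batch order).
    return lam * ((n_r - n_f) ** 2).mean()
```

In use, the penalty would simply be added to the usual discriminator loss; the random pairing discussed above comes from batch order when the data loader shuffles.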
[ -1, -1, -1, -1, -1, -1, -1, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "sD7k7UOWTQ", "S6cRpamMaIs", "Ldjvz55cI-l", "DuqSHFBsYBc", "ykSCEjDybf9", "KI0Y-gbshqL", "KI0Y-gbshqL", "nips_2022_azBVn74t_2", "nips_2022_azBVn74t_2", "nips_2022_azBVn74t_2" ]
nips_2022_mE1QoOe5juz
Tiered Reinforcement Learning: Pessimism in the Face of Uncertainty and Constant Regret
We propose a new learning framework that captures the tiered structure of many real-world user-interaction applications, where the users can be divided into two groups based on their different tolerance on exploration risks and should be treated separately. In this setting, we simultaneously maintain two policies $\pi^{\text{O}}$ and $\pi^{\text{E}}$: $\pi^{\text{O}}$ (``O'' for ``online'') interacts with more risk-tolerant users from the first tier and minimizes regret by balancing exploration and exploitation as usual, while $\pi^{\text{E}}$ (``E'' for ``exploit'') exclusively focuses on exploitation for risk-averse users from the second tier utilizing the data collected so far. An important question is whether such a separation yields advantages over the standard online setting (i.e., $\pi^{\text{E}}=\pi^{\text{O}}$) for the risk-averse users. We individually consider the gap-independent vs.~gap-dependent settings. For the former, we prove that the separation is indeed not beneficial from a minimax perspective. For the latter, we show that if choosing Pessimistic Value Iteration as the exploitation algorithm to produce $\pi^{\text{E}}$, we can achieve a constant regret for risk-averse users independent of the number of episodes $K$, which is in sharp contrast to the $\Omega(\log K)$ regret for any online RL algorithms in the same setting, while the regret of $\pi^{\text{O}}$ (almost) maintains its online regret optimality and does not need to compromise for the success of $\pi^{\text{E}}$.
Accept
The new two-group RL framework is interesting, even though it is somewhat restricted to assume the exact same model for both groups. Both the gap-independent and the gap-dependent settings are discussed properly, with lower and upper bounds. Overall we believe that the paper is worth publishing at NeurIPS.
train
[ "4RK5prrO4M", "QJ71Z_ldNlV", "Pr3FalH_ZAV", "Nlm-GWrw32D", "MLrEi0xyc-v", "G91xWNFRILa", "IBlFAuSuNB2", "QddPZvX_AfA", "98UtLdZrC0", "xjL_YDd9r7F", "Rz9UXTOULf2", "366hMTHeNx", "WHHu5_Z7gf" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " > plan 2 has high uncerntainty (large confidence interval) but its true expected value has chance to be higher than plan 1.\n\nSuch a type of uncertainty may be related to what is called \"ambiguity\" in decision-making theory. Anyway, people usually avoid an arm with large confidence interval (called ambiguity aversion) especially in safety-related areas such as medical treatments. Assuming ambiguity-seekers is not very convincing for me.\n\n> we also constrained the regret by $G^O$ is near-optimal\n\nIf half are explorer, this seems obvious, considering only that group. The expected value of reward for the $T$-th person (in groups E and O combined) seems to worsen as the proportion of group E increases.\n\nI just want to emphasize that I fully recognize the contribution in the theoretical aspect. However, if you are aiming for more points in terms of practicality, such as citing some use-cases that actually have a preference for ambiguity.", " Thanks for letting us know your further concerns.\n\n### The medical example\n\nWe are feeling that, maybe similar to Reviewer e8u5, the ''risk'' considered by reviewer is separated from and orthogonal to that we consider in the paper. Here we make a brief explanation and please also refer to our response to Reviewer e8u5 for more clarification.\n\nBy risk, we refer to the risk resulting from **uncerntainty during the learning process**. Take the medical treatment setting as a concrete example, we do not know the exact value of each treatment plan, and we can only estimate it by data with some uncerntainty. Consider the case when we have two treatment plans, and suppose given the history data, plan 1 has a good estimated expected value and also low uncerntainty (small confidence interval), while plan 2 has high uncerntainty (large confidence interval) but its true expected value has chance to be higher than plan 1. By taking plan 2, the patient may suffer the risk resulting from uncertainty (because of limited data). In that case, some patients will still prefer plan 2 because they value more about the chance that plan 2 has higher true value and we refer them as $G^O$, while another group of patients may prefer plan 1 to avoid the risk resulting from uncertainty and we refer them as $G^E$.\n\nMoreover, in fact, under our meaning of risk, our risk-tolerated group $G^O$ do not need to ''suffer not only high variance but also low expectation of the outcome''. We only expect $G^O$ to balance the exploration and exploitation as normal optimal online learning algorithms (see our summarized objective in lines 63-64). In our algorithms, while establishing the provable benefits for $G^E$, we also constrained the regret by $G^O$ is near-optimal.\n\n\n### The UWB setting\nThanks for making it clear. The original UWB example is a bandit setting, since the original ''decoupling exploration and exploitation'' paper only focused on bandit setting. We believe the MDP setting extended by considering state transition is reasonable and practical. We used UWB as an example in our rebuttal just in order to highlight that, given any practical MDP examples in decoupling setting (including those extended from bandit setting), our results can be directly applied into them.", " > Concrete Examples\n> ・・・people's tolerance on risk is independent w.r.t. 
their body condition and their reaction to the treatment plan\n\nIn the medical case, it seems unnatural for me to assume risk-seekers, who want to suffer not only high variance but also low expectation of the outcome (high regret). Risk appetite is usually a tradeoff between high returns and low variances.\n\n> Moreover, given the connection between ``decoupling exploration and exploitation setting'' [1,2] and our framework, our algorithms and guarantees can be directly applied to their settings, e.g. ultra-wideband (UWB) communications in [1].\n\nShould state transitions be considered in the UWB problem? The extension to the MDP seems a bit application-oriented compared to the earlier work on MAB, but it does not seem to have broad applicability. Therefore, while interesting, it cannot be rated as absolutely worthy of publication in top venues. With this, I keep the overall rate of 6.\n", " Thank you for your answer, I do not have further questions and I have increased my score.", " We thank the reviewer for the valuable comments.\n\n### Concrete Examples\nWe believe that in the scenarios mentioned in Sec. 1 (lines 30-43), there are some real-world cases where the factors determining risk tolerance are **independent** w.r.t. the transition and reward of the model, so that we can assume that both groups share the model.\nFor example, in some medical treatment scenarios, people's tolerance on risk is independent w.r.t. their body condition and their reaction to the treatment plan (i.e. the model).\n\n\nMoreover, given the connection between ``decoupling exploration and exploitation setting'' [1,2] and our framework, our algorithms and guarantees can be directly applied to their settings, e.g. ultra-wideband (UWB) communications in [1].\n\nWe also have a general remark about our assumption on model sharing; please check our general response above.\n\n[1] Orly Avner, Shie Mannor, and Ohad Shamir. Decoupling Exploration and Exploitation in Multi-Armed Bandits\n\n[2] Chloé Rouyer, Yevgeny Seldin. Tsallis-INF for Decoupled Exploration and Exploitation in Multi-armed Bandits\n", " We thank the reviewer for the valuable comments.\n\n### Same transition and reward function\nPlease check our additional general response above.\n\n### Budgeted setting\nWe agree that this is an interesting future direction which our work builds the foundation for, and we also briefly mentioned it on Line 331.\n\n### Question about the framework\n\n> why do different groups' risk tolerances matter?\n\nThat the definitions of regret are the same for the two groups only means that the two groups use the same way to measure the performance of their algorithms. The groups' risk *tolerances* are about how much **total** regret they expect to experience over the entire learning process, i.e., one group may prefer a lower total regret than usual.\n\n> Is exploration still necessary...?\n\nYes, continuous exploration is necessary. The main reason is that we target achieving constant regret for arbitrarily large $K$ (recall $K$ is the total number of episodes). If the exploration stops after some constant $k_0$ (in comparison to $K$, which is ever-growing), then the failure probability $\\delta_{k_0}$ (i.e., $Alg^E$ fails to identify the optimal action(s)) will also be a constant relative to $K$. 
However small $\\delta_{k_0}$ is, it is still a constant bounded away from $0$, and the failure event's contribution to the regret becomes $O(\\delta_{k_0} \\cdot (K-k_0))$, which is linear in $K$.\n\nMore technically speaking, in order to show constant regret, we require not only that the RHS of Eq.(1) decays to zero with a high probability $1-\\delta_k$ after some $k \\geq K_0$, but also that the accumulative failure rate $\\sum_{k=K_0+1}^K \\delta_k$ is bounded by a constant, especially for large $K$, which is guaranteed only if $Alg^O$ continues to explore.\n\n\n### Technique contributions\nAlthough PVI and LCB are widely used, coordinating pessimism and optimism algorithms together to achieve our goal is a novel setup that has not been studied before. \nWe also briefly highlight our novel technical contributions here; please refer to Sec. 1 for more details:\n\n* To our knowledge, the high-probability bounds in Lemmas 4.2 and 4.3 are new in the stochastic bandit setting. $O(k^{-O(\\alpha)})$ (with $\\alpha > 1$) is carefully chosen to guarantee that the accumulative failure rate is bounded by a constant, i.e. $\\sum k^{-O(\\alpha)} < +\\infty$.\n Besides, we contribute Lem. D.1 to establish constant regret for $Alg^E$, which can outperform [1] in some cases (see the discussion between lines 184-191).\n * There is also some novelty in Thm. 4.5. Previous literature only focused on gap-dependent bounds for optimistic algorithms, while we are the first to study pessimism-based algorithms.\n \n* We contribute Thm. 4.7, which bridges low regret and high occupancy on optimal state actions, and our result is general and holds for arbitrary policy sequences $\\pi_1,...,\\pi_k,...$.\n Besides, Thm. 4.8 is a novel observation to overcome the difficulty occurring when there are multiple optimal policies (also see the discussion between lines 231-233).\n\n\n[1]: Chloé Rouyer, Yevgeny Seldin. Tsallis-INF for Decoupled Exploration and Exploitation in Multi-armed Bandits.", " We thank the reviewer for the valuable comments. \n\n### Fixed initial state is a simplification without loss of generality\n\nGiven an arbitrary episodic MDP $M$ with random initial distribution $\\mu_0$ and horizon $H$, one can convert it to another MDP $M'$ with a fixed initial state and horizon $H+1$ by introducing a new fixed initial state which transitions to the original initial states of $M$ according to $\\mu_0$ regardless of the action. Even without the above conversion, our results easily generalize to random initial states by a slight modification of the proofs. \n\n### The setting: the reviewer talks about something different from and orthogonal to our setup\n\n> there is no difference between... It is also unclear ... A much more reasonable structure...\n\nThe type of difference the reviewer mentioned (''introducing different rewards for different groups'') is reasonable and interesting by itself, but it is separate from and **orthogonal to** the type of difference we consider in the paper. Let us explain: if the agent (the system/algorithm) has full knowledge of the users (i.e., the MDP transitions and rewards are fully known), the two groups might still prefer different policies due to their risk preferences associated with the **randomness of MDP transitions and rewards**---this is what the reviewer is talking about, and related studies can be found in the areas of, e.g., risk-sensitive RL. 
\n\nWhile this is a very reasonable consideration, we are considering something orthogonal: we consider that all users have the same MDP transitions and rewards, so when the agent has full information, it will treat all users equally. That said, differences between groups can still exist, because our agent does **not** have full information from the beginning. Rather, it is a learning agent and can exhibit various different behaviors when it interacts with the users to learn their transitions and rewards. What our paper is concerned about is the user's risk preference over **the algorithm's learning behavior**. Note that studying user experiences with an algorithm's learning behavior (instead of when all the information is known) is very prevalent, e.g., in the fairness ML literature.\n\nSo to summarize, we consider the user's risk preference about the algorithm's learning behavior. The reviewer's suggestion is also interesting and can potentially be combined with our notion of risk preference to form a more realistic and complicated setting, and our study of risk preference over learning behaviors provides the foundation for studying this more complicated setting. \n\nAs a final remark, the reviewer mentioned that \n> there is no difference between these two groups in reward, transition...\n\nPlease check our additional general response above.\nBesides, even with user-group variability, our primary concern in this line of research is still the risk preference over the algorithm's learning behaviors. \n\n### Not using data collected from $\\pi^E$\n\nWe have provided some explanation in Sec. 1 (lines 52-55). Briefly speaking, our main objective is to show the benefits of leveraging the tiered structure by designing algorithms with advantages in terms of regret, and we have already achieved this goal even after ignoring the data collected from $\\pi^E$. Of course, utilizing data from both groups should intuitively help (and is probably a good idea practically), but it will complicate the analysis without necessarily improving our results, as the data from $\\pi^E$ lacks exploration and provides little additional information. Such a situation is quite common in RL theory, where we discard some data in theory (which could be practically useful) for clean concentration analysis. \n\n### Additional reference\n\nThanks for pointing it out. We will cite and add a discussion about that paper after acceptance.", " We note that some reviewers have common concerns about our assumption that different groups share the model, and therefore, we provide a general response here.\n\nWe agree that all users being exactly the same is a relatively restrictive assumption. \nHowever, we remark that similar (or exactly the same) transitions and rewards enable knowledge transfer/sharing between the different groups. 
If they are completely different in unrelated ways, there is no point considering both groups together. \nBesides, we believe our results can be extended to a more general setting where different groups only share a similar **hidden** model with the help of feature extractors (i.e. function mappings from the original state-action space to a hidden feature space), and in Line 335 we also mentioned future directions about a contextual setting that allows variability across user groups. ", " The paper proposes a new framework for the reinforcement learning problem (specifically, the MAB and finite-horizon tabular MDP) when the instances can be divided into two groups; one group which tolerates risky action selections for the sake of exploration, and another group for which the agent just focuses on maximizing the rewards. A goal is to achieve the near-optimal O(log T) regret for the first group while achieving a time-horizon-independent constant regret for the second group.\n\nUnder the assumption that the underlying distributions of the state, action, and reward are identical for the two groups, and that the proportion of the number of instances in the first vs second group is constant and fixed, the authors provide a framework algorithm which achieves the goal with high probability.\n\nSpecifically, the core of these algorithms is to make action decisions according to the \"pessimism in the face of uncertainty\" principle for the second group. For the bandit problem, the authors propose a specific algorithm which employs the usual UCB algorithm for the first group and selects the maximizer of the LCB (lower confidence bound) for the second group. Strengths:\n\n1. The authors provide a detailed lower bound analysis for this problem. Specifically, for the gap-independent case, the authors provide an example where it is impossible to do better than employing the near-optimal regret reinforcement learning algorithm to the whole instance sequence without making any distinction between the two groups.\n\n2. The authors derive the regret guarantee for the \"framework\" algorithm rather than just a specific algorithm. The framework requires using a near-optimal regret reinforcement learning algorithm (that satisfies Condition 4.6) for the first group and then using the pessimism in the face of uncertainty principle (satisfying Condition 4.4) for the second group. As long as Conditions 4.6 and 4.4 are satisfied, the algorithm can be diversified according to the specific inner algorithms used. Hence the framework can apply to a wide range of problems with different kinds of reward functions.\n\n3. The regret guarantees hold without assuming the uniqueness of the optimal policy. Questions\n\nIt would be good to have more details on the use of the clipping operator. The authors use it in the algorithm but just refer the explanation to other papers. I did not find any discussion on societal impact.
This work proposed a novel structure to utilize the tiered structure in applications and can improve the performance to a constant regret guarantee in the gap-dependent instance.\n\n2. This work provides a theoretical lower bound for the gap-independent instance and shows that tiering can provide nearly no help in this case.\n\n3. This work first considers the pessimism algorithm in online reinforcement learning and combines it with the optimistic rule, which may be of independent interest.\n\nWeaknesses:\n\n1. The algorithms require the initial state to be fixed ($s_1$), which is much more restrictive than previous work in reinforcement learning.\n\n2. The intuition of introducing a tiered structure is not clear. For the tiered structure, the author mentions that users can be divided into multiple groups depending on their different preferences and tolerance of the risk that results from the necessary exploration to improve the policy. However, from the setting, there is no difference between these two groups in reward, transition, or regret. It is unclear what the critical difference between risk-tolerant and risk-averse people is. It is also unclear why we need to introduce the tiered structure and consider the two groups of people individually. A much more reasonable structure to represent different preferences and tolerance is introducing different rewards for different groups.\n\n 1. For both Algorithm 2 and Algorithm 3, only the data from the online policy $\\pi^O$ are used to design the policy, and other data from policy $\\pi^E$ are ignored, which seems likely to deteriorate the algorithm's performance. Is there some intuition behind it, or is it just to protect the personal privacy of group E?\n\n2. For the gap-dependent setting, the author mentions recent advances in regret guarantees for reinforcement learning. More recent advanced works have focused on using function approximation techniques in MDPs. For instance, He et al. [2021] first obtained a logarithmic regret in the gap-dependent setting. Therefore, it is better if the author can mention them or have some discussion about them.\n\nHe J, Zhou D, Gu Q. Logarithmic regret for reinforcement learning with linear function approximation. ICML 2021\n\n This paper provides theoretical guarantees for learning linear bandits and linear mixture MDPs. There is no negative societal impact.", " This paper formulated and studied the tiered RL problem, where the users are divided into two groups and treated separately. They showed that constant and log K regret are achievable while keeping the online algorithm near-optimal for bandit and RL settings, respectively. **Strengths**\n\n1. This paper is well written and organized, and the insights behind the main results are well presented. \n2. Both MAB and RL settings are considered in this paper. The regret bound is comparable with the best existing results.\n\n**Weaknesses**\n1. The model assumes the users from the two groups share the same transition and reward function; there is still some gap between the model and reality.\n2. It would be great if the framework could model the total \"toleration budgets\" of different groups. 1. I have a general question about the Tiered RL Framework. Under the assumption that $Env^O = Env^E,$ if the definition of regret is the same for both groups, why do different groups' risk tolerances matter? Is exploration still necessary if $Alg^E$ can already achieve constant regret after certain timesteps/episodes? 
We are eventually looking for an algorithm that can give us lower regret. Then maybe it is also worth looking at how many samples collected with the online algorithm are needed such that a constant regret is achievable by using an exploitation algorithm.\n\n2. Since PVI and LCB are widely used in offline settings, can the authors clarify what the novel technical contributions of this paper are?\n I don't think this paper has any potential negative societal impact.", " - They investigate a 2-tiered play structure in the stochastic bandit and MDPs, one of which is the \"exploiters\". \n- Those two groups play the same instance simultaneously, and the exploiter group was shown to achieve a (gap-dependent) constant regret by using a conservative strategy (LCB) and exploiting information from the other group (the explorers). The regret of the explorer remains (near-)optimal.\n- Although there exist studies of decoupled exploration-exploitation in bandit settings, the difference is that it can be naturally extended to MDPs by using an algorithm (UCB/LCB) which does not use sample weights, resulting in high compatibility with MDPs. - Originality: somewhat weak\n\t- The idea of decoupled exploration-exploitation was inherited from previous work. Since the previous work also considers the adversarial setting, its method uses sample weights, which carry a risk of the \"curse of horizon\" when extended to MDPs. It is not a surprising idea to use UCB/LCB to avoid the problem. \n- Quality: excellent\n\t- Their analysis is convincing since it proves the problem-dependent constant regret as well as the impossibility of the problem-independent constant regret. \n- Clarity: good\n\t- This paper is well organized and reads naturally from a simple MAB setup to a tabular RL setup. \n- Significance: good\n\t- It is a significant contribution that a constant regret can be achieved also in reinforcement learning by decoupling the explorers and the exploiters. - Are there concrete examples of exploiter groups who share a homogeneous model with explorers? - Limited to the setting where two parties continue to play at the same time. \n- Dividing into tiers and favoring one side (the exploiter group) would imply that there are differences that should be favored (e.g., they are wealthy, paying users). Such differences may have an impact on transitions and reward distributions, but they are using a homogeneous model. Therefore, it remains to be seen if there are specific examples that would have an impact in realistic problems. \n- In that case, since the method only learns from the data of the explorer group, the exploiter group might suffer a linear regret. " ]
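The optimism-for-explorers and pessimism-for-exploiters split debated in the reviews above is easiest to picture in the bandit case. The sketch below is a simplified, hypothetical rendering rather than the paper's algorithms: group O selects arms by UCB, group E selects the LCB maximizer over the same statistics, and, mirroring the design choice questioned above, only explorer pulls update the estimates. The bonus constant `c` and the update rule are illustrative assumptions.

```python
import numpy as np

class TieredBandit:
    def __init__(self, n_arms, c=2.0):
        self.n = np.zeros(n_arms)   # pull counts (explorer data only)
        self.mu = np.zeros(n_arms)  # empirical mean rewards
        self.t = 0                  # number of explorer pulls so far
        self.c = c

    def _bonus(self):
        # Confidence radius; the guards avoid log(0) and division by zero.
        return np.sqrt(self.c * np.log(max(self.t, 2)) / np.maximum(self.n, 1))

    def act_explorer(self):
        if (self.n == 0).any():                # try every arm once first
            return int(np.argmin(self.n))
        return int(np.argmax(self.mu + self._bonus()))   # optimism (UCB)

    def act_exploiter(self):
        return int(np.argmax(self.mu - self._bonus()))   # pessimism (LCB)

    def update(self, arm, reward):
        # Called only for explorer pulls; exploiter data is discarded.
        self.t += 1
        self.n[arm] += 1
        self.mu[arm] += (reward - self.mu[arm]) / self.n[arm]
```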
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "QJ71Z_ldNlV", "Pr3FalH_ZAV", "MLrEi0xyc-v", "IBlFAuSuNB2", "WHHu5_Z7gf", "366hMTHeNx", "Rz9UXTOULf2", "xjL_YDd9r7F", "nips_2022_mE1QoOe5juz", "nips_2022_mE1QoOe5juz", "nips_2022_mE1QoOe5juz", "nips_2022_mE1QoOe5juz", "nips_2022_mE1QoOe5juz" ]
nips_2022_qbSB_cnFSYn
DEQGAN: Learning the Loss Function for PINNs with Generative Adversarial Networks
Solutions to differential equations are of significant scientific and engineering relevance. Physics-Informed Neural Networks (PINNs) have emerged as a promising method for solving differential equations, but they lack a theoretical justification for the use of any particular loss function. This work presents Differential Equation GAN (DEQGAN), a novel method for solving differential equations using generative adversarial networks to "learn the loss function" for optimizing the neural network. Presenting results on a suite of twelve ordinary and partial differential equations, including the nonlinear Burgers', Allen-Cahn, Hamilton, and modified Einstein's gravity equations, we show that DEQGAN can obtain multiple orders of magnitude lower mean squared errors than PINNs that use $L_2$, $L_1$, and Huber loss functions. We also show that DEQGAN achieves solution accuracies that are competitive with popular numerical methods. Finally, we present two methods to improve the robustness of DEQGAN to different hyperparameter settings.
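The training loop implied by this abstract can be sketched on a toy problem. Below is a minimal, hypothetical PyTorch rendering of DEQGAN-style training for the ODE u' = -u with u(0) = 1, where the equation residual of the candidate solution serves as the "fake" sample and a tensor of zeros as the "real" one; the tiny networks, learning rates, iteration count, and instance-noise scale are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))  # solution net
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # learned "loss"
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def residual(t):
    # Trial solution u(t) = 1 + t * G(t) satisfies u(0) = 1 by construction,
    # so the residual du/dt + u is zero iff u solves u' = -u exactly.
    t = t.requires_grad_(True)
    u = 1 + t * G(t)
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    return du + u

for step in range(2000):
    t = torch.rand(64, 1)
    fake = residual(t)             # "fake": the equation residual
    real = torch.zeros_like(fake)  # "real": exact zeros

    # Discriminator step: tell (noisy) zeros from (noisy) residuals.
    opt_d.zero_grad()
    d_loss = bce(D(real + 0.05 * torch.randn_like(real)), torch.ones_like(real)) \
           + bce(D(fake.detach() + 0.05 * torch.randn_like(fake)), torch.zeros_like(fake))
    d_loss.backward()
    opt_d.step()

    # Generator step: make the residual pass as "real" (i.e., zero).
    opt_g.zero_grad()
    g_loss = bce(D(fake + 0.05 * torch.randn_like(fake)), torch.ones_like(fake))
    g_loss.backward()
    opt_g.step()
```

Driving the residual toward something the discriminator cannot distinguish from zeros is what lets the discriminator act as a learned loss in place of an explicit L2, L1, or Huber objective.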
Reject
This paper presents a new method for solving differential equations using generative adversarial networks to "learn the loss function" for optimizing the neural network. After the discussion, the reviewers still have a few major concerns: (1) The authors claim that the existing methods lack theoretical justifications. However, the paper does not provide a sufficient justification for the proposed method either, which makes the key motivation of the paper questionable. (2) Some important baseline methods are missing from the comparison, as well as from the references. The authors should improve their literature survey. (3) The computational challenges of solving PDEs mainly lie in high dimensionality. Most existing deep-learning-based PDE solvers, including PINNs, attempt to demonstrate the benefit of using deep neural networks for approximating high-dimensional functions or operators. However, the experiments only consider low-dimensional PDEs, which are not difficult to solve, and existing numerical methods can solve them efficiently without deep neural networks and complicated tuning.
train
[ "YVQyb5O5CYL", "OdxLMcOEbmq", "mwcqBji_mO3", "pdwWPEgD8g", "8EgSmS3n1I", "z-XMKOvAs7", "lgAcmR7myME", "jIkg2sL4vow", "8FmJF6QIkrr", "h2UadikiS3", "HQuIij8JYUa", "IaZwF7t__ax", "rOGwO5FbR9" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \n\nDear Reviewers,\n\nWe are entering the discussion phase, where the authors will be not involved in the discussion.\n\nI would like to request you to confirm that you have already read the rebuttal from the authors.\n\nBest\n\nAC\n", " Many thanks to the AC for the comments, and we'd be happy to clarify:\n\n1. We are aware that we are not the first to apply GANs to solving differential equations and cite multiple works that do this in the Related Work section (third paragraph). The first paper cited by the AC [1] leverages the weak form of PDEs by training the generator and discriminator to approximate the weak solution $u_{\\theta}$ and test function $\\phi_{\\eta}$, respectively. By contrast, our method (DEQGAN) is based on the strong form, which frees the discriminator to learn the loss function for optimizing the generator. We also note that this paper addresses only a small number of PDEs and does not always outperform classical PINNs, whereas DEQGAN consistently outperforms classical PINNs on a suite of twelve problems (including PDEs and ODEs). The second paper cited by the AC [2] is narrowly focused on a particular linear equation and does not directly leverage GANs.\n2. While our results do not include high-dimensional PDEs, we show that DEQGAN consistently outperforms classical PINNs on a wide variety of challenging equations, many of which exhibit higher degrees of non-linearity and more complex dynamics than those addressed in [1]. Given our promising results, we expect our method to also perform well on higher dimensional problems and think that this would be a worthwhile direction for future work.\n3. Our approach replaces an explicit loss function (e.g., L2, L1, Huber) with a discriminator network that learns the loss function for optimizing the generator. Therefore, DEQGAN circumvents the lack of theoretical justification for using any particular loss function entirely and offers the flexibility to overcome the weaknesses of explicit loss functions. Indeed, our results make clear that these losses show variable performance on different equations (sometimes failing entirely) and are consistently outperformed by DEQGAN in terms of predictive accuracy. These results indicate that it would be worthwhile for future work to examine exactly what is learned by the discriminator network of DEQGAN, which might help us understand why some loss functions appear to be more effective for training PINNs than others.\n\n[1] Zang, Yaohua, et al. \"Weak adversarial networks for high-dimensional partial differential equations.\" Journal of Computational Physics 411 (2020): 109409.\n\n[2] Liu, Shu, et al. \"Neural Parametric Fokker--Planck Equation.\" SIAM Journal on Numerical Analysis 60.3 (2022): 1385-1449.\n", " Hi authors,\n\nI read the paper and have three concerns:\n\n(1) The literature survey in this paper is insufficient. The idea of using GAN-type methods to solve PDEs is not new. For example, there have been some work in the applied mathematics literature: https://arxiv.org/pdf/1907.08272.pdf and https://arxiv.org/pdf/2002.11309.pdf, which are not cited in this paper. \n\n(2) The paper only compared their method with basic PINN and very low dimensional PDEs. They should at least compare with https://arxiv.org/pdf/1907.08272.pdf and higher dimensional PDEs.\n\n(3) The authors argue that existing methods do not have theoretical justifications on the loss functions. 
However, I did not find any theoretical justification in their paper either.\n\nCould you please provide some clarification?\n\nBest\n\nAC\n", " Thank you for the follow up – we have posted a revised version of the paper that incorporates many of your helpful suggestions, including those on the overall structure of the paper. Regarding your question, we simply mean that a lower generator loss than discriminator loss indicates that the discriminator is generally performing worse than the generator, and we have softened the language in section 4.1 to reflect this. Thanks!", " Many thanks for the follow up comment. We have posted a revised version of the paper that incorporates many of your helpful comments and would like to provide additional clarifications:\n\n1. By “standard initial conditions,” we simply mean values that are commonly used in the context of particular differential equations. For example, for the damped nonlinear oscillator (NLO) problem, we use $x_0=1, \\dot{x}_0=0.5$ simply because these are clean values. However, they are also arbitrary in the sense that we could have trained DEQGAN on different initial conditions and obtained similar results. In Figure 4a of the updated paper (page 9), we have plotted the phase space of the DEQGAN solutions (solid color lines) for three different initial velocities $\\dot{x}_0=0.5, 0.6, 0.7$ and can see that these are indistinguishable from the solutions obtained using a high-quality numerical integrator (dashed black lines). We emphasize that DEQGAN attains highly accurate solutions on arbitrary initial conditions, but we believe that it is much more compelling to showcase its performance on a variety of equations (we include twelve) that exhibit different and challenging dynamics.\n2. Thank you for elaborating on the point about theoretical justification. As our work is focused on the predictive performance of various PINN methods, we mean that no particular loss function has been demonstrated to provide advantages in terms of convergence or solution accuracy. However, our empirical results indicate that classical loss functions (L2, L1, and Huber) show varied performance on different problems and that DEQGAN consistently achieves better predictive accuracy. The paper [3] cited by the reviewer focuses on generalization error, and while this may provide a reason (e.g. uncertainty quantification) to use a particular loss function, it is not the reason we are interested in. We agree that we could have made this point more clear and have updated the third paragraph of the introduction (page 1) to reflect this.\n3. The paper on SA-PINNs cited by the reviewer [4] is the same one we were referring to in our original comment – the additional citation we provided was to a similar paper [5] on adaptive loss functions that we had referenced in our paper. While the losses proposed in [4, 5] are adaptive, we still view them as fundamentally different from DEQGAN because our method uses a neural network (the discriminator) to learn the loss function for optimizing the generator and is therefore much more flexible than an explicit loss function. Nonetheless, we agree that these works should be referenced in our paper and have updated the fourth paragraph of the introduction to include them. We believe that our empirical results provide strong evidence that it would be worthwhile for future work to investigate exactly what the discriminator network learns, which might elucidate why some loss functions achieve better predictive performance than others.\n\n[3] Mishra, Siddhartha, and Roberto Molinaro. \"Estimates on the generalization error of physics-informed neural networks for approximating a class of inverse problems for PDEs.\" IMA Journal of Numerical Analysis 42.2 (2022): 981-1022.\n\n[4] McClenny, Levi, and Ulisses Braga-Neto. \"Self-adaptive physics-informed neural networks using a soft attention mechanism.\" arXiv preprint arXiv:2009.04544 (2020).\n\n[5] Zeng, S., Zhang, Z., & Zou, Q. (2022). Adaptive deep neural networks methods for high-dimensional partial differential equations. Journal of Computational Physics, 111232.\n", " I thank the authors for their detailed response -- the answers clarified many things for me, and I am satisfied that they answer most of my queries.\n\nRegarding point (2), and why one should expect the discriminator loss to be greater than the generator loss when the generator cannot fool the discriminator, I am still confused. Is my understanding correct that this was an _empirical_ finding, and is not based on some theoretical reasoning? -- if yes, then I would suggest re-writing that last part of section 4.1 to reflect this. If not, it would really help to know how $L_g$ and $L_d$ are defined, and maybe a clearer explanation of the theoretical intuition behind this assertion.", " I thank the authors for their quick response. I would like to clarify several points. \n1. The definition of 'the standard initial condition' mentioned in the authors' response is not clear. To me, one specific initial condition for each equation cannot provide strong evidence for an empirical paper. I would still suggest doing experiments on multiple initial conditions sampled from a random field because the experiments should be extensive to better understand the proposed method and how it will perform under different conditions. \n2. I also would like to elaborate more on the second point in the weakness section. We know that for a big class of problems the L2 loss is reasonable with theoretical justifications [3]. My point is that the authors should be careful when they say the standard L2 loss lacks theoretical justification and should be more specific about the problem settings. In fact, many PDEs (e.g., heat equation, viscous Burgers equation) in DEQGAN's experiments belong to this class of problems where L2 has theoretical justification. In contrast, DEQGAN does not have theoretical justification even for those problems. It is mysterious to me how a generative model fits in here and how the loss objective is chosen in the paper. It seems to be copying the GAN framework to PINN training without any theoretical justification for why a generative model makes sense here. \n3. By SA-PINNs I mean the self-adaptive physics-informed neural networks [4]. The paper cited by the authors is some other paper which is also related but not what I meant. Self-adaptive PINNs perform much better than classical PINNs and their loss is adaptively learned. That's why I think it is an important baseline. \n4. Regarding the second point in the response, I appreciate the authors' clarification and ablation study in Table 3. I agree that the ablation study in Table 3 shows DEQGAN with the residual monitor and instance noise can resolve the instability issue. \n \n[3] Mishra, Siddhartha, and Roberto Molinaro. \"Estimates on the generalization error of physics-informed neural networks for approximating a class of inverse problems for PDEs.\" _IMA Journal of Numerical Analysis_ 42.2 (2022): 981-1022.\n\n[4] McClenny, Levi, and Ulisses Braga-Neto. \"Self-adaptive physics-informed neural networks using a soft attention mechanism.\" _arXiv preprint arXiv:2009.04544_ (2020).", " We appreciate the concrete suggestions and questions in this review. We would be happy to implement the modifications suggested to further improve the readability of the paper. To address the three questions posed by the reviewer:\n\n1. We used MSE to evaluate the accuracy of PINN solutions in comparison to “ground-truth” solutions simply because this is a common and well-known metric, which we also believe is less arbitrary than using L2, L1, or some other loss that may be used in PINN optimization. We also used MSE to evaluate the accuracy of traditional numerical methods, enabling a fair comparison across all methods tested.\n2. In our original implementation, we found that DEQGAN training sometimes failed when the discriminator loss plateaued below the generator loss, indicating that the generator was unable to fool the discriminator and neither model improved further. This motivated the addition of Gaussian noise to the “real” and “fake” data samples, which made the discriminator’s job more difficult and encouraged convergence to Nash equilibrium. A generator with a lower loss than the discriminator corresponds to the opposite scenario, in which the generator is already able to fool the discriminator.\n3. The main effect of increasing the number of training points in PINN (and DEQGAN) optimization (i.e., using a finer $t,x$ grid) is an increase in computational cost. While using too few points might hinder solution accuracy, we generally found that the models achieved good accuracy with reasonably small grids, e.g. 32x32 or 64x64 for PDEs. We were also able to reduce interpolation error by sampling training points from noisy grids (evenly-spaced grids perturbed by Gaussian noise), which we mention in section 4 of the paper. As our main objective was to compare the performance of DEQGAN to classical PINNs in terms of accuracy, we used standard grid sizes for all methods. However, future work could investigate the relationships that exist among the number of training points, computational cost, and accuracy.\n\nTo address the additional questions listed in the limitations section of the review:\n\n1. The discriminator and generator networks in our method are no less interpretable than classical PINNs. Classical PINNs can be thought of as consisting of only a generator network, which is also a black-box model that is trained using a classical loss function such as L2, L1, or Huber. The addition of a discriminator model, therefore, does not hinder interpretability but, rather, offers possible insights that could be explored in future work. Investigating exactly what the discriminator network learns might enable a better understanding of why some loss functions appear to be more effective for optimizing the generator than others.\n2. Training time and computational complexity: While this is an important consideration, our primary objective was to show that DEQGAN can obtain more accurate solutions than classical PINNs on a wide variety of ODEs and PDEs, which necessitated a lengthy table of results (Table 2, page 7). 
We believe that our empirical results provide strong evidence that it would be worthwhile for future work to investigate exactly what the discriminator network learns, which might elucidate why some loss functions achieve better predictive performance than others.\n\n[3] Mishra, Siddhartha, and Roberto Molinaro. \"Estimates on the generalization error of physics-informed neural networks for approximating a class of inverse problems for PDEs.\" IMA Journal of Numerical Analysis 42.2 (2022): 981-1022.\n\n[4] McClenny, Levi, and Ulisses Braga-Neto. \"Self-adaptive physics-informed neural networks using a soft attention mechanism.\" arXiv preprint arXiv:2009.04544 (2020).\n\n[5] Zeng, S., Zhang, Z., & Zou, Q. (2022). Adaptive deep neural networks methods for high-dimensional partial differential equations. Journal of Computational Physics, (pp. 111232).", " I thank the authors for their detailed response -- the answers clarified many things for me, and I am satisfied that they answer most of my queries.\n\nRegarding point (2), and why one should expect the discriminator loss to be greater than the generator loss when the generator cannot fool the discriminator, I am still confused. Is my understanding correct, that this was an _empirical_ finding, and is not based on some theoretical reasoning? -- if yes, then I would suggest re-writing that last part of section 4.1 to reflect this. If not, it would really help to know how $L_g$ and $L_d$ are defined, and maybe a clearer explanation of the theoretical intuition behind this assertion.", " Thanks to the authors for their quick response. I would like to clarify several points. \n1. The definition of 'the standard initial condition' mentioned in the authors' response is not clear. To me, one specific initial condition for each equation cannot provide strong evidence for an empirical paper. I would still suggest doing experiments on multiple initial conditions sampled from a random field, because the experiments should be extensive to better understand the proposed method and how it will perform under different conditions. \n2. I also would like to elaborate more on the second point in the weakness section. We know that for a big class of problems the L2 loss is reasonable, with theoretical justifications [3]. My point is that the authors should be careful when they say the standard L2 loss lacks theoretical justification and should be more specific about the problem settings. In fact, many PDEs (e.g., the heat equation and viscous Burgers equation) in DEQGAN's experiments belong to this class of problems where L2 has theoretical justification. In contrast, DEQGAN does not have theoretical justification even for those problems. It is mysterious to me how a generative model fits in here and how the loss objective is chosen in the paper. It seems to be copying the GAN framework to PINN training without any theoretical justification for why a generative model makes sense here. \n3. By SA-PINNs I mean the self-adaptive physics-informed neural networks [4]. The paper cited by the authors is some other paper which is also related but not what I meant. Self-adaptive PINNs perform much better than classical PINNs and their loss is adaptively learned. That's why I think it is an important baseline. \n4. Regarding the second point in the response, I appreciate the authors' clarification and the ablation study in Table 3. I agree that the ablation study in Table 3 shows DEQGAN with residual monitoring and instance noise can resolve the instability issue. \n \n[3] Mishra, Siddhartha, and Roberto Molinaro. 
\"Estimates on the generalization error of physics-informed neural networks for approximating a class of inverse problems for PDEs.\" _IMA Journal of Numerical Analysis_ 42.2 (2022): 981-1022.\n\n[4] McClenny, Levi, and Ulisses Braga-Neto. \"Self-adaptive physics-informed neural networks using a soft attention mechanism.\" _arXiv preprint arXiv:2009.04544_ (2020).", " We appreciate the concrete suggestions and questions in this review. We would be happy to implement the modifications suggested to further improve the readability of the paper. To address the three questions posed by the reviewer:\n\n1. We used MSE to evaluate the accuracy of PINN solutions in comparison to “ground-truth” solutions simply because this is a common and well-known metric, which we also believe is less arbitrary than using L2, L1, or some other loss that may be used in PINN optimization. We also used MSE to evaluate the accuracy of traditional numerical methods, enabling a fair comparison across all methods tested.\n2. In our original implementation, we found that DEQGAN training sometimes failed when the discriminator loss plateaued below the generator loss, indicating that the generator was unable to fool the discriminator and neither model improved further. This motivated the addition of Gaussian noise to the “real” and “fake” data samples, which made the discriminator’s job more difficult and encouraged convergence to Nash equilibrium. A generator with a lower loss than the discriminator corresponds to the opposite scenario, in which the generator is already able to fool the discriminator.\n3. The main effect of increasing the number of training points in PINN (and DEQGAN) optimization (i.e., using a finer $t,x$ grid) is an increase in computational cost. While using too few points might hinder solution accuracy, we generally found that the models achieved good accuracy with reasonably small grids, e.g. 32x32 or 64x64 for PDEs. We were also able to reduce interpolation error by sampling training points from noisy grids (evenly-spaced grids perturbed by Gaussian noise), which we mention in section 4 of the paper. As our main objective was to compare the performance of DEQGAN to classical PINNs in terms of accuracy, we used standard grid sizes for all methods. However, future work could investigate the relationships that exist among the number of training points, computational cost, and accuracy.\n\nTo address the additional questions listed in the limitations section of the review:\n\n1. The discriminator and generator networks in our method are no less interpretable than classical PINNs. Classical PINNs can be thought of as consisting of only a generator network, which is also a black-box model that is trained using a classical loss function such as L2, L1, or Huber. The addition of a discriminator model, therefore, does not hinder interpretability but, rather, offers possible insights that could be explored in future work. Investigating exactly what the discriminator network learns might enable better understanding of why some loss functions appear to be more effective for optimizing the generator than others.\n2. Training time and computational complexity: While this is an important consideration, our primary objective was to show that DEQGAN can obtain more accurate solutions than classical PINNs on a wide variety of ODEs and PDEs, which necessitated a lengthy table of results (Table 2, page 7). 
Further, we found that all methods had similar runtimes, and therefore did not feel the need to make a comparison in our paper.\n3. Size and coarseness of the $t,x$ grid: Addressed in (3), above.\n", " We thank the reviewer for these helpful comments and questions. While we did not use the term “mode collapse” in our paper, this issue and others related to GAN training instability were addressed in sections 4.1 and 4.2, which discuss the techniques we employed to improve the robustness of our method. We found that adding instance noise proportional to the difference between the generator and discriminator losses made convergence to equilibrium more likely, and we performed an ablation study to demonstrate this.", " We appreciate the time taken by the reviewer to provide these detailed comments and thoughtful questions. However, we believe that this review does not fully appreciate the novelty of our method and contains several inaccuracies that we would like to address:\n\n1. The reviewer speculates that specific initial conditions were chosen to highlight the advantage of DEQGAN – this is not the case. We used standard initial conditions for the differential equations considered and emphasize that DEQGAN performs similarly across other values. We did not include results for multiple initial conditions because our aim was to showcase results on a variety of equations that exhibit challenging and varied dynamics.\n2. The reviewer suggests that multiple training runs are required to solve a single differential equation – this is also not the case. While our original formulation of DEQGAN required hyperparameter tuning to attain the best results, we proposed two methods to improve robustness (instance noise and residual monitoring) and conducted an ablation study to demonstrate the efficacy of these methods. In this study, we performed 500 training runs on a single equation and showed that for the vast majority of hyperparameter values, DEQGAN performs very well. In practice, however, only a single training run is required to obtain an accurate solution.\n3. The reviewer notes that we do not provide a theoretical explanation of when or why classical loss functions like L1 and L2 perform poorly. These are good questions, but they are out of scope for this work and remain open research problems. Our method circumvents this gap in the theory by proposing an adversarial training setup that can be thought of as learning the loss function for the generator, but we do not discredit any work that aims to address this question more directly. In fact, our conclusion suggests that future work could examine exactly what is learned by the discriminator network of DEQGAN, which might help us understand why some loss functions appear to be more effective for optimizing the generator than others. We also note that although a theoretical explanation is currently lacking, our empirical results clearly demonstrate that classical loss functions (L2, L1, and Huber) show varied performance and that DEQGAN consistently obtains more accurate solutions. \n4. The reviewer cites self-attention PINNs as an alternative method that “learns a loss function.” While the loss proposed in SA-PINNs incorporates self-adaptation weights to improve training, the losses over these terms still take the form of L2 (equations 11-13 in their paper), making this method similar to other adaptive losses, which we also cite in our paper [1]. We would be happy to mention SA-PINNs in our related works section for completeness. \n5. 
The reviewer asks about the time complexity of DEQGAN in comparison to classical PINNs. While this is an important consideration, our primary objective was to show that DEQGAN can obtain more accurate solutions than classical PINNs on a wide variety of ODEs and PDEs, which necessitated a lengthy table of results (Table 2, page 7). Further, we found that all methods had similar runtimes, and therefore did not feel the need to make a comparison in our paper.\n\n[1] Zeng, S., Zhang, Z., & Zou, Q. (2022). Adaptive deep neural networks methods for high-dimensional partial differential equations. Journal of Computational Physics, (pp. 111232).\n", " This work proposes Differential Equation GAN (DEQGAN) for solving differential equations using generative adversarial networks (GANs), where the generator learns to output the solution and the discriminator tries to learn a good loss function. The authors compare DEQGAN against classical PINNs over twelve ordinary differential equations (ODEs) and partial differential equations (PDEs). The experimental results show that DEQGAN can achieve multiple orders of magnitude lower L2 error than PINNs. Strengths: the paper is well-written and easy to follow. DEQGAN demonstrates superior empirical results compared to classical PINNs. The fact that DEQGAN doesn't depend on a predefined distance such as L1 or L2 can be useful in some problems.\n\nWeakness:\n1. The Related Work section misses some important discussion [1, 2]. The authors should discuss the relationship to these works and place the contribution of DEQGAN in the context of prior works. For example, SA-PINNs [1] can also be viewed as learning a loss function.\n2. The authors state that the L2 loss lacks theoretical justification but miss the important explanation of why and when the L2 loss can be problematic. Furthermore, DEQGAN does not have a theoretical justification of the learned loss function either.\n3. The loss function in DEQGAN is borrowed from the generative adversarial network (GAN). The original GAN work designs its loss function such that the generator minimizes the Jensen-Shannon divergence between the generated distribution and the target distribution given optimal discriminator dynamics. However, the Jensen-Shannon divergence doesn't make sense in DEQGAN because the problem is to solve a differential equation with a unique solution. Even with Gaussian instance noise, minimizing the divergence between two Gaussians will just be minimizing the L2 loss. I do not see why a generative model is needed here.\n4. The equations considered in the experiments are artificial. Only one specific initial condition is used for each equation, which raises the concern that the authors may pick the initial conditions to give DEQGAN an advantage in the experimental results.\n5. DEQGAN has a training instability issue. To solve one equation, it needs to repeat runs multiple times and filter out the ones with poor performances.\n\n[1] McClenny, Levi, and Ulisses Braga-Neto. \"Self-adaptive physics-informed neural networks using a soft attention mechanism.\" arXiv preprint arXiv:2009.04544 (2020).\n[2] Daw, Arka, M. Maruf, and Anuj Karpatne. \"PID-GAN: A GAN Framework based on a Physics-informed Discriminator for Uncertainty Quantification with Physics.\" Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining. 2021. 1. The authors motivate the need for learning a loss function by saying that L2 and L1 loss functions lack theoretical justification. 
However, the author challenges the standard practice in physics-informed learning without explaining why and when the L1 and L2 loss functions can be bad choices. If the L2 loss is problematic, how is the learned loss function better than the L2 loss? Can the authors provide theoretical justification for it?\n2. How fast is DEQGAN compared to PINNs and numerical solvers? Can the authors provide a time-complexity comparison against PINNs and their improved variants such as SA-PINNs?\n3. Only one specific artificial initial condition is used for each equation. How will DEQGAN perform if the initial condition is a random function sampled from a random field? 1. Lack of theoretical justification of the loss function and of how the discriminator can learn a better objective function.\n2. DEQGAN has a training instability issue. To solve one equation, it needs to repeat runs multiple times and filter out the ones with poor performances.\n3. The equations considered in the experiments are artificial. Only one specific initial condition is used for each equation, which raises the concern that the authors may pick the initial conditions to give DEQGAN an advantage.", " The paper presents a new way to solve differential equations by leveraging GAN-based adversarial training to “learn” the loss function for PINNs. Strengths\n-Provides a novel method to solve Differential Equations with GANs in an unsupervised training setup. They do not use the solutions to the equations.\n-As can be seen from the results, it improves the performance on the Differential Equations task by multiple orders of magnitude.\n-The authors also integrate different effective methods in Section 3 for stable training of the GAN.\n-Good paper presentation and a lot of details for the experiments in the Appendix.\n\n\nWeaknesses\n-No discussion of mode collapsing. I understand that this is a different application of GANs but it would be good to have a discussion on mode collapsing. Maybe the proposed GAN can provide different solutions for some equations.\n-No theoretical analysis or guarantees are provided in the work. GANs in general do not have a lot of guarantees, but a theoretical analysis would be much appreciated. -It would be good to see an ablation study where they assess the effectiveness of the “tricks” that they used to train the GAN effectively, because some of them may not be needed.\n-Can you make some comments on mode collapsing, as I mentioned in the weaknesses above? The authors have addressed the limitations in the Conclusions section. The work is not applicable to negative societal impact in my opinion.", " The paper proposes a method to solve ordinary and partial differential equations using GANs. Following in the tradition of physics-informed neural networks (PINNs), the flow equation is parametrized by a neural network. The parameters of the neural network are optimised by replacing a hand-tuned loss function, such as the mean-squared error, with a discriminator and an adversarial loss. This method appears to provide improved performance over PINNs trained with hand-tuned loss functions and numerical solvers, on several well-known differential equations. Strengths:\n1) The paper is very well-written, well-structured and easy to follow. 
The method and results, for the most part, are described in clear detail.\n2) The ablation study and discussion of methods to ensure stable training were extremely interesting and relevant, given that GANs are known to be difficult to train.\n3) The extensive results in the paper make a convincing case for using GANs over other methods for solving differential equations -- in particular, with respect to the variability of the solutions from other methods across different problems.\n\nWeaknesses:\n1) Many of the details of training, hyperparameters etc., included in the methods section (sections 3.2.1, 3.2.2, 3.4), while important for reproducibility, distract from the main message of the paper, and hinder readability.\n2) Some of the results require a lot more elucidation: e.g. the details of the ablation study in section 5.2 are mostly relegated to the appendix, and thus make the import of the results hard to parse; the plots in Figure 3 are barely mentioned or described in the text. \n3) The paper could improve with a more extensive discussion of the limitations of the method (the conclusion mentions non-interpretability of the discriminator, but not much more).\n4) The paper claims to present two \"methods\" to improve robustness in training: residual monitoring and addition of instance noise. While it is important to document the techniques used to stabilise training, neither of these are necessarily novel contributions in the context of GANs. Furthermore, it is not clear whether these measures would be equally effective without spectral normalisation or skip connections in the generator, with a different choice of variance for the instance noise, or with a change in the 25% training iteration threshold for residual monitoring.\n As outlined in the previous section, I believe the paper makes a strong case for using GANs to solve differential equations. Some concrete suggestions to address the weaknesses:\n1) It would be good to move the details of sections 3.2.1, 3.2.2 and 3.4 to the appendix, and move details of the ablation study to the main paper. Furthermore, sections 3.3 and 4.1 describe training / engineering choices to a level of detail that hinders readability, and can certainly be condensed.\n2) It would be good to have a more extensive discussion of Figure 3. Moreover, Figure 3 appears to show only simulations from the GAN -- it would be good to have some examples of the simulations from other methods as well, in order to contextualize the GAN performance beyond the reported MSE numbers.\n3) While the two methods, residual monitoring and instance noise, are certainly important details for reproducibility, it is not convincing that they would work without the other hyperparameter or architectural choices. It would therefore be good to modify the discussion of these two \"methods\", and to present them as potential avenues for stable training rather than a contribution of the paper.\n\nQuestions:\n1) The performance of the different methods for solving differential equations is compared solely on the basis of MSE. Is there a particular reason for choosing this metric? -- it is particularly surprising that there is no justification provided for this in the paper, since the introduction makes a strong point that the choice of similarity metric typically used to optimize parameters for the PINNs is arbitrary. It would also be good to see whether the GAN performance is equally good under other similarity metrics e.g. 
L1, or MMD \n2) In lines 181-182, \"We use the ReLU function...noise should not be used.\" -- this logic is not clear. Why should $L_g < L_d$ on any particular iteration indicate a better generator? -- this could also be due to initialisation, especially at the beginning of training.\n3) How does the coarseness of the grid from which $t, x$ are sampled affect the performance of the GANs relative to other methods?\n\nMinor comments:\n1) The notation in section 4 is confusing in places: $G(x)$ and $\Psi_{\theta}(x)$ are used somewhat interchangeably; in lines 179-181 $L_g$ and $L_d$ are not defined.\n2) In Figure 2, a legend is not required in every panel.\n3) In Figure 3, the dotted lines in (a) and (b) are not identified in the legend; the colourbars have no label in (c) and (d). It would be good if the discussion could address the following issues:\n1) While the discriminator is not interpretable, the generator has potentially the same issues. What scientific insight is gained from an (effectively) black box generator modeling a flow equation?\n2) While the GANs do provide an improvement over existing methods in terms of performance, it would be good to have an idea of the associated computational costs, sample efficiency and training times compared to the other methods.\n3) How does this method scale with the number of dynamical variables, or with the size and coarseness of the $t, x$ grid?" ]
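A note for readers of the DEQGAN thread above: the recurring dispute over the learned loss is easier to follow with the objectives written out. The sketch below contrasts the classical PINN loss with an adversarial, learned-loss objective; the pairing of the equation residual as the "fake" sample against a zero "real" target follows our reading of the rebuttals (which describe Gaussian instance noise added to both the "real" and "fake" samples), so the exact DEQGAN objective may differ in detail. Here $\Psi_\theta$ is the generator/solution network, $D_\eta$ the discriminator, and $F$ the residual operator of the differential equation.

```latex
% Classical PINN: explicit (here L2) penalty on the equation residual
\min_{\theta} \; \frac{1}{N} \sum_{i=1}^{N}
  \big\| F[\Psi_{\theta}](t_i, x_i) \big\|_2^2
% DEQGAN-style learned loss (sketch): the discriminator replaces the
% hand-tuned metric; the residual is "fake", the zero target is "real"
\min_{\theta} \max_{\eta} \; \frac{1}{N} \sum_{i=1}^{N}
  \Big[ \log D_{\eta}(\mathbf{0})
      + \log\!\big( 1 - D_{\eta}\big( F[\Psi_{\theta}](t_i, x_i) \big) \big) \Big]
```

The instance-noise heuristic debated above (noise injected only while the generator is losing, with magnitude tied to the loss gap) could then look roughly as follows; `gamma` is a hypothetical scale factor, not a documented DEQGAN hyperparameter.

```python
import torch

def add_instance_noise(fake: torch.Tensor, real: torch.Tensor,
                       loss_g: float, loss_d: float, gamma: float = 1.0):
    # Noise std proportional to relu(L_g - L_d): per the rebuttal, noise is
    # only needed while the generator cannot yet fool the discriminator.
    std = gamma * max(loss_g - loss_d, 0.0)
    noisy_fake = fake + std * torch.randn_like(fake)
    noisy_real = real + std * torch.randn_like(real)
    return noisy_fake, noisy_real
```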
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3 ]
[ "nips_2022_qbSB_cnFSYn", "mwcqBji_mO3", "nips_2022_qbSB_cnFSYn", "z-XMKOvAs7", "lgAcmR7myME", "jIkg2sL4vow", "h2UadikiS3", "rOGwO5FbR9", "IaZwF7t__ax", "HQuIij8JYUa", "nips_2022_qbSB_cnFSYn", "nips_2022_qbSB_cnFSYn", "nips_2022_qbSB_cnFSYn" ]
nips_2022_dmCyoqxEwHf
GenerSpeech: Towards Style Transfer for Generalizable Out-Of-Domain Text-to-Speech
Style transfer for out-of-domain (OOD) speech synthesis aims to generate speech samples with unseen style (e.g., speaker identity, emotion, and prosody) derived from an acoustic reference, while facing the following challenges: 1) The highly dynamic style features in expressive voice are difficult to model and transfer; and 2) the TTS models should be robust enough to handle diverse OOD conditions that differ from the source data. This paper proposes GenerSpeech, a text-to-speech model towards high-fidelity zero-shot style transfer of OOD custom voice. GenerSpeech decomposes the speech variation into the style-agnostic and style-specific parts by introducing two components: 1) a multi-level style adaptor to efficiently model a large range of style conditions, including global speaker and emotion characteristics, and the local (utterance, phoneme, and word-level) fine-grained prosodic representations; and 2) a generalizable content adaptor with Mix-Style Layer Normalization to eliminate style information in the linguistic content representation and thus improve model generalization. Our evaluations on zero-shot style transfer demonstrate that GenerSpeech surpasses the state-of-the-art models in terms of audio quality and style similarity. The extension studies to adaptive style transfer further show that GenerSpeech performs robustly in the few-shot data setting. Audio samples are available at \url{https://GenerSpeech.github.io/}.
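The Mix-Style Layer Normalization named in this abstract is the component most reviewers below ask about. As a reading aid, here is a minimal PyTorch-style sketch of how such a layer could work, assuming (per the author responses in this thread) a conditional layer norm whose style vector is mixed with a batch-shuffled copy at a Beta-distributed rate; the module and argument names, and the Beta parameter `alpha`, are our illustrative assumptions rather than GenerSpeech's exact implementation.

```python
import torch

class MixStyleLayerNorm(torch.nn.Module):
    def __init__(self, dim: int, style_dim: int, alpha: float = 0.2):
        super().__init__()
        self.norm = torch.nn.LayerNorm(dim, elementwise_affine=False)
        self.to_scale = torch.nn.Linear(style_dim, dim)
        self.to_shift = torch.nn.Linear(style_dim, dim)
        self.beta = torch.distributions.Beta(alpha, alpha)

    def forward(self, x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, dim) content sequence; w: (batch, style_dim) style vectors.
        perm = torch.randperm(w.size(0), device=w.device)
        lam = self.beta.sample((w.size(0), 1)).to(w)
        w_mix = lam * w + (1.0 - lam) * w[perm]  # randomly mismatched style
        # Scaling/shifting with a perturbed style acts as injected noise that
        # discourages the content branch from encoding style information.
        scale = self.to_scale(w_mix).unsqueeze(1)
        shift = self.to_shift(w_mix).unsqueeze(1)
        return scale * self.norm(x) + shift
```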
Accept
All 3 reviewers agree that the paper is novel, technically strong and experimentally convincing. This paper should be accepted.
train
[ "B5sIosrToDt", "j-A7kQMpFz", "MY2ihPzavOf", "NDrrOC3Epq", "f8Ye0XkoAHT", "aRRaax5q2B1", "qwHa5Nw91Nj", "fFb2Ys3ktVU", "NQC0pKGgWGw", "qoLu_LJwfQR", "n4naLMwC0HT", "_bhoGwgl5FG", "iCuxP45zsuc" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We greatly appreciate that you have raised your score. We believe that your valuable comments have improved the paper, and feel free to ask more questions if you have any. Thank you again for raising the score.", " \nTo all reviewers, ACs, and PCs:\n\nWe thank all reviewers for their valuable suggestions, effort, and time. Your comments have improved our work. Here we summarize our effort in addressing reviewers' concerns, and please refer to the responses to each reviewer for more details.\n\n**Extensive experimental comparison to demonstrate the effectiveness of proposed modules.**\n\n- Per the reviewers' suggestions, we have extensively included the expressive FS2 with post-net for comparison, suggesting the superiority of GenerSpeech in modeling fine-grained style patterns (e.g., local rises and falls of the pitch and stress).\n\n- To prove that global style attributes have been eliminated from the linguistic content representation, we conduct a toy experiment on speaker and emotion classification. The phonetic representation with MSLN has witnessed a distinct decrease in classification accuracy, which verifies the effectiveness of MSLN in disentangling the style information. \n\n- In response to the reviewer's question, we include our preliminary ablation studies for deciding the optimal choice of the VQ size.\n\n**More detailed explanations on model design.**\n\n- As shown in our responses, we discuss the insights of designing MSLN for disentangling style information and learning style-agnostic representation. MSLN refines the sequence conditioned on the mismatched style information, which could be regarded as injecting noise to confuse the model and prevent it from generating style-consistent representation.\n\n- For global (speaker and emotion) style learning, we explain why we consider the fine-tuned wav2vec 2.0 model as the style encoder for generating discriminative embeddings.\n\n- For fine-grained style learning, we visualize the mel-spectrograms and corresponding pitch tracks generated by different TTS systems for ablation, presenting the effectiveness of the local style adaptor in modeling multi-level style latent representations.\n\n**More precise definitions and presentation.**\n\n- Per the reviewers' suggestions, we revise Fig. 1(a) to include more annotations for clear definitions.\n\n- As shown in our responses, we present the detailed fine-tuning setting and loss objective for a more precise presentation.\n\nIn the meantime, we revise the manuscript according to the comments and suggestions of the reviewers. We hope all reviewers can carefully read our detailed feedback and reconsider their ratings to give a new model a chance. Feel free to ask more questions if you have any. We are always happy to have a further discussion and answer more questions raised by you.\n\nBest regards, Authors", " Dear Reviewer Mo8N,\n\nThanks again for your constructive comments. We would like to kindly remind you that we tried our best to respond to your concerns with additional experiments, etc. As the end of the author-reviewer discussion period is approaching, we would be grateful if we could hear your feedback regarding our answers to the reviews. We would be happy to answer and discuss if you have further comments.\n\nBest regards, Authors", " Dear Reviewer X6YQ,\n\nThanks again for your constructive comments. We would like to kindly remind you that we tried our best to respond to your concerns with additional experiments, etc. 
As the end of the author-reviewer discussion period is approaching, we would be grateful if we could hear your feedback regarding our answers to the reviews. We would be happy to answer and discuss if you have further comments.\n\nBest regards, Authors", " Dear Reviewers,\n\nThank you again for the great efforts and the valuable comments. \n\nWe have carefully addressed the main concerns in detail. We hope you might find the response satisfactory. As the end of the rebuttal phase is approaching, we would be grateful if we could hear your feedback regarding our answers to the reviews. We will be very happy to clarify any remaining points (if any).\n\nThanks in advance,\n\nPaper 4823 authors", " We thank all reviewers for the constructive feedback. Here we summarize the revision of the manuscript according to the comments and suggestions of reviewers:\n\n- In section 3.3, we include more insights and explanations on designing mix-style layer normalization (MSLN).\n- We update Fig. 1 and provide more details.\n- In section 3.4.2, we provide the definition of $\\mathcal{S}_p$.\n- In section 4.3, we bold the definition of abbreviations.\n- In section 4.4, we include the detailed fine-tuning setting and put this in Appendix E.\n- In Appendix F in the supplementary material, we plot the mel-spectrograms and corresponding pitch tracks generated by different TTS systems for ablation.\n\n ", " We are grateful for your positive review and valuable feedback, and we hope our response fully resolves your concern.\n\n**[About the style representation $\\mathcal{S_u}$.]**\n\nThe style representation $\\mathcal{S_u}$ is a sequence of embedding vectors. In style encoders, the vector quantization block serves as a bottleneck to eliminate the style-unrelated information effectively, which is regularized by the gradient of pitch reconstruction loss $\\mathcal{L_p}$ in SSP Predictor. To further ensure the representation sequence does not explode, we include a commitment loss $\\mathcal{L_c}$ as described in Section 3.6.1.\n\n**[About the abbreviations in Table 3.]**\n\nWe use USE, PSE, and WSE to denote the utterance, phoneme, and word-level style encoder, respectively, which have been presented in the caption of Table 3.\n\n**[About the comparison with expressive FS2.]**\n\nThanks for the reviewer's suggestion. We further include the expressive FS2 with post-net for comparison in the ESD dataset. The evaluation procedure stays consistent with the manuscript, and we present the results of parallel style transfer in the following tables:\n\nMethod | MOS | SMOS | Cos | FFE\n- | - | -| - | -\nReference | 4.47 $\\pm$ 0.08 | / | / | /\nReference(voc.) | 4.40 $\\pm$ 0.09 | 4.47 $\\pm$ 0.10 | 0.99 | 0.07\nExpressive FS2 | 4.04 $\\pm$ 0.08 | 3.93 $\\pm$ 0.09 | 0.93 | 0.41\nExpressive FS2 + Post-Net | 4.09 $\\pm$ 0.08 | 3.95 $\\pm$ 0.08 | 0.94 | 0.39 \nGenerSpeech | **4.11 $\\pm$ 0.10** | **4.20 $\\pm$ 0.09** | **0.97** | **0.26** \n\nThe flow-based post-net is designed to refine the coarse-grained outputs of the mel-spectrogram decoder, and thus an improvement in audio quality and naturalness could be observed. 
Regarding style similarity, the expressive FS2 with post-net shares a commonly limited capability with the original FS2 in modeling the highly dynamic style variation, showing an apparent gap from GenerSpeech in SMOS and FFE evaluation.\n\nTo conclude, the performance gap of samples generated between the expressive FS2 and GenerSpeech is mainly attributed to the different capacities in modeling fine-grained style patterns (e.g., local rises and falls of the pitch and stress). We illustrate the pitch tracks of generated mel-spectrograms in Fig. 2 and find that GenerSpeech precisely resembles and transfers the prosodic style of a reference signal, which is nearly time-aligned in pitch contours. In contrast, expressive FS2 tends to model the \"average\" prosodic distribution over their input data, generating less expressive speech especially for long-form phrases.\n\n**[About the size of VQ code-book.]**\n\n\nIn response to the reviewer's question, the vector quantization block enjoys a carefully-crafted information bottleneck design. We conducted ablation studies before deciding the optimal choice of the VQ size, and the results are presented in the following tables:\n\nMethod | SMOS\n- | :-: \nReference | 4.47 $\\pm$ 0.08 \nReference(voc.) | 4.40 $\\pm$ 0.09 \nGenerSpeech(VQ Size=64) | 4.02 $\\pm$ 0.08 \nGenerSpeech(VQ Size=96) | 4.06 $\\pm$ 0.07 \nGenerSpeech(VQ Size=128) | **4.11 $\\pm$ 0.10**\nGenerSpeech(VQ Size=160) | 4.05 $\\pm$ 0.09 \n\n\nGenerSpeech with a 64-category code-book has witnessed a decreased sample similarity, demonstrating that the tighter latent space fails to represent the diverse style patterns. In contrast, an expanded code-book (e.g., 160) produces \"information leakage\" where the content information of reference audio is unexpectedly modeled in the style encoder, and thus the entangled representation leads to distinct quality degradation. As a result, we set VQ size as 128, which is robust across style encoders in multiple levels.\n\n**[About the positional encoding embedding.]**\n\nTo make use of the order of the style representation sequence, we introduce the positional encoding embedding to include information about the position. Consequently, it creates reasonable alignments close to the diagonal in differential local-level style encoders, properly controlling and transferring the prosodic variations in different places.\n \n\nAgain, we appreciate your positive reviews and hope our response can fully resolve your concern.", " \n\n**[About the caption and mark in Fig. 1(e).]**\n\nThanks for the reviewer's suggestion. We have detailed $Q$ in the style-to-content alignment module in the revised version of the paper.\n\n**[About the pitch reconstruction loss.]**\n\nThe gradient of pitch reconstruction loss is utilized to optimize the whole model and prevents sub-optimal training.\n\n**[About the detailed fine-tuning setting.]**\n\nFollowing the common practice [1], we fine-tune GenerSpeech using 1 NVIDIA 2080Ti GPU with the batch size of 64 sentences for 2000 steps, and all parameters are optimized. The optimizer configuration and loss functions stay consistent with those in the experimental setup.\n\n\n**[References]**\n\n\n[1] Chen M, Tan X, Li B, et al. Adaspeech: Adaptive text to speech for custom voice[J]. ICLR, 2021.\n\n[2] Min D, Lee D B, Yang E, et al. Meta-stylespeech: Multi-speaker adaptive text-to-speech generation[C]//International Conference on Machine Learning. 
PMLR, 2021: 7748-7759.\n\nAgain, we appreciate the reviewer's valuable reviews and believe some misunderstandings are due to a lack of clarity on our part. Hope our response can address your concerns.\n", " We thank the reviewer for the constructive feedback and for considering our work as \"showed good performance\" and \"achieved better model generalization\". We understand that your concerns are mainly related to the paper's clarity and hope our response resolves your concerns fully.\n\n**[About the proposed mix-style layer normalization (MSLN).]**\n\nThanks for the reviewer's feedback that requests more explanations about our proposed method. Conditional layer normalization has demonstrated its effectiveness in influencing the hidden activation and final prediction. AdaSpeech [1] and Meta-StyleSpeech [2] utilize the speaker embedding as the conditional information, which adaptively scales and shifts the normalized input features to synthesize speech of various voices. \n\n\nFor disentangling style information and learning style-agnostic representation, a straightforward solution is to refine the sequence conditioned on the mismatched style information, which could be regarded as injecting noise to confuse the model and prevent it from generating style-consistent representation. Consequently, the model refines the input features regularized by perturbed style and learns generalizable style-invariant content representation. To further ensure diversity and avoid over-fitting, we perturb the style information by randomly mixing the shuffled vectors with a shuffle rate $\lambda$ sampled from the Beta distribution. \n\nFrom a high-level perspective described in Section 4.3, we observe that removing the mix-style layer normalization in a generalizable content adaptor results in decreased quality and similarity. Further, we conduct a toy experiment to verify the effectiveness of MSLN in disentangling the style information. Specifically, we fine-tune the learned phonetic representation $\mathcal{H}_c$ in the downstream speaker and emotion classification tasks using the LibriTTS (2456 speakers) and ESD (5 emotions) datasets, respectively. Once the classifiers have converged, we compare the accuracy across the test set and present the results in the following table: \n\nMethod | Speaker Acc | Emotion Acc\n- | :-: | :-:\n$\mathcal{H}_c$ | 15.0\\% | 34.5\\%\n$\mathcal{H}_c$ with MSLN | 6.5\\% | 25.0\\%\n\n\nThe phonetic representation $\mathcal{H}_c$ with MSLN has witnessed a distinct decrease in accuracy in the downstream classification tasks, getting close to random prediction. By introducing MSLN to the generalizable content adaptor, the global style attributes (i.e., speaker and emotion) could be disentangled from the linguistic content representation, which promotes the generalization of the TTS model towards out-of-domain custom voices. \n\n \n**[About the multi-level style adaptor.]**\n\nThanks for the reviewer's feedback that requests more explanations about our proposed method. In the global style encoder, the speaker and emotion conditions have been constructed by a generalizable wav2vec 2.0 model, which mainly represents the overall style characteristics of a speech sample. However, considering the rises and falls of the local pitch and highly dynamic prosodic variations in custom voices, the fine-grained style representations should be adequately modeled. 
\n\nFrom a high-level point of view as described in Section 4.3, we observe a distinct drop when removing utterance, phoneme, or word-level style encoder, which demonstrates the effectiveness of capturing style latent representations in different receptive levels. \n\nFurthermore, we detail our analysis and investigation of different levels of style representation. Please kindly refer to Appendix F in the supplementary material, where we plot the mel-spectrograms and corresponding pitch tracks generated by different TTS systems: 1) The utterance-level latent representation mainly resorts to the long-term patterns with a large receptive field. Dropping the utterance-level style encoder results in an unrealistic prosodic style, making it more challenging to capture the long-term dependencies; 2) The phoneme level style encoder is supposed to model the short-term prosodic attributes between phonemes. Removing it has witnessed the incorrect fluctuation in local pitch, which indicates the effectiveness of the pooling operation in capturing short-term style variations. 3) A word may consist of several phonemes, and we find that removing the word-level style representation leads to an unnatural transition between words, resulting in a distinct drop in audio similarity.\n", " We are grateful for your positive review and valuable feedback, and we hope our response fully resolves your concern.\n\n**[About the phonetic representation $\\mathcal{H}_c$.]**\n\nThe phonetic representation $\\mathcal{H}_c$ has been expanded as frame-level in the content adaptor. We apologize for the mistake in Fig. 1(a), which has been fixed in the new version of the paper.\n\n**[About the marks in Fig. 1(d).]**\n\nThanks for the reviewer's suggestion, and the additional input (i.e., $Q$) has been explicitly given in the revised version of the paper.\n\n**[About the proposed mix-style layer normalization (MSLN).]**\n\nThanks for the reviewer's feedback that requests more explanations about our proposed method. Conditional layer normalization has demonstrated its effectiveness in influencing the hidden activation and final prediction. AdaSpeech [1] and Meta-StyleSpeech [2] utilize the speaker embedding as the conditional information, which adaptively scales and shifts the normalized input features to synthesize speech of various voices. \n\n\nFor disentangling style information and learning style-agnostic representation, a straightforward solution is to refine the sequence conditioned on the mismatched style information, which could be regarded as injecting noise to confuse the model and prevent it from generating style-consistent representation. Consequently, the model refines the input features regularized by perturbed style and learns generalizable style-invariant content representation. To further ensure diversity and avoid over-fitting, we perturb the style information by randomly mixing the shuffled vectors with a shuffle rate $\\lambda$ sampled from the Beta distribution. \n\n \n \n**[About the design of global style encoder.]**\n\nWav2vec 2.0 is a representation learning framework following a two-stage training process of pre-training and fine-tuning, which has proven its efficiency in learning latent features and even generalizing well to out-of-distribution recordings. Recently, wav2vec 2.0 has been further adopted in various downstream tasks and demonstrated the SOTA results [3]. 
Inspired by this, we consider the fine-tuned wav2vec 2.0 model as the global style encoder to generate discriminative speaker and emotion embeddings, and perform strong robustness to out-of-distribution custom voices.\n\n**[About the definition of $\\mathcal{S}_u$]**\n\nIn this work, we use $\\mathcal{S}_u$ to denote the phoneme-level style latent representation derived in the style adaptor. Thanks for the reviewer's reminder, and we have attached this definition to the new version of the paper.\n\n**[References]**\n\n\n[1] Chen M, Tan X, Li B, et al. Adaspeech: Adaptive text to speech for custom voice[J]. ICLR, 2021.\n\n[2] Min D, Lee D B, Yang E, et al. Meta-stylespeech: Multi-speaker adaptive text-to-speech generation[C]//International Conference on Machine Learning. PMLR, 2021: 7748-7759.\n\n[3] Wang, Y et al. A fine-tuned wav2vec 2.0/hubert benchmark for speech emotion recognition, speaker verification and spoken language understanding. arXiv preprint arXiv:2111.02735.\n\n\n\nAgain, we thank the reviewer for the insightful reviews and \"Accept\" recommendation for our paper.\n", " The paper addresses the problem of zero-shot style transfer for text-to-speech (TTS) synthesis of out-of-domain (OOD) custom voice. To address the problem, the paper proposes GenerSpeech, a generalizable text-to-speech model, which models and controls the style-agnostic (linguistic content) and style-specific (speaker timber, emotion and prosody) speech variations respectively. More specifically, mix-style layer normalization (MSLN) is proposed to eliminate the style attributes in the linguistic content representation; multi-level style adaptor is adopted for modeling the style representations at global level speaker and emotion characteristics, and the local level (utterance, phoneme and word-level) prosody representations. Extensive experiments demonstrate the effectiveness of the proposed method in zero-shot style transfer. Strengths:\n\n1) Towards zero-shot style transfer of OOD voice, the paper proposes a method by decomposing the speech variations into style-agnostic and style-specific parts. The idea is a kind of speech representation disentanglement which is quite useful for voice cloning and/or style transferring.\n\n2) The paper incorporates several techniques to improve the generalization ability of the proposed method, including mix-style layer normalization (MSLN), multi-level style adaptor, and the flow-based post-net.\n\n3) The samples in the provided demo webpage do show the superiority of the proposed method over the other comparison methods. For both parallel style transfer and non-parallel transfer, the synthesized results well demonstrate the effectiveness of the proposed method in transferring speaker timbers, emotions, and prosody variations.\n\nWeaknesses:\n\nI don't think this paper has major flaws. But please refer to further comments for detailed questions.\n 1) It is unclear how the phonetic representation $H_c$ is input to different local style encoders as the query $Q$. Is $H_c$ phoneme level (before length regulator) or frame level (after length regulator)? From Fig. 1(a), it seems $H_c$ is phoneme level. If it is the case, how could the output of Style Adaptor (phoneme level) be added to the output of Content Adaptor (frame level) as the input to Mel Decoder? The authors need to make this clearer.\n\n2) In Fig. 1(d), it would be better if the authors could explicitly give the additional input of $Q$ (i.e. 
phonetic representation $H_c$) for Utterance / Phoneme / Word-level local style encoders.\n\n3) In Section 3.3, the paper proposes mix-style layer normalization (MSLN) by considering not only the original style vector $w$ but also the shuffled style vector $ \tilde{w}$. Experiments also validate the effectiveness of this design. It would be better if the authors gave more explanations about MSLN and why it can be adopted for deriving style-agnostic representations.\n\n4) In Section 3.4.1, from Appendix A.2.1, average pooling is adopted, followed by two separate fully connected (FC) layers, to derive the global speaker and emotion information respectively. It is expected that the average pooling of the wav2vec model outputs should carry such speaker and emotion styles. But it is unclear why wav2vec could be adopted to capture the global style characteristics. The authors could give more explanations about such design.\n\n5) Although I can guess what $S_u$ means, the authors do not give the definition of $S_u$ in Section 3.4.2.\n The authors address the impacts and limitations in Appendix G. ", " This paper proposed GenerSpeech, which decomposed the speech variation into the style-agnostic and style-specific parts by introducing a multi-level style adaptor modeling both global speaker and emotion characteristics and the local fine-grained prosodic representations, and a generalizable content adaptor with Mix-Style Layer Normalization to eliminate style information in the linguistic content representation. The evaluations on style transfer demonstrated that GenerSpeech could synthesize high-quality speech in terms of audio quality and style similarity.\n\nUpdate: the authors addressed my concerns well. I changed my ratings. The introduced GenerSpeech model achieved better model generalization with several techniques to learn both the style-agnostic and style-specific variations in speech separately. The multi-level style adaptor could model and transfer various style attributes, including the global speaker and emotion characteristics, and the fine-grained utterance-level, phoneme-level, and word-level prosodic representations. The Mix-Style layer normalization was able to eliminate the style information in linguistic representations for improving the model generalization. GenerSpeech showed good performance in zero-shot style transfer results for OOD text-to-speech synthesis compared to other baseline methods. However, there are missing analyses of the effectiveness of the proposed style-agnostic and style-specific modules.\n\nFirst, the authors used a shuffle operation in the Mix-Style Layer Normalization (MSLN) layer to eliminate the style information from the linguistic content representation. But a study of the effectiveness of the shuffle operation is missing; how can one guarantee that the style information is fully disentangled and removed from the linguistic content representation by using the shuffle in MSLN? Second, what are the latent representations extracted by the Style Adaptor from the reference audio at different levels (utterance, word, phoneme)? Do these local modules effectively capture the desired style information?\n\nMinor: in Figure 1(e), it's helpful to mark where the Q (query) in the \"Style-to-Content Alignment\" module comes from, since 1(d) shows that the \"Local style encoder\" only takes the mel-spectrogram as input. 
In section 3.6.1, is the pitch reconstruction loss L_p only used to train the parameters in the pitch predictors, or is the gradient from the pitch reconstruction loss also used to train the rest of the model?\n\nIn section 4.4, when performing model adaptation using different amounts of data, what are the detailed fine-tuning settings for the multi-level style adaptor? For example, what are the optimization configuration and loss functions (if there are any changes)? The authors mentioned future research directions in section 5 and potential negative societal impacts in Appendix G.", " This work proposed a non-autoregressive TTS model with good style transfer under out-of-domain conditions. It includes 1) a multi-level style adaptor for global styles (speaker and emotion) and local styles (utterance, phoneme and word), 2) a generalizable content adaptor with mix-style layer-normalization, and 3) a flow-based post-net. The experimental results show that the proposed method outperforms the baselines it is compared with, demonstrating its efficacy. Strengths:\n1. Proposing a layer-normalized mix-style for better generalization\n2. A style-specific module to integrate different style information. \n3. Including a flow-based post-net on top of the FastSpeech 2 model\n4. Detailed comparison with different baselines.\n\nWeaknesses:\n1. Despite the authors' lengthy appendix, which provides more information about the experiments, some critical details of the proposed method are still missing. For example, what is the shape of $Su$ after VQ? Is it a vector or a sequence of embedding vectors? If it is a sequence of embedding vectors, is there any constraint applied to make them identical or close?\n2. Some abbreviations are used without definition, such as USE, PSE and WSE in Table 3.\n After listening to the sample audio clips, one impression I have is that the style transfer from the expressive FS2 is as good as that of the proposed GenerSpeech. One difference I noticed is the synthesized audio quality. Could we add another experiment to compare the expressive FS2 with a flow-based post-net?\n\nWhat is the shape of $Su$ after VQ? Is it a vector or a sequence of embedding vectors? If it is a sequence of embedding vectors, is there any constraint applied to make them identical or close?\n\nThe local multi-layer style encoders share a common architecture; do they all use VQ code size=128? How do we choose this number?\n\nWhat's the purpose of including the positional encoding embedding in the style representation before it is fed into the attention module? The authors have addressed that in section 5." ]
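On the codebook questions in the last review above (the shape of $S_u$, the VQ code size of 128, and the constraint on the quantized vectors), a generic VQ-VAE-style bottleneck makes the authors' answers concrete: $S_u$ comes out as a sequence of codewords, and the commitment term is the constraint that keeps encoder outputs close to them. The sketch below assumes the usual straight-through estimator and is not GenerSpeech's exact module.

```python
import torch
import torch.nn.functional as F

class VQBottleneck(torch.nn.Module):
    def __init__(self, num_codes: int = 128, dim: int = 256):
        super().__init__()
        self.codebook = torch.nn.Embedding(num_codes, dim)

    def forward(self, s: torch.Tensor):
        # s: (batch, length, dim) style sequence from a local style encoder.
        codes = self.codebook.weight.unsqueeze(0).expand(s.size(0), -1, -1)
        dist = torch.cdist(s, codes)               # (batch, length, num_codes)
        idx = dist.argmin(dim=-1)                  # nearest-codeword assignment
        s_q = self.codebook(idx)                   # quantized sequence S_u
        commit = F.mse_loss(s, s_q.detach())       # commitment loss L_c
        s_q = s + (s_q - s).detach()               # straight-through gradients
        return s_q, commit
```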
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "_bhoGwgl5FG", "nips_2022_dmCyoqxEwHf", "iCuxP45zsuc", "_bhoGwgl5FG", "nips_2022_dmCyoqxEwHf", "nips_2022_dmCyoqxEwHf", "iCuxP45zsuc", "NQC0pKGgWGw", "_bhoGwgl5FG", "n4naLMwC0HT", "nips_2022_dmCyoqxEwHf", "nips_2022_dmCyoqxEwHf", "nips_2022_dmCyoqxEwHf" ]
nips_2022_fHUBa3gQno
Improving Task-Specific Generalization in Few-Shot Learning via Adaptive Vicinal Risk Minimization
Recent years have witnessed the rapid development of meta-learning in improving the meta generalization over tasks in few-shot learning. However, the task-specific level generalization is overlooked in most algorithms. For a novel few-shot learning task where the empirical distribution likely deviates from the true distribution, the model obtained via minimizing the empirical loss can hardly generalize to unseen data. A viable solution to improving the generalization comes as a more accurate approximation of the true distribution; that is, admitting a Gaussian-like vicinal distribution for each of the limited training samples. Thereupon we derive the resulting vicinal loss function over vicinities of all training samples and minimize it instead of the conventional empirical loss over training samples only, favorably free from the exhaustive sampling of all vicinal samples. It remains challenging to obtain the statistical parameters of the vicinal distribution for each sample. To tackle this challenge, we further propose to estimate the statistical parameters as the weighted mean and variance of a set of unlabeled data it passed by a random walk starting from training samples. To verify the performance of the proposed method, we conduct experiments on four standard few-shot learning benchmarks and consolidate the superiority of the proposed method over state-of-the-art few-shot learning baselines.
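To make the abstract's "vicinal loss over vicinities of all training samples" concrete, here is a sketch of the standard vicinal-risk objective it builds on, with the Gaussian vicinities the abstract describes; $(\mu_i, \Sigma_i)$ stand for the per-sample statistics estimated from the random walk. The closed-form line is the usual second-order Taylor route to avoiding sampling (the rebuttal below mentions such an expansion for the cross-entropy loss); the paper's exact derivation may differ.

```latex
% Empirical risk over the training samples vs. vicinal risk over Gaussian vicinities
R_{\mathrm{emp}}(f) = \frac{1}{n} \sum_{i=1}^{n} \ell\big( f(x_i), y_i \big),
\qquad
R_{\mathrm{vic}}(f) = \frac{1}{n} \sum_{i=1}^{n}
  \mathbb{E}_{\tilde{x} \sim \mathcal{N}(\mu_i, \Sigma_i)}
  \big[ \ell\big( f(\tilde{x}), y_i \big) \big]
% A second-order Taylor expansion of \ell around \mu_i (using
% E[\tilde{x}-\mu_i]=0 and E[(\tilde{x}-\mu_i)(\tilde{x}-\mu_i)^\top]=\Sigma_i)
% yields a sampling-free surrogate:
\mathbb{E}_{\tilde{x} \sim \mathcal{N}(\mu_i, \Sigma_i)} \big[ \ell(\tilde{x}) \big]
  \approx \ell(\mu_i) + \tfrac{1}{2}\, \mathrm{tr}\!\big( \Sigma_i \, \nabla^2 \ell(\mu_i) \big)
```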
Accept
In this paper, the authors study a few-shot learning setting where the training distribution deviates from the true distribution. To achieve a more accurate approximation of the true distribution, the authors propose assuming a Gaussian-like vicinal distribution around each training data point, which results in a vicinal loss. Extensive empirical results in the paper show that the proposed method improves over the baseline methods. While the vicinal loss has been used in other settings, this problem formulation and use of the vicinal loss is novel and the empirical advantage is significant. Therefore, I am recommending acceptance. Given the reviewers' concerns about typos and presentation issues, I encourage the authors to make a few more passes over the paper and improve the writing and presentation.
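The meta-review's "weighted mean and variance" step (statistics of the unlabeled points visited by a lazy random walk from each training sample) can likewise be sketched in a few lines. Everything here, including the Gaussian-kernel affinity, the laziness coefficient, and the diagonal covariance, is an illustrative assumption and not the paper's exact recipe.

```python
import numpy as np

def vicinal_stats(x0, unlabeled, n_steps=10, laziness=0.5, sigma=1.0):
    # Node 0 is the training sample; remaining nodes are unlabeled features.
    feats = np.vstack([x0[None], unlabeled])
    d2 = ((feats[:, None] - feats[None]) ** 2).sum(-1)
    affinity = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(affinity, 0.0)
    P = affinity / affinity.sum(axis=1, keepdims=True)        # transition matrix
    P = laziness * np.eye(len(feats)) + (1.0 - laziness) * P  # lazy random walk
    p = np.zeros(len(feats))
    p[0] = 1.0
    for _ in range(n_steps):
        p = p @ P                                             # visit probabilities
    mu = (p[:, None] * feats).sum(axis=0)                     # weighted mean
    var = (p[:, None] * (feats - mu) ** 2).sum(axis=0)        # weighted (diag) variance
    return mu, var
```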
train
[ "W-F8aJ7AdPh", "t84xwLxApNp", "j5cLwyippPY", "gONzKfDzsf-", "1guJIVdrNKD", "2jHpWnk33kC", "v2CUwi5TChq", "t6ZYC8uYMb", "sXlUVasBchW", "W0EFgUURBxn", "3KOP_E4_dty", "XC99y4zPiKzq", "2FKPpjZY3YV", "lFWDw_lo0r_", "I_4aQY7myNP", "tfFu1_6fukR", "kcK8sn5Pt6GC", "t5TRGib_LPB", "L2-A6EbkRCV", "Qj2WXCIddvB", "1sei90iqGc", "QEGng-PHjpC", "G2beXFko_sn" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " First, we again post part II of the previous response here for your reference, in case that you have missed that part which is in the other post.\n\nSecond, if the following response still fails to address your concerns, we would very much appreciate if you elaborate the weakness of our experiments.\n\n**Q13: Experimental Results: if compared to STOA with different backbones (Table 2), ADV is weaker than STOA in most.**\n\n> - It is **not true** that \"ADV is weaker than STOA in most\". We summarize the comparison with SOTA with CONV and ResNet-12 backbones by the results in Table 1 and 2. ADV is better than SOTA baselines in all cases expect the FEAT with CONV in 5 shot.\n> \n> | Method | Backbone | 1 shot | 5 shot |\n> | --- | --- | --- | --- |\n> | TPN [18] | CONV | 55.51 ± 0.86 | 69.86 ± 0.65 |\n> | FEAT [41] | CONV | 57.04 ± 0.20 | **72.89 ± 0.16** |\n> | TEAM [22] | CONV | 56.57 | 72.04 |\n> | ADV-SVM | CONV | *59.02 ± 0.70* | *72.13 ± 0.53* |\n> | ADV-CE | CONV | **59.22 ± 0.68** | 72.07 ± 0.51 |\n> \n> | Method | Backbone | 1 shot | 5 shot |\n> | --- | --- | --- | --- |\n> | TPN [18] | ResNet | 59.46 | 75.65 |\n> | TEAM [22] | ResNet | 60.07 | 75.90 |\n> | ADV-SVM | ResNet | *64.27 ± 0.74* | *76.37 ± 0.54* |\n> | ADV-CE | ResNet | **65.03 ± 0.74** | **77.12 ± 0.52** |\n> \n> - We compared ADV-CE with **3 SOTA methods**: Transductive Information Maximation (TIM) [R1], LaplacianShot [R2], and Obliquer Manifold (OM) [R5].\n> \n> - The performance of few-shot learning is affected by both the base model obtained in the pre-training phase (meta-training phase) and the fine-tuning strategy in the fine-tuning phase (meta-testing phase). **For a fair comparison in the meta-testing phase, the base model should be the same for different methods.**\n> \n> - The reason why we select these 3 methods is that they are all independent of the pre-training of the base model and do not require training any extra module in the pre-training /meta-training phase.\n> \n> - **For a fair comparison**, we feed all methods the same fixed WRN feature extractor and the same 10,000 meta-testing tasks. As TIM and LaplacianShot use the same pre-trained weights of a feature extractor, we use these pre-trained weights for all of the 4 methods. No data augmentation is applied to images in meta-testing tasks.\n> \n> - We present the results in the table below. **We observe that ADV-CE outperforms all baselines in 1-shot learning and obtains the second best performance in 5-shot learning, which is slightly lower than TIM.**\n> \n> \n> | Method | 1-shot | 5-shot |\n> | --- | --- | --- |\n> | TIM[R1] | 74.36 ± 0.26 | **84.58 ± 0.15** |\n> | LaplacianShot[R2] | 69.10 ± 0.23 | 80.86 ± 0.15 |\n> | OM[R5] | 68.67 ± 0.23 | 83.42 ± 0.15 |\n> | ADV-CE | **74.92 ± 0.25** | 84.53 ± 0.16 |\n> \n> - Note that the major contributions of TIM lie in regularization on query sets, though ours can be deemed as regularization on support sets, **which means its contributions are orthogonal to our contributions**. We are more than interested in integrating TIM to further boost ours. 
When integrating TIM with ADV-CE and applying it on WRN feature extractor pre-trained by [19] on miniImageNet, ADV-CE+TIM achieves competitive performance to SOTA methods in the following table, where the performances of SOTA methods are reported in the original paper.\n> \n> | Method | 1-shot | 5-shot |\n> | --- | --- | --- |\n> | TIM [R1] | 77.8 | 87.4 |\n> | LaplacianShot [R2] | 74.86 ± 0.19 | 84.13 ± 0.14 |\n> | [R3] | 80.04 | 87.64 |\n> | [R4] | 76.84 | 84.36 |\n> | OM[R5] | 80.64 ± 0.34 | **89.39 ± 0.39** |\n> | ADV-CE+TIM | **80.75 ± 0.25** | 88.97 ± 0.14 |\n> \n> Note that both [R3] and OM [R5] are integrated with TIM.\n>\n> [R1] Boudiaf, Malik, et al. \"Information maximization for few-shot learning.\" NeurIPS 2020.\n> \n> [R2] Ziko, Imtiaz, et al. \"Laplacian regularized few-shot learning.\" ICML 2020.\n> \n> [R3] Cui, Wentao, and Yuhong Guo. \"Parameterless transductive feature re-representation for few-shot learning.\" ICML 2021.\n> \n> [R4] Lee, Dong Hoon, and Sae-Young Chung. \"Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification.\" ICML 2021.\n> \n> [R5] Qi, Guodong, et al. \"Transductive Few-Shot Classification on the Oblique Manifold.\" ICCV. 2021.", " I still think that the method is not novel and the experimental part is too weak. This is not above the borderline in my opinion.", " The rebuttal has addressed most of my concerns. I keep my rating and recommend acceptance.", " Thanks again for your time and comments. There might be some misunderstanding. We would like to clarify point by point. Please kindly let us know whether you have any further concerns.\n\n**Q10: Research Problem.**\n\n> - In Lines 32-34, we **do not** claim the research problem is unexplored. Our claim is that the research problem is relatively **\"less explored\"** than \"the majority of meta-learning work revolved around developing better meta-training strategies\".\n> \n> - We agree that Free-Lunch[1] ([39] in the paper) has explored the research problem of task-specific generalization in meta-testing stage.\n> \n> - In lines 99-105, we acknowledge that our method shared a similar spirit of [1] and summarize the main differences between [1] and ours.\n> \n> - Considering the importance of task-specific generalization in meta-testing stage, we believe it merits further exploration.\n> \n> - We highlight the advantages of the proposed method over [1].\n> \n> - The vicinal distribution estimated using unlabeled data from the same task in our method is closer to the true distribution than the calibrated distribution estimated in [1].\n> - Unlike [1] which optimizes the objective by sampling 750 data instances, our method directly optimizes the expected vicinal loss over the distributions, resulting in more efficient and stable training than [1].\n> - Our method significantly outperforms [1] (6% and 3% in 1-shot and 5-shot learning on miniImageNet).\n\n**Q11: Proposed Method: the whole VRM application process does not bring new challenges.**\n\n> - There are two main challenges as listed below. It may be these challenges that are holding back the popularity of VRM using Gaussian vicinity, even though it has been proposed for more than 20 years.\n> \n> 1. It is extremely challenging to derive the expected vicinal loss function over a Gaussian distribution, especially for cross-entropy loss.\n> \n> - We accomplish the challenge by approximating the CE loss using an uncommon technique, namely, a 2nd-order Taylor expansion w.r.t. 
the data.\n> \n> - To the best of our knowledge, we are the first to derive such a vicinal loss for CE and SVM in multi-class classification.\n> \n> 2. In few-shot learning, it is difficult to estimate the vicinal distribution for each instance given the limited data.\n> \n> - Our contribution to addressing this challenge is to propose performing lazy random walks among unlabeled data to estimate the vicinal distribution.\n\n**Q12: The experimental results of Q6 also fail to convince me.**\n\n> \n> - We analyze why the performance of 5-step ADV-CE is competitive with 100-step ADV-CE.\n> \n> - We use a good initialization and a large learning rate for fast convergence of the 5-step ADV-CE.\n> \n> - The initialization is the prototype of each class.\n> \n> - The learning rate is 10 times that of 100-step ADV-CE.\n> \n> - Since the learning rate may be too large, or the number of steps too small, the 5-step ADV-CE converges to a sub-optimal solution, achieving lower performance than 100-step ADV-CE.\n>\n> - If using the same learning rate as 100-step ADV-CE, the performance of 5-step ADV-CE is 71.81 and 83.90 for 1-shot and 5-shot learning. ", " **Q13: Experimental Results: if compared to SOTA with different backbones (Table 2), ADV is weaker than SOTA in most.**\n\n> - It is **not true** that \"ADV is weaker than SOTA in most\". We summarize the comparison with SOTA with CONV and ResNet-12 backbones by the results in Tables 1 and 2. ADV is better than the SOTA baselines in all cases except FEAT with CONV in the 5-shot setting.\n> \n> | Method | Backbone | 1 shot | 5 shot |\n> | --- | --- | --- | --- |\n> | TPN [18] | CONV | 55.51 ± 0.86 | 69.86 ± 0.65 |\n> | FEAT [41] | CONV | 57.04 ± 0.20 | **72.89 ± 0.16** |\n> | TEAM [22] | CONV | 56.57 | 72.04 |\n> | ADV-SVM | CONV | *59.02 ± 0.70* | *72.13 ± 0.53* |\n> | ADV-CE | CONV | **59.22 ± 0.68** | 72.07 ± 0.51 |\n> \n> | Method | Backbone | 1 shot | 5 shot |\n> | --- | --- | --- | --- |\n> | TPN [18] | ResNet | 59.46 | 75.65 |\n> | TEAM [22] | ResNet | 60.07 | 75.90 |\n> | ADV-SVM | ResNet | *64.27 ± 0.74* | *76.37 ± 0.54* |\n> | ADV-CE | ResNet | **65.03 ± 0.74** | **77.12 ± 0.52** |\n> \n> - We compared ADV-CE with **3 SOTA methods**: Transductive Information Maximization (TIM) [R1], LaplacianShot [R2], and Oblique Manifold (OM) [R5].\n> \n> - The performance of few-shot learning is affected by both the base model obtained in the pre-training phase (meta-training phase) and the fine-tuning strategy in the fine-tuning phase (meta-testing phase). **For a fair comparison in the meta-testing phase, the base model should be the same for different methods.**\n> \n> - The reason why we select these 3 methods is that they are all independent of the pre-training of the base model and do not require training any extra module in the pre-training / meta-training phase.\n> \n> - **For a fair comparison**, we feed all methods the same fixed WRN feature extractor and the same 10,000 meta-testing tasks. As TIM and LaplacianShot use the same pre-trained weights of a feature extractor, we use these pre-trained weights for all of the 4 methods. No data augmentation is applied to images in meta-testing tasks.\n> \n> - We present the results in the table below. 
**We observe that ADV-CE outperforms all baselines in 1-shot learning and obtains the second-best performance in 5-shot learning, which is slightly lower than TIM.**\n> \n> \n> | Method | 1-shot | 5-shot |\n> | --- | --- | --- |\n> | TIM [R1] | 74.36 ± 0.26 | **84.58 ± 0.15** |\n> | LaplacianShot [R2] | 69.10 ± 0.23 | 80.86 ± 0.15 |\n> | OM [R5] | 68.67 ± 0.23 | 83.42 ± 0.15 |\n> | ADV-CE | **74.92 ± 0.25** | 84.53 ± 0.16 |\n> \n> - Note that the major contributions of TIM lie in regularization on query sets, though ours can be deemed as regularization on support sets, **which means its contributions are orthogonal to our contributions**. We are more than interested in integrating TIM to further boost ours. When integrating TIM with ADV-CE and applying it on a WRN feature extractor pre-trained by [19] on miniImageNet, ADV-CE+TIM achieves performance competitive with SOTA methods in the following table, where the performances of the SOTA methods are those reported in the original papers.\n> \n> | Method | 1-shot | 5-shot |\n> | --- | --- | --- |\n> | TIM [R1] | 77.8 | 87.4 |\n> | LaplacianShot [R2] | 74.86 ± 0.19 | 84.13 ± 0.14 |\n> | [R3] | 80.04 | 87.64 |\n> | [R4] | 76.84 | 84.36 |\n> | OM [R5] | 80.64 ± 0.34 | **89.39 ± 0.39** |\n> | ADV-CE+TIM | **80.75 ± 0.25** | 88.97 ± 0.14 |\n> \n> Note that both [R3] and OM [R5] are integrated with TIM.\n>\n> [R1] Boudiaf, Malik, et al. \"Information maximization for few-shot learning.\" NeurIPS 2020.\n> \n> [R2] Ziko, Imtiaz, et al. \"Laplacian regularized few-shot learning.\" ICML 2020.\n> \n> [R3] Cui, Wentao, and Yuhong Guo. \"Parameterless transductive feature re-representation for few-shot learning.\" ICML 2021.\n> \n> [R4] Lee, Dong Hoon, and Sae-Young Chung. \"Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification.\" ICML 2021.\n> \n> [R5] Qi, Guodong, et al. \"Transductive Few-Shot Classification on the Oblique Manifold.\" ICCV 2021.", " Thanks again for your valuable comments. We have double-checked the paper, corrected typos, added more explanation to make it clearer, and submitted the revision.\n\nQ7: more discussions on the choice of the prior.\n\n> - In our method, the feature of each instance admits a vicinal distribution. So the prior is instance-wise.\n> - We choose a Gaussian vicinal distribution for both ADV-CE and ADV-SVM for the following reasons.\n> - In ADV-CE, the approximated expected vicinal loss function involves only the mean and variance of the vicinal distribution. So we consider a Gaussian vicinal distribution, which is fully parameterized by its mean and covariance.\n> - In ADV-SVM, the Gaussian vicinal distribution is closely related to the RBF kernel. The derivation of the closed-form solution for vicinal SVM depends on using a Gaussian vicinal distribution.\n> - The Gaussian prior provides a vicinal distribution carrying sufficient information in few-shot learning.\n> - The isotropic Gaussian vicinal distribution, whose covariance matrix is represented by the simplified matrix $\Sigma = \sigma^2 I$, assumes different dimensions are independent and have the same variance. But in practice, different dimensions of the feature may be correlated and have different variances. Therefore, we choose a non-isotropic Gaussian vicinal distribution.", " Dear Reviewers,\n\nWe have submitted a revision of the paper. 
The main changes are summarized below.\n\n- We corrected typos and grammatical errors.\n \n- We revised sections 3.2.2 and 3.3 to give a clearer presentation of Vicinal SVM and the lazy random walk algorithm for obtaining the adaptive vicinal distribution.\n \n- We added additional experimental results on the comparison with SOTA methods in the appendix.\n \nThe main changes are highlighted in blue.", " Thank you for the responses! But the work does not make it above the borderline in my opinion. I will insist on my original comments & rating.\n\nThe reasons are as follows:\n\n**(1) Research Problem**\n\nThe research problem in this paper has been explored [1]. This contradicts the authors' claim to solve an unexplored problem in Lines 32~34. \n\n**(2) Proposed Method**\n\nAlthough it is novel for the authors to adopt the old VRM algorithm to address the empirical distribution bias in few-shot learning, the whole VRM application process does not bring new challenges. It's hard to convince me that the method of task A applied to task B is a great contribution.\n\n**(3) Experimental Results**\n\nThe experimental results of Q6 also fail to convince me. \n\nAlthough ADV shows good performance for the WRN backbones in Table 1, if compared to SOTA with different backbones (Table 2), ADV is weaker than SOTA in most.\n\n[1] Shuo Yang, Lu Liu, and Min Xu. Free lunch for few-shot learning: Distribution calibration. ICLR, 2021.\n\n\n", " Thank you for the response from the authors. It addresses my concerns. I will raise my score.", " I am happy with most of your response though I am still not convinced by your answer to Q1. Anyway, this is not a big deal in terms of the general idea of the paper.\n\nAlthough the techniques are not new, the application of the vicinal loss in few-shot learning is interesting. I would be interested to see more discussions on the choice of the prior, since the approach you have picked is *\"an instance-wise non-isotropic Gaussian vicinal distribution / prior\"*.\n\nI agree with other reviewers that the paper definitely needs to be well proofread before it gets ready for publication. I will keep my rating the same as before.", " Hi Reviewer WJab,\n\nWe would like to follow up to see if our response addresses your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you again!", " Hi Reviewer kfYN,\n\nWe would like to follow up to see if our response addresses your concerns or if you have any further questions. We would really appreciate the opportunity to discuss this further if our response has not already addressed your concerns. Thank you again!", " #### Q6: Comparison with other methods that use additional unlabeled data.\n> - The performance of few-shot learning is affected by both the base model obtained in the pre-training phase (meta-training phase) and the fine-tuning strategy in the fine-tuning phase (meta-testing phase). **For a fair comparison in the meta-testing phase, the base model should be the same for different methods.**\n> \n> - Therefore, we compared ADV-CE with **3 SOTA methods**: Transductive Information Maximization (TIM) [R1], LaplacianShot [R2], and Oblique Manifold (OM) [R5]. 
\n> \n> - The reason why we select these 3 methods is that they are all independent of the pre-training of the base model and do not require training any extra module in the pre-training /meta-training phase.\n> \n> - **For a fair comparison**, we feed all methods the same fixed WRN feature extractor and the same 10,000 meta-testing tasks. As TIM and LaplacianShot use the same pre-trained weights of a feature extractor, we use these pre-trained weights for all of the 4 methods. No data augmentation is applied to images in meta-testing tasks.\n> \n> - We present the results in the table below. We observe that ADV-CE outperforms all baselines in 1-shot learning and obtains the second best performance in 5-shot learning, which is slightly lower than TIM.\n> \n> | Method | 1-shot | 5-shot |\n> |-------------------|--------------|--------------|\n> | TIM [R1] | 74.36 ± 0.26 | 84.58 ± 0.15 |\n> | LaplacianShot [R2] | 69.10 ± 0.23 | 80.86 ± 0.15 |\n> | OM [R5] | 68.67 ± 0.23 | 83.42 ± 0.15 |\n> | ADV-CE | 74.92 ± 0.25 | 84.53 ± 0.16 |\n>\n> - Note that the major contributions of TIM lie in regularization on query sets, though ours can be deemed as regularization on support sets, **which means its contributions are orthogonal to our contributions**. We are more than interested in integrating TIM to further boost ours. When integrating TIM with ADV-CE and applying it on WRN feature extractor pre-trained by [19] on miniImageNet, ADV-CE+TIM achieves competitive performance to SOTA methods in the following table, where the performances of SOTA methods are reported in the original paper.\n> \n> | Method | 1-shot | 5-shot |\n> | ------------------ | ------------ | ------------ |\n> | TIM [R1] | 77.8 | 87.4 |\n> | LaplacianShot [R2] | 74.86 ± 0.19 | 84.13 ± 0.14 |\n> | [R3] | 80.04 | 87.64 |\n> | [R4] | 76.84 | 84.36 |\n> | OM [R5] | 80.64 ± 0.34 | 89.39 ± 0.39 |\n> | ADV-CE+TIM | 80.75 ± 0.25 | 88.97 ± 0.14 |\n> \n> [R1] Boudiaf, Malik, et al. \"Information maximization for few-shot learning.\" NeurIPS 2020.\n> \n> [R2] Ziko, Imtiaz, et al. \"Laplacian regularized few-shot learning.\" ICML 2020.\n> \n> [R3] Cui, Wentao, and Yuhong Guo. \"Parameterless transductive feature re-representation for few-shot learning.\" ICML 2021.\n> \n> [R4] Lee, Dong Hoon, and Sae-Young Chung. \"Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification.\" ICML 2021.\n> \n> [R5] Qi, Guodong, et al. \"Transductive Few-Shot Classification on the Oblique Manifold.\" ICCV. 2021.\n", " We sincerely appreciate your comments on this paper. You may find our response below for your concerns. We would really appreciate it if you could let us know if you have any further concerns.\n\n#### Q1: Although the form of $P_{\\delta}(x, y)$ is given in the original paper, this is technically not right. The decomposition is saying that x and y are independent.\n\n> There might be some misunderstanding here. The form of $d P_{\\delta} (x, y)$ is technically correct. The decomposition of $d P_{\\delta}​(x,y)$ into Dirac delta functions does not mean that $x$ and $y$ are independent. \n> \n> - Firstly, the empirical density is estimated by the training samples $\\{x_i, y_i\\}$. So we have $d P_{\\delta} (x, y) =\\frac{1}{N} \\sum_{i=1}^N d P_{\\delta}(x_i, y_i)$ . \n> \n> - By the property of Dirac delta function, we have $d P_{\\delta}(x_i, y_i) = \\delta_{[x_i, y_i]}([x, y]) = \\delta_{x_i}(x) \\delta_{y_i}(y)$, which means that for any $x \\neq x_i$ or $y \\neq y_i$, the density is zero. 
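In display form, the factorization under discussion reads

$$
d P_{\\delta}(x, y) \;=\; \frac{1}{N}\sum_{i=1}^{N} \delta_{x_i}(x)\,\delta_{y_i}(y),
$$

i.e., a sum of point masses sitting exactly on the training pairs $(x_i, y_i)$. 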
However, at the point $(x_i,y_i)$, $x_i$ and $y_i$ are still highly dependent in $d P_{\\delta}(x_i, y_i)$. \n> \n> - Improvement of the density estimate is obtained by replacing $\\delta_{x_i}(x)$ with a vicinal distribution $P_{x_i}(x)$ around $x_i$. It means that for any $x$ in the vicinity of $x_i$, i.e., any $x$ close to $x_i$, its label is $y_i$. Obviously, $x\\sim P_{x_i}(x)$ and $y_i$ are highly correlated.\n\n#### Q2: The reviewer believes the vicinal loss is the same as using a prior distribution on the features, can the authors clarify?\n\n> - Yes. The vicinal distribution of the data can be viewed as a prior distribution of the features. Concretely, we use **an instance-wise non-isotropic Gaussian vicinal distribution / prior** obtained by performing a lazy random walk among the unlabeled data for each instance in the support set.\n\n#### Q3: it is not clear to the reviewer what the delta vicinal distribution is and how it is different from the distribution used in ADV-CE.\n\n> - The vicinal distribution used in ADV-CE is a Gaussian distribution $P_{x_i}(x)=\\mathcal{N}(\\mu_i, \\Sigma_i)$, where we obtain both $\\mu_i$ and $\\Sigma_i$ by the lazy random walk algorithm as introduced in Sec. 3.3. The Delta vicinal distribution, defined as $P_{x_i}(x)=\\delta_{\\mu_i}(x)$, in contrast only estimates $\\mu_i$ by the lazy random walk algorithm. \n> \n> - The key difference between these two distributions, therefore, is that **the variance of the Delta vicinal distribution is zero**. In the ablation study in Table 3, the performance gain of using the Gaussian vicinal distribution over the Delta vicinal distribution is about 2% on the miniImageNet dataset. This underlines the importance of estimating the true variance in the vicinal distribution. \n\n#### Q4: The size of unlabeled data used in the experiment.\n\n> - The size of the unlabeled data is the same as the number of query data. It is 15 per class, following the standard few-shot learning setting.\n\n#### Q5: Does the proposed method give higher classification accuracy with more unlabeled data?\n\n> - We have followed the reviewer's suggestion to conduct an experiment on miniImageNet by **varying the number of query samples** (equivalent to **unlabeled samples** in our case). We report the results in the table below. \n> \n> - The performance of ADV-CE indeed increases with more unlabeled samples, though it tends to converge if the number of query samples per class gets sufficiently large (e.g., > 30).\n> \n> - As for CE, its performance barely changes with the number of query data, explained by the fact that it is not transductive.\n> \n> | number of query data | 5 | 15 | 30 | 50 |\n> |----------------------|--------------|--------------|--------------|--------------|\n> | 1 shot: CE | 65.58 ± 0.50 | 65.38 ± 0.52 | 65.33 ± 0.40 | 65.37 ± 0.39 |\n> | 1 shot: ADV-CE | 72.94 ± 0.59 | 74.63 ± 0.48 | 76.39 ± 0.43 | 76.72 ± 0.42 |\n> | 5 shot: CE | 83.22 ± 0.37 | 83.33 ± 0.30 | 83.24 ± 0.28 | 83.23 ± 0.26 |\n> | 5 shot: ADV-CE | 84.93 ± 0.38 | 86.12 ± 0.29 | 86.93 ± 0.24 | 87.15 ± 0.23 |", " #### Q3: It is not clear how to compute $\\mu_i$ and $\\Sigma_i$\n\n> - 1\\) $V_i$ is a typo. 
It should be $v_i$.\n> \n> - 2\\) The computation: $\\mu_i = \\frac{1}{2} z_i + \\frac{1}{2} \\sum_{j=1}^{N_u} v_j u_j$ and $\\Sigma_i = \\frac{1}{2}(z_i-\\mu_i)(z_i-\\mu_i)^\\top + \\frac{1}{2}\\sum_{j=1}^{N_u} v_j (u_j - \\mu_i)(u_j-\\mu_i)^\\top$\n\n#### Q4: What is the value of $N_u$?\n\n> - $N_u$ is the number of unlabeled samples, which is the same as the number of query samples. It is 75 (15 query samples per class times 5 classes) in the experiments.\n\n#### Q5: Typos\n\n> - 1\\) Typos in Line 110. A: It is a failed author citation. It should be Chapelle et al. [3]. \n> \n> - 2\\) What is \"tge\" in line 126? A: It should be \"the\".\n> \n> - We fix the typos in the revised manuscript.", " We thank the reviewer for the valuable feedback. We address your concerns below point by point. Please kindly let us know whether you have any further concerns.\n\n#### Q1: Comparison with transductive few-shot learning methods under the same backbone. Reproduce FEAT with WRN.\n\n> - 1\\) In Table 2, we have already provided the performance comparison of the proposed method using the backbones of CONV and ResNet-12. Therefore, by **joining Table 2 and Table 1**, we observe that **ours using the 4-layer CNN (i.e., CONV) still achieves the highest 1-shot classification accuracy (59.22)** and competitive 5-shot classification accuracy (72.07) among all the transductive methods that use CONV (including FEAT).\n> \n> - 2\\) We have also run the code of FEAT with the WRN backbone and evaluated it on miniImageNet. The results reported in the table below again demonstrate the **superiority of ours** over the competitive transductive baseline **FEAT equipped with WRN**.\n> \n> | Method |1 shot | 5 shot |\n> | --| --| --|\n> | FEAT (WRN) | 71.80 ± 0.22 | 79.81 ± 0.16 |\n> | ADV-CE (WRN) | **74.63 ± 0.48** | **86.12 ± 0.29** |\n\n#### Q2: Comparison with the true SOTA methods in a fair manner.\n\n> - We thank the reviewer for providing these 5 SOTA methods [R1-R5]. Note that the performance of few-shot learning is affected by both the base model obtained in the pre-training phase (meta-training phase) and the fine-tuning strategy in the fine-tuning phase (meta-testing phase). **For a fair comparison in the meta-testing phase, the base model should be the same for different methods.**\n> \n> - Therefore, we have followed the reviewer's suggestion by comparing ADV-CE with **3 SOTA methods**: Transductive Information Maximization (TIM) [R1], LaplacianShot [R2], and Oblique Manifold (OM) [R5]. \n> \n> - The reason why we select these 3 methods is that they are all independent of the pre-training of the base model and do not require training any extra module in the pre-training / meta-training phase.\n> \n> - Instead, we exclude [R3] because (1) it requires training an extra feature re-representation layer, (2) it requires pre-training the model with self-supervised learning, and (3) it does not open-source its code. We similarly exclude [R4] because it requires training an extra module to extract task-useful features.\n> \n> - **For a fair comparison**, we feed all methods the same fixed WRN feature extractor and the same 10,000 meta-testing tasks. As TIM and LaplacianShot use the same pre-trained weights of a feature extractor, we use these pre-trained weights for all of the 4 methods. No data augmentation is applied to images in meta-testing tasks.\n> \n> - We present the results in the table below. 
In 1-shot learning, ADV-CE achieves the best performance, and in 5-shot learning, ADV-CE also achieves competitive performance, which is just 0.05% lower than the best performance obtained by TIM. This verifies that the vicinal distribution is an accurate approximation of the true distribution and helps to improve generalization.\n> \n> | Method | 1-shot | 5-shot |\n> |--|--|--|\n> | TIM [R1] | 74.36 ± 0.26 | 84.58 ± 0.15 |\n> | LaplacianShot [R2] | 69.10 ± 0.23 | 80.86 ± 0.15 |\n> | OM [R5] | 68.67 ± 0.23 | 83.42 ± 0.15 |\n> | ADV-CE | 74.92 ± 0.25 | 84.53 ± 0.16 |\n> \n> - Note that the major contributions of TIM lie in regularization on query sets, though ours can be deemed as regularization on support sets, **which means its contributions are orthogonal to our contributions**. We are more than interested in integrating TIM to further boost ours. When integrating TIM with ADV-CE and applying it on a WRN feature extractor pre-trained by [19] on miniImageNet, ADV-CE+TIM achieves performance competitive with SOTA methods in the following table, where the performances of the SOTA methods are those reported in the original papers.\n> \n> | Method | 1-shot | 5-shot |\n> | -- | -- | -- |\n> | TIM [R1] | 77.8 | 87.4 |\n> | LaplacianShot [R2] | 74.86 | 84.13 |\n> | [R3] | 80.04 | 87.64 |\n> | [R4] | 76.84 | 84.36 |\n> | OM [R5] | 80.64 | 89.39 |\n> | ADV-CE+TIM | 80.75 | 88.97 |\n> \n> [R1] Boudiaf, Malik, et al. \"Information maximization for few-shot learning.\" NeurIPS 2020.\n> \n> [R2] Ziko, Imtiaz, et al. \"Laplacian regularized few-shot learning.\" ICML 2020.\n> \n> [R3] Cui, Wentao, and Yuhong Guo. \"Parameterless transductive feature re-representation for few-shot learning.\" ICML 2021.\n> \n> [R4] Lee, Dong Hoon, and Sae-Young Chung. \"Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification.\" ICML 2021.\n> \n> [R5] Qi, Guodong, et al. \"Transductive Few-Shot Classification on the Oblique Manifold.\" ICCV 2021.\n", " 
The comparison is fair as the per-trained models for all algorithms are exactly the same and fixed.\n\n#### Q8: Explanation for why the performance gain of the proposed method seems to diminish quickly from 1-shot to 5-shot settings.\n\n> - 1\\) Our method is not intended for many-shot problems. In the many-shot setting, the empirical distribution estimated by the support set closely approximates the true distribution. This means that estimating the vicinal distribution is less appealing, going against our motivation. \n> \n> - 2\\) Even in the 5-shot setting, ADV-CE **outperforms CE and S2M2*** (one of the most competitive baselines in the 5-shot setting ) **by 2.79% and 3% on miniImageNet** with the WRN pre-trained model, which does not diminish. \n> \n> - 3\\) We have conducted experiments of 10-shot and 15-shot learning on miniImageNet and reported the results in the table below. We can see that ADV-CE still outperforms CE by **2.33% and 2.24% in 10-shot and 15-shot learning**. This means that ADV-CE does not adversely affect the performance.\n> \n> | Method | 10-shot | 15-shot |\n> |------------------|--------------|--------------|\n> | CE | 86.22 ± 0.37 | 87.34 ± 0.35 |\n> | ADV-CE | 88.55 ± 0.32 | 89.58 ± 0.31 |\n> | Performance Gain | 2.33 | 2.24 |\n\n#### Q9: The main concern is that the core idea is not innovative, similar to the incremental superposition of multiple algorithms.\n\n> - Though VRM itself is not a new algorithm, adapting it and making it work to address the empirical distribution deviation in few-shot learning are novel. \n> - Moreover, we derive the resulting expected vicinal loss functions for both the cross-entropy loss and SVM, which are new and listed as our core contributions. \n> - We also propose to address the issue of how to obtain the adaptive vicinal distribution for each instance for VRM in few-shot learning by harnessing unlabeled data and a lazy random walk algorithm. ", " We sincerely appreciate your valuable comments on this paper. You may find our response below for your concerns. We would really appreciate it if you could let us know if you have any further concerns.\n\n#### Q1: Pre-training phase\n\n> - 1\\) The proposed method focuses on **the meta-testing phase** and can be applied on top of any fixed feature extractor pre-trained / meta-trained on the base classes. This strategy follows existing works such as FreeLunch [39], TIM [R1], Laplacianshot [R2]. We just assume the existence of a pre-trained feature extractor / base model, regardless of how the feature extractor is pre-trained. In meta-testing phase, the feature extractor is fixed. We use the feature extractor to extract feature for all support data $\\{z_i\\}$ and query data $\\{u_j\\}$ and run Algorithm 1 for each meta-test task. \n> \n> - 2\\) In the experiments, we have applied the proposed method on **3 different base models pre-trained by 3 different few-shot learning algorithms**. Concretely, we use WideResNet (WRN) pre-trained by [19], ResNet-12 pre-trained by [16] and 4-layer convolution network pre-trained [2] (as introduced in line 261.) The compatibility of our method with all of the 3 pre-trained feature extractors has been validated in Tab. 2. We also note that the results in Tab.1 and Tab.3 are obtained with the WRN pre-trained model.\n> \n> - We will add more pre-training details in the experiment.\n> \n> [R1] Boudiaf, Malik, et al. \"Information maximization for few-shot learning.\" NeurIPS 2020.\n> \n> [R2] Ziko, Imtiaz, et al. 
\"Laplacian regularized few-shot learning.\" ICML 2020.\n\n#### Q2: The difference between ADV-CE and ADV-SVM methods. Why provides AVD-SVM?\n\n> - 1\\) The difference between ADV-CE and ADV-SVM lies in only the objective function that trains the classifier within each task -- ADV-CE taking the cross-entropy loss while ADV-SVM taking the max-margin loss. Estimation of the adaptive vicinal distribution in ADV-CE and ADV-SVM follows the same way.\n> \n> - 2\\) We introduce ADV-CE and ADV-SVM to show the compatibility of the proposed ADV method with different classifiers. SVM has been one of the commonly adopted classifiers in few-shot algorithms such as MetaOptNet [16] and FreeLunch [39].\n> \n> - 3\\) The slightly worse performance of ADV-SVM than ADV-CE in Tab.1/Tab.2/Tab.3 is attributed to the **inconsistency in the classifier used during pre-training and that used in meta-testing**. \n> \n> - As stated in the response to Q1, we adopt the WRN pre-trained model which takes the CE loss to train the classifier during pre-training. \n> \n> - If we use the ResNet-12 model pre-trained by MetaOptNet-SVM, where the classifier in pre-training is SVM, we observe from the following table that ADV-SVM outperforms ADV-CE on miniImageNet.\n> \n> | Methods | 1-Shot | 5-shot |\n> |---------|--------------|--------------|\n> | ADV-CE | 64.71 ± 0.74 | 76.77 ± 0.52 |\n> | ADV-SVM | 64.95 ± 0.74 | 77.44 ± 0.48 |\n> \n\n#### Q3: $D_{ij}^{UU} = 0?$ if $i=j$ in Eqn. (11)\n\n> - In Eqn. (11), it is correct to set $D_{ij}^{UU} = \\infty$ if $i=j$. \n> \n> - In the lazy random walk algorithm, we fix the lazy stay probability as $\\beta$. By Eqn. (13), the stay probability is $\\beta + (1-\\beta) P_{jj}^{UU}$. Combining both, we expect $P_{jj}^{UU}=0$, so as to deliberately set $D_{ij}^{UU}=\\infty$ for $i=j$.\n\n#### Q4: How many query samples are used as unlabeled data?\n\n> - We do not use extra unlabeled data, and strictly follow the setting of few-shot learning where 15 query samples per class are regarded as the unlabeled data.\n\n#### Q5: Visualization to support the claim that the support set can deviate from the true distribution\n\n> - This claim has been supported by the **visualization in Fig 2 (Sec. 4.2)** where the 1-shot support data (denoted as dots) may deviate from the class mean of the true distribution (denoted as crosses). For example, in the first figure, the support data of the blue and red classes significantly deviate from the true class mean. In the third figure, almost all support data clearly deviate from their true class means.\n", " We sincerely thank you for highlighting the strengths of this work. We detail our response to your concerns below point by point. Please kindly let us know if our response addresses the issues you raised in this paper.\n\n#### Q1: How is this a better formulation: the vicinal distribution obtained by lazy random walk vs. the vicinal distribution located in the original data\n\n> - The empirical distribution (estimated from the original data) may deviate from the true distribution. For example, we illustrate in Fig 1(a)(b) and Fig 2 that the prototypes in 1-shot learning may deviate from the true class mean. Therefore, the vicinal distribution whose mean is located in the original instance is not optimal and cannot significantly improve the classifier. This motivates us to choose a better vicinal distribution that is closer to the true distribution. 
We obtain such adaptive vicinal distributions by performing a lazy random walk in unlabeled data for each support data.\n\n> - We performed an experiment to compare the performance of i) **ADV-CE** using the adaptive vicinal distribution (both mean and variance are estimated by lazy random walk); ii) **ADV-CE-VAR**, using vicinal distribution located in the original data (only the variance is estimated by a lazy random walk. It mean remains the same as original feature.) and iii) **CE** without using vicinal distribution. Their performances on miniImageNet are:\n> \n> | Methods | 1-Shot | 5-shot |\n> |------------|--------------|--------------|\n> | ADV-CE | 74.63 ± 0.48 | 86.12 ± 0.29 |\n> | ADV-CE-VAR | 66.47 ± 0.62 | 84.51 ± 0.41 |\n> | CE | 65.38 ± 0.52 | 83.33 ± 0.30 |\n> \n> - From the results, we can see that ADV-CE significantly outperforms ADV-CE-VAR. ADV-CE-VAR gains a small improvement over CE from the VRM. The results verifies that the vicinal distribution obtained by lazy random walk is much better than the vicinal distribution whose mean is located in the original data.\n\n#### Q2: What is the dimensionality of the Gaussians? How realistic is the approach as the dataset scales both in terms of number of samples and size of each image.\n\n> - The vicinal distribution of the distribution of the vicinity or neighborhood around a data point in the feature space. The dimension of the vicinal distribution is the same as the feature dimension and is independent of the number of samples and the input size of each image. So there is no scaling problem.", " The goal of the paper is to improve the robustness of classifier for task-specific generalisation in the meta-testing stage. The work introduces Vicinal Risk Minimization in which a Gaussian distribution is fitted on the unlabeled data samples and the probabilities of the labeled samples are obtained using lazy random walk. The work proved the expected vicinal loss functions for cross-entropy loss and SVM. Experiments on miniImageNet, CUB, CIFAR-FS for few-shot learning demonstrate the effectiveness of the proposed approach. Strengths and Weaknesses:\n\n* The paper is well-motivated and easy to follow. The overall idea of constructing vicinal distribution of the training samples using unlabeled training data is novel\n\n* The experimental set-up is sound and demonstrates the usefulness of the approach. The results show the improvement in the generalization performance of the models with the proposed method. The experiments are extensive and outline the set-up clearly. \n\n* Thorough ablations in Table 3 clearly show the advantage of using the VRM principle over the CE-Mixup principle. \n\n\n\nThere are no major weaknesses in the paper, however, certain claims in the manuscript can be supported with citations. There are a few typos and can be corrected in the revision.\n\n\n\n - Minor line 36: “the significant deviation of the empirical distribution..” Add citation here?\n\n - Line 81: benchmars-> benchmarks\n\n - Line 99: It [37] -> In\n\n - Lime 110: . Besides, (author?)\n\n - Line 126: tge -> the\n\n - Line 133: expect loss -> expected loss\n\n - Line 177 it replace->replaces * \"Our approach also considers Gaussian vicinities but estimates the mean by the unlabeled data, which is generally not on the training samples\". How is this a better formulation?\n\n* What is the dimensionality of the Gaussians being considered here? 
How realistic is the approach as the dataset scales both in terms of number of samples and size of each image.\n The limitations have been adequately addressed. ", " This submission points out that, in few-shot learning, a limited number of support data can hardly represent the true distribution, which results in poor generalization performance of the empirical risk minimization based methods. To tackle this problem, the authors propose to minimize the vicinal risk and estimate parameters of vicinal distribution of each sample by a random walk algorithm with additional unlabeled data. They demonstrate that their algorithm outperforms baseline methods on three datasets. Strengths:\n1. This paper points out an interesting problem for few-shot image classification.\n2. The experimental evaluation is adequate, and the results convincingly support the main claims.\n\nWeaknesses:\n1. The paper is not very clear, and many aspects make the reviewer's understanding of the entire paper very confusing. Moreover, many typos should be carefully checked before submission. \n2. The main concern is that the core idea is not innovative, similar to the incremental superposition of multiple algorithms. \n3. Some experimental settings are unfair, making it impossible to measure the effectiveness of the method in terms of performance gains.\n4. See questions for more details. Methods:\n1. The pre-training phase is the meta-training phase? if not, the authors should detail the learning process of ADV in the meta-training and meta-testing phases.\n2. The main problem is the difference between ADV-CE and ADV-SVM methods. They seem to be two separate branches, and ADV-CE significantly outperforms ADV-SVM. Therefore why provides AVD-SVM?\n3. If i=j, the D_{ij}^{UU} = 0 in equation (11)?\n\nExperiments:\n1. How many query samples are used as unlabeled data to estimate the distribution parameters in experiments? In general, the commonly used sample size is 15, and the more samples, the better the performance.\n2. In lines 238-239, the authors propose that the support set can deviate from the true distribution. If there is a visualization to support this claim? \n3. In the meta-testing phase, the meta-learning methods (e.g., MAML) are optimized by using 5-10 steps in the fine-tuning phase and the more steps have good performance [1]. However, ADV has 100 steps in the fine-tuning phases. if ADV also uses the same steps (5-10), how does it perform? \n4. Only four transductive algorithms (TPN, FEAT, TEAM, SIB) are tested with the proposed method, it would be more convincing if more recent state-of-the-art algorithms can be considered.\n5. The performance gain of the proposed method seems to diminish quickly from 1-shot to 5-shot settings. Does this mean in the case of 5 (or more)-shot setting, considering unlabeled test data with the proposed method could even adversely affect the performance?\n\nTypos:\n1. imporving --- imporve in the title\n2. author? in the 110 line \n3. 4.0.1 in the 222 line\n4. ishas in the 254 line\n\n[1] How to Train Your MAML to Excel in Few-Shot Classification Yes, they discuss the limitations in Section 5. ", " In this paper, the distribution of support data is estimated using transition probability between all samples within a meta-test task (including labeled support data and unlabeled query data). A logistic regression or an SVM model is learned via vicinity risk minimizing based on the estimated distribution of support data. The expected vicinity loss is derived in the paper. 
The experiments indicate that the proposed method outperforms some few-shot learning methods. The expected vicinity loss is derived.\n\nEstimate the distribution of support data using weighed unlabeled data, where the weight is based on the transition probability in lazy random walk.\n\n\nWeaknesses:\n\nSome typos in the manuscript. Some technical details are missing. See the next section for details.\n\nThe proposed method is a transductive few-shot learning method as it uses unlabeled data in meta-test tasks. Therefore, it is important to compare to transductive few-shot learning methods with the same backbone. However, the comparison to transductive methods in Table 1 is not fair. For example, the reported results of some methods (e.g. FEAT) are based on shallow backbones (4-layer CNN). It is better to reproduce those methods with the same backbone (WRN), especially since the code is available online for those methods.\n\nIn addition, some recent transductive few-shot methods [1, 2, 3, 4, 5] achieve stronger performance using the same backbone (WRN). The proposed method should be compared with those methods to show the effectiveness of adaptive vicinity risk minimization. Since the authors claim that the proposed method outperforms SOTA few-shot learning methods, it is necessary to compare the true SOTA methods in a fair manner.\n\n[1] Boudiaf, Malik, et al. \"Information maximization for few-shot learning.\" Advances in Neural Information Processing Systems 33 (2020): 2445-2457.\n\n[2] Ziko, Imtiaz, et al. \"Laplacian regularized few-shot learning.\" International Conference on Machine Learning. PMLR, 2020.\n\n[3] Cui, Wentao, and Yuhong Guo. \"Parameterless transductive feature re-representation for few-shot learning.\" International Conference on Machine Learning. PMLR, 2021.\n\n[4] Lee, Dong Hoon, and Sae-Young Chung. \"Unsupervised Embedding Adaptation via Early-Stage Feature Reconstruction for Few-Shot Classification.\" International Conference on Machine Learning. PMLR, 2021.\n\n[5] Qi, Guodong, et al. \"Transductive Few-Shot Classification on the Oblique Manifold.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. Typos in line 110. \n\nWhat is \"tge\" in line 126? \n\nAccording to line 210 and 211, it is not clear how to compute $u_i$ and $\\Sigma_i$. The definition of $V_i$ is not given in line 211. Is it a typo? Is it $v_i$ or $V_i$? It is better to give equations about how to compute $u_i$ and $\\Sigma_i$.\n\nWhat is the value of $N_u$? The author discussed limitations in the paper.", " The paper proposes to use vicinal loss function in few shot classification to increase the generalization capability. The vicinal loss is associated with some prior distributions over the features, which is adaptively obtained via random walk algorithm with the assistance of a set of unlabeled examples. * Originality: The task of few shot learning and the techniques of vicinal loss and random walk algorithms are not new. To the best of the reviewer’s knowledge, the use of these techniques in few shot learning is novel. However, there is a lack of comparison with other methods(eg [15])use additional unlabeled data. \n* Quality: All claims are supported by the experimental results and the work is complete. The reviewer do not agree with the statement that the proposed method can beat SOTA. \n* Clarity: The idea of the paper is presented clearly. 
However, there are many typos, especially in the equations.\n\t* typos (an incomplete list): \n\t\t* L110: citation is not right\n\t\t* L126: should be “the” not “tge”\n\t\t* L130: model is $F(x;\\theta)$ not $f_{\\theta}(x)$\n\t\t* L140: model parameter is $\\theta$ not $w$\n\t\t* L146: the density function is a function of $\\tilde z$ on the LHS but does not depend on $\\tilde z$ on the RHS\n\t\t* L71: the expectation is with respect to z not $\\bar z_i$\n\t* technical error: \n\t\t* L133-L135: Although the form of $dP_{\\delta}(x,y)$ is given in the original paper, this is technically not right. The decomposition is saying that x and y are independent. The vicinal loss from a Bayesian point of view can be seen as adding a prior on the predictor x\n* Significance: The paper proposes to use a loss function that is different from the conventional empirical loss under few shot learning setting. The motivation for this approach may be insightful for developing new methods in few shot learning. The results, while not earthshaking, may be of practical use. * The reviewer believe the statement in L133-L135 is technically not correct, can the authors address this concern?\n* The reviewer believe the vicinal loss is the same as using prior distribution on the features, can the authors clarify?\n* The details of the ablation study is not very clearly presented in the paper. For example, it is not clear to the reviewer what the delta vicinal distribution is and how it is different from the distribution used in ADV-CE.\n* Unlabeled data.\n * The paper does not provide any information on the size of unlabeled data used in the experiment.\n * Does the proposed method give higher classification accuracy with more unlabeled data? This should be shown in the ablation study. Some limitations of the work has been briefly mentioned by the authors. The limitations and questions of the paper the reviewer had is given in Questions section." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "t84xwLxApNp", "gONzKfDzsf-", "L2-A6EbkRCV", "t6ZYC8uYMb", "t6ZYC8uYMb", "W0EFgUURBxn", "nips_2022_fHUBa3gQno", "1sei90iqGc", "QEGng-PHjpC", "lFWDw_lo0r_", "QEGng-PHjpC", "1sei90iqGc", "G2beXFko_sn", "G2beXFko_sn", "QEGng-PHjpC", "QEGng-PHjpC", "1sei90iqGc", "1sei90iqGc", "Qj2WXCIddvB", "nips_2022_fHUBa3gQno", "nips_2022_fHUBa3gQno", "nips_2022_fHUBa3gQno", "nips_2022_fHUBa3gQno" ]
nips_2022_hSxK-4KGLbI
Two-Stream Network for Sign Language Recognition and Translation
Sign languages are visual languages using manual articulations and non-manual elements to convey information. For sign language recognition and translation, the majority of existing approaches directly encode RGB videos into hidden representations. RGB videos, however, are raw signals with substantial visual redundancy, leading the encoder to overlook the key information for sign language understanding. To mitigate this problem and better incorporate domain knowledge, such as handshape and body movement, we introduce a dual visual encoder containing two separate streams to model both the raw videos and the keypoint sequences generated by an off-the-shelf keypoint estimator. To make the two streams interact with each other, we explore a variety of techniques, including bidirectional lateral connection, sign pyramid network with auxiliary supervision, and frame-level self-distillation. The resulting model is called TwoStream-SLR, which is competent for sign language recognition (SLR). TwoStream-SLR is extended to a sign language translation (SLT) model, TwoStream-SLT, by simply attaching an extra translation network. Experimentally, our TwoStream-SLR and TwoStream-SLT achieve state-of-the-art performance on SLR and SLT tasks across a series of datasets including Phoenix-2014, Phoenix-2014T, and CSL-Daily.
Accept
This paper extends models for sign language recognition and translation with a dual encoder where, first, keypoint sequences are estimated using an off-the-shelf model, then fused with the video sequence. It is a minor technical contribution to add the keypoint estimations as input since no new information was introduced; however, the authors demonstrated strong execution of experimental results. This paper can be categorized with pipeline/cascade approaches which rely on domain knowledge for engineered feature extraction and combination. The paper presents many experimental results for architecture changes to improve results: bidirectional lateral connection, sign pyramid network, and frame-level self-distillation. The authors convinced the reviewers with more experimental results during the rebuttal period leading to two solid, and one borderline accept votes.
train
[ "wiId_W7p6ig", "CSYiyPxddYc", "n6EEASGJFz8", "DY6J8WV676q", "VeWzAP4TXo", "94TX9W-RXeX", "ZDVqJqsQGLc", "FUPaH1sLtP9", "XpvHf8ZLxmf", "XHTUPkHeJs", "HonYeZFzRXx", "Oc5t4lxC-A" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I read the response and comments from other reviewers. The rebuttal has addressed all my concerns, as well as many points raised by other reviewers. This is a solid work with convincing experiments and studies. The newly added experiments to verity signer-independentrecognition and background change make the submission stronger. In light of this, I raise my score. I look forward to seeing the additional experiments in the revised paper.", " **Q5**\n*In the SLT experiments, in past studies, the results of experiments with added gloss supervision were higher than the gloss-free method. However, the opposite results were obtained in this experiment.*\n\nWe firstly clarify that gloss-supervised methods do outperform gloss-free methods in our paper. We conjecture that your ``gloss-free'' refers to the Sign2Text in Table 2 of the main paper. Sign2Text is a task of directly translating sign videos into natural languages with or without intermediate gloss supervision. Our method, as well as MMTLB [2], SignBT [3], STMC-T [4], and Joint-SLRT [5] all utilize the gloss supervision to facilitate sign language translation, and surpass gloss-free Sign2Text methods including TSPNet [6] and SL-Luong [7] by large margins (see Sign2Text part in Table 2 of the main paper). For example, our approach (with gloss supervision) outperforms TSPNet (without gloss supervision) by 15.54 BLEU-4 on Phoenix-2014T test set. We also mark gloss-free methods (TSPNet and SL-Luong) in Table 2 of the main paper. Please correct us if our conjecture is wrong. \n\nNext, we make clarification that our Sign2Text (with gloss supervision) model outperforms the Sign2Gloss2Text one. Previous works [3, 5, 7] categorize sign language translation into two types, namely Sign2Text and Sign2Gloss2Text. Sign2Text directly generates spoken texts from given sign videos, where gloss annotations are optional to supervise intermediate representations and better performance is observed if gloss supervision is introduced. Sign2Gloss2Text, however, is a two-stage framework that first uses a sign language recognition system to predict glosses from sign videos (thus gloss annotations are required) and then translates gloss sequence into spoken language. Our work as well as some previous works [2, 3, 5] show that Sign2Text outperforms Sign2Gloss2Text. We suspect this is because Sign2Gloss2Text uses discrete glosses as intermediate representations while Sign2Text processes dense visual features to better capture rich spatial-temporal semantics in sign videos. Besides, Sign2Text mitigates error propagation which hinders performance of Sign2Gloss2Text. \n\nThanks for your comments on this. We will improve the description of Sign2Text in our revised version. \n\n\n**Q6**\n*Explanations for the methods of the visual field.*\n\nThanks for your suggestion. We will refine our paper to increase the readability of our work.\n\n**References**\n\n[1] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked attention mask transformer for universal image segmentation. In CVPR, 2022.\n\n[2] Yutong Chen, Fangyun Wei, Xiao Sun, Zhirong Wu, and Stephen Lin. A simple multi-modality transfer learning baseline for sign language translation. arXiv preprint arXiv:2203.04287, 2022.\n\n[3] Hao Zhou, Wengang Zhou, Weizhen Qi, Junfu Pu, and Houqiang Li. Improving sign language translation with monolingual data by sign back-translation. 
In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021.\n\n[4] Hao Zhou, Wengang Zhou, Yun Zhou, and Houqiang Li. Spatial-temporal multi-cue network for sign language recognition and translation. IEEE Transactions on Multimedia, 2021.\n\n[5] Necati Cihan Camgoz, Oscar Koller, Simon Hadfield, and Richard Bowden. Sign language transformers: Joint end-to-end sign language recognition and translation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020.\n\n[6] Dongxu Li, Chenchen Xu, Xin Yu, Kaihao Zhang, Benjamin Swift, Hanna Suominen, and Hongdong Li. TSPNet: Hierarchical feature learning via temporal semantic pyramid for sign language translation. In Advances in Neural Information Processing Systems, 2020.\n\n[7] Necati Cihan Camgoz, Simon Hadfield, Oscar Koller, Hermann Ney, and Richard Bowden. Neural sign language translation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018.", " **Q2**\n*It is unclear about the usage of domain knowledge and keypoint sequences.*\n\nWe state the domain knowledge of sign languages in L36-39: \"Sign languages use two types of visual signals to convey information: manual elements that include handshape, palm orientation, etc., and non-manual markers such as facial expressions and movement of the body, head, mouth, eyes, and eyebrows\". How to leverage such domain knowledge is still underexplored. In our work, we propose to model human keypoint sequences, which contain the key information for sign language understanding, to inject \"inductive bias\" and \"domain knowledge\" and thus ease the learning. Please also refer to our responses to your first question for the benefits of modeling keypoints. \n\n**Q3**\n*The logical coherence of the abstract needs strengthening.*\n\nThanks for your suggestion. We will rephrase the statement in our final version.\n\n**Q4**\n*The details of the selection of the keypoints are not clear enough in Section 3.1.*\n\nHRNet trained on COCO-WholeBody can generate 11 upper-body keypoints, 42 hand keypoints, 20 mouth keypoints, and 48 face keypoints (121 keypoints in total). In our main paper, we use all upper-body and hand keypoints, but reduce the other keypoints by spatially evenly sampling 10 mouth keypoints and 16 face keypoints (79 keypoints in total) as a trade-off between accuracy and computational cost. \n\nDue to the limited space of the main paper, we include the ablation study on the choice of keypoints in Table 3a of our supplementary material. We redraw the table below. We train single-stream SLR models with different keypoint choices as inputs. All studies are conducted on the Phoenix-2014T SLR task. We observe that all parts contribute to sign language recognition.\n| Upper-body | Hand | Mouth | Face | #Keypoints | Dev | Test |\n|:--------------:|:--------------:|:--------------:|:--------------:|:------------:|:-------:|:-------:|\n| $\\checkmark$ | | | | 11 | 49.11 | 48.46 |\n| $\\checkmark$ | $\\checkmark$ | | | 53(+42) | 37.15 | 36.88 |\n| $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | | 63(+10) | 28.42 | 28.20 |\n| $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | $\\checkmark$ | 79(+16) | **27.14** | **27.19** |\n\nTo further explore the choice of keypoints, we conduct more experiments by varying the number of keypoints of each part. We show the results as follows. It can be seen that our model is insensitive to the choice of keypoints. 
We will add this experiment in our revised version.\n\n| #Upper-body(11) | #Hand(42) | #Mouth(20) | #Face(48) | #Total(121) | Dev | Test |\n|:-----------------:|:-----------:|:------------:|:-----------:|:-------------:|:-------:|:-------:|\n| 11 | 42 | 10 | 16 | 79 | **27.14** | 27.19 |\n| 11 | 42 | 20(+10) | 16 | 89 (+10) | 27.36 | 27.07 |\n| 11 | 42 | 10 | 24(+8) | 87(+8) | 27.25 | 27.00 |\n| 11 | 42 | 10 | 48(+32) | 111(+32) | 27.25 | 27.24 |\n| 11 | 42 | 20(+10) | 48(+32) | 121(+42) | 27.22 | 27.21 |\n| 6(-5) | 42 | 10 | 16 | 74(-5) | 27.25 | 27.85 |\n| 11 | 21(-21) | 10 | 16 | 58(-21) | 27.33 | 27.19 |\n| 11 | 42 | 5(-5) | 16 | 74(-5) | 27.78 | 28.06 |\n| 11 | 42 | 10 | 8(-8) | 71(-8) | 27.65 | 27.57 |", " **Q1**\n*Evaluation of visual redundancy and interactions between the raw videos and keypoint sequences.*\n\nNext, we respond to the evaluation of interactions between two streams. As we stated in L59-68 in the main paper, motion blur in sign videos and domain gap between the COCO-WholeBody training set and sign language recognition datasets lead to inaccurate keypoints, which motivates us to jointly model RGB videos and keypoint sequences to complement each other. To this end, we propose bidirectional lateral connection, joint head, and frame-level self-distillation to make the two streams interactive and promote each other. The effects of each proposed component are verified in Table 3 in the main paper. Meanwhile, we conduct exhaustive analysis in L273-283. Here we redraw the table below:\n\n| V-Encoder | K-Encoder | Bilateral | SPN | Joint-Head | Distillation | Dev | Test |\n|:------------:|:------------:|:-----------:|:-----:|:------------:|:--------------:|:-----:|:------|\n|$\\checkmark$| | | | | | 21.08 | 22.42 |\n| | $\\checkmark$| | | | | 27.14 | 27.19 |\n|$\\checkmark$|$\\checkmark$| | | | | 20.47 | 21.55 |\n|$\\checkmark$|$\\checkmark$|$\\checkmark$| | | | 19.03 | 20.12 |\n|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$| | | 18.52 | 19.91 |\n|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$| | 18.36 | 19.49 |\n|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$| **17.48** | **19.04** |\n\nParticularly, we conduct ablation studies on the lateral connections as shown in Table 4(a) in our main paper. Specifically, we train models with only unilateral connections (video $\\rightarrow$ keypoint or keypoint $\\rightarrow$ video) or reduce the number of features on which the bilateral connection is applied. We compare these variants to our final model. We redraw the table as below. It can be seen that performing bilateral connection on C1, C2, and C3 achieves the best results, verifying that more two-stream interactions lead to better performance.\n\n|V$\\rightarrow$K|K$\\rightarrow$V|Connection|Dev|Test|\n|:---------------------:|:---------------------:|:----------------:|:-----:|:-------:|\n| | | None | 18.57 | 20.03|\n| $\\checkmark$ | | C1,C2,C3 | 17.88 | 19.61 |\n| | $\\checkmark$ | C1,C2,C3 | 18.82 | 19.93 |\n| $\\checkmark$ | $\\checkmark$ | C1,C2,C3 | **17.48** | **19.04** |\n| $\\checkmark$ | $\\checkmark$ | C2,C3 | 17.59| 19.30 |\n\nThanks for your comment and we will add more discussions in our revised version.", " Thanks for your constructive comments. Our responses to them are given below.\n\n**Q1**\n*Evaluation of visual redundancy and interactions between the raw videos and keypoint sequences.*\n\nFirst, we give more explanations about visual redundancy and provide evaluations. 
Sign languages are visual languages that use two types of features to convey information: manual elements including handshape, palm orientation, movement, and location; and non-manual markers such as facial expression and movement of the body, head (nod/shake/tilt), and mouth (mouthing). Raw videos inherently provide rich and complete information for sign language understanding; however, irrelevant factors (e.g., the background), which are usually regarded as visual redundancy, are also included. To reduce the negative effects caused by visual redundancy, we propose to model keypoints, which are more robust to background change. We conduct extra experiments to verify this. Concretely, we use Mask2Former [1] to segment the signer (foreground) appearing in each test video from Phoenix-2014/Phoenix-2014T/CSL-Daily and then paste the estimated foreground onto a few pre-defined backgrounds to synthesize test videos. These new test videos have backgrounds which are unseen in the training samples. We experiment on different new backgrounds including uniformly-colored canvas (white, black, green, and blue) and several cluttered scenes (studios and street). We compare our single-stream SLR with only RGB inputs against our two-stream network to verify the generalization capability when the background changes. The results are shown in the following tables. Replacing the backgrounds of the original test videos with new ones leads to performance degradation for both methods. However, TwoStream-SLR can effectively reduce the discrepancy, achieving a smaller performance gap between the new test sets and the original test set. For example, when evaluated on Phoenix-2014 test videos with a studio background, the WER of Video-only rises dramatically from 23.32\\% to 30.31\\% (+6.99\\%) while TwoStream-SLR still performs well with WER rising from 18.79\\% to 21.85\\% (+3.06\\%). This experiment proves that our method is less sensitive to background mismatch. 
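For clarity, the compositing step of this synthesis pipeline can be sketched as follows. Mask2Former is used only as an off-the-shelf segmenter; the function below merely illustrates the paste operation, and its name and signature are ours, not code from the paper:

```python
import numpy as np

def composite_frame(frame: np.ndarray, signer_mask: np.ndarray,
                    new_background: np.ndarray) -> np.ndarray:
    """Paste the segmented signer onto a new background.

    frame:          (H, W, 3) original video frame
    signer_mask:    (H, W) binary foreground mask predicted by Mask2Former
    new_background: (H, W, 3) pre-defined background (canvas color or scene image)
    """
    mask = signer_mask[..., None].astype(frame.dtype)  # broadcast mask over RGB channels
    return mask * frame + (1 - mask) * new_background  # keep the signer, swap the background
```

Applying this per frame to every test video yields the synthesized test sets reported below.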
We will add more discussion in our revised version.\n\n|**Phoenix-2014** | **Video only** | | TwoStream-SLR| |\n|------------|:--------------:|:--------------:|:------------|:--------------:|\n| Background | Dev/Test (WER) | Dev/Test (gap) | Dev/Test (WER) | Dev/Test (gap) |\n| original | 22.44/23.32 | --/-- | 18.39/18.79 | --/-- |\n| white | 24.84/25.94 | 2.40/2.62 | 19.91/20.36 | 1.52/1.57 |\n| green | 24.26/25.63 | 1.82/2.31 | 20.13/19.99 | 1.74/1.20 |\n| blue | 24.82/25.30 | 2.38/1.98 | 18.90/19.51 | 0.51/0.72 |\n| black | 23.97/24.44 | 1.53/1.12 | 19.80/19.93 | 1.41/1.14 |\n| studio \\#0 | 30.93/30.31 | 8.49/6.99 | 21.46/21.85 | 3.07/3.06 |\n| studio \\#1 | 25.06/26.15 | 2.62/2.83 | 20.04/20.07 | 1.65/1.28 |\n| street | 76.53/74.96 | 54.09/51.64 | 33.91/33.77 | 15.52/14.98 |\n\n|**Phoenix-2014T** | **Video only** | | TwoStream-SLR| |\n|------------|:--------------:|:--------------:|:------------|:--------------:|\n| Background | Dev/Test (WER) | Dev/Test (gap) | Dev/Test (WER) | Dev/Test (gap) |\n| original | 21.08/22.42 | --/-- | 17.48/19.04 | --/-- |\n| white | 22.47/24.54 | 1.39/2.12 | 18.65/19.96 | 1.17/0.92 |\n| green | 21.88/24.16 | 0.80/1.74 | 18.44/19.75 | 0.96/0.71 |\n| blue | 22.68/23.41 | 1.60/0.99 | 18.04/19.65 | 0.56/0.61 |\n| black | 21.67/23.06 | 0.59/0.64 | 18.12/19.07 | 0.64/0.03 |\n| studio \\#0 | 27.89/27.94 | 6.81/5.52 |20.12/20.87 | 2.64/1.83 |\n| studio \\#1 | 22.68/23.64 | 1.60/1.22 | 18.52/19.61 | 1.04/0.57 |\n| street | 74.83/73.02 | 53.75/50.60 | 37.55/36.65 | 20.07/17.61 |\n\n|**CSL-Daily** | **Video only** | | TwoStream-SLR| |\n|------------|:--------------:|:--------------:|:------------|:--------------:|\n| Background | Dev/Test (WER) | Dev/Test (gap) | Dev/Test (WER) | Dev/Test (gap) |\n| original | 28.88/28.50 | --/-- | 25.44/25.33 | --/-- |\n| white | 29.76/29.25 | 0.88/0.75 | 25.91/25.91 | 0.47/0.58 |\n| green | 29.65/29.40 | 0.77/0.90 | 26.23/25.94 | 0.79/0.61 |\n| blue | 31.59/31.23 | 2.71/2.73 | 25.95/26.04 | 0.51/0.71 |\n| black | 29.30/29.24 | 0.42/0.74 | 25.79/25.71 | 0.35/0.38 |\n| studio \\#0 | 47.31/47.17 | 18.43/18.67 | 28.91/28.99 | 3.47/3.66 |\n| studio \\#1 | 30.65/30.35 | 1.77/1.85 | 26.23/26.43 | 0.79/1.10 |\n| street | 91.97/92.49 | 63.09/63.99 | 62.71/62.00 | 37.27/36.67 |\n\n**References**\n\n[1] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked attention mask transformer for universal image segmentation. In CVPR, 2022.", " **Q2**\n*Can the Two-stream model solve the background and signer mismatch problems?*\n\n**Background mismatch** To verify the robustness of background change, we use Mask2Former [4] to segment the signer (foreground) appeared in each test video from Phoenix-2014/Phoenix-2014T/CSL-Daily and then paste the estimated foreground onto a few pre-defined backgrounds to synthesize test videos. These new test videos have backgrounds which are unseen in training samples. We experiment with different new backgrounds including uniformly-colored canvas (white, black, green, and blue) and several clutter scenes (studio and street). We compare our single-stream SLR with only RGB inputs with our two-stream network to verify the generalization capability when background changes. The results are shown in the following table. Replacing the backgrounds of the original test videos with new ones leads to performance degradation for both methods. 
However, TwoStream-SLR can effectively reduce the discrepancy, achieving smaller performance gap between the new test sets and the original test set. For example, when evaluated on Phoenix-2014 test videos with a studio background, the WER of Video-only rises dramatically from 23.32\\% to 30.31\\% (+6.99\\%) while TwoStream-SLR still performs reasonably well with WER rising from 18.79\\% to 21.85\\% (+3.06\\%). This experiment proves that our method is less sensitive to background mismatch.\n\n\n|**Phoenix-2014** | **Video only** | | TwoStream-SLR| |\n|------------|:--------------:|:--------------:|:------------|:--------------:|\n| Background | Dev/Test (WER) | Dev/Test (gap) | Dev/Test (WER) | Dev/Test (gap) |\n| original | 22.44/23.32 | --/-- | 18.39/18.79 | --/-- |\n| white | 24.84/25.94 | 2.40/2.62 | 19.91/20.36 | 1.52/1.57 |\n| green | 24.26/25.63 | 1.82/2.31 | 20.13/19.99 | 1.74/1.20 |\n| blue | 24.82/25.30 | 2.38/1.98 | 18.90/19.51 | 0.51/0.72 |\n| black | 23.97/24.44 | 1.53/1.12 | 19.80/19.93 | 1.41/1.14 |\n| studio \\#0 | 30.93/30.31 | 8.49/6.99 | 21.46/21.85 | 3.07/3.06 |\n| studio \\#1 | 25.06/26.15 | 2.62/2.83 | 20.04/20.07 | 1.65/1.28 |\n| street | 76.53/74.96 | 54.09/51.64 | 33.91/33.77 | 15.52/14.98 |\n\n|**Phoenix-2014T** | **Video only** | | TwoStream-SLR| |\n|------------|:--------------:|:--------------:|:------------|:--------------:|\n| Background | Dev/Test (WER) | Dev/Test (gap) | Dev/Test (WER) | Dev/Test (gap) |\n| original | 21.08/22.42 | --/-- | 17.48/19.04 | --/-- |\n| white | 22.47/24.54 | 1.39/2.12 | 18.65/19.96 | 1.17/0.92 |\n| green | 21.88/24.16 | 0.80/1.74 | 18.44/19.75 | 0.96/0.71 |\n| blue | 22.68/23.41 | 1.60/0.99 | 18.04/19.65 | 0.56/0.61 |\n| black | 21.67/23.06 | 0.59/0.64 | 18.12/19.07 | 0.64/0.03 |\n| studio \\#0 | 27.89/27.94 | 6.81/5.52 |20.12/20.87 | 2.64/1.83 |\n| studio \\#1 | 22.68/23.64 | 1.60/1.22 | 18.52/19.61 | 1.04/0.57 |\n| street | 74.83/73.02 | 53.75/50.60 | 37.55/36.65 | 20.07/17.61 |\n\n|**CSL-Daily** | **Video only** | | TwoStream-SLR| |\n|------------|:--------------:|:--------------:|:------------|:--------------:|\n| Background | Dev/Test (WER) | Dev/Test (gap) | Dev/Test (WER) | Dev/Test (gap) |\n| original | 28.88/28.50 | --/-- | 25.44/25.33 | --/-- |\n| white | 29.76/29.25 | 0.88/0.75 | 25.91/25.91 | 0.47/0.58 |\n| green | 29.65/29.40 | 0.77/0.90 | 26.23/25.94 | 0.79/0.61 |\n| blue | 31.59/31.23 | 2.71/2.73 | 25.95/26.04 | 0.51/0.71 |\n| black | 29.30/29.24 | 0.42/0.74 | 25.79/25.71 | 0.35/0.38 |\n| studio \\#0 | 47.31/47.17 | 18.43/18.67 | 28.91/28.99 | 3.47/3.66 |\n| studio \\#1 | 30.65/30.35 | 1.77/1.85 | 26.23/26.43 | 0.79/1.10 |\n| street | 91.97/92.49 | 63.09/63.99 | 62.71/62.00 | 37.27/36.67 |\n\nWe will add these experiments and discussions into our revised version.\n\n**References**\n\n[1] Junfu Pu, Wengang Zhou, Hezhen Hu, and Houqiang Li. Boosting Continuous Sign Language Recognition via Cross Modality Augmentation, pp. 1497–1505. Association for Computing Machinery, New York, NY,\n182 USA, 2020.\n\n[2] Oscar Koller, Sepehr Zargaran, and Hermann Ney. Re-sign: Re-aligned end-to-end sequence modelling with deep recurrent cnn-hmms. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.\n\n[3] Runpeng Cui, Hu Liu, and Changshui Zhang. A deep neural framework for continuous sign language recognition by iterative training. IEEE Transactions on Multimedia, 21(7):1880–1891, 2019. doi: 10.1109/174 TMM.2018.2889563.\n\n[4] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. 
Masked attention mask transformer for universal image segmentation. In CVPR, 2022.", " **Q2**\n*Can the Two-stream model solve the background and signer mismatch problems?*\n\nKeypoints are represented by heatmaps, which are more robust to video background and signer appearance.\nVisualization of keypoint heatmaps is shown in Figure 2 in the supplementary material. Here we conduct the following experiments to verify the generalization of our method under the setting of signer mismatch and background change.\n\n**Signer mismatch** There are few works exploring signer-independent sign language recognition and translation. Phoenix-2014 provides a signer-independent split named Phoenix-2014-SI, where the Signer05 is excluded from the training set and the dev/test set only contains videos signed by Signer05. Our model is retrained on the Phoenix-2014-SI training set and evaluated on the official dev/test set (unseen signer) and the rest of the Phoenix-2014 dev/test set which is comprised of signers appeared in the training set (seen signers). As shown in the following table, our TwoStream-SLR can achieve a WER of 29.53\\% and 29.72\\% on the dev and test set of the unseen signer, respectively, and outperforms the previous best method CMA [1] by a large margin. More importantly, compared to the baseline that only uses RGB videos, the performance gap between the seen and unseen signers can drop dramatically with the help of keypoint sequences, which implies that our TwoStream Network can relieve the signer mismatch issue. \n\n**Phoenix-2014-SI:**\n| Method | Dev/Test (seen) | Dev/Test (unseen) | Dev/Test (gap) |\n|----------------------|:---------------:|:-----------------:|:--------------:|\n| ReSign [2] | --/-- | 45.1/44.1 | --/-- |\n| DNF [3] | --/-- | 36.0/35.7 | --/-- |\n| CMA [1] | --/-- | 34.8/34.3 | --/-- |\n| Video only (ours) | 24.14/25.03 | 37.42/37.14 | 13.28/12.11 |\n| TwoStream-SLR (ours) | **19.75/20.81** | **29.53/29.72** | **9.78/8.91** |\n\nFurthermore, we imitate the creation of the Phoenix-2014-SI dataset to construct another dataset named Phoenix-2014T-SI, where the Signer05 is excluded from the original Phoenix-2014T training set and the dev and test set only contain videos signed by Signer05. We obtain identical conclusions as shown in the following table.\n\n**Phoenix-2014T-SI:**\n| Method | Dev/Test (seen) | Dev/Test (unseen) | Dev/Test (gap) |\n|----------------------|:---------------:|:-----------------:|:--------------:|\n| Video only (ours) | 22.54/24.77 | 35.15/34.80 | 12.61/10.03 |\n| TwoStream-SLR (ours) | **18.25/20.56** | **27.54/29.03** | **9.29/8.47** |\n\n**References**\n\n[1] Junfu Pu, Wengang Zhou, Hezhen Hu, and Houqiang Li. Boosting Continuous Sign Language Recognition via Cross Modality Augmentation, pp. 1497–1505. Association for Computing Machinery, New York, NY,\n182 USA, 2020.\n\n[2] Oscar Koller, Sepehr Zargaran, and Hermann Ney. Re-sign: Re-aligned end-to-end sequence modelling with deep recurrent cnn-hmms. In IEEE Conference on Computer Vision and Pattern Recognition, 2017.\n\n[3] Runpeng Cui, Hu Liu, and Changshui Zhang. A deep neural framework for continuous sign language recognition by iterative training. IEEE Transactions on Multimedia, 21(7):1880–1891, 2019. doi: 10.1109/174 TMM.2018.2889563.\n\n[4] Bowen Cheng, Ishan Misra, Alexander G. Schwing, Alexander Kirillov, and Rohit Girdhar. Masked attention mask transformer for universal image segmentation. In CVPR, 2022.", " Thanks for your constructive comments. 
Our responses to them are given below.\n\n**Q1**\n*Why can the keypoint sequences achieve such an improvement?*\n\nThere are two reasons:\n\nFirst, sign languages are visual languages that use two types of features to convey information: manual elements including handshape, palm orientation, movement, and location; non-manual markers such as facial expression and movement of the body, head (nod/shake/tilt), and mouth (mouthing). All these features need to be considered to capture the complete meanings of sign languages. The keypoint stream in our TwoStream Network can fully incorporate the above domain knowledge to promote sign language recognition and translation. Concretely, we take into account manual elements by modeling 42 hand keypoints, and non-manual markers by modeling 11 upper body keypoints, 10 mouth keypoints, and 16 face keypoints. The illustration of keypoints used in our approach is shown in Figure 1 in our supplementary material. \n\nSecond, sign language recognition and translation suffer from the data scarcity issue. Training an efficient neural machine translation model often requires a corpus of 1M parallel data, however, existing sign language recognition datasets only contain 7-20K training samples. Though raw videos provide rich and complete visual information, training on insufficient data may lead the network to overlook the key information mentioned in the first point. Introducing keypoints into learning injects \"inductive bias'' and \"prior knowledge'' to sign language understanding.\n\nExperiments in Table 3 of the main paper verify the effectiveness of modeling keypoints and interactions between two streams. We redraw the table below. (The experiment is conducted on Phoenix-2014T SLR task. V: video, K: keypoint, Bilateral: bidirectional lateral connection, SPN: sign pyramid network.) It shows that simply averaging the predictions of the video and keypoint streams can achieve better performance than only using either one single stream. Besides, the proposed techniques (bidirectional lateral connection, joint head, and cross-distillation) can strengthen the interactions between the two streams and further improve the performance. \n| V-Encoder | K-Encoder | Bilateral | SPN | Joint-Head | Distillation | Dev | Test |\n|:------------:|:------------:|:-----------:|:-----:|:------------:|:--------------:|:-----:|:------:|\n|$\\checkmark$| | | | | | 21.08 | 22.42 |\n| | $\\checkmark$| | | | | 27.14 | 27.19 |\n|$\\checkmark$|$\\checkmark$| | | | | 20.47 | 21.55 |\n|$\\checkmark$|$\\checkmark$|$\\checkmark$| | | | 19.03 | 20.12 |\n|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$| | | 18.52 | 19.91 |\n|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$| | 18.36 | 19.49 |\n|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$|$\\checkmark$| **17.48** | **19.04** |\n\nParticularly, we conduct ablation studies on the lateral connections as shown in Table 4(a) in our main paper. Specifically, we train models with only unilateral connections (video $\\rightarrow$ keypoint or keypoint $\\rightarrow$ video) or reduce the number of features on which the bilateral connection is applied. We compare these variants to our final model. We redraw the table as below. 
It can be seen that performing bilateral connection on C1, C2, and C3 achieves the best results, verifying that more two-stream interactions lead to better performance.\n\n|V$\rightarrow$K|K$\rightarrow$V|Connection|Dev|Test|\n|:---------------------:|:---------------------:|:----------------:|:-----:|:-------:|\n| | | None | 18.57 | 20.03 |\n| $\checkmark$ | | C1,C2,C3 | 17.88 | 19.61 |\n| | $\checkmark$ | C1,C2,C3 | 18.82 | 19.93 |\n| $\checkmark$ | $\checkmark$ | C1,C2,C3 | **17.48** | **19.04** |\n| $\checkmark$ | $\checkmark$ | C2,C3 | 17.59 | 19.30 |\n\nThanks for your comment and we will add more discussions in our revised version.", " Thanks for your constructive comments. Our responses to them are given below.\n\n**Q1**\n*Move the formulation of CTC loss and translation loss from Appendix to the main paper.*\n\nThanks for your suggestion. We will move the formulation into the main paper in our final version.\n\n**Q2**\n*Study the weight of the distillation loss in Eq 1.*\n\nWe study the weight of the distillation loss $\mathcal{L}\_{dist}$ in Table 3c in the supplementary material. This study is conducted on the Phoenix-2014T SLR task. We redraw the table below. The weight of the self-distillation loss controls the trade-off between the pseudo fine-grained supervision (self-distillation loss) and the coarse-grained supervision (CTC loss). In our experiment, we find that the best performance is obtained when the weight is set to 1.0. \n| Weight of $\mathcal{L}\_{dist}$ | Dev | Test |\n|:---------:|:-------:|:-------:|\n| 0.2 | 18.41 | 19.21 |\n| 0.5 | 18.20 | 19.63 |\n| 1.0 | **17.48** | **19.04** |\n| 1.5 | 18.28 | 19.93 |\n| 2.0 | 17.83 | 19.28 |\n\n**Q3**\n*The effect of $\sigma$ and resolution of keypoint heatmaps.*\n\nWe study the effect of $\sigma$ and the resolution of the keypoint heatmaps in Table 3b in the supplementary material. We redraw the table below. We find that decreasing the heatmap resolution hurts performance while varying $\sigma$ has a minor effect. In our experiment, $\sigma=4$ and $H'=W'=112$ achieve the best performance.\n| $\sigma$ | ($H'$,$W'$) | Dev | Test |\n|:-------:|:---------:|:-------:|:-------:|\n| 1 | 56 | 29.78 | 28.72 |\n| 2 | 56 | 30.10 | 29.02 |\n| 2 | 112 | 27.22 | 27.52 |\n| 4 | 112 | **27.14** | 27.19 |\n| 6 | 112 | 27.94 | **27.10** |", " This paper proposes a two-stream model including the raw videos and the keypoint sequences for sign language recognition and sign language translation. The two-stream model also utilizes a variety of techniques, including bidirectional lateral connection, sign pyramid network, and frame-level self-distillation, to further improve the model. Empirically, the two-stream model outperforms the previous best method. Strengths: \n1. This paper takes advantage of the keypoint sequences and a variety of techniques and achieves excellent results.\n2. This paper is well written.\nWeaknesses:\n1. It is unclear why the keypoint sequences can achieve such an improvement. \n2. 
The paper mentions the lack of robustness of previous models, which suffer from dramatic performance degradation when the background or signer is mismatched between training and testing; but can the two-stream model solve this problem? The authors have addressed the limitations of their work.", " In the SLR task, the paper introduces two separate streams to model both the raw videos and the keypoint sequences. For the two streams to interact with each other better, the paper proposes a variety of techniques, including bidirectional lateral connection, sign pyramid network, and frame-level self-distillation. In the SLT task, the paper extends TwoStream-SLR to TwoStream-SLT by attaching an MLP and an NMT network. Experimental results show that TwoStream-SLR and TwoStream-SLT achieve SOTA performance on SLR and SLT tasks on three datasets. Strengths:\n\n1. The paper proposes the Two-Stream Network, including bidirectional lateral connection, sign pyramid network, and frame-level self-distillation, to let RGB videos and keypoint sequences interact for advancing SLR and SLT. \n2. The paper reports improvements on SLR and SLT tasks.\n\nWeaknesses:\n\n1. The evaluation of visual redundancy and of the interactions between the raw videos and the keypoint sequences is lacking.\n\n2. The usage of domain knowledge and keypoint sequences is unclear.\n 1. The logical coherence of the abstract needs strengthening. For example, in lines 4-7, the link between the questions about visual redundancy and the need to incorporate domain knowledge is rather jumpy, which affects the fluency of the reading.\n2. In Section 3.1, the details of the selection of the keypoints are not clear enough. Why were the 79 keypoints chosen and what were the reasons? The choice of keypoints was not explored further in the experiment part.\n3. In the SLT experiments, past studies found that results with added gloss supervision were higher than those of the gloss-free method. However, the opposite results were obtained in this experiment. It is necessary to analyze and explain the reasons for this. \n4. Some of the vision methods used should be explained to increase the readability of the paper. N/A
I believe this paper will facilitate this research direction.\n3.\tSystem-level experiments are very solid. This paper achieves state-of-the-art performance on two sign language understanding tasks, i.e. continuous sign language recognition and sign language translation across several datasets. It is worth mentioning that TwoStream-SLR outperforms previous best methods by large margins, especially on CSL-Daily dataset.\n4.\tA variety of ablation studies verify the effectiveness of each proposed component, as shown in Table 3 and 4.(a)-(d), as well as the Tables in appendix.\n\nWeakness:\n1.\tAuthors could move the formulation of CTC loss and translation loss from Appendix to the main paper to make the paper clearer. \n 1. Self-distillation loss brings improvement as shown in Table 3. Can authors study the weight of this distillation loss in Eq 1. \n2. As described in L169, authors utilize a Gaussian function to generate keypoint heatmaps. I don’t see studies about sigma, as well as the resolution of heatmaps. I hope authors could provide these studies since how to model keypoints is a key contribution, adding these experiments will make the submission stronger.\n Due to the data bias and data scarcity issue, there are unpredictable recognition/translation errors as shown in Table 4 in Appendix. Authors also discuss the limitations and societal impact adequately." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 9 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "XpvHf8ZLxmf", "HonYeZFzRXx", "HonYeZFzRXx", "HonYeZFzRXx", "HonYeZFzRXx", "XHTUPkHeJs", "XHTUPkHeJs", "XHTUPkHeJs", "Oc5t4lxC-A", "nips_2022_hSxK-4KGLbI", "nips_2022_hSxK-4KGLbI", "nips_2022_hSxK-4KGLbI" ]
nips_2022_GGi4igGZEB-
Characteristic Neural Ordinary Differential Equations
We propose Characteristic-Neural Ordinary Differential Equations (C-NODEs), a framework for extending Neural Ordinary Differential Equations (NODEs) beyond ODEs. While NODEs model the evolution of latent variables as the solution to an ODE, C-NODEs model the evolution of the latent variables as the solution of a family of first-order quasi-linear partial differential equations (PDEs) along curves on which the PDEs reduce to ODEs, referred to as characteristic curves. This in turn allows the application of the standard frameworks for solving ODEs, namely the adjoint method. Learning optimal characteristic curves for given tasks improves the performance and computational efficiency compared to state-of-the-art NODE models. We prove that the C-NODE framework extends the classical NODE on classification tasks by demonstrating explicit C-NODE-representable functions not expressible by NODEs. Additionally, we present C-NODE-based continuous normalizing flows, which describe the density evolution of latent variables along multiple dimensions. Empirical results demonstrate the improvements provided by the proposed method for classification and density estimation on the CIFAR-10, SVHN, and MNIST datasets under a similar computational budget as the existing NODE methods. The results also provide empirical evidence that the learned curves improve the efficiency of the system through a lower number of parameters and function evaluations compared with baselines.
Reject
This paper proposes to model the evolution of the latent variables along characteristic curves instead of via the original ODEs. The authors prove that the new method, C-NODE, is more expressive than the original NODE. Experiments are conducted on image classification tasks to demonstrate its effectiveness. Leveraging differential equation theory to improve NODE algorithms is a promising direction to explore, and the insights from this line of work could help open up new directions in operator learning. During the discussion phase, reviewers debated at length whether the method is demonstrated to be effective on standard tasks for NODEs. Although achieving SOTA results is a high bar for exploratory work, we would still expect some insights from the investigation: for example, why the original NODE is not expressive enough for certain tasks, which factors in real tasks influence the required expressiveness, why image classification needs the extra expressiveness, etc. Simulations could also be used to demonstrate these insights in extreme cases.
train
[ "ZsALg8Va2k7", "IMdLb_77RiV", "VFTIdQnXwvy", "OEUkF7u4SEh", "JEKa7ccxv2uv", "sC79YfM_Dif", "prXvIt60A9q", "IEnHhF7UIY9", "MZvh1kGgZYo", "x213j0xqdXO", "ETJZyE1uWT4", "acm86c_GUX3" ]
[ "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer EacU\n\nWe apologize for any inconvenience that our message may cause in advance.\n\nAgain, we would like to thank you for the time you dedicated to reviewing our paper and for your valuable comments. We believe that we have addressed your concerns.\n\nSince the end of the discussion period is close and we have not heard back from you yet, we would appreciate it if you kindly let us know of any concerns you may have, and if we can be of any further assistance in clarifying any other issues.\n\nWe humbly remain at your disposal.\n\nThanks a lot again, and with best wishes\n\nAuthors", " Dear Reviewer EacU\n \nWe would like to thank you again for the time you dedicated to reviewing our paper and for your valuable comments. We believe that we have addressed your concerns. Since the end of the discussion period is getting close and we have not heard back from you yet, we would appreciate it if you kindly let us know of any other concerns you may have, and if we can be of any further assistance in clarifying any other issues.\n \nThanks a lot again, and with sincerest best wishes,\n \nAuthors", " Dear Reviewer,\n\nWe appreciate your highly constructive suggestions and enlightening comments.\n\nIndeed, adding the discrete normalizing flows results on SVHN and parameter ablation on more NODE type algorithms will further improve the paper. If accepted, we will make sure to add those in the final manuscript.", " Dear authors,\n\nYou have sufficiently answered my questions. And I will keep my recommendation at accept.\n\nIf accepted for the camera ready version, please include discrete normalizing flows results on SVHN. Also the parameter ablation in the appendix is good, again it could be improved by running it against the other baselines of the paper (ANODE, IL-NODE).", " We thank the reviewer again for constructive comments, they really helped improving the paper!", " Thanks for answering all my questions! Despite still lacking emperical support on performance on time series datasets, I'd recommend this paper. ", " We thank the reviewer for the helpful review and the constructive comments. \nWe answer the relevant questions below.\n\n[Question 1, Section 3 Clarity]\nWe apologize for the confusion in this section. \nWe have edited the manuscript in the revision to enhance clarity. The reviewer is correct that when solving the ODE corresponding to multiple characteristics we recover the full PDE, which is the computation that we perform in the method. Interpolation between solutions is done through conditioning the characteristic curves on different initial conditions.\n\n[Question 2, Additional Time Series Results]\nWe agree with the reviewer that additional experiments would bolster the empirical evaluation.\nWe have added these to the newest revision where we compare the proposed method to NODE on a subset of the MuJoCo data set. After 100 training epochs, C-NODEs achieve 10.14\\% lower testing mean squared errors than NODEs on the prediction task on held out data.\n\n[Question 4, Missing Algorithms]\nWe thank the reviewer for pointing this out, we have added additional algorithms in the appendix D of the revision. \n\n[Question 5, Characteristic Starting Value]\nThe starting $x(s=0)$ is set to be 0 for all experiments.\n\n[Question 6, Shock waves and Rarefactions]\nThe reviewer brings up a very interesting point. \nIn practice, we did not enforce any particular structure to prevent characteristics from intersecting and inducing shockwaves. 
\nHowever, we believe that due to the high dimensionality of the ambient space that we consider, this is unlikely to happen. \nRarefactions generally would not be an issue since the solution is always integrated to a point where a solution exists. \nWe have modified the manuscript to include a discussion on this point.", " We thank the reviewer for the helpful comments and insightful review of our manuscript.\nWe respond to the questions below.\n\n[Question 1, structure of $g(z)$]\nWe apologize for the confusion regarding $g(z)$ in the empirical evaluation.\nFor all of the classification experiments, we set $g(z) = z$ which makes the input into C-NODE the original image. \nBy leaving the initial conditions constant, we can focus exclusively on the performance of C-NODE itself without influence from the network that is transforming the input.\nWe believe that by making $g(z) \\neq z$ the performance of the method would improve, but, as the reviewer rightly points out, this would obfuscate the contributions of C-NODE versus the network $g$.\nWe have updated the manuscript to emphasize this in the main text.\nThe only experiment where the initial condition is calculated with $g(z) \\neq z$ is for the PDE experiment due to the necessity of modeling the boundary condition.\n\n[Question 2, parameter efficiency]\n\nWe agree with the reviewer that this would be a good ablation. We added an ablation study on the parameters used in C-NODE and NODE on the image classification task with the CIFAR-10 dataset. C-NODE consistently achieves better results than NODE across the different number of parameters used along the whole training process.\n\n[Question 3, Results missing on SVHN]\nWhile we compare the results of C-NODE on generative modeling for SVHN to the continuous normalizing flow based on NODE, we did not compare to the discrete normalizing flows methods because we could not find prior work that compared to SVHN. \nGiven the time constraints, we are unable to perform the experiments ourselves to report the values, but we will update the final manuscript to include these.\n\n[Question 4, DOPRI5 Memory]\nWe agree that the memory issue for direct backpropogation should be better described.\nDue to the adaptive step size of DOPRI5, the number of steps for any given time interval can be quite large. \nIn practice, we witnessed memory errors when trying to backpropagate through the individual solver steps. \nIn the table of the classification results, we mentioned the number of function evaluations.\nFollowing Table 1 given in [1], the number of function evaluations roughly corresponds to the number of layers in a feed forward network and describes the order of the memory needed.\nSince most of the number of function evaluations is generally always greater than 50 and often over 100, one can see that this incurs a large memory footprint. \nWe have updated the manuscript to emphasize this issue. \n\n[1] Chen, Ricky TQ, et al. \"Neural ordinary differential equations.\" Advances in neural information processing systems 31 (2018).\n\n[Question 5, Stability with ANODE]\nWe agree with the reviewer that considering experiments with augmentation would be beneficial.\nThe other set of large scale experiments that we conduct are the density estimation experiments. 
\nHowever, since ANODE requires lifting the latent variable to a higher dimension, the direct application of the change of variables formula to describe the transport of the base density to the target density is not straightforward since it requires defining the notion of the augmenting dimension.\nFor these reasons we did not include a comparison of the stability for ANODE in the density estimation experiments. \nHowever, this does make an interesting direction for future work.\n\n[Question 6, Interpretation of C-NODEs] \nWe view C-NODE as a solver for PDEs, as the reviewer mentions, with a particular limited form on the PDE structure.\nThis is because the decomposition of the terms in the integrand leads to solving a PDE over different sets of curves, namely the characteristics. \nIn that sense, the dynamics are limited because they must share the same $J_x u$ for each point, similar to convolution layers in computer vision tasks. \nOn the other hand, since the PDE over a particular characteristic is reduced to an ODE, an ODE interpretation can be useful.\nHowever, taking only the ODE viewpoint disregards the interaction between the different ODE solutions which is a key point of using the method of characteristics. \n", " We thank the reviewer for the time and effort in providing feedback to strengthen the manuscript. \nWe provide individual responses to questions below. \n\n[Question 1, MoC Applicability]\nThe reviewer makes a good point that the MoC does not apply to all types of PDEs, and we agree with the reviewer that the motivation behind this family of PDEs should be better described. \nThe most general qualitative description of this family of equations is a transport equation -- which roughly describes the propagation of certain quantities through time. \nSuch equations are appropriate for deep learning tasks due to their ability to transport data into different regions of the state space. \nFor example, in a classification task, we consider the problem of transporting high-dimensional data points that can not be linearly separated to spaces where they can be linearly separated.\nSimilarly in a generative modeling task, we transport a Gaussian distribution to data distribution.\nIn that sense, we believe this family of equations is sufficient for relevant tasks in machine learning. \nHowever, as the reviewer points out, more general types of PDEs cannot be represented by C-NODE, which is a limitation of the method.\nWe have updated the manuscript to include this description and limitation.\n\n[Question 2, Augmenting NODEs]\nWe note that C-NODE can be applied to the augmented forms, as we demonstrated in Table 1 when modifying ANODE for use with C-NODE. We also add additional experimental results comparing C-NODE, ANODE, and NODE on time series prediction tasks, as shown below\n\n| Time | [0,1] | [1,2] | [2,3] | [3,4] | [4,5] | [5,6] |\n|--------|--------|--------|--------|--------|--------|--------|\n| NODE | 0.0322 | 0.1764 | 0.4681 | 0.8093 | 1.1911 | 1.6202 |\n| ANODE | 0.0428 | 0.0629 | 0.1248 | 0.2778 | 0.5360 | 0.9252 |\n| C-NODE | 0.0270 | 0.0365 | 0.0582 | 0.1474 | 0.3300 | 0.6054 |\n\n| Noise | 0 | 1 | 2 | 3 | 4 | 5 |\n|--------|--------|--------|--------|--------|--------|--------|\n| NODE | 0.0326 | 0.1784 | 0.7886 | 1.9685 | 3.7530 | 6.1553 |\n| ANODE | 0.04 | 0.1984 | 0.6035 | 1.0574 | 1.4850 | 2.0593 |\n| C-NODE | 0.0267 | 0.1011 | 0.3294 | 0.7148 | 1.2856 | 2.0834 |\n\nWe test C-NODEs, ANODEs, and NODEs on a synthetic time series prediction problem. 
\nWe define a function by $u(x,t)=\frac{2x \exp(t)}{2\exp(t)+1}$, and we sample $\tilde{u} = u(x,t) + 0.1\epsilon_t$, where $\epsilon_t \sim \mathcal{N}(0, 1)$, over $x\in [1,2]$, $t\in[0,1]$ to generate the training dataset. \nWe test the performance on $t \in [n,n+1]$ with $n\in\{0,1,\ldots,5\}$. \nWe also test C-NODEs, NODEs, and ANODEs on time series prediction with different levels of noise. Specifically, using the same function as above, we form training and testing datasets with $\epsilon_t\sim \mathcal{N}(0,m)$, $m\in\{0,1,\ldots,5\}$. We test the performance on the time period $t\in[0,1]$. In both cases, we report the testing mean squared errors.\n\nSince SONODE is an alternative interpretation of ANODE, the same principle applies.\nOne difference between C-NODE and ANODE/SONODE is that both ANODE and SONODE augment the dimension, and directly applying the change of variables formula for continuous normalizing flows is not straightforward due to the augmenting dimensions.\nWe have modified the manuscript to emphasize these points. \n\n[Question 3, Two Neural Networks]\nWe agree with the reviewer that there is a level of unidentifiability within the proposed framework since both components are represented as neural networks.\nWe note that the network architecture is factored such that one component describes the Jacobian and the other describes the characteristic.\nThis factorization enforces that the Jacobian is shared for all data points while the characteristics are modified for different data points. \n\n[Question 4, Burgers Equation]\nThe reviewer is correct that, for the case of the Burgers equation, the value of the solution does not change along the characteristic and is given by $u_0 + \int_0^T 0 \, ds$. \nHowever, this need not be the case since, in general, the right hand side is not equal to zero. \nSpecifically, as shown in equations (4), (5), and (6), the right hand side of the PDE is $c(x_1,\ldots,x_k,u)$, and integrating along $s$ would result in $$\int_0^T\frac{d}{ds}u(x_1(s),\ldots,x_k(s))\,ds=\int_0^T c(x_1,\ldots,x_k,u)\,ds.$$", " The authors investigate the role of the method of characteristics (MoC) in the modeling of the latent variables of NODEs, and propose a new algorithm named C-NODEs. C-NODEs parameterize the latent variables of NODEs as the solution of a family of first-order quasi-linear PDEs along the characteristic curves. The authors prove C-NODEs can learn intersecting trajectories and also are universal approximators of homeomorphisms. Experiments empirically show C-NODEs improve the learning accuracy and efficiency on multiple tasks. The expressiveness of vanilla NODEs is indeed a problem due to the restriction of ODEs. The paper tries to use the family of PDEs to augment NODEs. The paper is easy to follow. The strength of this paper is the rich experiments and analysis of the properties of C-NODEs. However, the presented work raises several questions. The main weaknesses are the clarity and insufficient baselines. See questions below.\n \n1. This paper uses MoC to parameterize or augment the latent variables of NODEs. However, this parameterization restricts the latent variables to the solution of first-order quasi-linear hyperbolic PDEs. Does this hypothesis make sense?\n\n2. Some other papers also propose ways to augment NODEs by parameterizing the latent variables, such as ANODE and Second-order NODEs (SONODE). Are C-NODEs better than these methods?\n\n3. In Equation (7) (line 129), the functions $J_x u$ and $dx/ds$ are modeled by neural networks. 
Using $\\text{NN}_1$ and $\\text{NN}_2$ to represent neural networks, is it equivalent to $u(x(T)) = u(x(0)) + \\int_0^T \\text{NN}_1(x,u;\\Theta_2) \\text{NN}_2(x,u;\\Theta_2)ds$? If so, the structure of Equation (4) will not be preserved.\n\n4. Near the line 111, $\\frac{d}{ds}u(x(s),t(s)) = \\frac{\\partial u}{\\partial t} + u \\frac{\\partial u}{\\partial x} = 0$. If so, by integrating over $s$, we have $u(x(T),t(T);x_0,t_0) \\coloneqq \\int_0^T \\frac{d}{ds} u(x(s),t(s))ds = \\int_0^T 0 ds = 0$ ?\n\ntypo: line 3 - ``a latent variables\" Yes", " **Edit after Reviewer and AC discussion**\n\nAfter a thorough discussion with the other reviewers and AC unfortunately I will lower my score to a 6. The main reason is that the most revealing experiment is in Table 1, which is a strong experiment. Revealing how all the baselines change with and without C-NODE. But image classification is not the best experiment for NODEs. Instead this strong comparison should be carried out on all experiments especially Normalizing Flows and Time-Series.\n\n\n**Original**\n\nThis paper adapts the Neural Ordinary Differential Equation (NODE) framework to run along the characteristic curves of partial differential equations (PDES), the new model is called Characteristic Neural ODEs (C-NODEs).\n\nThis effectively is a different way to make more expressive NODE models, similar to Augmented Neural ODEs and Neural Controlled ODEs being techniques to improve expressivity and generalization of NODEs.\n\nProofs are provided to show this improves expressivity. C-NODEs are tested on classification and continuous normalizing flow tasks (density estimation) on MNIST, CIFAR-10 and SVHN. C-NODEs are also tested on time-series and PDE tasks. C-NODEs perform well on these tasks, outperforming NODEs as well as Augmented NODEs, 2nd Order NODEs and IL-NODEs. Strengths:\n\nThe paper is generally very good, with good contributions and methdology, specifically:\n- The theoretical grounding of C-NODEs is good.\n- The experiments on the whole are good, experimenting against expected baselines on standard benchmark tests.\n- The paper is well written, with helpful examples for difficult/new concepts.\n\nWeaknesses:\n\nThe largest weakness is that the paper generally feels a little incomplete, the work is brilliant but there are some experiments that could make it a brilliant paper. Specific examples are:\n\n- There is this term $g(z)$ included in the dynamics, allowing the vector field to be conditioned on the input. It would be quite informative to see how much this term affects the answer. That is, can this term (a learnable function with the same structure) solve the classification problem on its own? This would make a convincing ablation if it can't.\n- This term $g(z)$ also seems to give the initial condition for the C-NODE, it might be helpful to visualise the C-NODEs time evolution, if it stays roughly the same $g(z)$ is doing the majority of the work.\n- Parameter efficiency is mentioned in the paper, but this doesn't seem to be explicitly tested, again this would make a good appendix study.\n- Results are missing for testing Continuous Normalizing Flows (and discrete flows) on SVHN.\n- The paper claims the adjoint method is better because it can be used with adaptive solvers due to memory efficiency. The Euler method is used with direct backprop. 
It would be helpful to show that Dopri5 fails when using direct backprop because memory runs out.\n- It is noted that coupling C-NODEs with ANODEs can make training more stable, apart from in the classification experiments it doesn't look like this has been tested.\n\nAside from this there are some minor typos or tiny mistakes that do not affect the quality of the paper but for the authors knowledge:\n\n- Line 26-27: It might help to clarify the adjoint method is only memory efficient in integration time, and not size of the dynamics function for example.\n- Between lines 52 and 53: I would change \"would take too much memory\" to \"would require too much memory\".\n- Line 85: \"then its log likelihood from Chen et al.:\" should be changed to \"then its log likelihood from Chen et al. is given by:\"\n- Line 107: Implies $s \\in [0, 1]$ when it seems that $s \\in[0, T]$\n- Line 200: \"NFE is a indicator\" should be \"NFE is an indicator\"\n- Line 230: \"Differential equations are solved using the adjoint method and a Runge-Kutta of order 5 of the Dormand-Prince-Shampine solver.\" Should be \"Differential equations are solved with a Runge-Kutta of order 5 of the Dormand-Prince-Shampine solver and trained with the adjoint method\".\n- Line 236: \"using a lower NFEs\" should be \"using a lower NFE\".\n- Line 265: \"and can combine with\" should be \"and can be combined with\" My only question is how best to view these C-NODEs, as they are solved with blackbox ODE solvers. Are they more like ODEs or PDEs? And if they are like ODEs is it possible for a NODE (or an ANODE/Latent NODE) to learn the same dynamics as C-NODE. Is it better to think of C-NODE as limiting the dynamics function so it has to be in the form of a characteristic curve? Similar to how convolutions share weights across linear layers to enforce translation symmetry. The paper sufficiently explores its limitations. There is no broader impact statement, it isn't really required here, however, it is good to keep in mind that the applications of NODEs and C-NODEs can include some potentially unethical ones. This is because they can generally be applied to learning time-series and now partial differential equations. As said, this is not the case in the current form of the paper, the work is incremental and not applied in those directions. ", " The authors propose Characteristics NODE (C-NODE) which parametrize the method of characteristics ODE of Hamilton-Jacobi equations. The authors provide density estimates for C-NODE. The authors backs up their claim of improving performance with experiments on images and time series. Strengths:\n- The idea of combining method of characteristics and NODE is very straightforward and new.\n- The density estimates are theoretical supported and works well on CNF type problem in their experiments\n\nWeakness:\n- Section 3 is a little bit messy. It is easy to find out the idea behind, but it is a bit confusing on things like whether they want to solve the PDE numerically or the characteristics ODE numerically; how they will do interpolation after they solve with characteristics ODE, etc.\n- Experiments on PDE modeling and time series prediction are synthetic data only. 
It might be conventional for PDE modeling, but for time series prediction this is not a very convincing experiment.\n- For problems without a general PDE view, like image classification and time series prediction, the model is simply a combination of equations (5) and (6), which is basically another augmentation of NODE and may not be something new. Still, this provides an interesting PDE viewpoint that is very inspiring for CNF tasks.\n - In the algorithm section, only procedures for image processing tasks are discussed. What about the other ones?\n- Also in the algorithm section, what is your starting x(s=0)?\n- Equations like Burgers' may produce shocks or rarefactions during their solution. How do you deal with those?\n MoC only applies to Hamilton-Jacobi equations (not just hyperbolic ones), but still, it does not apply to all equations and may fail to model some types of problems. This should not be an essential problem since it could be solved with augmentation." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "x213j0xqdXO", "x213j0xqdXO", "OEUkF7u4SEh", "IEnHhF7UIY9", "sC79YfM_Dif", "prXvIt60A9q", "acm86c_GUX3", "ETJZyE1uWT4", "x213j0xqdXO", "nips_2022_GGi4igGZEB-", "nips_2022_GGi4igGZEB-", "nips_2022_GGi4igGZEB-" ]
nips_2022_FQtku8rkp3
Pre-Trained Image Encoder for Generalizable Visual Reinforcement Learning
Learning generalizable policies that can adapt to unseen environments remains challenging in visual Reinforcement Learning (RL). Existing approaches try to acquire a robust representation via diversifying the appearances of in-domain observations for better generalization. Limited by the specific observations of the environment, these methods ignore the possibility of exploring diverse real-world image datasets. In this paper, we investigate how a visual RL agent would benefit from the off-the-shelf visual representations. Surprisingly, we find that the early layers in an ImageNet pre-trained ResNet model could provide rather generalizable representations for visual RL. Hence, we propose Pre-trained Image Encoder for Generalizable visual reinforcement learning (PIE-G), a simple yet effective framework that can generalize to the unseen visual scenarios in a zero-shot manner. Extensive experiments are conducted on DMControl Generalization Benchmark, DMControl Manipulation Tasks, Drawer World, and CARLA to verify the effectiveness of PIE-G. Empirical evidence suggests PIE-G improves sample efficiency and significantly outperforms previous state-of-the-art methods in terms of generalization performance. In particular, PIE-G boasts a 55% generalization performance gain on average in the challenging video background setting. Project Page: https://sites.google.com/view/pie-g/home.
Accept
This paper contains interesting findings in a research topic currently drawing a lot of interest from the community, i.e., RL with pretraining on large-scale, general, out-of-domain data. I think the use of low-level features and of BatchNorm can be interesting to the community. As pointed out by reviewer UwhJ, however, I agree that the authors should moderate and clarify some claims so as to acknowledge that this line of research has already been studied in many recent works, and thus this is not the first such finding. I suggest updating the writing in the camera-ready version to focus on the specific contributions, such as the low-level features and BatchNorm. In particular, contribution (1) in the last paragraph of the Introduction is wrong and should thus be changed, because it is already well known and was not discovered first in this paper.
train
[ "TSC1H7ofdx-", "ljbB5zTfJbJ", "B9ynDkTi07L", "W-ll74WETQe", "uo21N2xmao", "w70lo3TL7n", "U05iOyfsOx_", "bKb4rWGy8s", "ti4oo-Azay", "JJy4iFCt6Z", "cvJzfzyHFDB", "ySHR9FjkdO", "a6JdIjQ0dhh", "lHDksT-MJMo" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your efforts in reviewing our paper and your suggestions again.\n\nWe believe we have resolved all the concerns mentioned in the review. Please let us know if you have further concerns and we are more than happy to address them! Thank you very much !", " Thanks for the thorough response and many additional experiments. The results look pretty great overall. I'm a bit surprised that CLIP did so poorly given it's success in other tasks. It could be interesting to delve into that more -- perhaps CLIP is too invariant geometry. I'd be interested to also see some of the SOTA self-supervised methods added to the camera-ready too (perhaps DINO, etc). I think all the new comparisons really solidify the contribution so I'm raising my score to 6.", " We would like to first thank you again for your constructive comments and helpful suggestions. Since we are nearly at the end of the discussion phase, we would like to post a follow-up discussion. \n\nIn our previous response and revision, we have provided corresponding responses and results, which we believe have covered your concern about the resolution of the input images.\n\nWe hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.\n", " We would like to first thank you again for your constructive comments and helpful suggestions. Since we are nearly at the end of the discussion phase, we would like to post a follow-up discussion. \n\nIn our previous response and our revision, we have provided corresponding responses and results, which we believe have covered your concerns about the fairness of comparing with other baselines, the claim of universal representation, the keypoint and fundamental goal of our method, and our contributions. \n\nWe hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.\n", " We would like to first thank you again for your constructive comments and helpful suggestions. Since we are nearly at the end of the discussion phase, we would like to post a follow-up discussion. \n\nIn our previous response and our revision, we have provided corresponding responses and results, which we believe have covered your concerns about the training with other pre-trained models, the comparison with another ImageNet pre-trained method, the way of pre-training models, and our contributions.\n\nWe hope to further discuss with you whether your concerns have been addressed or not. If you still have any unclear parts of our work, please let us know. Thanks.\n", " Dear reviewer 9V7N, thank you for your detailed and thorough review. In the following, we seek to address each of your concerns.\n\n---\n\n**Q1:** *\"Unfair baseline: when evaluating generalization to unseen data, the typical game in generalization is to improve without access to new data. Here, new and uncontrolled data is being added to PIE-G, but not to the other baselines.\"*\n\n**A:** We respectfully disagree with this argument. We explain the reasons in two aspects. \n\n**First, the compared baselines also introduce the new real-world images.** The existing state-of-the-art methods SVEA [1] and TLDA [2] (i.e., the other baselines) are based on RandomOverlay (i.e., Mixup [3]) which overlays the original observations with the images from real-world image datasets (i.e., Places dataset [4]) . 
A plethora of works [1,2,5,6,7,11] have shown that, without adding new data or injecting new visual information, agents trained only with observations from the fixed training environment cannot obtain generalization abilities (they even fail to generalize to similar tasks [8,9,10]). In our experiments, we have also demonstrated this phenomenon (the DrQ, DrQ-v2, and SAC columns in Tables 1 and 2 of our paper). \n\n Second, the experiments in Section 5.4 and Appendix C.1 illustrate that, compared with **the methods leveraging the same ImageNet pre-trained encoder, only PIE-G can achieve a substantial gain in generalization performance.** In Section 5.4, we show that thoughtful designs (frozen parameters, early layers, ever-updating BatchNorm, etc.) are indispensable to the performance gain; naively importing the pre-trained encoder does not work. In Appendix C.1, we compare with RRL, a counterpart equipped with the same ImageNet pre-trained ResNet model as the encoder that can achieve comparable sample efficiency with the state-based algorithms. However, Table 8 shows that RRL struggles to adapt to new environments.\n\n In summary, the main goal of visual RL generalization is to find methods that can obtain high performance in unseen environments. The use of additional data is allowed and widely adopted in previously published works [1,2,5,6,7,11]. Moreover, experiments show that naively adding new data would hurt the generalization performance, while PIE-G finds a simple and effective way to better leverage these data in comparison to all the baselines.\n\n ---\n\n[1] Nicklas Hansen, Hao Su, and Xiaolong Wang. Stabilizing deep q-learning with convnets and vision transformers under data augmentation. Advances in Neural Information Processing Systems, 34, 2021.\n\n[2] Zhecheng Yuan et al. Don’t touch what matters: Task-aware lipschitz data augmentation for visual reinforcement learning. arXiv preprint arXiv:2202.09982, 2022.\n\n[3] Zhang, Hongyi, et al. \"mixup: Beyond Empirical Risk Minimization.\" International Conference on Learning Representations. 2018.\n\n[4] B. Zhou, A. Lapedriza, A. Khosla, A. Oliva, and A. Torralba. \"Places: A 10 million image database for scene recognition.\" IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.\n\n[5] Nicklas Hansen and Xiaolong Wang. Generalization in reinforcement learning by soft data augmentation. In 2021 IEEE International Conference on Robotics and Automation (ICRA), pages 13611–13617. IEEE, 2021.\n\n[6] Linxi Fan, Guanzhi Wang, De-An Huang, Zhiding Yu, Li Fei-Fei, Yuke Zhu, and Animashree Anandkumar. Secant: Self-expert cloning for zero-shot generalization of visual policies. In Proceedings of the 38th International Conference on Machine Learning, PMLR, 2021.\n\n[7] Kaixin Wang, Bingyi Kang, Jie Shao, and Jiashi Feng. Improving generalization in reinforcement learning with mixture regularization. Advances in Neural Information Processing Systems, 33:7968–7978, 2020.\n\n[8] Song, Xingyou, et al. \"Observational Overfitting in Reinforcement Learning.\" International Conference on Learning Representations. 2019.\n\n[9] Cobbe, Karl, et al. \"Leveraging procedural generation to benchmark reinforcement learning.\" International Conference on Machine Learning. PMLR, 2020.\n\n[10] Farebrother, J., Machado, M. C., and Bowling, M. H. Generalization and regularization in DQN. ArXiv, abs/1810.00123, 2018.\n\n[11] Zhao, Yue, et al. 
\"Intrinsically Motivated Self-supervised Learning in Reinforcement Learning.\" 2022 International Conference on Robotics and Automation (ICRA). IEEE, 2022. \n", " We thank the reviewer for the detailed and thorough review. We added the suggested experiments to the rebuttal revision. In the following, we seek to address each of your concerns.\n\n---\n\n**Q1:** *\"More pretrained models could be tried. Are there any lessons to glean? Is it always best just to use the SOTA general-purpose vision system? How then does CLIP do, or its various follow-ups?\"*\n\n**A:** Besides ImageNet, we also implement pre-trained visual encoders with other popular datasets: CLIP and Ego4D. CLIP trained on a large number of (image, text) pairs collected from the Internet to jointly acquire visual and text representations. Ego4D is an egocentric human video dataset which contains massive daily-life activity videos in hundreds of scenarios. The following table shows that the agents pre-trained with CLIP achieve comparable performance with those pre-trained with ImageNet. The ImageNet pre-trained model is empirically slightly better than the CLIP. Since Ego4D collects the videos with the first-person view, the view difference between our target tasks and the Ego4D dataset leads to a decrease in performance; nevertheless, the Ego4D pre-trained agents still obtain comparable results with the prior state-of-the-art methods. We added this discussion to the revision of our paper.\n\n\n| Tasks | ImageNet | CLIP | Ego4D | SVEA |\n| :-----: | :-----: | :-----: | :-----: | :-----: |\n| Walker Walk | $ 600 \\scriptsize \\pm 23$ | $ 615 \\scriptsize \\pm 30$ | $ 441 \\scriptsize \\pm 15$ | $ 377 \\scriptsize \\pm 93$ |\n| Cheetah Run | $ 154 \\scriptsize \\pm 17$ | $ 115 \\scriptsize \\pm 62$ | $ 101 \\scriptsize \\pm 13$ | $ 105 \\scriptsize \\pm 37$ |\n| Walker Stand | $ 852 \\scriptsize \\pm 56$ | $ 849 \\scriptsize \\pm 23$ | $ 647 \\scriptsize \\pm 59$ | $ 441 \\scriptsize \\pm 15$ |\n| Finger Spin | $ 762 \\scriptsize \\pm 59$ | $ 676 \\scriptsize \\pm 116$ | $ 515 \\scriptsize \\pm 104$ | $ 335 \\scriptsize \\pm 58$ |\n\n---\n\n**Q2:** *\"Along similar lines, the paper could be strengthened by comparing against a variety of methods that all use the same pre-training dataset. A more apples-to-apples comparison would be to compare between methods that all are given access to ImageNet.\"*\n\n**A:** In Appendix C.1, we compare PIE-G with RRL[1], which is an algorithm leveraging ImageNet pre-trained ResNet as the encoder. RRL can achieve comparable sample efficiency with the state-based algorithms. However, in terms of generalization ability, Table 8 shows that RRL struggles to adapt to new visual environments. We believe that this is an apples-to-apples comparison, and it suggests that the designs of PIE-G such as choosing specific layers as well as the ever-updating BatchNorm are the crucial factors for bridging domain gaps and boosting the agents' generalization performance.\n\n[1] Shah, Rutav M., and Vikash Kumar. RRL: Resnet as representation for Reinforcement Learning. International Conference on Machine Learning. PMLR, 2021.\n\n---\n\n**Q3:** *\"I would have liked more detail on how the ResNet was pretrained.\"*\n\n**A:** In Section 4.1, we mention that PIE-G is as simple as importing a pre-trained ResNet model from the torchvision library The off-the-shelf model provided in torchvision is trained following the method in He et al. [2]. 
Moreover, all the other pre-trained models that we use in this paper are, without exception, off-the-shelf (links to the sources are listed in the Appendix). We did not design any additional auxiliary tasks that may improve the visual encoder’s performance on RL tasks. \n\n[2] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.\n", " We thank the reviewers for all the detailed comments and helpful suggestions. We have highlighted the changes in blue in the revised version of our paper. Here we provide an overview of our changes. \n\n(i) Adopting other datasets for pre-training (Appendix C.6). We implement pre-trained visual encoders with two other popular sources, CLIP and Ego4D.\n\n(ii) Higher resolution inputs (Appendix C.7). We implement PIE-G with higher resolution images. \n\n(iii) Evaluating on CARLA (Appendix C.8). We evaluate PIE-G on the more challenging and realistic benchmark CARLA. \n\n(iv) Theoretical Analysis (Appendix C.9). We quantify the generalization gap of our zero-shot generalization problem in theory.", " **Q4:** *\"Many other papers have proposed and demonstrated essentially the same high level point. Making use of powerful pretrained representations is ubiquitous in our field right now. \"*\n\n**A:** We admit that pre-trained representations have achieved promising results in various domains. However, few works demonstrate that out-of-domain pre-trained representations can help learned agents generalize to unseen visual scenarios. \n\nIn order to further exhibit the effectiveness of PIE-G, we conduct experiments on the CARLA autonomous driving system, which contains realistic observations and complex driving scenarios. Compared to DMC-GB, whose images merely consist of a single controllable agent and the background, the observations of CARLA are visually more diverse, containing distracting objects, pedestrians, etc. As shown in the following table, all the prior state-of-the-art methods cannot adapt to new, unseen weather with different lighting, humidity, road conditions, etc. The experimental results indicate that the prior state-of-the-art methods, which rely on data augmentation to diversify data, cannot acquire representations robust enough to tackle such complicated visual driving tasks. In contrast, we propose a new paradigm: utilizing the visual representations from the pre-trained model can improve the agent’s generalization abilities on complicated scenes without a large performance drop. We added this discussion to the revision of our paper.\n\n| Tasks | PIE-G | SVEA | TLDA |\n| :-----: | :-----: | :-----: | :-----: |\n| Training | $ 226 \\scriptsize \\pm 72$ | $ 251 \\scriptsize \\pm 22$ | $ 252 \\scriptsize \\pm 36$ |\n| WetNoon | $ 164 \\scriptsize \\pm 67$ | $ 45 \\scriptsize \\pm 44$ | $ 68 \\scriptsize \\pm 48$ |\n| SoftRainNoon | $ 143 \\scriptsize \\pm 81$ | $ 2 \\scriptsize \\pm 2$ | $ 4 \\scriptsize \\pm 6$ | \n| MidRainSunset | $ 156 \\scriptsize \\pm 97$ | $ 5 \\scriptsize \\pm 4$ | $ 7 \\scriptsize \\pm 4$ | \n\n\nFurthermore, we provide a theoretical analysis of our generalization problem in visual RL. We quantify that utilizing an encoder with better alignment ability can linearly narrow the generalization gap. We added this discussion to the revision of our paper (Appendix C.9).\n", " We thank the reviewer for the detailed and thorough review. We added the suggested experiments to the rebuttal revision.
In the following, we seek to address your concerns.\n\n---\n\n**Q:** *\"...whether higher-level features simply underperform because of their lower resolution, or whether the more \"semantic\" information they encode is actually detrimental for the task. Would it be possible to use these features with higher-resolution images, to control for this difference?\"*\n\n\n**A:** Thanks for your suggestions. We conducted additional experiments where we enlarged the resolution of the image observations from 84x84 to 224x224. For the task of *cheetah run*, the generalization performance of the original 84x84 resolution setting with Layer 2 as the feature is ***369±53***, while the newly conducted 224x224 resolution setting with Layer 4 as the feature (which has a resolution comparable to the former) is ***213±25***. We conclude that it is not the resolution, but the semantic information, that damages the generalization abilities.\n\nIt is also worth mentioning that higher-resolution visual inputs cost more computational resources (2.5x+ CUDA memory) as well as more training time (5x+ the original). We added this discussion to the revision of our paper.\n", " **Q2:** *\"The paper suggests that embeddings from a pre-trained ResNet are universal representations which are both robust and generalize well. However, it is well documented in the computer vision literature that this is not true. \"*\n\n**A:** By using “universal”, we mean “multi-purpose with the same encoder”, i.e., we can apply the off-the-shelf ImageNet pretrained encoder with the same parameters to multiple RL environments. By contrast, all the other methods train the encoder from scratch for each task and cannot share it among tasks. We are by no means saying that it is “almighty/one-fit-for-all”, i.e., that it works for all computer vision tasks without a performance drop.\n\nMoreover, we admit that the ImageNet pretrained ResNet model has intrinsic limitations, as stated in your listed references. However, we find that the representation of this model is competent enough when used for learning robust and generalizable policies. Our experiments show that it can concretely improve the generalization abilities of the learned policies on a wide range of benchmarks. \n\n---\n\n**Q3:** *\"Is the fundamental goal to prove that better data distributions are better than better architectures for encoding images in visual RL? That would be a promising direction, but would require significantly more experiments than the current manuscript.\"*\n\n**A:** It is debatable to simply classify ImageNet as the “better data” in visual RL tasks. First, the ImageNet dataset does not contain actions. The way to leverage ImageNet should be carefully designed in decision-making generalization tasks. Second, there is a domain gap between the images from ImageNet (real-world images) and the observations from training environments (simulators). Third, as mentioned in **Q1**, in Section 5.4 and Appendix C.1, our experiments demonstrate that with the same training data distribution, only PIE-G can achieve considerable generalization performance. Moreover, existing methods [1,2,5,6] have also shown that naively adding new data will cause training divergence and be detrimental to generalization abilities due to the out-of-distribution problem. \n\nThe key point is how to leverage new data for visual RL tasks; this is the core question of recent research.
In our paper, we would like to illustrate that, in contrast to the existing state-of-the-art methods that design auxiliary tasks or more complex training architectures to utilize new data for acquiring representations, we propose a simple yet effective method: leveraging the off-the-shelf representations from an ImageNet pre-trained encoder, with thoughtful details and nuanced design choices, enables agents to significantly improve their generalization abilities without bells and whistles.\n\n---\n\n**Q4:** *\"The current experiments show that: (1) Pre-training is better than training from scratch for the same architecture, (2) Simpler architecture with better data performs better than complex architecture with simpler data. Unfortunately, these two are both well known ideas.\"*\n\n**A:** We want to highlight that the ImageNet pre-training data is out-of-distribution data with respect to the target RL tasks. Hence, it was unknown and unexplored whether such data would help, and how it can help, in visual RL generalization. Many existing studies [12,13,14,15] are still working on this. Ours is the very first effort to exhibit the power of the ImageNet pre-trained model for generalizing to unseen visual scenarios. \n\n--- \n\n[12] Khandelwal, Apoorv, et al. \"Simple but effective: Clip embeddings for embodied ai.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n[13] Simone Parisi, Aravind Rajeswaran, Senthil Purushwalkam, and Abhinav Gupta. The unsurprising effectiveness of pre-trained vision models for control. arXiv preprint arXiv:2203.03580, 2022.\n\n[14] Seo, Younggyo, et al. \"Reinforcement learning with action-free pre-training from videos.\" International Conference on Machine Learning. PMLR, 2022.\n\n[15] Shah, Rutav M., and Vikash Kumar. RRL: Resnet as representation for Reinforcement Learning. International Conference on Machine Learning. PMLR, 2021.\n", " This paper uses pretrained visual encoders to improve the sample efficiency and generalization performance of visual RL. The main methodological contributions are in the details -- which pretrained layer to transfer, updating of batchnorm parameters, freezing of the encoder, etc. Experiments demonstrate superior performance compared to RL methods that do not use out-of-domain pretraining. The main strength of this paper is the results -- it works well! The main weakness, in my opinion, is the originality: many other papers have proposed and demonstrated essentially the same high level point. I will elaborate on these points below:\n\n**Originality**\nMaking use of powerful pretrained representations is ubiquitous in our field right now (cf. foundation model hype). This paper characterizes the results as \"surprising\" but I think the opposite is true: these results will be seen as thoroughly expected by a large portion of the field. This is in part due to the many other papers that have shown similar results in the specific domain of visual pretraining for embodied control (e.g., [76, 72, 48] and many more cited by the paper), and partly because the same trend has been so dominant in other areas of ML. I think [48] does a better job of characterizing this finding, in its title, as \"unsurprising\".
So, the main message of this paper -- that visual pretraining can help, and a lot -- is one that has already, I think, been absorbed by the field (some of the cited related work may be concurrent, but there are also many papers that were published well before the NeurIPS deadline on this topic, including [6], [76], [62], etc). That said, I do think the details in the present paper have some originality, especially the use of batchnorm for adaptation. I personally haven't seen that used before in this specific context (although batchnorm for adaptation in general is a well-known method, e.g., https://openreview.net/pdf?id=BJuysoFeg).\n\n**Quality**\nThe quality of results is impressive: the paper delivers on showing that pretraining helps, both in terms of robustness/generalization and in terms of sample efficiency.\n\n**Clarity**\nThe writing and figures were all clear to me, except one aspect. I would have liked more detail on how the ResNet was pretrained. This is barely mentioned but seems essential to the success of the method. Indeed I would be interested to see how different pretraining methods and datasets affect the results.\n\n**Significance**\nAlthough I don't think readers will be surprised by the results, getting the details right, and achieving high performance with a simple system, is a significant accomplishment, and could be impactful assuming the code is released. At the same time, I don't think the specific pretrained model in this paper will be used for long, and subsequent work will have to redo the analysis to pick which layers of future models to transfer, and how to tune the other methodological details.\n\n\n\n I think it would be very interesting to delve more into the question of \"what kind of pretraining helps for what kind of tasks\"? The MoCo comparison at the end of the paper was a step in that direction. More pretrained models could be tried. Are there any lessons to glean? Is it always best just to use the SOTA general-purpose vision system? How then does CLIP do, or its various follow-ups?\n\nAlong similar lines, the paper could be strengthened by comparing against a variety of methods that _all use the same pretraining dataset_. It's not terribly surprising that PIE-G can outperform methods that have access to less data than it. A more apples-to-apples comparison would be to compare between methods that all are given access to ImageNet, and then see, among those, which is best, and why. The ablations in Section 5.4 are a step in this direction. I think the discussion of limitations was adequate. There was not much discussion, but I don't think much is needed for this paper.", " Summary: This paper suggests the use of an off-the-shelf, frozen, pre-trained ResNet model as an encoder for Reinforcement Learning, as opposed to learning an encoder from scratch. The paper suggests that this enables better generalization as the pre-trained ResNet model serves as a universal representation, which is robust and generalizes well to unseen scenarios. They also show that BatchNorm is critical for better generalizing RL agents. Strengths:\n\n1. Generalizing well to unseen environments is a fundamentally important problem, and advances in this direction are both extremely important, and useful for the community.\n\n2. The writing is clear and easy to follow.
Figures do a good job of explaining the main idea visually.\n\nWeaknesses:\n\n1. Unfair baseline: The main difference in PIE-G is that the encoder is trained on different data, while the baselines are testing different training methodologies. When evaluating generalization to unseen data, control over the training distribution is absolutely essential. For instance, was the generalization better because the testing environment was maybe overlapping with ImageNet? For the unseen backgrounds case, it appears so, as the backgrounds are real-world natural images. The typical game in generalization is to improve without access to new data. Here, new and uncontrolled data is being added to PIE-G, but not to the other baselines. This makes the comparison unfair, and also hard to qualitatively gauge.\n\n2. Unsupported claims: The paper suggests that embeddings from a pre-trained ResNet are universal representations which are both robust and generalize well. However, it is well documented in the computer vision literature that this is not true [1,2,3]. For instance, works have shown that ImageNet-trained models do not generalize to new ImageNet images either [1]. Learning generalizable features is an active area of research, and there is significant evidence suggesting that pretrained ResNet features cannot be called universal, robust or generalizable.\n\nIn summary, it appears that the performance gains may be due to the additional dataset which baselines do not have access to, as opposed to learning a universal representation.\n\n\nReferences:\n\n1. Recht, B., Roelofs, R., Schmidt, L. and Shankar, V., 2019, May. Do imagenet classifiers generalize to imagenet?. In International Conference on Machine Learning (pp. 5389-5400). PMLR.\n\n2. Hendrycks, D. and Dietterich, T., 2019. Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261.\n\n3. He, K., Girshick, R. and Dollár, P., 2019. Rethinking imagenet pre-training. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 4918-4927). - Is the fundamental goal to prove that better data distributions are better than better architectures for encoding images in visual RL? That would be a promising direction, but would require significantly more experiments than the current manuscript.\n\n- The current experiments show that: (1) Pre-training is better than training from scratch for the same architecture, (2) Simpler architecture with better data performs better than complex architecture with simpler data. Unfortunately, these two are both well-known ideas, and building on top of them is necessary to arrive at the point mentioned above. Yes.", " The authors investigate using a pre-trained visual backbone in an RL setting. Specifically, they use off-the-shelf ResNet encoders trained in a supervised or self-supervised manner, and use the activations in intermediate layers as input to the policy network, as opposed to the raw pixel values. They find the resulting policy to be significantly more robust to changes in appearance than policies trained on raw pixel values, with fairly dramatic improvements under strongly distracting conditions. The paper presents a simple and effective idea. Although most of the components are standard (pre-trained visual encoders, the policy stacked on top), the authors carefully choose and calibrate these components, and the results are very compelling.
\n\nIn particular, it is interesting that intermediate feature activations perform better than high-level ones, and that allowing the batch-norm statistics to adapt to the new environment is helpful. These insights make the difference between the method working or not, and will likely apply in a variety of other settings. One question we are left with, however, is whether higher-level features (e.g. from levels 3 and 4 of the ResNet) simply underperform because of their lower resolution, or whether the more \"semantic\" information they encode is actually detrimental for the task. Would it be possible to use these features with higher-resolution images, to control for this difference? No ethical or societal risks here. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "a6JdIjQ0dhh", "ti4oo-Azay", "lHDksT-MJMo", "a6JdIjQ0dhh", "ySHR9FjkdO", "a6JdIjQ0dhh", "ySHR9FjkdO", "nips_2022_FQtku8rkp3", "ySHR9FjkdO", "lHDksT-MJMo", "a6JdIjQ0dhh", "nips_2022_FQtku8rkp3", "nips_2022_FQtku8rkp3", "nips_2022_FQtku8rkp3" ]
nips_2022_q-tTkgjuiv5
Graphical Resource Allocation with Matching-Induced Utilities
Motivated by real-world applications, we study the fair allocation of graphical resources, where the resources are the vertices in a graph. Upon receiving a set of resources, an agent's utility equals the weight of the maximum matching in the induced subgraph. We care about maximin share (MMS) fairness and envy-freeness up to one item (EF1). Regarding MMS fairness, the problem does not admit a finite approximation ratio for heterogeneous agents. For homogeneous agents, we design constant-approximation polynomial-time algorithms, and also note that a significant amount of social welfare is inevitably sacrificed in order to ensure (approximate) MMS fairness. We then consider EF1 allocations, whose existence is guaranteed. We show that for homogeneous agents, there is an EF1 allocation that ensures at least a constant fraction of the maximum possible social welfare. However, the social welfare guarantee of EF1 allocations degrades to $1/n$ for heterogeneous agents, where $n$ is the number of agents. Fortunately, for two special yet typical cases, namely binary-weight and two-agent, we are able to design polynomial-time algorithms ensuring a constant fraction of the maximum social welfare.
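To make the valuation model concrete, an agent's utility for a bundle can be computed directly from this definition. The following is an illustrative sketch of our own (not the paper's code), assuming the `networkx` library:

```python
import networkx as nx

def bundle_utility(G: nx.Graph, bundle: set) -> float:
    """Utility = total weight of a maximum-weight matching
    in the subgraph induced by the vertices in `bundle`."""
    H = G.subgraph(bundle)
    matching = nx.max_weight_matching(H, weight="weight")
    return sum(H[u][v]["weight"] for u, v in matching)

# Example: a path v1 - v2 - v3 - v4 with unit edge weights.
G = nx.Graph()
G.add_weighted_edges_from([(1, 2, 1.0), (2, 3, 1.0), (3, 4, 1.0)])
print(bundle_utility(G, {1, 2}))        # 1.0
print(bundle_utility(G, {1, 3}))        # 0   (no edge between v1 and v3)
print(bundle_utility(G, {1, 2, 3, 4}))  # 2.0 (matching {(1,2), (3,4)})
```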
Reject
Reviewers agreed that the model is new and interesting and the theoretical results are solid. The main criticisms are about the model: some reviewers felt that it is too specific and there is not enough motivation. Some reviewers liked the technical depth while others felt that it is not enough to compensate for the lack of motivation. Overall, reviewers felt that the paper in its current form is not ready for publication at NeurIPS.
val
[ "QaEfojFbwD0", "MptEuRJWNkV", "h4qXVKVuMNq", "yVuM6w6C-LC", "fRofuS9NQ7c", "JnhexD57kPh", "QZm7Rt1sdGI", "DlfpV41Mdyz", "CPlFahNUQ3q", "NMLd8DxhEnT" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response and revision of the paper!", " L143: Accordingly, people mostly $\\ldots$ $\\rightarrow$ Accordingly, researchers have sought $\\ldots$\n\nL145: The line claims that EF1 is widely accepted and studied. But no citations or references are provided. I suggest that this be motivated and discussed earlier (e.g. Introduction or Related Works) and not repeated here, and only formal definitions and helpful examples be provided in the Preliminaries. $\\rightarrow$ Done. \n\nL199: $\\ldots$ decent $\\ldots \\rightarrow$ constant\n\nL200:$\\ldots$ far smaller $\\ldots \\rightarrow$ the utility of smallest bundle is much less than the largest bundle.\n\n Algorithm 3: The comments for the different cases are not made consistently. Please consider revising. \n\nWe renamed the third case by {Skip the Current Agent}\n\nL299: $\\ldots$if so we do exchange $\\rightarrow$ if so we execute exchange procedure.\n \nL302: $\\ldots$ and so on so forth. $\\rightarrow$ Done.\n\n[Theorem statements]\n\nL164: I believe this only applies to the case of homogeneous agents, which is not clear from the theorem statement. $\\rightarrow$ Done.\n \nL222: $\\ldots$ two cases happens. $\\rightarrow$ Done.\n \nL268 and L270: Please consider specifying that this is for the case of heterogeneous agents. $\\rightarrow$ Done.\n \nL297: $ \\ldots$ j envies i $\\ldots \\rightarrow$ Done.\n \nL304: Please clarify whether this result only holds for binary weights. The title of Algorithm 3 shows that this result only holds for binary weights. $\\rightarrow$ It holds for binary weights and we clarified this in the revised submission.\n\n[Arguments]\n\nL158-160: Please consider making this more precise and providing a brief argument and cite the (variant of the) Partition problem.\n \n$\\rightarrow$ Done.\n \nL169: This task is NP-hard $\\ldots \\rightarrow$ Why? [I think I know why] Please specify and consider rewriting to avoid repetitions.\n \n $\\rightarrow$ Thanks for pointing this out, and we will explain and add citation here. \n \n A special case of this problem is a set of independent edges to be partitioned into $n$ bundles so that the minimum bundle has largest weight. This is essentially a partition problem, which is NP-hard. \n \nL188-190: I don't think this statement is as helpful as it was perhaps intended to be. It is too vague. \n \n$\\rightarrow$ Done!\n\nL211: Not clear what optimal means here (could be MMS or max SW). Is O any MMS allocation?\n \n$\\rightarrow$ We apologize for being unclear here. \n The optimality is for MMS, and thus $O$ is one allocation that maximizes the value of the smallest bundle.\n\nL226-227: Please consider revising this. It is hard to parse this sentence.\n \n$\\rightarrow$ By Claim 3.5, the {\\bf while} loop will not execute Case 2 or it executes Case 1 for several times and then Case 2 for exactly once. \n \nL245: In the equation, please consider revising: for any v $\\ldots \\rightarrow$ for every v $\\ldots$ Currently, it reads like EFX\n \n$\\rightarrow$ Done.\n\n \n \nL271 and Theorem 4.1: Please consider including the here, since it is indeed a strong negative result.\n \n$\\rightarrow$ Done.\n \nL293: It is not clear why this matters. Please consider revising and illustrating this with an example.\n \n$\\rightarrow$ In the revised version, we included the following example, where the ency-cycle elimination algorithm does not have bounded approximation ratio. 
\n \nConsider a path of four nodes $v_1 \\to v_2 \\to v_3 \\to v_4$, and two agents who have the same weight 1 on all three edges $(v_1,v_2)$, $(v_2,v_3)$ and $(v_3,v_4)$.\nBy the envy-cycle elimination algorithm, we may first allocate the items in the following order: $v_1$ to agent 1, then $v_2$ to agent 2, then $v_3$ to agent 1 and finally $v_4$ to agent 2. Note that $u_1(\\{v_1,v_3\\}) = u_2(\\{v_2, v_4\\}) = 0$; however, the optimal social welfare is 2, achieved by allocating $\\{v_1,v_2\\}$ to agent 1 and $\\{v_3,v_4\\}$ to agent 2. Thus the approximation ratio with respect to social welfare is unbounded. \n\n[Others]\n\nAlgorithm 1: It appears that $e_1$ is always the highest weight edge (due to it being the only edge in $M_1$) and in line 5, H is the set of all highest weight edges.\n \n$\\rightarrow$ Actually, $e_1$ may not have the highest weight. We provided one example in the appendix of the revised submission. \n The high-level idea is that the edge with the highest weight may not appear in a maximum matching. \n But it does not matter whether $e_1$ has the highest weight or not, since all the edges whose weights are no smaller than $e_1$'s will be reduced to half of $e_1$'s weight. \n\nLine 290: Please consider providing an example here.\n \n$\\rightarrow$ We included the example in the revised submission.", " \nWe appreciate the reviewer's constructive comments.\nBefore we answer the reviewer's questions, we want to say that NeurIPS is a suitable venue for our paper because it does fall under the umbrella of the track “Theory (e.g., control theory, learning theory, algorithmic game theory)” in the Call for Papers. We can see that NeurIPS is becoming more open to Algorithmic Game Theory, Combinatorial Optimization, and other interdisciplinary research, and fair division is a typical problem therein. \nThis can also be verified by the following papers that were accepted in the past few years.\n\n Fair Scheduling for Time-dependent Resources (NeurIPS 2021)\n\n Explainable Voting (NeurIPS 2020) \n\n Exploring Algorithmic Fairness in Robust Graph Covering Problems (NeurIPS 2019) \n\n Balancing Efficiency and Fairness in On-Demand Ridesourcing (NeurIPS 2019) \n\n A Graph-Theoretic Additive Approximation of Optimal Transport (NeurIPS 2019) \n\nNext we provide a point-by-point response to the reviewer's comments.\n\n[writing]\n\n L1: Motivated by the real $\\ldots$ $\\rightarrow$ Motivated by real $\\ldots$\n\n L4: We care both $\\ldots \\rightarrow$ We care about both\n\n L5: Regarding MMS $\\ldots$ $\\rightarrow$ Regarding MMS fairness\n\n L5: $\\ldots$ does not admit finite $\\ldots \\rightarrow$ $\\ldots$ does not admit a finite $\\ldots$\n\n L20-22: Needs more explanation or context of what is meant by a graphic resource (especially i did not understand how graph neural networks are relevant here)\n \nThese papers are not purely on fair resource allocation, but all of them consider graphical structures.\n\n For example, [Ying et al., 2019] formulated a model that considers the mutual information between a Graph Neural Network's prediction and the distribution of possible subgraph structures in a given graph.\n Our intention was to use these papers to motivate why we are interested in graphical structures. \n Since they are not about fair resource allocation, we did not include more details. \n \nWe have rewritten this part in the revised version. \n\n \nL23-26: Please cite some references (e.g. related work) for this motivating example.
Also, the work on hedonic games may be relevant to discuss at some point.\n \n We wrote the motivating example and cited the corresponding papers in the revised version as follows.\n\n Peer Instruction (PI) has been shown to be an effective learning approach based on a project conducted at Harvard University, and one of the simplest ways to implement PI is to pair the students [Crouch and Mazur, 2001].\n Consider the situation when we partition students among advisors, where the advisors will adopt PI for their assigned students. Note that the advisors may hold different perspectives on how to pair the students based on their own experience and expertise, and they want to maximize the efficiency of conducting PI among their assigned students. How should we assign the students fairly to the advisors? How can we maximize the social welfare among all (approximately) fair assignments? Our results shed light on solving these two questions.\n\n[Crouch and Mazur, 2001] Crouch, C. H., and Mazur, E. Peer Instruction: Ten years of experience and results. American Journal of Physics, 2001, 69(9):970-977.\n\n We also added references on hedonic games in the second paragraph of the introduction. \n\nL52-56: EF1 was proposed by Budish, 2011 (later) and proven to always exist by Lipton et al., 2004. This is confusing. I am familiar with the literature. Perhaps there is a better way to write this? \n\nWe rewrote this as follows: \n \n We added one footnote here: The algorithm in [Lipton et al., 2004] was originally published in 2004 targeting a different property. In 2011, Budish [2011] formally proposed the notion of EF1 fairness.\n \n \nL127: ... if the weight function is highlighted. Consider revising this sentence.\n \nWe rewrote the sentence as follows: A problem instance is denoted by $\\mathcal{I} = (G, N)$. When we want to highlight the weight function, $w$ is also included as a parameter, i.e., $\\mathcal{I} = (G, N, w)$.\n \nL128-132: The imaginary experiment leaves some questions, and can perhaps be made more precise. From the description, it is not clear if the agent has knowledge of other agents' valuations and therefore of how other agents will pick the partitions. AFAIK, MMS is defined for the worst-case. Perhaps it is better to describe it as the maximum value the agent can have for the worst partition in any n-partitioning. And not involve the selections of other agents.\n \nWe rewrote this intuition in the corresponding part of the introduction. We removed the intuition here and directly introduced the formal definition. \n\nL136: $\\ldots$ if for all agent $\\ldots \\rightarrow \\ldots$ if for all agents $\\ldots$\nL137: $\\ldots$ is called MMS fairness $\\ldots \\rightarrow \\ldots$ is called MMS fair $\\ldots$\n \n \n \n", " We appreciate the reviewer's constructive comments. \nWe have followed all the comments to polish the paper, and fixed the typos. \nIn the following, we answer the reviewer's questions.\n\nQuestion: Related works on graph partition and fair division are properly cited. I do wonder if there are more closely related works that combine graph partition and fair division together?\n\nAnswer: We thank the reviewer for this question. Actually, there is a rich line of research papers that combine graph partition and fair division; see, e.g., [Bouveret et al., 2017; Suksompong, 2019; Bilò et al., 2019; Igarashi and Peters, 2019].
However, the value for the subgraph is still additive, i.e., summing up the values for the vertices/items.\nAs far as we know, the case when the value of an allocation depends on its graphical structures (like matching in our paper) has not been studied yet.\n\nQuestion: Could you give some lower bounds of the results?\n\nAnswer: This is a very good question!\n\nFor the case of homogeneous agents, despite significant effort, we were not able to provide any lower bounds.\nActually, we conjecture that there always exists an EF1 allocation that achieves the optimal social welfare. \n\nFor the case of heterogeneous agents, below we provide an instance showing a lower bound of $2$ when the agents have binary weight functions. \nConsider a square shown in Figure 3 in Appendix B.1. The maximum social welfare can be achieved by allocating the whole graph to one of the agents, resulting in a social welfare of 2. However, this allocation is not EF1.\nActually, to guarantee EF1 in this instance, we can allocate at most three vertices to one agent, resulting in a social welfare of 1.\n\nThis lower bound instance is also included in the revised submission.\n\nQuestion: Page 5, line 198, “The approximation ratio is at least 1/4”. By Lemma 3.3, why isn’t the approximation ratio 1/2?\n\nAnswer: We apologize for being unclear here. \nThe reviewer is correct that by Lemma 3.3, we lose an approximation of 1/2 when we use the greedy partition. \nHowever, before finding the maximum matching and its greedy partition, we first rounded down all the edge values to the closest power of 2, which makes the approximation ratio lose another factor of 2 by Lemma 3.4, and thus the approximation ratio is 1/4.\nWe refined this sentence in the revised submission.", " Question: My main concern to this paper is its model. I feel like the problem studied is too specific and the model has limited applications. I could understand the motivation provided by the authors. However, I still believe there are not a lot of practical scenarios where workers have to work in pairs. Why not groups of three or four, or even larger?\n\nAnswer: We thank the reviewer for the question. \nTo motivate our model, we can consider the following real-world example.\nThis example is also included in the revised version. \n\nPeer Instruction (PI) has been shown to be an effective learning approach based on a project conducted at Harvard University, and one of the simplest ways to implement PI is to pair the students [Crouch C H et al. 2001].\nConsider the situation when we partition students to advisors, where the advisors will adopt PI for their assigned students. \nNote that the advisors may hold different perspectives on how to pair the students based on their own experience and expertise, and they want to maximize the efficiency of conducting PI in their own assigned students. \nHow should we assign the students fairly to the advisors?\nHow can we maximize the social welfare among all (approximately) fair assignments?\nOur results shed light on solving these two questions.\n\n[Crouch C H et al. 2001] Crouch C H , Mazur E. Peer Instruction: Ten years of experience and results[J]. American Journal of Physics, 2001, 69(9):970-977.\n\nQuestion: For the result with 1/8-MMS, do you have any tight examples?\n\nAnswer: The $1/8$ might not be tight. The approximation ratio of $1/8$ consists of three multiplicative approximate factors of $1/2$:\n\n 1. Rounding all edge weights to powers of 2.\n\n 2. The greedy partition (i.e. 
$u(M_n) \\geq (1/2) u(M_1) \\geq (1/2) MMS$).\n\n 3. The value of MMS is halved when halving the largest weight in the graph (i.e., $MMS(\\mathcal{I}') = (1/2) MMS(\\mathcal{I})$).\n\nActually, we can prove that all three of these factors are tight, but the overall approximation ratio of $1/8$ is not tight. This is because, for example, the instance after step 1 (i.e., rounding the weights to powers of 2) does not match the tight example for step 2. \nWe conjecture that with a more involved analysis, our algorithm achieves a better-than-$1/8$ approximation. \nIn the revised version, we added a sentence in this regard.", " Question: Are there any real motivating examples/use cases for the problem? \n\nAnswer: We can consider the following real-world example to motivate our model, and we have included this example in the revised version. \n\nPeer Instruction (PI) has been shown to be an effective learning approach based on a project conducted at Harvard University, and one of the simplest ways to implement PI is to pair the students [Crouch and Mazur, 2001].\nConsider the situation when we partition students among advisors, where the advisors will adopt PI for their assigned students. \nNote that the advisors may hold different perspectives on how to pair the students based on their own experience and expertise, and they want to maximize the efficiency of conducting PI among their assigned students. \nHow should we assign the students fairly to the advisors?\nHow can we maximize the social welfare among all (approximately) fair assignments?\nOur results shed light on solving these two questions.\n\n[Crouch and Mazur, 2001] Crouch, C. H., and Mazur, E. Peer Instruction: Ten years of experience and results. American Journal of Physics, 2001, 69(9):970-977.\n\nQuestion: What is the key central novel contribution of the work?\n \nAnswer: EF- and MMS-related fairness notions have been very widely investigated, but most of the research focuses on additive valuations. Although there are several exceptions where submodular (or XoS and subadditive) valuations are studied, general combinatorial valuations, especially graph-structured ones, have not been studied so far. Our key novel contribution is to initiate this line of research and uncover some interesting future directions.\n\nComment: \"{ ..., the approach is to find a maximum matching followed by a decomposition of the independent edges. This is really the first approach one can think of and in fact, putting aside the paper, I came up with it myself in 5 mins.}\"\n\nAnswer: We agree with the reviewer that the first attempt to solve this problem is to find a maximum matching followed by a decomposition of the independent edges.\nThe reviewer is smart and came up with this general idea quickly; however, as we highlighted in Figure 1, this idea can perform arbitrarily badly.\n \nAs we have shown in the paper, the difficulty of our algorithm lies in how to gradually modify the edge weights until it produces a good matching (which may not be maximum) that can be partitioned so that the minimum bundle is large.
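For concreteness, the greedy partition invoked in the factor-of-1/2 discussion above (assign each matched edge, heaviest first, to the currently lightest bundle) can be sketched as follows. This is our illustration of the standard load-balancing subroutine, not the paper's exact pseudocode; `edge_weights` stands for the weights of the independent edges of a matching:

```python
import heapq

def greedy_partition(edge_weights, n):
    """Partition independent edge weights into n bundles, always placing the
    next-heaviest edge into the currently lightest bundle.  Per the discussion
    of Lemma 3.3 above, this step accounts for one multiplicative factor of 1/2."""
    bundles = [(0.0, i, []) for i in range(n)]  # (total weight, tiebreak id, edges)
    heapq.heapify(bundles)
    for w in sorted(edge_weights, reverse=True):
        total, i, items = heapq.heappop(bundles)
        items.append(w)
        heapq.heappush(bundles, (total + w, i, items))
    return [items for _, _, items in bundles]

print(greedy_partition([8, 4, 4, 2, 1], n=2))  # two bundles with totals 10 and 9
```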
\n\nComment: \"{..., Spending 2-3 pages on this result really underlines the technical inferiority of the work.}\"\n \nAnswer: First, we only used a single short paragraph to explain why the maximum matching + decomposition idea does not work, and the 2-3 pages are for introducing how to revise the edge weights and find a proper (decomposable) matching that may not be a maximum one, and for proving its approximation ratio.\n\nSecond, as we mentioned, the techniques here are not straightforward, and we believe it is necessary to rigorously prove all statements to ensure correctness.\n \nMoreover, we want our paper to be friendly to the audience of NeurIPS who are not theoreticians. \n\nFinally, we want to use the homogeneous setting to warm up the readers in order to give them a good understanding of our model and the techniques behind it.", " This paper studies a variant of the indivisible item allocation problem. In this variant, the items are vertices of an edge-weighted graph. An agent's valuation of a bundle is given by the weight of the maximum weight matching on the subgraph induced by the vertices in the bundle. In the homogeneous setting, the same edge-weighted graph defines the valuation of all the agents. In the heterogeneous setting, different agents have different weights for the edges in the graph. The motivation of this model is that, in a company or a department, workers may need to work in pairs. Thus, if workers are viewed as items, an agent would like to have a set of pairs of workers so that the collective contribution of each pair of workers (modeled by the weight of the edge) is large.\n\nThe authors study MMS and EF1 allocations, and the design of approximation algorithms for optimizing the social welfare. Notice that the approximation ratio is not defined in its typical way for a constrained optimization problem. In particular, the social welfare output by the algorithm is not compared with the optimal social welfare *subject to MMS/EF1*; instead, it is compared with the optimal social welfare without any fairness constraint. This is like the algorithm-design aspect of the “price of fairness”. I think this is also okay.\n\nFor the homogeneous setting, the authors provide a greedy-based algorithm that outputs a 1/8-MMS allocation. This is done by a combination of multiple techniques, including the greedy technique from scheduling-like problems, the rounding of edge weights to integer powers of 2, and the treatment of the hard special case where the maximum-value bundle contains a single matching edge. The authors also present an algorithm that returns an EF1 allocation with social welfare at least about 2/3 of the optimal social welfare. For the heterogeneous setting, the authors show that MMS cannot be satisfied for any bounded approximation ratio, even for two agents with binary weight functions. The authors also show that there is a gap of 1/n between the optimal social welfare and the best social welfare subject to EF1. However, for EF1, the authors present some nice positive results for 1) binary edge-weights and 2) two agents. In both special cases, we can obtain an EF1 allocation with 1/3 approximation to the optimal social welfare. Most technical details of these results are deferred to the appendix.\n Strength:\n1. The results are relatively complete for both the homogeneous setting and the heterogeneous setting.
For MMS, we know it is achievable (with a constant factor approximation) for the homogeneous setting, but the price of MMS is unbounded; we know it is not achievable in the heterogeneous setting. For EF1, it is always achievable. The price of EF1 is low for the homogeneous setting, and it is high for the heterogeneous setting. The authors also have completed the results for the two special cases with 1) binary edge-weights and 2) two agents. We know MMS is not achievable even when both 1) and 2) hold, and the price of EF1 is low if either 1) or 2) holds. I would say the results are quite complete.\n2. The 1/8-approximation algorithm for MMS is a neat one, although it is not very technically complicated. I have not checked the details in the appendix, so I am uncertain about the technical novelty of the remaining results. Nevertheless, it seems to me that there is some technical novelty in this paper.\n\nWeakness:\n1. My main concern with this paper is its model. I feel like the problem studied is too specific and the model has limited applications. I could understand the motivation provided by the authors. However, I still believe there are not a lot of practical scenarios where workers have to work in *pairs*. Why not groups of three or four, or even larger?\n2. The complexity results (e.g., hardness-of-approximation) are mostly absent from this paper. The authors provide many tight examples for the approximation algorithms designed in this paper. I think this paper could be improved by a lot if some lower-bound results could be proved. However, I do not view this as a major weakness of the paper.\n\nOverall, I think this is a borderline paper for a strong conference like NeurIPS. I like the fact that the results form a relatively complete landscape, and I also like the techniques behind those algorithms. On the other hand, I am uncertain about the significance of the problem studied in the paper (as I have addressed in 1 of “weakness”). I can see the motivation the authors provided, and I am convinced that the model may have some applications. However, I still do not think it is significant enough.\n For the result with 1/8-MMS, do you have any tight examples? The authors have addressed this in the conclusion section, where a list of future directions is provided. I think this is satisfactory.", " The paper considers a new variant of the fair resource allocation problem: given a set of resources as vertices of a graph, the goal is to partition the resources among N agents -- an agent's utility is given by the maximum matching on the induced subgraph of its partition. They consider two fairness measures on the utilities received by each agent: maximin share (MMS) - where the minimum utility of agents is maximized, and envy-freeness up to one item (EF1) - where no agent envies others' allocations after removing up to one item.\n\nThe main results of the work are the following: a 1/8 approximation algorithm for the MMS version of the problem where all the agents are identical, and a constant approximation for the EF1 variant. The work also gives some hardness results and better approximations in certain special cases. The paper is reasonably well written and easy to follow. Unfortunately, there are multiple severe issues with it.\n\nFirstly, I am really not convinced of the motivation behind defining the problem. The motivating example of \"pairing up employees\" is really weak and sounds made-up to fit the problem definition.
\n\nIn the spirit of a theoretical work, the motivation can be ignored if there are sufficiently interesting results. Unfortunately, the algorithms and analysis are just a rehashing of existing ideas. \n\nFor instance, for the MMS version of the problem, the approach is to find a maximum matching followed by a decomposition of the independent edges. This is really the first approach one can think of and in fact, putting aside the paper, I came up with it myself in 5 mins. The paper then elaborates on a way to decompose the independent edges into N parts. But this is a standard problem in approximation algorithms used, for example, in load balancing and several other scheduling problems. Indeed, in bin packing, adding items to the lowest-occupied bin yields a simple 2-approximation, and these results are known from the 1960s. Spending 2-3 pages on this result really underlines the technical inferiority of the work. \n As mentioned above, the authors must address two issues with the paper: are there any real motivating examples/use cases for the problem? Secondly, what is the key central novel contribution of the work? N/A", " This paper deals with a novel variant of the fair division problem which concerns the allocation of indivisible items to multiple agents with complementarities. Each agent’s valuations are given by a weighted undirected graph where there is a node for each item and the edge weights correspond to the value the agent receives when the items corresponding to the endpoints of the edge are paired together. The value for a bundle of items is given by the total value of the maximum weight matching that can be obtained in the subgraph induced by the nodes corresponding to the items in the bundle. The authors motivate this setting by the example of departments (agents) in an engineering firm having to pair up the engineers (resources / items) assigned to the department, and other settings where employees must be paired to achieve tasks that generate positive value.\n\nThe main results deal with computing fair, i.e., envy-free up to one item [EF1] and maximin-share [MMS], and efficient, i.e., social welfare maximizing, allocations of items to agents. The authors provide several interesting positive and negative results on efficiently computing allocations that guarantee a (fraction of) agents’ MMS share or total social welfare under constraints such as guaranteeing EF1 under settings with homogeneous (all agents have identical valuation functions / graphs) or heterogeneous (agents have possibly different valuations) agents. Strengths:\n\n[Conceptual contributions]\n+ The setting is novel and possibly interesting to the overall research community studying fair division of indivisible items.\n+ The proposed model presents interesting directions for future work, as the authors note. In particular, other valuation functions that depend on the subgraph induced by the items in a bundle open several interesting possibilities.\n\n[Technical contributions]\n+ The theoretical results are non-trivial and the proof sketches in the main body are convincing.\n+ The results in the paper are therefore a good first step.\n\nWeaknesses:\n- [Relevance] I am struggling to find the relevance to NeurIPS, although I will leave this matter for later discussion, and have not allowed it to affect my final recommendation at this time.\n- [Motivation] The motivating examples are unconvincing and lack any citations or references.
For example, the authors claim that the motivating problem of a company with departments who have preferences over how to pair engineers is a real-world scenario. I think I understand what they refer to, but the authors provide no formal references or citations.\n- [Significance] While the technical results are interesting, due to the poor presentation, the significance of the results is not clear.\n\n[Presentation]\n- Poor quality of writing overall. See detailed comments below.\n- Several theorem statements are imprecise. Detailed comments below.\n- Several arguments are vague. I understand that there is a space limitation, but perhaps this can be revised in a future iteration. Detailed comments below.\n- There are some interesting technical results, but perhaps the most interesting and significant results can be highlighted more prominently. In particular, the results for heterogeneous agents on the \"price of EF1\" in terms of social welfare and the impossibility of guaranteeing any constant fraction of the MMS share are interesting negative results.\n- A conjecture is presented in Section 1.1 that an EF1 and social welfare maximizing allocation always exists, but no argument is provided in the relevant Section 3.2 [I apologize if I missed it, and am happy to be corrected].\n\nDetailed comments:\n\n[Writing]\n- L1: Motivated by the real ... -> Motivated by real ...\n- L4: We care both ... -> Consider revising\n- L5: Regarding MMS ... -> Consider revising\n- L5: ... does not admit finite ... -> ... does not admit a finite ...\n- L20-22: Needs more explanation or context of what is meant by a graphic resource (especially I did not understand how graph neural networks are relevant here)\n- L23-26: Please cite some references (e.g. related work) for this motivating example. Also, the work on hedonic games may be relevant to discuss at some point.\n- L52-56: EF1 was proposed by Budish, 2011 (later) and proven to always exist by Lipton et al., 2004. This is confusing. I am familiar with the literature. Perhaps there is a better way to write this?\n- L127: ... if the weight function is highlighted. Consider revising this sentence.\n- L128-132: The imaginary experiment leaves some questions, and can perhaps be made more precise. From the description, it is not clear if the agent has knowledge of other agents' valuations and therefore of how other agents will pick the partitions. AFAIK, MMS is defined for the worst-case. Perhaps it is better to describe it as the maximum value the agent can have for the worst partition in any n-partitioning. And not involve the selections of other agents.\n- L136: ... if for all agent ... -> ... if for all agents ...\n- L137: ... is called MMS fairness ... -> ... is called MMS fair ...\n- L143: Accordingly, people mostly ... -> Please consider revising\n- L145: The line claims that EF1 is widely accepted and studied. But no citations or references are provided. I suggest that this be motivated and discussed earlier (e.g. Introduction or Related Works) and not repeated here, and only formal definitions and helpful examples be provided in the Preliminaries.\n- L199: ... decent ... -> Not clear what this means\n- L200: ... far smaller ... -> Consider revising\n- Algorithm 3: The comments for the different cases are not made consistently. Please consider revising.\n- L299: ... if so we do exchange ... -> Please consider revising.\n- L302: ... and so on so forth. -> ... 
and so on.\n\n[Theorem statements]\n- L164: I believe this only applies to the case of homogeneous agents, which is not clear from the theorem statement.\n- L222: ... two cases happens. -> ... two cases hold true. (or something similar)\n- L268 and L270: Please consider specifying that this is for the case of heterogeneous agents.\n- L297: ... j envies i ... -> ... j envy i ...\n- L304: Please clarify whether this result only holds for binary weights.\n\n[Arguments]\n- L158-160: Please consider making this more precise and providing a brief argument and cite the (variant of the) Partition problem.\n- L169: This task is NP-hard ... -> Why? [I think I know why] Please specify and consider rewriting to avoid repetitions.\n- L188-190: I don't think this statement is as helpful as it was perhaps intended to be. It is too vague.\n- L211: Not clear what optimal means here (could be MMS or max SW). Is O any MMS allocation?\n- L226-227: Please consider revising this. It is hard to parse this sentence.\n- L245: In the equation, please consider revising: for any v ... -> for every v ... Currently, it reads like EFX\n- L254-261: Please consider using a proof sketch environment if possible and avoid mixing descriptions of the algorithm with the proof.\n- L271 and Theorem 4.1: Please consider including the here, since it is indeed a strong negative result.\n- L293: It is not clear why this matters. Please consider revising and illustrating this with an example.\n\n[Others]\n- Algorithm 1: It appears that e_1 is always the highest weight edge (due to it being the only edge in M_1) and in line 5, H is the set of all highest weight edges.\n- Line 290: Please consider providing an example here.\n No significant questions. Please see detailed comments. The authors do not identify the societal impact of this work. [Neutral]", " This paper studies mechanisms for the graphical resource allocation problem with two types of fairness: approximate maximin share (MMS) and envy-freeness up to one item (EF1). The problem is defined as follows: the resources are represented as nodes in an undirected graph, and the goal is to allocate (or partially allocate) the nodes to n agents. Each agent has an edge weight function for each edge in the graph, and an agent’s utility for one partition is the weight of a maximum matching in the subgraph induced by the nodes in the partition. \n\nThe authors first consider a setting of homogeneous agents in which all agents have identical valuation functions. They show that a naive greedy partition is an MMS allocation for unweighted graphs, but does not have any bounded approximation guarantee in more general cases. Then they propose an algorithm that computes a ⅛-MMS allocation in polynomial time. Because any bounded approximation ratio for MMS fairness could have unbounded social welfare loss, EF1 is also studied to preserve high social welfare while providing a fairness guarantee. Theorem 3.6 shows that a polynomial-time algorithm returns an EF1 allocation with approximately optimal social welfare. \n\nFor agents that have heterogeneous valuation functions, the authors first show some negative results: no algorithm has a bounded approximation for MMS, and no algorithm has a better than 1/n approximation of social welfare under EF1 fairness. Then they consider two special cases: 1) when the agents have binary weight functions, Algorithm 3 returns an EF1 allocation with 1/3-approximate social welfare. 
2, In the case that there are only two agents, Algorithm 4 returns an EF1 allocation with a tight ⅓ approximation ratio to the optimal social welfare.\n * Originality: This work proposes a setting of fair division on indivisible graphical items where agents have combinatorial valuations. Two commonly used fairness concepts MMS and EF1 are studied. Related works on graph partition and fair division are properly cited. I do wonder if there are more closely related works that combine graph partition and fair division together?\n* Quality: The notations are well defined. The theorems / lemmas are neatly presented and proved. \n* Clarity: This paper is well written and organized. Some minor comments:\n * Page 1, line 15: “ensuring constant fraction” -> ensuring a constant fraction\n * Page 2, line 49: “significant amount of” -> a significant amount of\n * Page 2, line 53: “widely accepted and studies” -> widely accepted and studied\n * Page, line 83: “for two-agent case” -> for the two-agent case\n * Page 9, line 295 “smallest ” -> the smallest; page 9 line 302: “second smallest ” -> the second smallest\n* Significance: The results in both the homogeneous and heterogeneous settings are interesting and could be building blocks for future works in this line. * Are there more closely related works that combine graph partition and fair division together? See my comments in “Strengths And Weaknesses”.\n* Could you give some lower bounds of the results? See my comments in “Limitations”. \n* Page 5, line 198: “The approximation ratio is at least 1/4”. By Lemma 3.3, why isn’t the approximation ratio ½?\n One thing could make this paper stronger is to have more lower bound results. Theorem 4.4 showed that the 1/3 -approximation to social welfare with EF1 is tight, but most of the results in this paper are without lower bounds. It would be interesting to see some lower bounds, even loose ones. The authors have addressed this in Section 5 “Future Directions”." ]
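To make the valuation model summarized in the review above concrete, here is a minimal Python sketch (using networkx) of an agent's utility as the weight of a maximum matching in the subgraph induced by a bundle of nodes, together with a naive EF1 check between two bundles. The helper names, the symmetric weight function, and the toy cycle graph are our own illustrative assumptions, not the paper's code.

```python
import networkx as nx

def bundle_utility(G, weight_of, bundle):
    """Utility = weight of a maximum matching in the subgraph induced by `bundle`.
    `weight_of(u, v)` is assumed symmetric."""
    H = G.subgraph(bundle).copy()
    for u, v in H.edges():
        H[u][v]["w"] = weight_of(u, v)
    matching = nx.max_weight_matching(H, weight="w")
    return sum(weight_of(u, v) for u, v in matching)

def is_ef1(G, weight_of, bundle_i, bundle_j):
    """EF1 for goods: i does not envy j after removing some single item from j's bundle.
    Both bundles are sets of nodes."""
    u_own = bundle_utility(G, weight_of, bundle_i)
    if u_own >= bundle_utility(G, weight_of, bundle_j):
        return True
    return any(
        u_own >= bundle_utility(G, weight_of, bundle_j - {g}) for g in bundle_j
    )

if __name__ == "__main__":
    G = nx.cycle_graph(6)           # toy resource graph on nodes 0..5
    w = lambda u, v: 1.0            # unweighted (binary) case
    A, B = {0, 1, 2}, {3, 4, 5}
    print(bundle_utility(G, w, A))  # 1.0: the two induced edges share node 1
    print(is_ef1(G, w, A, B))       # True for this symmetric split
```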
[ -1, -1, -1, -1, -1, -1, 5, 3, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "yVuM6w6C-LC", "h4qXVKVuMNq", "CPlFahNUQ3q", "NMLd8DxhEnT", "QZm7Rt1sdGI", "DlfpV41Mdyz", "nips_2022_q-tTkgjuiv5", "nips_2022_q-tTkgjuiv5", "nips_2022_q-tTkgjuiv5", "nips_2022_q-tTkgjuiv5" ]
nips_2022_2yvUYc-YNUH
Test Time Adaptation via Conjugate Pseudo-labels
Test-time adaptation (TTA) refers to adapting neural networks to distribution shifts, specifically with just access to unlabeled test samples from the new domain at test-time. Prior TTA methods optimize over unsupervised objectives such as the entropy of model predictions in TENT (Wang et al., 2021), but it is unclear what exactly makes a good TTA loss. In this paper, we start by presenting a surprising phenomenon: if we attempt to $\textit{meta-learn}$ the ``best'' possible TTA loss over a wide class of functions, then we recover a function that is $\textit{remarkably}$ similar to (a temperature-scaled version of) the softmax-entropy employed by TENT. This only holds, however, if the classifier we are adapting is trained via cross-entropy loss; if the classifier is trained via squared loss, a different ``best'' TTA loss emerges. To explain this phenomenon, we analyze test-time adaptation through the lens of the training loss's $\textit{convex conjugate}$. We show that under natural conditions, this (unsupervised) conjugate function can be viewed as a good local approximation to the original supervised loss and indeed, it recovers the ``best'' losses found by meta-learning. This leads to a generic recipe that can be used to find a good TTA loss for $\textit{any}$ given supervised training loss function of a general class. Empirically, our approach dominates other TTA alternatives over a wide range of domain adaptation benchmarks. Our approach is particularly of interest when applied to classifiers trained with $\textit{novel}$ loss functions, e.g., the recently-proposed PolyLoss (Leng et al., 2022) function, where it differs substantially from (and outperforms) an entropy-based loss. Further, we show that our conjugate-based approach can also be interpreted as a kind of self-training using a very specific soft label, which we refer to as the $\textit{conjugate pseudo-label}$. Overall, therefore, our method provides a broad framework for better understanding and improving test-time adaptation. Code is available at https://github.com/locuslab/tta_conjugate.
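As a concrete illustration of the two objectives the abstract contrasts, the following NumPy sketch computes a temperature-scaled softmax-entropy (TENT-style) loss and the conjugate pseudo-label self-training loss for a cross-entropy-trained classifier, where the two coincide. The temperature value and function names are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def tent_entropy_loss(logits, T=1.0):
    """Temperature-scaled softmax entropy (TENT-style), averaged over a batch."""
    p = softmax(logits / T)
    return -(p * np.log(p + 1e-12)).sum(axis=-1).mean()

def conjugate_pl_loss(logits, T=1.0):
    """Self-training with conjugate pseudo-labels. For a cross-entropy-trained
    classifier the pseudo-label is the (temperature-scaled) softmax, so with
    the pseudo-label treated as a constant this is numerically identical to
    the entropy loss above."""
    pseudo = softmax(logits / T)   # conjugate pseudo-label (conceptually stop-grad)
    p = softmax(logits / T)
    return -(pseudo * np.log(p + 1e-12)).sum(axis=-1).mean()

logits = np.random.randn(8, 10)    # batch of 8 samples, 10 classes
print(tent_entropy_loss(logits, T=2.0), conjugate_pl_loss(logits, T=2.0))
```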
Accept
All reviewers agree this paper presents a novel and principled approach to test time adaptation losses. All reviewers find the paper clearly written and contributions meaningful. I suggest acceptance.
train
[ "whpPKH9MO4", "OfHOfisYexO", "CdoWUnJldN0", "r2XLsm8si2l", "zSI5xgZQKn2", "8hOnTCJwDCp", "HuEsDLBRg0X", "ktjUR66JVXF", "ZcX6kJOuMl2", "GThCu4NlqT", "raCnVBoxDC", "QyW0P-HWZSt", "0SJ_hZwUh9J", "TNOzB9HXvG4" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We will add a detailed discussion on the current limitations of our work and the interesting future directions (as also discussed in the response to reviewer JZZE) to the paper. \n\nWe again thank the reviewer for their detailed feedback and for kindly increasing the score after our response.", " Thanks for addressing my concerns and updating the paper with additional experiments. I have increased my score.\n\nI agree that the meta-learned loss is “best”, my concern always was regarding conjugate loss recovering this “best” loss. It seems to hold true in your experiments but as the authors agreed, there could be other distribution shifts where meta-learning will still obtain the “best” loss but conjugate loss might not recover it. I agree this can be a direction for future work but should be addressed as a limitation in the paper. For example, the discussion in the limitation section can emphasize that for some distribution shifts, the following assumption from the paper might not hold (Li227): “…as long as the learnt source parameters θ is a reasonable approximation to the true optimal θ^opt on the shifted domain, self-training with the conjugate pseudo-labels provides a reasonable proxy…”\n\n", " Understood. \n", " Thanks for your feedback and the updated paper. The rebuttal clarify my concerns. I confirm my accept score.\n\n", " **Question 4 :** \n4(a) : We agree with the reviewer that the role of several heuristics in test-time adaptation is not completely understood. This includes updating only the batch norm layers as opposed to all the parameters [2]. In this work, via the meta-learning experiments, we discovered that temperature scaling is another important hyper-parameter that improves the performance of all the previous baselines as well. However, we do believe that our findings about temperature scaling are orthogonal to the contribution of conjugate pseudo-label framework. At a high level, test-time adaptation has to be appropriately regularized to prevent the updates over batches from taking the model too far: updating only a few batch norm parameters is one way to do that, and perhaps temperature scaling provides a similar beneficial regularization effect by making the network predictions on unlabeled inputs less confident. Understanding the role of these heuristics more concretely would be an interesting direction for future work. \n\n4(b) : In this work we considered the loss functions which (1) factorize over the test inputs independently, and (2) are functions of the predictions / logits. Examples of such loss functions include softmax-entropy, robust pseudo labeling[1], etc. The diversity regularization in SHOT-IM and feature alignment approaches considers loss functions defined across a set of samples and over intermediate representations, and hence cannot be covered by our framework. However, diversity regularization generally involves an additive loss term to the softmax-entropy minimization, and hence is complementary to our proposed conjugate loss. \nExtending our meta-learning framework to more involved setups where we include the intermediate representations and consider learning functions over a batch of input while accounting for their interactions is worth investigating. \n\nWe have updated the manuscript accordingly with regards to the minor comments.\n\nWe again thank the reviewer for their detailed comments on our work, identifying valid limitations and posting intriguing questions. 
We are more than happy to engage in further discussions during the review period. \n\n[1] Evgenia Rusak, Steffen Schneider, George Pachitariu, Luisa Eck, Peter Vincent Gehler, Oliver Bringmann, Wieland Brendel, and Matthias Bethge. If your data distribution shifts, use self learning, 2022. \n[2] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization, 2021. \n", " We thank the reviewer for carefully reviewing our work and appreciating the key insights and contributions of our work. Below we try to clarify some of the points raised by the reviewer.\n\n**Question 1 :** Appendix A3 discusses the meta learning experiments in further details, mentioning the architecture for the meta-objective, the exact optimization algorithm (Algorithm Box 2) and the hyper-parameters used. We have further updated Appendix A3 to specifically include the clarifications as requested by the reviewer. \n1(a) : We update the same portion of the network (shift and bias parameters in BatchNorm layers) as with TENT. \n1(b) : Algorithm Box 2 lists the exact optimization setup. We used Adam optimizer, with a learning rate of $0.001$, batch size of $200$ and trained for $100$ epochs. \n1(c) : We used different batches and have corrected the manuscript. \n1(d) : Training the above setup (Algorithm 2) to convergence repeatedly gave us similar meta-loss functions (upto variance in the scale and temperature) across various architectures, datasets and noise types (see below). We find that using different batches (rather than same batch) for inner loop (updating the classifier) and the outer loop (updating the meta-loss) gives better performance. For the meta loss architecture, we find that using a neural network with a Transformer Encoder block as the backbone gives best performance. \n\nOur meta-learning experiments (Appendix A10) consistently recovered the temperature scaled convex conjugate across a range of settings we tried : \n* Datasets : CIFAR10, CIFAR100, SVHN->MNIST, SVHN->USPS.\n* Architectures : ResNet26, ResNet50\n* Different Shifts of various intensity: Various validation noises in common corruptions dataset i.e. speckle noise, gaussian blur, saturate and spatter of various intensity (1, 2, 3, 4, 5). \n\nIn Appendix A10, Figure 7 shows the learnt meta-loss when adapting from SVHN$\\\\rightarrow$MNIST and SVHN$\\\\rightarrow$USPS. Figure 8, 9 and 10 show the learnt meta-loss for various dataset, architectures and shifts of different intensity as mentioned above in the common corruptions dataset. Similarly Figure 11 shows the leant meta-loss when the source classifier is trained using squared loss, again recovering the quadratic loss as suggested by the conjugate formulation. \n\n1(e) Prediction score refers to the output of the classifier. For a classifier trained with cross-entropy loss, prediction score is the logit whereas for squared loss, it will be the direct output (we simply avoid using the term logit for source training losses other than cross-entropy). \n\n**Online vs Offline Adaptation Setup :** In the meta learning procedure (Algorithm 2), we mimic the online adaptation setup in the inner loop. Specifically, in each epoch, in the inner loop, a sequential batch of data is used to update the classifier with the meta loss. Then we update the meta loss using some supervised surrogate loss (Equation 1) over the updated classifier’s predictions on labeled samples from the shifted data. 
The inner loop in each epoch would thus correspond to one online adaptation process and we repeat this until the convergence for the learnable meta-loss (i.e. multiple epochs “only” during meta-learning the loss). \n\n**Question 2 :** \n2(a) : We would like to clarify that Equation 2 is a generic form in which various loss functions commonly used in machine learning can naturally be expressed by definition. This includes cross-entropy loss (L162), squared loss (L164), exponential loss (L567) and the poly-loss (L215). We have added a detailed derivation for expressing cross-entropy and squared loss in the form of equation 2, as well as a derivation of their convex conjugates in Appendix A1. \n2(b) : We used batch of data for each update. N represents total number of batches. We have updated the algorithm 1 accordingly. Thanks for pointing this out. \n\n**Question 3 :** \n3(a) : We update the same portion of the network as with TENT, i.e. the learnable scale and shift parameters of the batch normalization layers. Yes, it is the same for all the reported methods. \n3(b) : Yes, we treat the test-data in streaming fashion and adapt the model only once for each sample (Algorithm 3 in Appendix). ", " We thank the reviewer for their generous comments on our work and appreciating the test-time adaptation setup in general.\n\nWe used ResNet26 and ResNet50 for CIFAR10/100 and ImageNet respectively following the previous work as mentioned in L248-L250. Table 6,7,8 in the appendix specify the exact hyper-parameters we used for all the baselines and our method, found by gridsearch whose details are mentioned in L275-L276 in the main manuscript. Standard deviation is reported across 5 independent runs. Note that we have submitted our code in supplementary and also will be open sourcing it if the work is accepted.\n\n**Best TTA loss :** To be clear, we used the term “best” in the sense that it is what is recovered by the meta-learning procedure. We do agree that this notion of “best” is an empirical one. We tried to reflect this in our writing (L146), and have clarified this in further detail in the revision. That said, we would like to note that because meta-leant loss is parameterized by a neural network, it hence belongs to a very powerful class of functions which potentially contains (and can learn) much more complex TTA loss functions. These could have been specific to the source model and the particular distribution shift. Our claim for conjugate loss being the “best” loss is based on the fact that it consistently recovers the learnt meta-loss (upto variance in temperature and scale) across a wide range of settings as mentioned in Appendix A10. \nWe experimented with various datasets (CIFAR10, CIFAR100, SVHN->MNIST, SVHN->USPS), architectures (ResNet26, ResNet50) and different shifts (various validation noises in corruptions dataset) of various intensity, consistently recovering the temperature scaled convex conjugate. Showing the optimality of conjugate loss more concretely (along with origin of temperature scaling) is one of our limitations and worth exploring in future. \n\nWe again thank the reviewer for carefully reviewing our work posting intriguing question. We are more than happy to engage in further discussions during the review period.\n", " **SVHN$\\\\rightarrow$MNIST :** As requested by the reviewer, we performed additional experiments on SVHN $\\\\rightarrow$ MNIST and SVHN$\\\\rightarrow$USPS dataset when the source classifier is trained using cross-entropy loss. 
In the absence of validation data, we used a fixed learning rate of $0.01$ with Adam optimizer and T=$2$ (informed guess based on average temperature values in corruptions dataset experiments) across all the baselines. \n\n| Dataset | Temperature | Source Error | Hard PL | Robust PL | MEMO | Conjugate PL (ENT) |\n|:-------------:|:------------:|:------------:|:-------:|:---------:|:---------:|:------------------:|\n| SVHN -> MNIST | No | 34.17 | 21.54 | 27.44 | **10.67** | 14.41 |\n| | Yes | 34.17 | 21.54 | 13.26 | **9.36** | **9.26** |\n| SVHN -> USPS | No | 31.84 | 26.06 | 26.81 | 22.72 | **22.57** |\n| | Yes | 31.84 | 26.06 | **22.32** | 22.42 | **22.27** |\n\nHere again we observe that on SVHN $\\\\rightarrow$ MNIST benchmark, without temperature scaling, MEMO ($10.67\\\\%$ error) outperforms softmax-entropy ($14.41\\\\%$ error). However, similar to the observation in Table 1, we see that with temperature scaling, softmax-entropy minimization ($9.26\\\\%$ error) is able to match the performance of MEMO ($9.36\\\\%$ error). Further, on the SVHN $\\\\rightarrow$ USPS benchmark, softmax-entropy (conjugate) and MEMO perform similar even without temperature scaling. \n\nWe have updated the manuscript accordingly with regards to the minor comments. \n\nWe would like to thank the reviewer again, for their detailed comments on our work, identifying valid limitations and posting intriguing questions. We would be more than happy to engage in further discussion during the review period. \n\n[1] Evgenia Rusak, Steffen Schneider, George Pachitariu, Luisa Eck, Peter Vincent Gehler, Oliver Bringmann, Wieland Brendel, and Matthias Bethge. If your data distribution shifts, use self learning, 2022. \n[2] Marvin Zhang, Sergey Levine, and Chelsea Finn. Memo: Test time robustness via adaptation and augmentation, 2021. \n[3] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization, 2021. \n\n", " We thank the reviewer for appreciating the novelty of our work and posting intriguing questions. Below we try to clarify the points raised by the reviewer.\n\n**Temperature Scaling :** We agree with the reviewer that the role of several heuristics in test-time adaptation is not completely understood. This includes updating only the batch norm layers as opposed to all the parameters [3]. In this work, via the meta-learning experiments, we discovered that temperature scaling is another important hyper-parameter that improves the performance of all the previous baselines as well. However, we do believe that our findings about temperature scaling are orthogonal to the contribution of conjugate pseudo-label framework. At a high level, test-time adaptation has to be appropriately regularized to prevent the updates over batches from taking the model too far: updating only a few batch norm parameters is one way to do that, and perhaps temperature scaling provides a similar beneficial regularization effect by making the network predictions on unlabeled inputs less confident. Understanding the role of these heuristics more concretely would be an interesting direction for future work. 
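For concreteness, the regularization choices discussed in the paragraph above (adapting only the BatchNorm affine parameters, with a temperature-scaled entropy objective) might look like the following PyTorch sketch. The toy model, learning rate, and temperature here are illustrative assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn as nn

def collect_bn_params(model):
    """Gather only the learnable scale/shift parameters of BatchNorm layers."""
    params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            params += [p for p in (m.weight, m.bias) if p is not None]
    return params

@torch.enable_grad()
def adapt_step(model, x, optimizer, T=2.0):
    """One adaptation step on an unlabeled batch x, minimizing the
    temperature-scaled softmax entropy of the model's predictions."""
    logits = model(x)
    probs = torch.softmax(logits / T, dim=-1)
    loss = -(probs * torch.log(probs + 1e-12)).sum(dim=-1).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical toy classifier; only BN affine parameters are left trainable.
model = nn.Sequential(nn.Linear(32, 64), nn.BatchNorm1d(64), nn.ReLU(), nn.Linear(64, 10))
for p in model.parameters():
    p.requires_grad_(False)
for p in collect_bn_params(model):
    p.requires_grad_(True)
opt = torch.optim.Adam(collect_bn_params(model), lr=1e-3)
print(adapt_step(model, torch.randn(16, 32), opt))
```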
\nAdditionally, our claim that conjugate PL are less sensitive to temperature scaling was based on PolyLoss ImageNet results, but we do agree with the reviewer that this is not a global phenomenon, and so we have removed this claim entirely from the manuscript.\n\n**Regarding Validation Data :** Yes, we need validation data to tune the temperature scaling parameter, but to the best of our knowledge, all previous adaptation methods also require some form of validation data [1][2]. For example, the reviewer asked about TENT [3] – while their paper fixes a learning rate of 0.001, we found in our experiments that the performance is indeed sensitive to the choice of learning rate. So we think that our findings on temperature scaling show that it is a hyper-parameter worth tuning, along with learning rate, when doing test-time adaptation. Furthermore, temperature scaling is not necessary for using our proposed conjugate pseudo-labels. We still find performance gains over other baselines in the PolyLoss-trained classifiers even without temperature scaling, but we can improve performance across the board via temperature scaling. \n\n**Best TTA loss :** To be clear, we used the term “best” in the sense that it is what is recovered by the meta-learning procedure. We do agree that this notion of “best” is an empirical one. We tried to reflect this in our writing (L146), and have clarified this in further detail in the revision. That said, we would like to note that because the meta-learnt loss is parameterized by a neural network, it belongs to a very powerful class of functions which potentially contains (and can learn) much more complex TTA loss functions. These could have been specific to the source model and the particular distribution shift. Our claim for conjugate loss being the “best” loss is based on the fact that it consistently recovers the learnt meta-loss (up to variance in temperature and scale) across a wide range of settings as mentioned below. Showing the optimality of conjugate loss more concretely (along with the origin of temperature scaling) is one of our limitations and worth exploring in the future. \n\n**Best TTA loss and various shifts :** Our experiments (Appendix A10) consistently recovered the temperature-scaled convex conjugate across a range of settings we tried:\n* Datasets : CIFAR10, CIFAR100, SVHN->MNIST, SVHN->USPS.\n* Architectures : ResNet26, ResNet50.\n* Different Shifts of various intensity: Various validation noises in the common corruptions dataset, i.e. speckle noise, Gaussian blur, saturate and spatter of various intensity (1, 2, 3, 4, 5). \n\nIn Appendix A10, Figure 7 shows the learnt meta-loss when adapting from SVHN$\\rightarrow$MNIST and SVHN$\\rightarrow$USPS for a cross-entropy-trained source classifier. Figures 8, 9 and 10 show the learnt meta-loss for various datasets, architectures and shifts of different intensity as mentioned above in the common corruptions dataset. Figure 11 shows the learnt meta-loss when the source classifier is trained using squared loss, again recovering the quadratic loss as suggested by the conjugate formulation. That said, we do agree that one could always construct a toy dataset where a shift is specifically added based on a pre-known optimal TTA loss. \n\nIn general, we believe that our work unearths an interesting connection between the choice of TTA loss and the source classifier training. 
The connection between the convex conjugate of the source training loss and the learnt meta-loss lays out interesting directions which can act as a precursor for future work on various aspects still open to better understanding, as even pointed out by the reviewer. \n\n\n", " We thank the reviewer for carefully reviewing our work and acknowledging the interesting results across datasets. Below we discuss some additional limitations and future research directions for our work as requested by the reviewer. \n\nIn this work, we proposed a general test-time adaptation loss, based on the convex conjugate formulation which in turn was motivated by the intriguing meta-learning experiments. The fact that meta-learning recovers the proposed loss hints at some kind of optimality of the loss, and we can prove that for a broad set of loss functions, the proposed unsupervised conjugate loss is close to the oracle supervised loss. However, we do not yet have a complete understanding of what the optimal test-time adaptation method is and why. \n\nAchieving good test-time adaptation generally involves several heuristics like updating only batch norm parameters (as opposed to all parameters) [2], beyond the choice of the adaptation loss itself. While our work focused on the loss, via the meta-learning experiments, we discovered that temperature scaling is another important hyper-parameter that improves the performance of all previous baselines as well. At a high level, test-time adaptation has to be appropriately regularized to prevent the updates over batches from taking the model too far: updating only a few batch norm parameters is one way to do that, and perhaps temperature scaling provides a similar beneficial regularization effect by making the network predictions on unlabeled inputs less confident. Understanding the role of these heuristics more concretely would be an interesting direction for future work.\n\nIn this work we considered source training loss functions of the form which can be represented by Equation 2. A natural extension is to broaden the class of loss functions considered. This includes:\n* Studying the case when the conjugate of the source training loss function is not independent of y (as even pointed out by the reviewer). However, this might not be crucial since the current formulation covers most of the general loss functions used in machine learning. \n* Our current meta-learning framework is limited to learning a test-time adaptation loss function over the predictions of the classifier for each individual input independently. More involved setups where we include the intermediate representations and consider learning functions over a batch of inputs while accounting for their interactions are worth investigating.\n\nMoving on to high-level extensions, a question which in general is applicable to the larger line of work around self-training is to characterize under what sort of real distribution shifts self-training and pseudo-labeling-based approaches would work. [1] provides some insights under Gaussian settings. \n\nThe experimental setup in this paper considered evaluating conjugate pseudo-labels for adapting to distribution shifts. But it would also be interesting to apply conjugate pseudo-labels to the general case of standard semi-supervised and self-supervised learning.\n\nWe have updated the manuscript with a brief version of the above discussion. We will add this detailed discussion to the additional page in the camera-ready version (if the work gets accepted). 
\n[1] Ananya Kumar, Tengyu Ma, and Percy Liang. Understanding self-training for gradual domain adaptation, 2020. \n[2] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization, 2021. \n\n\n", " The authors propose an approach for soft-labeling test data for test-time adaptation (TTA) purposes based on \"conjugate pseudolabels\". Motivated by the result that meta-learning the TTA objective given test-time labels roughly recovers the (temperature-scaled) training objective, the authors show that if the training objective can be expressed as a conjugate function $L(x,y) = f(h(x)) - y^Th(x)$ ($x$ inputs, $y$ labels), then the optimal test-time loss can be approximated by $L(y)_{conj} = -f^*(\nabla f(h(x)))$ ($f^*$ the conjugate function of $f$), which is independent of the label, and implies the \"conjugate pseudo-labels\" (the usual softmax of h(x) for CE loss). TTA results on CIFAR-10-C, CIFAR-100-C, and ImageNet-C demonstrate that conjugate pseudo-labels consistently outperform other PL approaches. Originality: 7\nQuality: 7\nClarity: 7\nSignificance: 7\n\n- The paper is well written and the approach and results are interesting.\n- Using conjugate PLs for polyloss (novel) delivers on-par or better results than existing methods\n- The conjugate PL approach is adequately developed in the paper, but could benefit significantly from a section discussing current limitations, the more general case of $L(y)_{conj}$ not independent of $y$, the dual view, and future research directions. See S&W section.", " Test-time training is an emerging and promising approach for building robust machine learning models to distribution shifts. As the model cannot access ground-truth labels, most prior test-time adaptation works design unsupervised loss functions, e.g. entropy minimization used in TENT. However, as claimed by this paper, it is unclear what makes a good unsupervised loss in the setting of TTA. \n\nThis paper contributes to this important problem in the following three ways. (1) They show that the meta-learned unsupervised objective, which can be regarded as a best unsupervised loss, recovers the well-known entropy minimization when the base classifier is trained with cross entropy, but recovers a different unsupervised loss function when a different supervised loss function is used. The results suggest that the best loss function differs depending on the choice of the supervised loss function. (2) They analyze the phenomenon through the lens of the convex conjugate, and show that it recovers the existing test-time adaptation strategies, which is similar to the objective found by meta-learning. (3) Based on the observation, they propose conjugate pseudo-labels as a method for test-time adaptation. It can be regarded as a general case of pseudo-labels, and can be used for loss functions whose pseudo-labeling strategy is not trivial (e.g., the recently proposed poly loss). They empirically show that the conjugate pseudo-labels match the performance of entropy minimization when the source model is trained by cross-entropy loss, and outperform existing methods when the source model is trained by Poly loss and squared loss. As mentioned in the summary, this paper provides several interesting insights about how to design good unsupervised loss functions through a series of experiments and theoretical analysis. This paper is well motivated, and each finding is well described and interesting. 
Not only does the paper explain why the existing method works well, but also provide more generic forms of the similar loss function for a broader supervised loss function, called conjugated pseudo-label. Empirical validation supports the merits of the proposed method, especially the source model is trained via unusual loss function, such as poly loss and squared loss. \nWhile I found the paper interesting, there are several unclear points from the current manuscripts. \n\n--- \n<Major comments>\n\n(1) The details of meta-learning experiments are not sufficiently presented. For example, what parameters of the networks were updated, and what optimizer, learning rate, batch size was used? It is not clear whether these factors affect the meta-learned objective or not. There is no clear explanation about what is the prediction score in Figure 1. \n\nRegarding the meta learning experiments, I am also concerned about whether the unsupervised loss function is learned via online adaptation setup, or offline adaptation setup. In standard meta-learning literature, in my understanding, they repeat the inner loop and outer loop many times, i.e. they assume offline adaptation setup. While the approach can reveal good objective function, it might not be effective in actual test-time (online) adaptation setup, as there is the discrepancy between meta-learning and actual test-phase. For example, in online-adaptation, adaptation speed and adaptation stability might be an issue given that there are less clues about data distribution. \n\n(2) While I like the general story of the paper and I am positive to accept the paper, I am still not convinced that the conjugated convex perspective could fully explain what makes a good unsupervised loss function in test-time adaptation setup. For example, why does the meta-learning objective recover the scaled and temperature loss function rather than the standard entropy suggested by the equation (9)? I am wondering if it is related to the assumption that the source models are sufficiently overparameterized, or the nature of test-time adaptation setup, which needs the online adaptation (and may be fast adaptation rather than other unsupervised adaptation setup). \n\n(3) Related to the above question, the results can not explain why several prior methods are often better than TENT in practice, even when the source model is trained via cross-entropy loss. For example, [1] shows that SHOT-IM (adding diversity regularization term) often has better results than TENT. They also show that the feature alignment approach often gives better results than the simple feature modulation approach. While I understand that it is out of scope of this paper, i.e., authors do not claim that the conjugated pseudo labeling is the best possible solution, I am curious whether conjugate optimization perspective could give further insights. \n\n\n(4) While it depends on readers, I found the section 4 should be more self-contained. For example, the logic behind some important equation (e.g., eq. 2, 3, 5) should be sufficiently described. \n\n--- \n<Minor comments>\n\n* In algorithm 1, no description about N. \n* The index n seems to be used with different meanings around equation 6 (n represents batch size?) and in algorithm 1 (n means the index of sample or batch). \n* In Table 1, Hard PL without temperature should be bold. \n\n ---\n1. Regarding section 3\n- (a) What parameters of networks was updated via meta-objective function? 
Is it entire network, or the portion of it as with TENT?\n- (b) What is the exact optimization setup? What optimizer did you use? What is the learning rate, batch size, and training epoch?\n- (c) How did you repeat equation 1? Were both update computed via the same batch, or different batches?\n- (d) Did the difference of the above setup affect the learned objective function? In line 127, authors mentioned that the architecture change does not affect the learned objective function. I am appreciate if you can clarify what architecture did you use, and how does they consistently recovered the same objective function (e.g., does they recover the loss function with the similar scale and temperature?). \n- (e) What is the prediction score in Figure 1?\n\n---\n2. Regarding section 4\n- (a) How can we derive equation 2? May be it is a elemental question, but I think it makes the paper more self-contained. At least the pointer for the derivation is necessary (same for other important equations). \n- (b) What is N in algorithm 1 represents for? Algorithm 1 seems to be assume that model parameters are updated for each sample, is it actually correct, or did you use the batch of data for each update?\n\n--- \n3. Regarding section 4\n- (a) Again, what parameters of networks was updated during test-time? Is it same for all reported methods?\n- (b) Just for sure, did you treat the test-data as stream data available in online manner, or offline data? In other words, did you repeat test-time adaptation for randomly sampled batch, or did you adapt the model only once for each samples? \n\n---\n4. Question about the comments 2 and 3 in weaknesses\n- (a) Can conjugate convex function perspective can explain why meta-learning recovered scaled temperature entropy, not the simple entropy? \n- (b) Can you explain why some heuristics (i.e., diversity regularization in SHOT and feature alignment objective in [1,2]) works better?\n\n\n[1] Kojima, Takeshi et al. “Robustifying Vision Transformer without Retraining from Scratch by Test-Time Class-Conditional Feature Alignment.” IJCAI2022\n[2] Liu, Yuejiang et al. “TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?” NeurIPS (2021). See questions. ", " The paper probes and evaluates the use of meta-learnt convex conjugate of the original training loss for test time adaptation to distribution shifts. \nIt has shown an intuition into the development of the method via a visual analysis of the numeric form of the learnt test time loss with an analytical approximation o known losses, discovering the similarity with the so-called source loss available during training. \nThe pseudo labels formulated via the conjugate seem to provide a general method of computing a pseudo-label that works for losses besides the venerable CE. \n The reformulation of the problem within the Legendre-Fenchel duality may indeed be the first such work in this small but growing area of work. This area would arguably grow in significance as models in use encounter non-iid conditions. The ideas are clearly enunciated and are therefore easy to follow. The experiments define clear baselines and use hopefully comprehensive SOTA benchmarks. The choice of the source model, i.e. a shallow ResNet is not motivated, and several small details that aid reproducibility may be added, e.g. the number of runs for obtaining the standard deviation in Table 1. \n\n \nI did not quite follow why the method would always produce the optimal TTA loss. It is after all learnt, and operates with a non-zero risk. 
\n No comments on this. \n", " Paper proposes a general way of obtaining test-time adaptation loss which is used for more robust predictions under distribution shifts. First, this loss is learnt using meta-learning and the insights obtained from the same is used to obtain the proposed TTA loss (related to the conjugate of the training loss). The proposed loss is then reinterpreted as self-training with pseudolabels. **Strengths:**\n\n1. Paper is novel and useful as it provides a general framework to perform test-time adaptation for any training loss. Further, it provides an informal theory on why existing TTA loss (softmax-entropy) is a good TTA loss. \n\n2. Paper is clearly written and well-motivated through preliminary experiments. \n\n3. Experiments show advantage of the proposed training procedure, particularly in more recent non-standard loss functions.\n\n**Summary of weaknesses (details provided later):**\n\n1. The paper’s claims about the proposed loss being “best” TTA loss are not substantiated theoretically. \n\n2. The use of the temperature parameter is not motivated theoretically; it seems like an empirical fix. \n\n3. Tuning temperature parameter requires validation data from some target domains which is not the gold-standard TTA setting. Conjugate PL seems to be sensitive to temperature tuning (e.g., in Table 1 temperature scaling is required to outperform baselines). \n\n 1. I do not agree with the paper’s claims about the “best” TTA loss as:\n- “Best” loss is not defined theoretically; Equation (6) is an approximation and it is unclear how close an approximation it is. \n- The scale of the “best” TTA loss from meta-learning seems to vary with task loss (Figure 5 & 6) which suggests that Equation (6) that has no scale is not the “best”. (My concern with temperature scaled TTA loss is below; point 2).\n- “Best” TTA loss likely depends on the type of distribution shifts. The meta-learning experiments are done on CIFAR10 with noises/corruptions. But the theory does not discuss what kind of distribution shifts are allowed. For example, is the only requirement that learnt $\\theta^\\star$ is (close to) optimal for target domain as well? Could we observe different meta-learnt loss if the distribution shifts were stronger, say shifts in illumination etc.?\n\n2. My other main concern is regarding the temperature scaling. \n\n- The paper does not provide a reasonable theoretical justification of using a temperature parameter since the approximation in Equation (6) doesn’t seem to demand it. It seems to be needed empirically though, as seen from Figure 1a and results in Section 5, but it is not clear why. \n\n- The need for temperature scaling creates an issue that held-out labeled validation data from target domain is required to tune it. Further, the paper also uses validation target domain data to tune the learning rate as well. This setting is easier than the standard TTA setting and many other domain adaptation methods could be more applicable. For example, I believe the relevant prior work TENT [1] does not assume the presence of this validation data and does no tuning. \n\n\n- While the paper claims that conjugate PL is much less sensitive to tuning T, I am not convinced it is. Table 1 shows that proposed approach + cross-entropy training loss is never best without temperature scaling and becomes best with temperature scaling. It would be helpful to see if similar trends appear on other domain adaptation datasets like SVHN->MNIST, etc. with the cross-entropy loss. 
In Table 2, while conjugate PL outperforms other loss functions, the sensitivity to temperature scaling is still comparable with other methods (compare ENT or MEMO). \n\n3. Minor:\n- More detail regarding Equation 1 will help the reader unfamiliar with meta learning loss functions. For example, the role of $\\mathcal{L}$, clarifying its difference from the source training loss.\n\n- Section 4.1 should include a reference for the optimization problem and its solution. \n\n- $h^\\star$ is not defined; I assume it is the optimal function. It is better to use another symbol to avoid confusion with conjugate symbol. At several places (like equations 8 and 9), $f^\\star$ is used instead of $f^*$.\n\n**AFTER REBUTTAL**: The authors have adequately addressed my concerns and updated the paper. I have increased my score. \n\n\nReferences:\n\n[1] Dequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, and Trevor Darrell. Tent: Fully test-time adaptation by entropy minimization. In International Conference on Learning Representations, 2021.\n\n It would be helpful to clarify the following limitations better: What kind of distribution shifts are allowed? When is the proposed loss \"best\" with/without temperature scaling? " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "OfHOfisYexO", "ktjUR66JVXF", "HuEsDLBRg0X", "zSI5xgZQKn2", "8hOnTCJwDCp", "QyW0P-HWZSt", "0SJ_hZwUh9J", "ZcX6kJOuMl2", "TNOzB9HXvG4", "raCnVBoxDC", "nips_2022_2yvUYc-YNUH", "nips_2022_2yvUYc-YNUH", "nips_2022_2yvUYc-YNUH", "nips_2022_2yvUYc-YNUH" ]
nips_2022_MSBDFwGYwwt
TANKBind: Trigonometry-Aware Neural NetworKs for Drug-Protein Binding Structure Prediction
Illuminating interactions between proteins and small drug molecules is a long-standing challenge in the field of drug discovery. Despite the importance of understanding these interactions, most previous works are limited by hand-designed scoring functions and insufficient conformation sampling. The recently-proposed graph neural network-based methods provides alternatives to predict protein-ligand complex conformation in a one-shot manner. However, these methods neglect the geometric constraints of the complex structure and weaken the role of local functional regions. As a result, they might produce unreasonable conformations for challenging targets and generalize poorly to novel proteins. In this paper, we propose Trigonometry-Aware Neural networKs for binding structure prediction, TANKBind, that builds trigonometry constraint as a vigorous inductive bias into the model and explicitly attends to all possible binding sites for each protein by segmenting the whole protein into functional blocks. We construct novel contrastive losses with local region negative sampling to jointly optimize the binding interaction and affinity. Extensive experiments show substantial performance gains in comparison to state-of-the-art physics-based and deep learning-based methods on commonly-used benchmark datasets for both binding structure and affinity predictions with variant settings.
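To make the "trigonometry constraint" idea in the abstract more concrete, below is a schematic NumPy sketch of one triangle-style update of an N x M protein-ligand pair representation that also consults the protein-protein and compound-compound matrices through sigmoid gates. The shapes, gating form, and function names are our illustrative reading of the idea, not the paper's exact Eq. (1).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def trigonometry_update(z, p, c):
    """One schematic 'triangle'-style pass enforcing geometric consistency:
    z[i, j] is refined by routes i -> k -> j (through the protein matrix p)
    and i -> m -> j (through the compound matrix c), with gates damping
    the contribution of distant pairs."""
    gate_p = sigmoid(p)                    # N x N gates on protein routes
    gate_c = sigmoid(c)                    # M x M gates on compound routes
    via_protein = np.einsum("ik,kj->ij", gate_p * p, z)
    via_compound = np.einsum("im,mj->ij", z, gate_c * c)
    return z + sigmoid(z) * (via_protein + via_compound)

N, M = 5, 3                                # e.g., 5 residues, 3 ligand atoms
rng = np.random.default_rng(0)
z = rng.normal(size=(N, M))                # protein-ligand pair representation
p = rng.normal(size=(N, N))                # intra-protein pair matrix
c = rng.normal(size=(M, M))                # intra-compound pair matrix
print(trigonometry_update(z, p, c).shape)  # (5, 3)
```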
Accept
This is a bordeline paper. All three reviewers liked the paper and appreciated the feedback from the authors. The numerical score are 7, 5 and 5. The two 5s appear a bit on the low side given the comments. The paper tackles the difficult problem of protein-ligand docking with a geometrical GNN approach inspired in part by AlphaFold. This is an important problem and the paper presents a novel approach in a clear and convincing way. Acceptance is therefore recommended.
train
[ "nc-c12I299N", "ZYaMdfpboom", "268RSUf_xi9", "bA85oQTy4q3", "ghIBxakI6uZ", "A5haORLYM4B", "7xPVPUW9d5b", "GgqUUIBiah5", "-u8chF2HxmV", "UPeVGptF6fh", "HC9z9Xs4CZj", "B1IkRpfUwdS" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your constructive comments and encouragement. We are pleased that our responses have addressed your concerns. Please let us know if there is anything that could help improve your rating. Thank you!", " Thank the authors for the detailed responses and the revised supplementary. My concerns are addressed. ", " Thank you for your constructive feedback. Regarding technical contribution related to geometric consistency, we would like to make the following clarifications.\n\n\"Geometric consistency\", in our opinion, is more of a general idea than a plug-and-play module. To apply it to the task of interest, we need to implement a machine learning model that suits the task.\nOur work is the first implementation of \"geometric consistency\" for the modeling of the inter-molecular interaction in a heterogenous system and demonstrates its usefulness by achieving significant performance gain over baselines.\nWe are glad like that our success in setting the new state-of-the-art for the protein-ligand problem suggests the possibility of performance gain in other tasks as mentioned by the reviewer. \n\nThe tasks of protein-DNA/RNA and RNA structure prediction are all fundamental challenges facing science, and probably also requires ingenious designs and tailor-made implementation of \"geometric consistency\". For example, in RNA modeling, we need to decide whether one node represents a nucleobase or a single atom, or other choices. In addition, special consideration is required to learn from even less data available for the problem of protein-DNA/RNA prediction. If those work also achieve above SOTA performance, like TankBind does, it will be a sign that more exploration around the implementation of \"geometric consistency\" will be rewarding, and could be an interesting research area for modeling of macromolecule system. Such work, we believe, deserve to be published in a high impact conference like NeurIPS.", " Thank you for the response. It well addressed my concerns.", " 1. To clarify the contribution related to geometric consistency, AlphaFold2 achieves the geometric consistency on a single protein and TankBind use the same technique on protein-ligand pair. Is my understanding correct? If I adapt the same technique (geometric consistency) to the prediction tasks of protein-RNA complex, protein-DNA complex, RNA 3D structure, RNA-ligand complex, do the authors agree they have contributions on the machine learning model construction and should be accepted as 4 different NeurIPS papers (of course I will also adapt other established software such as P2rank in the pipeline)? \n\n2. Second, my question about the P2rank is fully addressed. Thanks.", " \n> It might take biologists decades to verify a new drug target. A more practical experiment is to show the generalizability to the new compounds instead of new protein targets.\n\nInteresting points. In fact, the majority of the 363 ligands (87\\%) of the test set are new compounds. 152 (42.4\\%) have the max compound fingerprint Tanimoto similarity against training set less than 0.5, which can be deemed as novel scaffolds. We computed the statistic for those 152 cases. The result (shown below) is about the same as the result of the complete test set (363). 
\n\nMethod | Mean Ligand RMSD | \\% below threshold 2Å | \\% below threshold 5Å \n-------|--------|------- | -------\nTankBind | 7.43 | 19.28 | 61.71 \nEquiBind | 8.13 | 3.31 | 39.12\nVINA | 14.65 | 5.51 | 21.21\n\n\n\n> It seems that the predicted protein-ligand complex is not feasible even using the trigonometry update function. That is why the authors need to use Eq 5 and 6. Please add a discussion of why this component is still necessary when the trigonometry update function is applied.\n\nThe trigonometry update function is used to create a realistic interaction matrix and cannot output coordinates directly (also true for AlphaFold2). From this interaction matrix to ligand coordinates, we adopt an optimization approach (Eq 5 and 6) to find ligand coordinates that satisfy the most constraints based on the predicted interaction matrix and local atomic structure, like the first-generation AlphaFold. We could also use a structure module similar to AlphaFold2, but our preliminary attempt to directly output coordinates fails to ensure that the directly outputted ligand has a realistic conformation, probably due to limited training data. We will leave this as future work for now. ", " We appreciate the reviewer's careful review of our paper, positive feedback and constructive comments. We thank the reviewer for acknowledging us as the first to adopt geometric consistency for the binding of proteins and small molecules. We have addressed the reviewer's specific concerns and questions below.\n\n\n> Geometric consistency (trigonometry) is first proposed by AlphaFold2. It is easy to modify the StructureModule of AlphaFold2 to also fit this task.\n\nThe trigonometry module (part of the Evoformer module, not the StructureModule) of AlphaFold2 (as illustrated in their Supplementary Figure 6) is designed to update an N by N square matrix. A simple extension of the module to an N by M matrix, as in the case of protein-ligand pair representation, will miss the valuable information stored inside the intra-molecular pair representations of ligand and protein themselves. Our trigonometry module, as described in Eq. (1), is based on a similar idea to AlphaFold2, but is a novel extension of the idea of geometric consistency. Our trigonometry module updates the pair representation not only using itself but also using the protein matrix p and the compound matrix c with extra adaptive gate functions. We have included a schematic illustration of our trigonometry module (Figure 7) in the appendix.\n\n> My major concern is that it is not clear what is the impact of P2rank which predicts the functional site of proteins. What if the authors first run the P2rank to predict the binding sites and then feed the predicted binding sites to AUTODOCK and EquiBind? The ablation study also shows that P2rank is the most important component and removing the P2rank will decrease the performance.\n\nP2Rank is a ligand-agnostic method. It provides a list of potential binding sites and corresponding scores. But these scores are constant for all ligands because they are computed entirely based on the protein properties. We use the top-scored pocket predicted by P2Rank and run AUTODOCK as suggested by the reviewer. The result is shown in the table below: the mean ligand RMSD is 12.54Å, which is slightly better than the original 14.7Å but still much worse than TankBind's 7.4Å. 
The same approach cannot be applied to EquiBind as it was trained (and made predictions) on whole proteins and did not support prediction with a certain bounding box (https://github.com/HannesStark/EquiBind/issues/22). \nRegarding the ablation study where blocks are extracted using randomly picked centers instead of the P2rank-predicted centers, the deterioration of performance is mainly because, in many cases, none of the ten randomly picked blocks contains the native binding site, so the model is unable to locate the ligand to the right place. Briefly, P2rank could help us to focus on those functional blocks that have a chance to be the true binding site, but it is not a good method for selecting the exact binding site for a specific query compound.\n\n\nMethod | Mean Ligand RMSD | \\% below threshold 2Å | \\% below threshold 5Å | Mean Centroid Distance\n-------|--------|------- | ------- | -------\nVINA (P2rank pocket) | 12.54 | 3.64 | 24.57 | 9.58\n\n> What is the overlap between the training data of P2rank and Tankbind? It is necessary to check whether there is an information leak for training and test datasets.\n\nThe P2rank paper was published in 2018 and the model was trained on the CHEN11 dataset (an extremely small dataset with 251 complex structures chosen from the protein data bank). None of these structures is in our test set (structures deposited to the protein data bank after 2019). Thus, there is no information leak for the test set.\nWe have modified the original sentence to \"x_o is the center of the functional block predicted by a widely-used ligand-agnostic method, P2rank (published in 2018)\" in the main text to clarify this.\n\n> I did not find the definition of ‘blind’\n\n'Blind' means no information regarding the complex structure is given during prediction, in contrast to the setting where the binding site is given.\nWe added a line \"In blind docking, the protein binding site is assumed unknown\" to the main text.\n\n \n", " \nWe thank the reviewer for the time taken to review our work and the constructive feedback provided. We are glad that the reviewer finds our paper \"well written\" and \"technically sound\". We have addressed the reviewer's specific questions below. The updated paper has been uploaded to Open Review.\n\n\n> Although I recognize that the paper is mostly technically sound, such as using the sum of local protein and protein information to reflect the interaction, I feel some details are not well motivated. Specifically, according to Eq. (1) and Eq. (2), the method employs many gates, whose motivation is not very clear. Moreover, it could be better to visualize the gates to show some insights.\n\nPhysical interactions between ligands and proteins are relatively short-ranged because the relatively long-ranged electrostatic force is shielded by salt ions in solution [1]. Therefore, those gate functions are employed to decrease the influence of a protein-ligand node pair when the inferred distance between them is large. We have added a section (Appendix A.4) that visualizes the output of a gate function using an example (PDB 6HD6). The visualized output of the gate function indeed looks like the native inter-molecular distance matrix.\n\n> The framework includes a self-attention module to modulate the interaction between a protein node and all compound nodes by taking the whole interaction between this protein node and all compound nodes into consideration. 
As a self-attention module, it could be better to visualize the self-attention map via a few examples to verify the motivation.\n\nGood points. In the added section (Appendix A.4), we also include a visualized self-attention map for the same example. The map looks like the compound intra-molecular distance map. It is a reassuring sign because, in our setting, the ligand conformation is unknown, and the self-attention could predict and use the information about ligand conformation to update the protein-ligand interaction as designed.\n\n> Trigonometry is the main contribution or novelty and is similar to or inspired by the Evoformer module used in AlphaFold2. The difference should be highlighted.\n\nWe thank the reviewer for this great question. \n\na) First, the trigonometry module in our paper is, to the best of our knowledge, the first inter-molecular modeling method that captures the geometric consistency between two heterogeneous systems. This is a novel extension of the intra-molecular trigonometry module in AlphaFold2 (Evoformer). Traditional solutions [2,3,4] to the problem of drug-protein binding structure prediction focus on designing sophisticated two-body terms to approximate the protein-compound interaction which is, however, an intrinsically many-body interaction. Our trigonometry-aware neural network is the first neural network-based method that is capable of explicitly learning the many-body effects while maintaining a low computational cost for this important research area of protein-compound interaction. \n\nb) Second, a simple extension of the Evoformer cannot handle heterogeneous systems. The pair representation updated by the trigonometry module of AlphaFold2 is a square matrix. To predict the complex structure of a protein-ligand pair, our pair representation is not a square matrix but an N by M matrix. A direct application of the trigonometry module of AlphaFold2 will miss the valuable information stored inside the intra-molecular pair representations of ligand and protein themselves. Therefore, we update the pair representation not only using itself but also using the protein matrix p and the compound matrix c with extra adaptive gate functions (Eq. (1), also a schematic illustration in Figure 7). \nIn addition, the new trigonometry module is part of our novel design to tackle the drug binding problem. As pointed out by Reviewer YkVW, our model achieves state-of-the-art performance in drug-protein binding structure prediction by an innovative combination of the trigonometry module, a divide-and-conquer strategy for dealing with proteins, and the joint training of binding pose and affinity with contrastive losses.\n\n[1] Erbaş, Aykut, Monica Olvera De La Cruz, and John F. Marko. \"Effects of electrostatic interactions on ligand dissociation kinetics.\" Physical Review E 97.2 (2018): 022405.\n\n[2] Trott, Oleg, and Arthur J. Olson. \"AutoDock Vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization, and multithreading.\" Journal of computational chemistry 31.2 (2010): 455-461.\n\n[3] Friesner, Richard A., et al. \"Glide: a new approach for rapid, accurate docking and scoring. 1. Method and assessment of docking accuracy.\" Journal of medicinal chemistry 47.7 (2004): 1739-1749.\n\n[4] Méndez-Lucio, Oscar, et al. 
\"A geometric deep learning approach to predict binding conformations of bioactive molecules.\" Nature Machine Intelligence 3.12 (2021): 1033-1039.\n", " \nWe thank the reviewer for the time taken to review our work and the constructive feedback provided. We are glad that the reviewer enjoys the work, believes that the work is \"novel\" and the presentation is \"well-written and easy to follow\". We have addressed the reviewer's specific questions below. The updated paper has been uploaded to Open Review.\n\n> Is the ligand conformer fixed after the 3D initialization?\n\nIn the re-docking setting (as reported in Appendix A), the ligand conformation is fixed, while in the self-docking setting (as reported in the main text), the ligand conformation is not fixed. \nWe have modified the sentence \"We start with a real-world blind self-docking experiment.\" in the main text to \"We start with a real-world blind self-docking experiment, in which the ligand conformation is not fixed, and the result of the re-docking experiment, in which the native ligand conformation is given, is reported in Appendix A.\" to explicitly state the difference.\n\n\n> How is the margin value epsilon determined in Equation 4?\n\nThe margin value epsilon is chosen such that the ligand binds at least one order of magnitude, as measured in units of concentration, more strongly to the native block than to non-native blocks. One order of magnitude is chosen mainly for the following two reasons.\nFirst, we do not know the exact binding affinity to non-native blocks, other than that it is weaker than the affinity to the native block, so a larger epsilon could impose constraints too strong to be realistic. Second, when epsilon is too small, the native block could not be distinctly differentiated from non-native blocks. We have experimented with epsilon values of 2 and 0.5, both of which produced results slightly worse than the default value of 1.\n\n\n> In the binding affinity prediction experiment, is the same loss function (Eq. 4) applied? Is the binding site given as known information? If so, there is no need to apply the contrastive loss.\n\nYes, the same loss is applied. The binding affinity and position are optimized jointly. The native binding site is not given as known information in any of our test experiments, as the binding site for a specific molecule is not available in many realistic situations [1]. \n\n\n> In the docking experiments, are methods listed in Table 2 all deterministic? If not, have you run these methods multiple times to make results more reliable?\n\n\nThey are not deterministic, but we have run VINA, EquiBind and TankBind three times and the mean ligand RMSD is reported below. The standard deviation of the mean ligand RMSD is smaller than the difference between these methods.\n\nMethod | Repeat 1 | Repeat 2 | Repeat 3\n-------|--------|------- | -------\nTankBind | 7.43 | 7.5 | 7.41\nEquiBind | 8.2 | 8.13 | 8.13\nVINA | 14.65 | 13.3 | 13.2\n\n\n[1] Song, Minsoo, and Gil Tae Hwang. \"DNA-encoded library screening as core platform technology in drug discovery: its synthetic method development and applications in DEL synthesis.\" Journal of medicinal chemistry 63.13 (2020): 6578-6599.\n\n", " This paper proposes a Trigonometry-Aware Neural Network (TankBind) for drug-protein binding site prediction. In the proposed method, the protein is first segmented into several functional blocks by an off-the-shelf model. 
Then, a trigonometry module inspired by the Evoformer of AlphaFold2 is applied to update the pairwise embedding between protein and ligand. The contrastive loss with local region negative sampling is applied to jointly optimize the binding position and affinity. The model achieves state-of-the-art performance in both binding position prediction and binding affinity prediction tasks. Strengths:\n- The proposed method is novel, especially in combining different designs together: breaking proteins into functional blocks, applying a trigonometry module to update distance maps, and jointly training affinity and binding poses with contrastive losses.\n- The paper is well-written and easy to follow.\n- The proposed method shows superior empirical performance over existing baselines.\n\nGenerally, I think it's a very good paper and I didn't see any major weaknesses. Only some minor questions are left; see details in Questions. - Is the ligand conformer fixed after the 3D initialization?\n- How is the margin value epsilon determined in Equation 4? \n- In the binding affinity prediction experiment, is the same loss function (Eq. 4) applied? Is the binding site given as known information? If so, there is no need to apply the contrastive loss.\n- In the docking experiments, are methods listed in Table 2 all deterministic? If not, have you run these methods multiple times to make results more reliable? The authors discuss several future work directions in the conclusion section. There is no negative social impact.", " Based on Geometric Vector Perceptron (GVP) for protein modelling and Graph Isomorphism Network (GIN) for drug processing, the paper proposes a method for drug-protein binding structure prediction. The main idea is to build a trigonometry constraint and explicitly attend to all possible binding sites for each protein by segmenting the whole protein into functional blocks. 1. The paper is well written. Readers can learn a lot about drug-protein interaction from the paper.\n\n2. The method is mostly technically sound. \n\n3. Comprehensive experiments show the effectiveness of the proposed method. 1. Although I recognize that the paper is mostly technically sound, such as using the sum of local protein and protein information to reflect the interaction, I feel some details are not well motivated. Specifically, according to Eq. (1) and Eq. (2), the method employs many gates, whose motivation is not very clear. Moreover, it could be better to visualize the gates to show some insights. \n\n\n2. The framework includes a self-attention module to modulate the interaction between a protein node and all compound nodes by taking the whole interaction between this protein node and all compound nodes into consideration. As a self-attention module, it could be better to visualize the self-attention map via a few examples to verify the motivation. \n\n\n3. Trigonometry is the main contribution or novelty and is similar to or inspired by the Evoformer module used in AlphaFold2. The difference should be highlighted. NA.", " In this manuscript, the authors propose Trigonometry-Aware Neural networKs for binding structure prediction, TANKBind, that builds a trigonometry constraint as a vigorous inductive bias into the model and explicitly attends to all possible binding sites for each protein by segmenting the whole protein into functional blocks. The algorithm is trained by minimizing a novel contrastive loss with local region negative sampling. 
The model is able to predict both the binding poses and binding affinity between protein targets and small molecules. Strengths: So far as I know, this is the first algorithm that applies geometric consistency to the binding of proteins and small molecules. Another contribution is the loss function, which treats the non-binding sites on the same protein as the negative training data. \n\nWeakness: First, I want to emphasize that geometric consistency (trigonometry) was first proposed in AlphaFold2. The embedding z_ij for a pair of amino acids i,j is updated by two other embeddings z_ik and z_kj to satisfy the triangle constraints. Tankbind changes the transformer of AlphaFold2 to another gating function. In theory, it is easy to modify the StructureModule of AlphaFold2 to also fit this task. For me, this decreases the novelty of this work. Second, my major concern is that it is not clear what the impact of P2rank, which predicts the functional sites of proteins, is. VINA-based methods and EquiBind both identify the binding sites by themselves. If AUTODOCK predicts a wrong binding site, it will yield a super low score. However, what if the authors first run P2rank to predict the binding sites and then feed the predicted binding sites to AUTODOCK and EquiBind? The ablation study also shows that P2rank is the most important component and that removing P2rank decreases the performance. \n Questions:\n1. What is the overlap between the training data of P2rank and Tankbind? It is necessary to check whether there is an information leak for the training and test datasets.\n2. I did not find the definition of ‘blind’ \n3. It might take biologists decades to verify a new drug target. A more practical experiment is to show the generalizability to new compounds instead of new protein targets.\n4. It seems that the predicted protein-ligand complex is not feasible even using the trigonometry update function. That is why the authors need to use Eq 5 and 6. Please add a discussion of why this component is still necessary when the trigonometry update function is applied. No obvious limitations or potential negative societal impact" ]
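To make the heterogeneous trigonometry update debated in the review and responses above more concrete, here is a minimal sketch. It is an illustrative assumption rather than TankBind's actual Eq. (1): the shapes, the einsum routing, and the sigmoid gate with weights `w_gate` are hypothetical stand-ins for the paper's learned modules.

```python
# Hedged sketch of a trigonometry-style update for a heterogeneous
# (protein x compound) pair representation z, using the protein pair matrix p
# and the compound pair matrix c, as discussed above. All names are assumed.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hetero_triangle_update(z, p, c, w_gate):
    """z: (N, M, d); p: (N, N, d); c: (M, M, d); w_gate: (d,) toy gate weights."""
    # Route information through a third node k so the updated z_ij stays
    # geometrically consistent with (p_ik, z_kj) and with (z_ik, c_kj).
    msg_protein = np.einsum('ikd,kjd->ijd', p, z)   # sum over protein nodes k
    msg_compound = np.einsum('ikd,kjd->ijd', z, c)  # sum over compound atoms k
    # Adaptive gate: damp updates for pairs the current representation already
    # deems far apart, since the physical interaction is short-ranged.
    gate = sigmoid(z @ w_gate)[..., None]           # (N, M, 1)
    return z + gate * (msg_protein + msg_compound)

rng = np.random.default_rng(0)
N, M, d = 5, 3, 8
z_new = hetero_triangle_update(rng.normal(size=(N, M, d)),
                               rng.normal(size=(N, N, d)),
                               rng.normal(size=(M, M, d)),
                               rng.normal(size=(d,)))
print(z_new.shape)  # (5, 3, 8)
```

Note how, unlike the square Evoformer update, the N-by-M pair representation here pulls in both intra-protein and intra-compound information, which is the distinction the authors emphasize in their response.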
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5 ]
[ "ZYaMdfpboom", "GgqUUIBiah5", "ghIBxakI6uZ", "-u8chF2HxmV", "7xPVPUW9d5b", "7xPVPUW9d5b", "B1IkRpfUwdS", "HC9z9Xs4CZj", "UPeVGptF6fh", "nips_2022_MSBDFwGYwwt", "nips_2022_MSBDFwGYwwt", "nips_2022_MSBDFwGYwwt" ]
nips_2022_h1IHI5sV4UQ
Reconstruction on Trees and Low-Degree Polynomials
The study of Markov processes and broadcasting on trees has deep connections to a variety of areas including statistical physics, graphical models, phylogenetic reconstruction, Markov Chain Monte Carlo, and community detection in random graphs. Notably, the celebrated Belief Propagation (BP) algorithm achieves Bayes-optimal performance for the reconstruction problem of predicting the value of the Markov process at the root of the tree from its values at the leaves. Recently, the analysis of low-degree polynomials has emerged as a valuable tool for predicting computational-to-statistical gaps. In this work, we investigate the performance of low-degree polynomials for the reconstruction problem on trees. Perhaps surprisingly, we show that there are simple tree models with $N$ leaves and bounded arity where (1) nontrivial reconstruction of the root value is possible with a simple polynomial time algorithm and with robustness to noise, but not with any polynomial of degree $N^{c}$ for $c > 0$ a constant depending only on the arity, and (2) when the tree is unknown and given multiple samples with correlated root assignments, nontrivial reconstruction of the root value is possible with a simple Statistical Query algorithm but not with any polynomial of degree $N^c$. These results clarify some of the limitations of low-degree polynomials vs. polynomial time algorithms for Bayesian estimation problems. They also complement recent work of Moitra, Mossel, and Sandon who studied the circuit complexity of Belief Propagation. As a consequence of our main result, we are able to prove a result of independent interest regarding the performance of RBF kernel ridge regression for learning to predict the root coloration: for some $c' > 0$ depending only on the arity, $\exp(N^{c'})$ many samples are needed for the kernel regression to obtain nontrivial correlation with the true regression function (BP). We pose related open questions about low-degree polynomials and the Kesten-Stigum threshold.
Accept
This paper studies the use of low-degree polynomials for analyzing statistical/computational gaps in high-dimensional inference problems and identifies average-case settings that exhibit this gap. This is a nice paper and above the bar, though it may appeal only to a theoretical audience.
train
[ "vTJ4I8vcx2_", "QXeari9lRLu", "-Sog7wSLWIG", "cGpf3WOqVWa", "lK1RPfCR7sw", "krO3MHmJoE", "zSSiwCI-14j" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their feedback and answer questions below:\n\n> In Fig.1, is it a one-off run or are the results averaged over multiple samples? It seems weird that the RecMaj algorithm is so inconsistent depending on $\\lambda_2(M)$.\n\nGood question — the results for the RecMaj algorithm were averaged over 16000 samples, so this is really the accuracy it achieves. (We will add this information to the figure caption in the next version.) Why the curve should look the way it does is not obvious, but we expected \\lambda_2(M) to be strongly correlated with the performance of KRR and not so correlated with RecMaj, so at least it is consistent with this intuition.\n\n> Can those results be extended to similar tree distributions, with the same growth rate? For example, using the br(T) value of Evans et al. '00.\n\nIt should be straightforward to extend the negative results to different tree topologies. For positive results (e.g. information-theoretic possibility of reconstruction), it should be possible but more complicated regularity conditions would be required than just looking at br(T). For example, if M^k is rank one and the top of the tree is a length-k path, then the leaves will have no mutual information with the root even though br(T) can be large. \n", " We thank the reviewer for their suggestions on the presentation and answer questions below:\n\n> I also don't know what the exact contributions in this paper are. For example, Theorem 5 (Mossel and Peres 2003) appears under Section 1.2 Our Results. Is it a new result, or from a prior work?\n\nTo clarify the discussion about Theorem 5, we did intend to credit this result to Mossel and Peres 2003 and will revise accordingly. The formal Theorem 22 and its proof are included for completeness, as the fact that this estimator is robust to additional noise naturally follows from their argument but is not part of the original theorem statement. The reason we included Theorem 5 here (with the MP ‘03 reference) was to introduce the context for Theorem 6, our main result. The reviewer makes a good point that since all the other stated theorems in the section are our new contributions, it is potentially confusing when skimming — we will edit the text here to further reduce confusion.\n\n> It would be great if the authors could provide some high-level interpretation of their results for those who are not familiar with this line of work. \n\nWe hope the following brief summary helps clarify the context of this work and our contribution; we will edit the paper accordingly. \n\nA large body of recent work attempts to predict “computational-to-statistical gaps”: situations where it is impossible for polynomial time algorithms to learn anything from the data, even though computationally inefficient (“information-theoretic”) algorithms can succeed at the same task. Such gaps appear in a large variety of problems (for example, sparse PCA) and one of the most popular and successful heuristics to predict these gaps is to look at low-degree polynomials, usually of degree O(log(N)) in the problem size N, as an easier-to-analyze proxy for computationally efficient algorithms. 
(Standard tools like NP-hardness are not useful for analyzing average-case problems due to complexity-theoretic reasons, so such heuristics are very important.)\n\nThe main contribution of this paper is to show that for a canonical and well-studied statistical task (reconstruction on trees) the natural low-degree polynomial heuristic does not predict the correct threshold — it predicts the problem is hard when it is in fact solvable by polynomial time algorithms. Formally, our main result, Theorem 6, shows that only polynomials in N variables with degree at least N^{c}, for c > 0 a constant, can solve this statistical task. As consequences of the main result, we can show that the related method of RBF kernel regression also fails at this task unless provided exp(N^c) many samples (Theorem 8), and that, for a variant of this task, another heuristic from the literature called “Statistical Query Algorithms” (SQ) predicts the correct threshold while low-degree polynomials still fail (Theorem 11). The last result is significant because prior work has focused on equivalences between SQ and low-degree predictions (see the paper “Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent”), and our work illustrates a concrete example where the equivalence must break down. \n\n> Some acronyms are used without introduction, for example, MCMC and CSP\n\nGood point, we will edit the text to introduce these acronyms (Markov Chain Monte Carlo, Constraint Satisfaction Problem) for improved clarity.\n", " We thank the reviewer for their detailed feedback and will make the improvements suggested. \n\n> line 189 – a constant $c$ for $N^c$ is discussed, but that is different than the $c$ as the root variable realization, right? If so, I’d suggest not overloading notation.\n\nYes, that’s right, these are unrelated and we will change the notation. \n\n> Line 212 should ‘Suppose that …’ be there? \n\nYou are right, this is redundant with line 218 and should be removed. \n", " This paper studies the effectiveness of using the restricted computational model class of low-degree polynomials for assessing statistical-computation gaps for high-dimensional statistical inference problems. Using the problem of reconstruction on trees, the authors identify problem settings for which average-case reconstruction is impossible using low-degree polynomials yet reconstruction is possible with computationally efficient methods. Strengths (major)\n- significance: Information-computation tradeoffs for high dimensional statistical inference problems are important. There are several restricted computational models that have been used to study gaps for different problems (planted clique, sparse PCA, etc.). Low-degree polynomials are a prominent model. The authors show that there are problems where thresholds identified using low-degree polynomials do not match thresholds for the existence of computationally efficient methods. 
\n- The extension of the lower bound to investigate when kernel ridge regression would fail is interesting.\n- The authors provide substantial discussions.\n\nStrengths (minor)\n- While the work is theoretical, the authors include an experiment (Fig 1) investigating the performance of low-degree polynomials for small but non-zero $\\lambda_2$, suggesting that the failure of low-degree polynomials to capture computational hardness may not be confined to the limit case (of $\\lambda_2=0$) analytically studied in the paper.\n\nWeaknesses \n- I did not identify any major weaknesses.\n\nVery minor notes: \n- line 30, acronym CSP not defined\n- line 78 – the acronym for statistical query ‘SQ’ is used multiple times before being defined \n- Line 95 - Notation $c$ not introduced yet, why not $x_r$ for realization of root variable?\n- Line 103 – $\\rho$, a subscript appearing in the conditioning event, is not defined. \n- Lines 117 and 122 missing parenthesis\n- line 189 – a constant $c$ for $N^c$ is discussed, but that is different than the $c$ as the root variable realization, right? If so, I’d suggest not overloading notation.\n- line 210 capitalize ‘markov’\n- Line 212 should ‘Suppose that …’ be there? Line 218 has a condition on $m/\\delta$ and in 212 the notation $m$, $\\delta$, $\\epsilon$, and $c$ not yet specified\n- Line 217-218, I don’t think the notation $\\varphi(\\cdot)$ was introduced yet\n N/A yes", " This paper studies the problem of reconstruction on trees through low degree polynomials. The authors show that there exist simple tree models in which nontrivial reconstruction of the root value is possible in polynomial time, and that if the tree is unknown but given samples with correlated root assignments, nontrivial reconstruction is possible with a statistical query algorithm. The paper also provides a result related to RBF kernel ridge regression for predicting root coloration. An open question about low degree polynomials and the KS threshold is also proposed. The topic of this paper is completely out of my area, and any technical comments I make will probably be unfair to the authors.\n\nRegarding organization: I can hardly follow the paper. Partly it is because the topic is out of my area. However, in its current shape, everything, including the introduction, preliminaries, definitions, theorems, and remarks, is mixed into the two massive sections. While I understand there might be a lot of content in the paper, I think the presentation can definitely be improved. \n\nI also don't know what the exact contributions in this paper are. For example, Theorem 5 (Mossel and Peres 2003) appears under Section 1.2 Our Results. 
Is it a new result, or from a prior work?\n - Some acronyms are used without introduction, for example, MCMC and CSP.\n- It would be great if the authors could provide some high-level interpretation of their results for those who are not familiar with this line of work. Not applicable.\n ", " This paper studies the problem of tree reconstruction on $d$-ary trees; the root of the tree is given a spin $X_\\rho \\sim \\nu$, which is then propagated down to the leaves according to a Markov channel $M$. The problem is then, given the spins $X_L$ at the leaves, to recover the original root spin $X_\\rho$. Several variants of this model are considered: with/without noise at the leaves, with the underlying tree known or not, or with several realizations of the same tree process.\n\nThe focus in this paper is on low-degree polynomial reconstruction: for which values of $M$ can $X_\\rho$ be estimated by a low-degree polynomial in the leaves, i.e. a function of the form\n$$ f(X_L) = \\sum_{S\\subset L, |S| = D} f_S(X_S) , $$\nwhere $X_S$ is the subset of leaves in $S$. It is already known that when $d |\\lambda_2(M)|^2 > 1$, a linear ($D = 1$) estimator suffices; on the other hand, general reconstruction using belief propagation is possible for almost all $M$.\n\nThe authors show that if $\\lambda_2(M) = 0$, then no polynomial algorithm of degree $\\leq N^c$, where $N$ is the number of leaves, can recover the true root spin. The proof is based on the property that $M^k$ is of rank 1 for some $k$, and hence the correlation between a vertex $x$ and its $k$-th ancestor is 0. As a corollary, they show that a kernel ridge regression method needs at least $e^{N^c}$ samples to learn the tree reconstruction problem, since it needs to approximate a polynomial of degree at least $N^c$.\n\nThe article also contains a positive result: for the case where the underlying tree is unknown (equivalently, where the leaves are known up to permutation), they show that for a fixed root spin $X_\\rho$ and a polynomial number of samples from the tree process started at $X_\\rho$, there exists a reconstruction algorithm that recovers $X_\\rho$ better than random chance. The tree reconstruction problem is ubiquitous in many inference problems (e.g. community detection), for which computational-to-statistical gaps are still fairly unexplained. It's interesting to see a low degree polynomial approach to this problem, which bridges the gap between the census reconstruction problem and BP approaches. The paper is overall well-written and easy to read; the introduced notions are clearly defined (with the notable exception of the VSTAT oracle), and the results are nicely presented. It is especially interesting that the impossibility result extends to $O(N^c)$ degrees, although this might just be a consequence of $\\lambda_2(M) = 0$.\n\nThe main weakness of this paper, in my opinion, is its specificity: all proofs hinge on the specific properties that occur when $M^k$ is of rank one, which implies very strong independence properties between tree nodes. The tree structure is similarly rigid, with only the $d$-ary tree considered. However, this is a good first step, which I hope will inspire more work on this topic. - In Fig.1, is it a one-off run or are the results averaged over multiple samples? It seems weird that the RecMaj algorithm is so inconsistent depending on $\\lambda_2(M)$.\n- Can those results be extended to similar tree distributions, with the same growth rate? For example, using the br(T) value of Evans et al. '00.
The limitations have been adequately addressed." ]
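As a concrete companion to the broadcast model and the RecMaj discussion above, the toy simulation below propagates a root spin down a d-ary tree through a channel and reconstructs it by recursive majority. The binary symmetric channel and all parameter values are illustrative assumptions, not the paper's setup.

```python
# Toy broadcast-on-tree simulation with recursive-majority reconstruction.
import numpy as np

rng = np.random.default_rng(0)
d, depth, flip = 3, 6, 0.2  # arity, tree depth, per-edge flip probability

def broadcast(root_spin, depth):
    """Propagate root_spin (+1/-1) down the tree; return the leaf spins."""
    level = np.array([root_spin], dtype=float)
    for _ in range(depth):
        children = np.repeat(level, d)            # every node gets d children
        flips = rng.random(children.size) < flip  # BSC(flip) noise on each edge
        level = np.where(flips, -children, children)
    return level

def recursive_majority(leaves):
    """Estimate the root by majority votes over d siblings, level by level."""
    level = leaves
    while level.size > 1:
        level = np.sign(level.reshape(-1, d).sum(axis=1))  # d odd => no ties
    return level[0]

trials = 2000
hits = sum(recursive_majority(broadcast(1.0, depth)) == 1.0 for _ in range(trials))
print(f"recursive-majority accuracy: {hits / trials:.3f} (chance = 0.5)")
```

With these illustrative parameters, d(1-2*flip)^2 = 1.08 > 1, so nontrivial reconstruction is expected and the simple estimator beats chance; the paper's point is that such simple algorithms can succeed precisely where the low-degree heuristic predicts hardness.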
[ -1, -1, -1, 7, 8, 5, 7 ]
[ -1, -1, -1, 3, 3, 1, 4 ]
[ "zSSiwCI-14j", "krO3MHmJoE", "cGpf3WOqVWa", "nips_2022_h1IHI5sV4UQ", "nips_2022_h1IHI5sV4UQ", "nips_2022_h1IHI5sV4UQ", "nips_2022_h1IHI5sV4UQ" ]
nips_2022_R7qthqYx3V1
Discovering Design Concepts for CAD Sketches
Sketch design concepts are recurring patterns found in parametric CAD sketches. Though rarely explicitly formalized by the CAD designers, these concepts are implicitly used in design for modularity and regularity. In this paper, we propose a learning based approach that discovers the modular concepts by induction over raw sketches. We propose the dual implicit-explicit representation of concept structures that allows implicit detection and explicit generation, and the separation of structure generation and parameter instantiation for parameterized concept generation, to learn modular concepts by end-to-end training. We demonstrate the design concept learning on a large scale CAD sketch dataset and show its applications for design intent interpretation and auto-completion.
Accept
As summarized by reviewer 5G2f, this paper proposes a novel learning-based approach to discover the modular concepts (i.e., modular structure) from raw CAD sketches. To tackle the problem, the authors first define a domain specific language (DSL) such that modular concepts can be represented in a network-friendly manner. A Transformer-based detection module takes in a CAD sketch sequence and outputs a set of latent embeddings, which are further decoded to parameterized modular concepts by a generation module. The whole model is trained in an end-to-end self-supervised manner, using a reconstruction loss plus regularization terms. The authors perform experiments on a large-scale CAD sketch dataset and mainly demonstrate its applications for design intent interpretation (i.e., parsing modular concepts from a raw CAD sketch) and auto-completion (i.e., completing a partial CAD sketch). All reviewers recognize the novelty and contribution of this work, and the reviewer-author discussion was quite fruitful, as many points, ranging from designer/user interaction to comparison with baseline methods and issues with the library size, were discussed and addressed. With such a clear contribution and applicability to the CAD domain, I highly recommend the acceptance of this work.
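Reading the meta-review's pipeline description (detection module producing latent embeddings, decoded by a generation module), the library of concepts plausibly lives in a vector-quantization codebook. The sketch below shows only the generic nearest-neighbour lookup such a bottleneck performs; the sizes, and the omission of the straight-through gradient and commitment loss used in training, are simplifying assumptions rather than the paper's implementation.

```python
# Hedged sketch of a vector-quantization bottleneck: each detected concept
# embedding is snapped to its nearest codebook entry, so the finite codebook
# acts as the concept library. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
library_size, dim, k_qry = 1000, 64, 5
codebook = rng.normal(size=(library_size, dim))  # the learnable concept library

def quantize(queries):
    """queries: (k_qry, dim) latent concept codes from the detection module.
    Returns nearest codebook entries and their indices (the concept types)."""
    d2 = ((queries[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)      # nearest-neighbour lookup, linear in library size
    return codebook[idx], idx    # quantized codes would feed the generation module

codes, types = quantize(rng.normal(size=(k_qry, dim)))
print(types)         # indices of the library concepts selected for this sketch
print(codes.shape)   # (5, 64)
```

The linear cost of this lookup in the library size is what the authors invoke below when arguing that a large codebook poses no scalability issue.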
train
[ "DNjsQEtz3pa", "6CMalp0hzfV", "fwGk4PVmMOq", "YJIpcHAjWKH", "V1IkzoIVY9", "2tLJG1GhMZ0", "BCDckhqesQ9", "toACMtioLL", "zgLl2uUZxKO", "7YEnpZ_wdKL", "AWefkvOr9Zw", "9KyYZKpahHF" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their efforts in responding to reviewer comments.\n\nThe new submission reads better than the original with more explanations. I appreciate the new histograms for concept complexity. They do provide some new insights into the model.\n\nI already had a high score for the paper, so I won't change it.\n\nBut I would still ask the authors to add equation numbers and improve the summation indexing. In addition, I would also like to see preliminary qualitative results on the DeepMind dataset.\n\nI disagree with the authors on the comment that the DeepMind and the SketchGen datasets have similar complexity. In my experience, the DeepMind dataset has more diversity and complexity. Therefore, the concepts learned there should be a lot more complex.\n\nI think the authors have responded in detail to the other reviewers as well, so I still recommend acceptance.", " I appreciate the detailed response by the authors. I think all my questions are clarified and the revised Figure 1 makes the concept much clearer to me. No further questions from my side.", " **Q1**: Failure cases in Fig.9.\n\nSince symmetry regularity is not explicitly formulated by the objectives of library induction learning, there is no guarantee that the discovered interpretations always observe symmetry. However, the objective of modularity should implicitly encourage symmetry to improve reuse. Indeed, this is empirically confirmed by the numerous sketches that have been interpreted with diverse degrees of symmetry despite lacking an explicit requirement, as shown in e.g. Figs. 1, 3, 4, 9, 10, etc.\n\nAdditionally, in future work, symmetry, along with other expert-designed regularity priors, can be integrated into the learning objective to discover concepts with stronger patterns.\n\n\n**Q2**: Multiple completions.\n\nWe agree that providing multiple completions can be even more useful. Currently, our framework only provides one plausible completion based on the training data distribution. In future work, we may introduce random sampling schemes to the decoding process, to allow for multiple completions. Such randomized decoding has been explored in works like [1,2]. \n\n[1] Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-Predict: Parallel Decoding of Conditional Masked Language Models. In EMNLP.\n\n[2] Huiwen Chang, Han Zhang, Lu Jiang, Ce Liu, William T. Freeman. 2022. MaskGIT: Masked Generative Image Transformer. In CVPR.\n\n**Q3**: Fig.1 types.\n\nThanks for pointing this out. We have revised Fig.1 to include the complete DSL program representing the sketch graph.\n\n**Q4**: Index embedding.\n\nYes, the index is embedded through a positional embedding layer that turns an integer into a latent code.\n\n**Q5**: $R$ matrix.\n\n$R$ contains two parts: for the reference entries inside a concept, they are directly copied from $R_{T^1}$; for the references across concepts, they are copied from $R_{cref}$, which is computed by multiplying $R_{T^1}$ and $R_S$ (line 182). \n\n**Q6**: Dashed lines.\n\nThese are construction lines, according to the convention of SketchGraphs.\n\n**Q7**: Motivation of vector quantization.\n\nThe quantization of concept codes is necessary so that we can find a library of reusable concepts applicable to the whole dataset. 
Without this module, there are infinite variations of latent concept representations and no finite library can be obtained.\n", " **Q1**: Loss weights tuning.\n\nSince the loss terms are all normalized by their corresponding set sizes, significant tuning of loss weights can probably be avoided when training on new datasets. We didn't apply extensive parameter tuning to find these empirical weights, and found that the results are quite stable across a wide range of weight variations.\n\n**Q2**: Are the discovered concepts advanced primitives used by other generative models?\n\nBoth our approach and the other generative models operate on the same set of primitives, i.e., points, lines, arcs, etc., and their constraints. Since the other generative models output such primitives one-by-one, it's unclear what their advanced primitives are. \n\n**Q3**: Applying to free-hand sketches.\n\nYes, our approach can be used to convert freehand sketches into vectorized and structured sketches. This is shown in A.8 of the supplementary. Following the other generative models, we use the free-hand sketches as conditional input.\n\n**Q4**: Learning queries $[\\overline{q}_i]$.\n\nThese variables are trained end-to-end along with other network parameters.\n\n**Q5**: Definition of $\\mathbb{L}^1$.\n\nWe define the syntax of $\\mathbb{L}^1$ concepts as shown in List 1 and learn the content of the $\\mathbb{L}^1$ library by end-to-end inductive learning. \n\n**Q6**: Different compositions for a sketch.\n\nIndeed, as you have observed, there can be multiple compositions for a given raw sketch, so there is frequently no unique ground-truth design intent for a sketch. Our approach allows for discovering one possible decomposition for a sketch, guided by the necessary conditions of reconstruction and modularity. As can be seen from the results and noted by expert designers (see **Q1** of common questions), these discovered concepts are generally plausible.\n\n**Q7**: Three concepts per sketch.\n\nThe maximum number of concepts per sketch is specified by $k_{qry}$. Most results shown in the paper are obtained with $k_{qry} = 5$. There are quite a few samples with 4 or 5 concepts (see Figs. 5, 9, 10). A discussion of the impact of $k_{qry}$ on modularity and reconstruction quality is given in A.7 of the supplementary, where we can see that a larger $k_{qry}$ leads to more scattered concepts with reduced modularity (see also Fig.14). \n\n**Q8**: Scalability with large library size.\n\nWe note that the library is not directly consumed by transformer modules, but only used to provide quantized concept codes for a fixed number of queries per sample (i.e., $k_{qry}$). The quantization computation has linear complexity with respect to the library size. Therefore, there is no scalability issue with library size.\n", " **Q1**: How removed primitives are represented.\n\nThese primitives are simply removed from the input sequence. Therefore, neither our approach nor the baseline approach has any information about how long the target sequence should be. Indeed, the input to both methods is exactly the same.\n\n**Q2**: Scope of novelty.\n\nThanks for the question. We don't think that the implicit/explicit representation is novel at its most general level, as essentially all works that deal with explicit structures with neural networks have to transition from an implicit feature extraction stage to an explicit structure generation stage, as done in e.g. DETR and [1]. 
However, we believe that in terms of inductively learning declarative first-order programs as formulated by our DSL, the implicit-explicit representation and the structure-parameter separation have proved feasible through our work for the first time. Although we demonstrate the feasibility with CAD sketch concept learning, we believe that such programs are not restricted to CAD and can be interesting to a broader audience. We have mentioned this around line 348. We will revise to make our contributions clearer in the introduction.\n\n[1] Tian, Y., Luo, A., Sun, X., Ellis, K., Freeman, W. T., Tenenbaum, J. B., & Wu, J. (2019). Learning to infer and execute 3d shape programs. ICLR.\n\n**Q3**: Extension to higher order.\n\nOur framework can potentially be extended to higher-order induction through recursion. A general scheme has already been demonstrated in DreamCoder. More specifically, within our setting, given a first-order library, we can process the entire dataset and replace each raw sketch with a composition of first-order library concepts, to obtain a new dataset whose elements are first-order concepts. Our framework can then be applied to this new dataset to inductively learn second-order concepts that are composed of first-order concepts; applying this procedure recursively would build a hierarchy of concept libraries. We would like to explore this extension in future work.\n\n**Q4**: Limitations.\n\nThanks for pointing this out. We agree that structural validity considerations are essential; our concept discovery facilitates design but does not replace other critical procedures in CAD and manufacturing. We will emphasize this point in the revision.\n", " **Q1**: Reproducing the results.\n\nWe will release all code and data that can reproduce the results shown in the paper.\n\n**Q2**: Equation numbers, notations, index range.\n\nThanks for the comments. We will revise for better readability.\n\n**Q3**: Discussion of weakness.\n\nThanks for pointing this out. When new primitives or concepts need to be introduced, the whole model should at least be finetuned, if not retrained from scratch. Meanwhile, we note that a close variation of the vector quantization bottleneck can potentially avoid training the whole network for introducing new concepts, which is to use key-value pairs to localize the impact of new concepts and preserve learned ones, as shown in [1].\n\nWe will highlight the handling of constraint parameters in the main text. 
Moreover, we note that these parameters can be included in the generation model without difficulty; we have chosen to skip them for the sake of design simplicity, as they can be reliably recovered from the generated primitive shapes. \n\n[1] Frederik Träuble, Anirudh Goyal, Nasim Rahaman, Michael Mozer, Kenji Kawaguchi, Yoshua Bengio, Bernhard Schölkopf. 2022. Discrete Key-Value Bottleneck. arXiv:2207.11240.\n\n**Q4**: Encoding of autoregressive baseline.\n\nAs discussed in lines 311-312, we use the same encoding as ours for the baseline model for a fair comparison, as the previous works have each used slightly different encodings.\n\n**Q5**: Citation for lambda calculus formulation.\n\nThanks for pointing this out! We will add references to facilitate understanding. \n\n**Q6**: Fig.8, trivial $\\mathbb{L}^1$ concept containing a single $\\mathbb{L}^0$ typed element.\n\nGiven a sketch, all the $\\mathbb{L}^0$ elements are converted into encapsulating $\\mathbb{L}^1$ typed concepts by our network. Therefore, an extra $\\mathbb{L}^0$ element not fitting any modular structure will be contained within an $\\mathbb{L}^1$ concept to ensure reconstruction of the whole sketch. However, such *trivial* $\\mathbb{L}^1$ types are not common. We have plotted the histogram of $\\mathbb{L}^1$ type complexity in the revised draft (please see Fig.11(b) of the supplementary), and find that there are 10 such $\\mathbb{L}^1$ types out of the 1000 set.\n\nOn the other hand, if we vary the number of allowable concepts per sketch, as shown in the experiment of changing $k_{qry}$ (A.7 of the supplementary), we can see that modularity degrades with increased numbers of concepts. This, however, does not mean trivial $\\mathbb{L}^1$ types abound, as e.g. when $k_{qry}=10$ the percentage of *trivial* $\\mathbb{L}^1$ is 3.7%. \n\n**Q7**: Overlap in subfigure.\n\nThis is a layout problem with the graphics software, and there should be no overlap. We have fixed it in the revision.\n\n**Q8**: $b_{dash}$.\n\nIt means the line is a construction line and is drawn as dashed according to the SketchGraphs convention.\n\n**Q9**: Type casting. \n\nAs an example, if a line primitive is matched to a circle target, then the parameter code of the primitive is decoded by $dec_{param}()$ into a 256-dim code (Fig.7), out of which we only take the segment corresponding to the target circle type. The segment encodes quantized circle properties and is compared with the target circle properties for loss computation.\n\n**Q10**: $dec_{param}()$.\n\n$dec_{param}()$ mirrors the structure of $enc_{param}()$. It takes a latent parameter code as input and decodes it into a 256-dim code (Fig.7), which contains several segments corresponding to different primitive types. Each primitive property is represented by a 14-dim embedding code, from which a quantized property value is recovered by an inverse-embedding layer. During this inverse-embedding process, the logits are processed by argmax to query the quantized value. Following previous works (CAD-As-Language, SketchGen, Vitruvion), we always work with quantized attribute values as categorical variables during network training and inference.\n\nWe will revise to make the decoding process more explicit in the supplementary.\n\n**Q11**: 100% modularity in Table 1.\n\nWe compute modularity as the percentage of in-concept references, out of *correctly reconstructed* constraints (lines 281-282). 
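Concretely, a hedged sketch of how this metric could be computed; the record layout below is an illustrative assumption, not the paper's actual data structure:

```python
# Modularity as the fraction of in-concept references among the correctly
# reconstructed constraints (hypothetical record layout).
def modularity(constraints):
    """constraints: dicts with 'correct' (bool, reconstructed correctly) and
    'in_concept' (bool, both referenced primitives lie in the same concept)."""
    recon = [c for c in constraints if c['correct']]
    if not recon:
        return 0.0
    return sum(c['in_concept'] for c in recon) / len(recon)

example = [{'correct': True, 'in_concept': True},
           {'correct': True, 'in_concept': False},
           {'correct': False, 'in_concept': True}]  # ignored: not reconstructed
print(modularity(example))  # 0.5
```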
So Table 1 shows that, without the sharpness loss, very few constraints are properly reconstructed, and for these constraints the references are entirely within their encapsulating concepts, which is an expected result of the modularity-enhancing bias loss (Sec.5.3).\n\n", " We thank the reviewers for the constructive and thought-provoking comments, and for the recognition of the novelty and contribution of this work. In this post, we answer the common questions from several reviewers, and reply to the other questions separately. To support the discussion, we have updated the draft and supplementary with additional data and highlighted the changes; line numbers and figure/table indices in the discussion refer to the updated draft. Later we will revise the paper thoroughly based on all comments. \n\n**Q1**: How expert designers assess the discovered concepts. (**Jko6**, **5G2f**)\n\nWe have informally discussed the question of modular sketch concept discovery and reviewed the results obtained by our method with expert designers. Their comments are generally positive. First, they note that in daily practice they would build their own libraries of reusable modular components, so that when new design requirements arrive they can quickly adapt and compose the components to fit the changes. Second, they comment that such inductive modular concept learning over a large-scale dataset can be quite helpful for designers who serve a diverse range of customers with very different design requirements. Third, they note that the components discovered by our method are generally quite common for them, even though the complete sketches from the SketchGraphs dataset containing those components can be unusual in terms of being incomplete or inexact, which is understandable as the SketchGraphs dataset, collected from OnShape user creations, has a lot of practice drawings.\n\n**Q2**: Limited comparison against baseline works. (**vFn4**, **VMk7**)\n\nTo date, we still cannot find the official implementations of the compared autoregressive baselines. Therefore, we have implemented an autoregressive baseline following the works of SketchGen and Vitruvion, both of which consist of a primitive model and a constraint model pointing to the primitives. For a fair comparison, we ensure the autoregressive baseline uses the same input sequence encoding as ours, and has the same scale of transformer layers as ours. \n\n\n**Q3**: Impact of library size. (**vFn4**, **VMk7**)\n\nIn the revised draft, we provide the tests with different library sizes in Table 4 of the supplementary, from which we see that the library size (i.e., VQ codebook length) should be sufficiently large to cover the concept variations in 1 million sketches and allow for good reconstruction. Beyond that, extra library entries cover rare structures and have little impact on the overall results. \n", " **Summary:**\n\nThe paper aims to present a method for decomposing a CAD sketch into a set of _modular-concepts_ that are learned from a large corpus of CAD sketches. The modular concepts are compositions of lower-level primitives connected via constraints.\n\nIn order to do this, the authors design a DSL for expressing these modular-concepts, inspired by Lambda-Calculus. Then they use a sequence encoder + vector-quantized compression as a bottleneck to force the network to compress the sketch into a bunch of _concepts_. A sequence decoder takes concepts from the concept library and instantiates them with parameters. 
The whole pipeline is trained end-to-end under a (permutation-equivariant) reconstruction loss, with permutation equivariance guaranteed by graph matching. **Strengths:**\n\n- The paper has a very original idea - it uses previously existing building blocks like Transformers, Vector Quantization and Hungarian Matching for set-structured data - but in a creative way. I commend the authors on this.\n- I found the paper very logically structured with a good introduction to the problem the authors are trying to set up and solve. It clearly highlights how it is different from previous work [1,2], which relies on reconstruction from lower-level primitives (line, point, arc...).\nI am not an expert in compilers/grammars/linguistics, so I am not sure about the quality of the references in that domain, but they seem sufficient to me as an introduction to the state of the field, although they seem a bit old judging from the years of the papers cited [3,4,5]. I cannot recite off the top of my head papers the authors might need to cite in order to get a better view of the current (~2022) landscape, but perhaps other reviewers could chime in here.\n- The qualitative results are plentiful and seem to support the paper well in terms of the diversity and quality of the discovered concepts. \n- The authors use the supplementary material very well and I think there are enough details to reimplement the paper, but perhaps not enough to reproduce the results.\n- The qualitative figures are top-notch at showcasing what the method does.\n\n**Weaknesses:**\n- While I commended the authors on the general writing, some writing is very stilted. For starters, the authors do not use equation numbers in all places, which makes it harder to reference equations.\n- Continuing with equations, the notation seems a bit lacking - using $\\mathbf{q}$, $\\hat{\\mathbf{q}}$, ${\\mathbf{q'}}$ makes it a bit harder to read the figures as well as the equations. Perhaps some cleaning up would be nice - as an example p -> primitives, e -> (contextualized) embeddings, and you can retain q and q' for unquantized and quantized versions of the concepts respectively. \n- Again continuing with equations, you mention indices implicitly often with \"where the [index] iterates over ...\". This often makes the sums harder to parse. It might be better to list out the indices explicitly. As an example, on L136, it is not really clear until one reads the supplementary what exactly $t.x$ runs over. \n- There are not a lot of comparisons, but that is mostly fine as there is not a lot of code from competitors.\n- The authors do not discuss the weaknesses very well - to me a major weakness would be that a) the number of concepts per model is fixed, b) the number of primitives per model is fixed, and that to change both one needs to retrain the model. Additionally, the authors should mention explicitly that they do not handle constraints with parameters and engage in a preprocessing step. This is not a big deal as others simply drop them. But I believe this should be mentioned explicitly. Currently it is a throwaway line in the supplementary.\n- The authors do not describe their autoregressive baseline that well - how is the encoding performed - like the current submission or like SketchGen [1] or Vitruvion [2]? Please try and explain it in greater detail.\n\n**Minor Weaknesses:**\n- The paper would benefit by explicitly stating that the constraints are described by a lambda calculus and adding a citation to an introductory book on the subject. 
Right now it is unclear why constraints have a $\\lambda$-dot notation.\n\n\n\nOverall, the paper is very original, is of high quality, is reasonably clear, and I hope is significant in the field.\n\n**References:** \n\n[1] Para W, Bhat S, Guerrero P, Kelly T, Mitra N, Guibas LJ, Wonka P. Sketchgen: Generating constrained cad sketches. Advances in Neural Information Processing Systems. 2021 \n\n[2] Seff, A., Zhou, W., Richardson, N., & Adams, R.P. (2021). Vitruvion: A Generative Model of Parametric CAD Sketches. ArXiv, abs/2109.14124.\n\n[3] Kevin Ellis, Daniel Ritchie, Armando Solar-Lezama, and Josh Tenenbaum. Learning to infer graphics programs from hand-drawn images. In Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.\n\n[4] Kevin Ellis, Maxwell Nye, Yewen Pu, Felix Sosa, Josh Tenenbaum, and Armando Solar-Lezama. Write, execute, assess: Program synthesis with a repl. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.\n\n[5] Lazar Valkov, Dipak Chaudhari, Akash Srivastava, Charles Sutton, and Swarat Chaudhuri. Houdini: Lifelong learning as program synthesis. In International Conference on Neural Information Processing Systems, 2018. - On P16, Fig. 8, subfig 4 from the top. We see the salmon-colored concept, which is basically one _Equal_ constraint. This seems sketchy. There are many other concepts that basically amount to a concept containing a single constraint. My first question is how many concepts were learned for this qualitative result, and could you please present a histogram of how many primitives/constraints each concept contains when training with different numbers of allowable concepts? It seems weird that the network would basically call an $\\mathbb{L}^0$ type an $\\mathbb{L}^1$. Does it have something to do with the data? I would like an explanation.\n- The second question relates to the same subfigure - why do we have overlapping concepts - is this a drawing artifact or something significant?\n- In List 1, what exactly is $b_{dash}$? I can assume it means whether to draw the line as dashed, but it would be much better to have an explanation.\n- You mention type casting in P6, L214, but how exactly does that help you compare different types? Would be nice to have an example. Like if a primitive is a line and the target is a circle, how exactly is the loss computed?\n- In P5, L198, \"decoding layer $dec_{param}(·)$ that is inverse of $enc_{param}(·)$ in Sec.4.1.\". What exactly does inverse mean here? Do you convert logits to actual values by argmaxing and querying the unquantized value? Do you take a weighted sum of the logits? Please explain in more detail.\n- Could you explain the 100% modularity in Table 1, when removing the sharpness loss? What exactly does it mean? As it is just a bunch of primitives (the constraint F1 score is too low), how exactly does the concept become 100% modular?\n- (*Speculative*) This question is more out of curiosity, but would it be possible to run the method on the dataset from [1]? 
This dataset is a lot more complex than SketchGraphs, and it would be interesting to see the concepts that emerge from this more complex dataset.\n\n**References:**\n\n[1] https://github.com/deepmind/deepmind-research/tree/master/cadl **Negative Social Impact**\n\nI foresee no negative social impact from this work unless our Future Robot Overlords are unhappy with the concepts generated by running this method.\n\n**Technical Limitations**\n\nI would like to delineate between the limitations of this paper and those of the method.\n\n_Paper:_\n\n- The paper severely lacks good comparisons to other methods. While this is mainly because of a lack of code, it is still a limitation.\n- The paper lacks comprehensive ablations - while the authors do a good job of ablating their contributions and how much they affect the modularity of their _concepts_, practitioners might also want ablations relating to how many layers they need, how to tune the VQ, how to rank concepts, and so on. This does not hold me back from accepting the paper, but the paper might be improved if the authors add to the paper their negative results as well - e.g. that having too many codes in the VQ led to ..., or that too many layers never converged. I think adding such practical advice to the paper will make it more useful.\n\n_Method:_\n\n - The limitations of this method would be having to retrain for every desired level of modularity.\n - Unless I am very wrong, I guess that the matching part of the algorithm is somewhat unstable (the authors can correct me here). This is based on my experiences with DETR, and requires designing the cost matrices with considerable care.", " This paper contributes an end-to-end multi-module architecture that learns design concepts from CAD sketches, a specific type of \"sketches\" that are composed of well-formed primitives and constraints to be used for further CAD modelling. The author(s) define a DSL to encode CAD sketches, and construct implicit and explicit representations of deep-learning based methods to detect design concepts and generate sketch graphs. The implicit and explicit representations respectively correspond to the two primary modules in the architecture, which first transforms raw sketches into learnt discrete latent representations (i.e., implicit), and subsequently a generation module takes these implicit representations and generates the final graph in an explicitly defined format.\n\nThis model is trained end-to-end on self-supervised objectives contributed by the author(s). The author(s) evaluated the models to have learnt reasonable design concepts and achieved greater auto-completing accuracy over a representative autoregressive baseline, on an established large-scale dataset of CAD sketches.\n Strengths:\n- Novel and well-designed architecture that both utilises the latest advances in deep learning and generates human-interpretable sketch graphs\n- Detailed and well-defined DSL for encoding and interpreting CAD sketches\n- Showed promising results in balancing reconstruction accuracy of graphs and ensuring modularity of outputs\n- Achieved better performance on a significant auto-completing task over a representative baseline\n\nWeaknesses:\n- The greatest weakness of the current paper is the lack of explicit evaluation of the learnt design concepts (which is the primary claimed goal that the model should achieve). 
While I appreciate that the authors have shown extensive qualitative examples in the paper and supplementary materials, I am curious to see whether a designer and/or domain expert would find these learnt concepts useful, as some of them are relatively trivial (e.g., rectangles).\n- When comparatively evaluating autoregressive baselines and the proposed method on the auto-completing task, how were the \"removed primitives\" represented respectively? If this is done in the proposed method by replacing existing primitives with an additional 'mask' class while keeping other aspects of inputs unchanged, it might be slightly unfair to the autoregressive baseline. This is because information about the overall length or some further details about the sketch to be auto-completed might be leaked through the masked input to the proposed method. \n- The proposed method tackles a very specific task in an expert domain. However, I believe there could be greater applications for the proposed framework of implicit / explicit representation, which makes this a more minor weakness.\n- Related to the above point on the lack of generality: the current description of the exact scope of novelty in the paper is not clear. Is this implicit / explicit representation framework entirely novel at the highest level, or are there instantiations of it that already exist in other domains? I also recommend that the author(s) list the specific contribution(s) at the end of the introduction.\n- The current method only works at the first order, which the author(s) acknowledged. I wonder if they can further discuss how the framework might be applied (potentially recursively) to discover higher order concepts. Most of my questions are already embedded in the Weaknesses section above. To reiterate more directly and clearly:\n\n- Is this implicit / explicit representation framework entirely novel at the highest level, or are there other instantiations that already exist in other domains? (i.e., Is your contribution only applying this framework to CAD sketches, or did you contribute this framework in general?)\n- Did a designer and/or domain expert review the learnt concepts formally or informally? If so, did they find them to be useful?\n- Would you further clarify how the \"removed primitives\" were represented in the auto-completion task in the proposed method?\n- How might this framework be applied to discover higher order concepts? I think it is generally reasonable to consider the targeted task of CAD sketch modelling to not have significantly negative societal impacts, as the author(s) have outlined. However, one might further consider the minor impact when these models are deployed in real CAD use cases where the user might become over-reliant on trusting the model as an absolute source-of-truth. This can have varying degrees of negative consequences depending on the final application of the CAD models, and can become harmful such as when the generated CAD model is structurally unsafe. ", " The authors proposed a method to decompose CAD sketches into modular sub-concepts for design intent parsing. Specifically, the task is treated as a problem of program library induction. The goal is to discover sketch concepts (modular structures) from raw sketches in the format of sketch graphs. A DSL is formulated to enable a concise way to represent sketch concepts. There are two key factors that make concept searching trainable, i.e., the dual representation and the parameterized structure for concepts. 
In particular, the authors utilize a transformer-based object detection framework for implicitly detecting and encoding concepts. Then, a generation module explicitly generates shapes corresponding to the found concepts. Finally, a self-supervised reconstruction loss coupled with two other losses (a concept quantization loss and a modularity enhancement loss) serves as the objective.\n\nExperiments are conducted on a subset of the SketchGraphs dataset. Reconstruction accuracy and sketch concept modularity are used for evaluation. The proposed method is also capable of auto-completion for CAD modeling. Strengths\n\n- Parsing raw sketch graphs into sketch concepts to reveal design intents is meaningful. This work proposes a method to explicitly discover sketch concepts, which seem different from the shape primitives used by previous works. It is more akin to segmenting sketches into semantic parts (sketch concepts).\n\n- The proposed method can be trained in a self-supervised manner, which facilitates easier network training.\n\n- The paper is well organized and well written; I really enjoyed reading it.\n\nWeaknesses\n\n- The total loss is a bit complex, with multiple losses combined, and the weights are set empirically, which potentially requires effort to fine-tune, especially when the dataset is changed. This could hinder practical usage.\n\n- The experiments are somewhat limited. It would be better to provide more baseline methods for comparison if possible. In addition, the experimental analysis is limited too, especially regarding the design intent interpretation. So it is unclear how the discovered sketch concepts support design intent parsing. (1) I am curious whether the discovered sketch concepts are actually some advanced primitive shapes used by other generative models for CAD sketches, such as [6][13][17]? \n\n(2) The input sketch is an $L^0$-typed instance, i.e., a CAD sketch. Could the proposed method possibly be applied to free-hand sketches?\n\n(3) How is the set of concept queries $[\bar{q}_i]$ learned? \n\n(4) How is the library $L^1$ defined? And what is the impact of the library size, which is currently 1000?\n\n(5) Is the output (a set of sketch concepts) unique for an input sketch? Different sets of basic shapes can compose the same raw sketch, and multiple answers might all be correct. Is it really safe to say that the discovered sketch concepts are well aligned with ground-truth design intents given a high reconstruction score and high sketch concept modularity?\n\n(6) Is there a scalability issue? I saw that most of the cases shown in this paper produce three sketch concepts for a single sketch input, and I suspect a scalability issue may arise when the library size increases (1000 for the current setting), since a transformer-based framework is used. As we know, the number of tokens has a significant impact on the computational cost.", "This paper proposes a novel learning-based approach to discover modular concepts (i.e., modular structure) from raw CAD sketches. To tackle the problem, the authors first define a domain-specific language (DSL) such that modular concepts can be represented in a network-friendly manner. \n\nA Transformer-based detection module takes in a CAD sketch sequence and outputs a set of latent embeddings, which are further decoded to parameterized modular concepts by a generation module. 
The whole model is trained in an end-to-end self-supervised manner, using a reconstruction loss plus regularization terms.\n\nThe authors perform experiments on a large-scale CAD sketch dataset and mainly demonstrate its applications for design intent interpretation (i.e., parsing modular concepts from a raw CAD sketch) and auto-completion (i.e., completing a partial CAD sketch).\n Strengths:\n\n- From the motivation side, I agree that CAD design intents, which are implicitly encoded in the CAD sketches, are best reflected in mid-level recurring patterns. It's an intellectually interesting problem whether one can discover these concepts from raw sketches. As perhaps a first attempt, this paper formulates this problem in a principled way by defining a domain-specific language. It brings new knowledge to the field of ML-aided CAD design.\n- The main technical contribution, from my point of view, is the concept representation that is based on the defined DSL. The proposed framework is built upon such a representation and there are many interesting technical designs: 1) a DETR-like detection module in the context of CAD sketches; 2) the separation of topological structure and parameterization; 3) the formulation of loss functions based on reference matrices.\n\nWeakness:\n\n- The design intent interpretation gives good results on some examples (Fig. 3) but seems to fail in many cases (Fig. 9). For example, for those symmetric CAD sketches (e.g., Fig. 9 first row first column, sixth row first column, sixth row fourth column), the parsed decompositions do not respect the symmetry, even though they are relatively simple sketches. The question then is: how often does the method fail? It would be helpful to have a user study to see what percentage of the results agrees with human interpretation, but I understand the difficulty of doing that.\n- Auto-completion is indeed a crucial feature in CAD design and I agree that the modular concepts should be helpful if they are used properly. Conceptually, a CAD auto-completion method should be able to give multiple plausible suggestions for the user. However, this method is deterministic --- it only generates one completion result given the partial input. It doesn't seem easy to extend this work to allow multiple possible outputs, which limits its practical potential.\n- The presentation of the defined DSL could be improved. I would suggest explicitly writing out all the types for the Fig. 1 example such that readers can align the abstract concepts with the graphical illustrations.\n\nOverall, I think that the paper formulates an intellectually interesting problem in a principled way and tries to solve it with many unique technical designs. Though the results are not good enough yet, leaving it still far from practical usage, I think they are acceptable for a first attempt at this particular problem. In addition, I do think its technical contributions are novel and beneficial to the community. \n - Line 73-77, the description of the $L^1$ composite type is confusing, probably because it's an abstract concept. I can understand it conceptually but find it hard to match to the mathematical formulation. If possible, I would suggest explicitly writing out the $L^1$ composite types in the Figure 1 example such that readers can understand this concept more precisely.\n\n- Line 134-135, \"each constraint reference as a primitive index is directly embedded as a code\" - can you be more specific? How is a primitive index embedded as a code? 
Is it via positional encoding?\n- Line 184-185, \"Collectively...R of shape $(2k_{qry}\cdot k_{L^0})\times(k_{qry}\cdot k_{L^0})$\". How is the complete reference matrix R generated? From Line 168 and Line 175, the in-concept and cross-concept reference matrices $(R_{T^1}, R_S)$ are generated separately. So is R just composed of $R_{T^1}$ and $R_S$?\n\n- Figure 5(a), what is the dashed line (on the left)? \n- Sec. 5.2, what's the motivation for using a vector quantization loss? In other words, what's the benefit of building a discrete concept code space? Additionally, there is no ablation study on this loss term. \n I think limitations are well discussed and I do not foresee any potential negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "2tLJG1GhMZ0", "fwGk4PVmMOq", "9KyYZKpahHF", "AWefkvOr9Zw", "7YEnpZ_wdKL", "zgLl2uUZxKO", "zgLl2uUZxKO", "nips_2022_R7qthqYx3V1", "nips_2022_R7qthqYx3V1", "nips_2022_R7qthqYx3V1", "nips_2022_R7qthqYx3V1", "nips_2022_R7qthqYx3V1" ]
nips_2022_ITqTRTJ-nAg
HyperMiner: Topic Taxonomy Mining with Hyperbolic Embedding
Embedded topic models are able to learn interpretable topics even with large and heavy-tailed vocabularies. However, they generally hold the Euclidean embedding space assumption, leading to a basic limitation in capturing hierarchical relations. To this end, we present a novel framework that introduces hyperbolic embeddings to represent words and topics. With the tree-likeness property of hyperbolic space, the underlying semantic hierarchy among words and topics can be better exploited to mine more interpretable topics. Furthermore, due to the superiority of hyperbolic geometry in representing hierarchical data, tree-structure knowledge can also be naturally injected to guide the learning of a topic hierarchy. Therefore, we further develop a regularization term based on the idea of contrastive learning to inject prior structural knowledge efficiently. Experiments on both topic taxonomy discovery and document representation demonstrate that the proposed framework achieves improved performance against existing embedded topic models.
Accept
Hyperbolic embeddings were a fascinating alternative to Euclidean embeddings that never seemed to take off, despite having significant conceptual advantages in representing the oddities of semantics. I am happy to see more work on curved spaces as a tool for semantic analysis! This work has strong reviews, and reviewers were generally happy with the author responses. I'd like to see it published.
train
[ "k98ljqSD2UE", "xcGABl-COv", "HxqTc5I8xSb", "l6y4rrmFyDD", "rpJJWCv0LkS", "dY2NVL2AS24", "p5H_zxNobIj", "_UAM9yI8Asb" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' answers to my questions, and I agree with those, and hope that can improve the paper a bit more. The discussion on the limitation is on point, and I thought the same about antonymy and meronymy, so a caveat would be this might need to be used together with another embedding if those are important for a certain domain. Clarifying this limitation would be helpful to future readers, but this paper does a solid job within the proposed problem.", " **Q2:** \nWhat is the impact of the positive and negative selection in the contrastive loss?\n\n**A2:** \nIn the standard contrastive loss, each example picks the positive sample from one of its own augmented views and selects other examples in the mini-batch as negative samples. Generally, the number of negative samples can have a noticeable impact on performance. Therefore, in our hyperbolic contrastive loss, we keep using each node's first-order neighbors as its positive samples, but select a different number of negative samples from the non-first-order neighbors to study its impact.\nThe comparative results are presented below.\n\n**Table 4:** performance on document clustering. (Dataset: 20NG)\n| **Number of negative samples** | **km-Purity** | **km-NMI** |\n|--------------------------------|---------------|------------|\n| 128 | 0.4237 | 0.4240 |\n| 256 | 0.4498 | 0.4331 |\n| 512 | 0.4323 | 0.4285 |\n| All non-first-order neighbors | 0.4522 | 0.4496 |\n\n**Q3:**\nPlease address my confusion regarding the Lorentz vs. Poincare approaches.\n\n**A3:** \nWe acknowledge that the equivalence between Poincaré and Lorentz similarities may not be well understood from the mathematical formulation alone, so we refer to Figure 1 of \"*Nickel, M. and Kiela, D., Learning continuous hierarchies in the Lorentz model of hyperbolic geometry, ICML 2018*\" for an intuitive understanding. Figure 1b clearly describes the projection of the geodesic on the Lorentz surface to the geodesic in the Poincaré disc, and the length of the geodesic (the shortest distance between two points) in the Poincaré disc is defined by $d_{\\mathcal{P}}(x, y)$ in Equation (1), the length of the geodesic on the surface of a Lorentz space is given as $d_{\\mathcal{L}}(x, y)$ in Equation (5). These formulations are followed by our manuscript. Since the points in the Poincaré disc and the points in the Lorentz space can be mapped to each other (all geometric properties including isometry are preserved), we say the two models are mathematically equivalent. However, it does not mean that the actual lengths calculated by $d_{\\mathcal{P}}(x, y)$ and $d_{\\mathcal{L}}(x, y)$ are exactly the same. We will also add the corresponding diagram in the revision to help readers understand this better.\n", " We sincerely appreciate your valuable comments and suggestions. As you mentioned that the gains in Figure 3 are generally small, we will add corresponding examples to visualize the content of learned topics. In addition, we will try our best to address the reorganization and presentation issues in the revision based on your advice. And your main questions have been addressed as below.\n\n**Q1:** \nWhat is the impact of changing aspects of the taxonomy (more or less, or regrouping concepts)?\n\n**A1:** \nTo investigate the impact of changing aspects of the taxonomy, we conducted complementary experiments. Specifically, in the manuscript we use a concept taxonomy with a depth of 5 for all experiments. 
Here we additionally consider three concept taxonomies with depths set to 3, 4, and 6, respectively. The following tables show the document clustering performance and topic quality of HyperMiner-KG guided by taxonomies of different scales. \n\n**Table 1:** performance on topic coherence. (Dataset: 20NG)\n| **Taxonomy Structure** | **Layer 1** | **Layer 2** | **Layer 3** | **Layer 4** | **Layer 5** | **Layer 6** |\n|------------------------|-------------|-------------|-------------|-------------|-------------|-------------|\n| 83-12-2 | 0.4180 | 0.2893 | 0.2043 | - | - | - |\n| 325-83-12-2 | 0.4004 | 0.3788 | 0.2667 | 0.1878 | - | - |\n| 560-325-83-12-2 | 0.3160 | 0.3568 | 0.3223 | 0.2561 | 0.1925 | - |\n| 620-560-325-83-12-2 | 0.2329 | 0.2440 | 0.2445 | 0.2614 | 0.2154 | 0.1954 |\n\n**Table 2:** performance on topic diversity. (Dataset: 20NG)\n| **Taxonomy Structure** | **Layer 1** | **Layer 2** | **Layer 3** | **Layer 4** | **Layer 5** | **Layer 6** |\n|------------------------|-------------|-------------|-------------|-------------|-------------|-------------|\n| 83-12-2 | 0.8200 | 0.9400 | 1.0000 | - | - | - |\n| 325-83-12-2 | 0.5921 | 0.8124 | 0.9267 | 0.9600 | - | - |\n| 560-325-83-12-2 | 0.2278 | 0.3896 | 0.5833 | 0.7162 | 0.8000 | - |\n| 620-560-325-83-12-2 | 0.1681 | 0.0890 | 0.1245 | 0.1540 | 0.3568 | 0.7400 |\n\n**Table 3:** performance on two evaluation metrics for the clustering task. (Dataset: 20NG)\n| **Taxonomy Depth** | 3 | 4 | 5 | 6 |\n|--------------------|---|---|---|---|\n| **km-Purity** | 0.4325 | 0.4637 | 0.4498 | 0.4269 |\n| **km-NMI** | 0.4152 | 0.4473 | 0.4331 | 0.4286 |", " We appreciate your constructive comments and suggestions, which are helpful for us to further improve the presentation quality of our paper. The weaknesses have been addressed as follows.\n\n**W1:**\nThe variants HyperETM, HyperMiner, and HyperMiner-KG are not explained, which thus reduces the reproducibility of the experiments.\n\n**A1:** \nWe apologize for not explaining the variants HyperETM, HyperMiner, and HyperMiner-KG in the manuscript. HyperETM and HyperMiner are variants of ETM and SawETM, respectively. In both variants, words and topics are embedded into a shared hyperbolic space, so that the semantic hierarchy among words and topics can be better captured by the distance between their embeddings. In addition, HyperMiner-KG is developed on top of HyperMiner to inject external prior structural knowledge through our proposed hyperbolic contrastive loss. We will add these explanations in the revision.\n\n**W2:** \nThe topic taxonomy (Figure 4b) is hard to relate to without document examples and/or a summary of the topics.\n\n**A2:** \nWe agree with you that Figure 4b could be difficult to interpret without a summary of the topics. Since each topic in our model (HyperMiner-KG) is learned under the guidance of a prior concept, we can annotate the name of the corresponding concept to evaluate whether the learned topic is successfully influenced by the prior concept. This issue will be addressed in our revision. \n\n**W3:** \nA discussion of side-effects is missing. As the hyperbolic space is enforced, the freedom of representing a topic decreases. As such, there can be side-effects, and a discussion of desired properties in topic modeling and of side-effects from this approach would be beneficial. For example, is the tree structure the best choice, or can it be a rather limiting factor?\n\n**A3:** \nThe side-effect discussion is meaningful and necessary. 
Indeed, the freedom of representing a topic might somewhat decrease as our approach enforces the hyperbolic space, but we believe that the capacity of such a space is sufficient for topics to learn good representations. More importantly, the freedom of representing every single topic may not be a decisive factor for performance in topic modeling tasks. Instead, the relative distance between topics and words can be more significant, since we expect it to well reflect semantic relatedness, which is critical for learning interpretable topics. In view of the inherent semantic hierarchy of words and the topic hierarchy in hierarchical topic modeling, a distance function with the property of expressing such hierarchical relationships would be preferable. The distance metric in hyperbolic space meets exactly this need, and our experimental results also demonstrate that hyperbolic space is suitable for the task of topic modeling. Further, considering that semantic relations include not only hypernymy but also antonymy and meronymy, the tree structure may not be the best, but at least it is effective in hierarchical topic modeling. ", " We appreciate your detailed comments and suggestions. We will take your advice to include more discussion of related work and fix all identified typos in the manuscript. Regarding the question of interest to you, we make the following statements.\n\n**Q1:** \nIf the ability of Euclidean space is bounded by its dimensionality, then what interests the reviewer is: if the dimensionality of a hyperbolic space is $d$, what dimension would a Euclidean space need to achieve comparable performance in the topic modeling task (e.g., $k \times d$ where $k$ is a fixed number)?\n\n**A1:** \nWe acknowledge that it can be difficult to theoretically analyze the dimension required for a Euclidean space to achieve comparable performance when the dimensionality of the hyperbolic space is $d$. Although we can mathematically show that hyperbolic space has the advantage of more spacious room than Euclidean space (e.g., the area of a disc of radius $r$ in two-dimensional Euclidean space is $\pi r^2$, whereas the area of a disc in two-dimensional hyperbolic space is $2\pi(\cosh r - 1) = \pi(e^{r} + e^{-r} - 2)$, indicating that the area expands exponentially with $r$ in hyperbolic space while it grows only polynomially in Euclidean space), this does not guarantee better performance on topic modeling tasks. Empirically, however, we may gain some insights from comparative experiments. In our manuscript, Table 3 presents the comparison results of the two spaces in the document classification task under different embedding dimension settings. Here we give additional results on the quality of learned topics in the two spaces at different dimensionalities. \n\n**Table 1:** comparative performance on topic coherence. 
(Dataset: 20NG)\n| **Method** | **Dimensionality** | **Layer 1** | **Layer 2** | **Layer 3** | **Layer 4** | **Layer 5** |\n|------------------------|--------------------|-------------|-------------|-------------|-------------|-------------|\n| | 2 | 0.1770 | 0.1251 | 0.1031 | 0.0470 | 0.0563 |\n| **SawETM** | 5 | 0.1800 | 0.2379 | 0.1547 | 0.1095 | 0.0495 |\n| **(Euclidean space)** | 20 | 0.2265 | 0.2488 | 0.1744 | 0.1398 | 0.0654 |\n| | 50 | 0.3044 | 0.2729 | 0.1843 | 0.1634 | 0.0679 |\n| | 2 | 0.1800 | 0.1729 | 0.1388 | 0.1587 | 0.1871 |\n| **HyperMiner** | 5 | 0.2004 | 0.2654 | 0.2618 | 0.2005 | 0.1763 |\n| **(Hyperbolic space)** | 20 | 0.2440 | 0.3072 | 0.2659 | 0.2472 | 0.1836 |\n| | 50 | 0.2954 | 0.3437 | 0.3101 | 0.2753 | 0.1843 |\n\n**Table 2:** comparative performance on topic diversity. (Dataset: 20NG)\n| **Method** | **Dimensionality** | **Layer 1** | **Layer 2** | **Layer 3** | **Layer 4** | **Layer 5** |\n|------------------------|--------------------|-------------|-------------|-------------|-------------|-------------|\n| | 2 | 0.0085 | 0.0107 | 0.0188 | 0.1200 | 0.5800 |\n| **SawETM** | 5 | 0.0579 | 0.0607 | 0.0857 | 0.1600 | 0.5200 |\n| **(Euclidean space)** | 20 | 0.1551 | 0.1945 | 0.2632 | 0.3546 | 0.6800 |\n| | 50 | 0.1706 | 0.2367 | 0.3809 | 0.5471 | 0.7400 |\n| | 2 | 0.0137 | 0.0166 | 0.0241 | 0.1500 | 0.6200 |\n| **HyperMiner** | 5 | 0.0596 | 0.0831 | 0.1838 | 0.3400 | 0.5600 |\n| **(Hyperbolic space)** | 20 | 0.1684 | 0.2148 | 0.2971 | 0.4733 | 0.7600 |\n| | 50 | 0.1987 | 0.3679 | 0.5133 | 0.6362 | 0.7800 |\n", " This paper studies the hierarchical topic modeling problem. Existing studies generally learn topic representations in the Euclidean space, which might lead to some limitations in capturing hierarchical relations. To be more specific, the ability to model complex patterns is inherently bounded by the dimensionality of the embedding space. On the other hand, side information such as a taxonomy of concepts is sometimes available to guide the learning of hierarchical topics, which might be challenging to preserve in the Euclidean space. As a consequence, this paper proposes a novel framework that learns word and topic representations in the hyperbolic space. Experiments on three public text datasets and auxiliary ablation/case studies demonstrate the effectiveness of the proposed framework. Strengths:\n1. This paper studies an important task. Hierarchical topic modeling is a well-studied yet important problem that has the potential to benefit a wide spectrum of downstream applications such as classification, named entity recognition, etc.\n2. The motivation of the solution is very clear. As is well known, Euclidean space falls short when it comes to modeling hierarchical structures. In contrast, hyperbolic space has the nice properties of hierarchy awareness and spacious room, which benefit hierarchical relationship modeling.\n3. The experiments are thorough. They are sufficient to justify the superiority of hyperbolic space under this problem setting. An ablation study between HyperMiner and HyperMiner-KG validates the modeling of hierarchical external knowledge. Meanwhile, the experiments also show that the proposed method is extendible to all kinds of ETM models.\n4. The paper is carefully written and well organized.\n\nWeaknesses:\n1. 
More related works could potentially be included and discussed [1].\n\n[1] Hierarchical Topic Mining via Joint Spherical Tree and Text Embedding\n\nMinor issues:\n- Typos\nIn line 137, \"Sine\" should be \"Since\"\nIn Appendix line 93, \"wth\" should be \"with\" 1. If the ability of Euclidean space is bounded by its dimensionality, then what interests the reviewer is: if the dimensionality of a hyperbolic space is $d$, what dimension would a Euclidean space need to achieve comparable performance in the topic modeling task (e.g., $k * d$ where $k$ is a fixed number)? As mentioned in the appendix, the main limitation of this work is the mismatch problem between the given structural knowledge and the target corpus. To provide proper guidance for mining an interpretable topic taxonomy, the prior structural knowledge should be well matched with the corresponding dataset.", " This paper improves embedded topic modeling by using a hyperbolic space. The proposed approach particularly helps in capturing the hierarchical structure of words and topics. In addition, the authors propose a revised contrastive loss to inject prior knowledge. The evaluation compares the proposed approach with 6 existing embedding and non-neural-embedding-based topic modeling approaches on 4 datasets. The proposed approach is shown to outperform the existing approaches in most configurations. Some analyses, including the 2D visualization, show that the proposed approach performs as intended and encodes the hierarchical information. Strengths\n- S1. The proposed approach encodes hierarchical information, and can incorporate existing knowledge by adopting the hyperbolic space into an existing neural topic modeling approach.\n- S2. The paper has good presentation and readability in general.\n- S3. The solid empirical results show the performance of the constructed topic model, and the analysis shows that the approach works as intended.\n\nWeaknesses\n- W1. The variants HyperETM, HyperMiner, and HyperMiner-KG are not explained, which thus reduces the reproducibility of the experiments.\n- W2. The topic taxonomy (Figure 4b) is hard to relate to without document examples and/or a summary of the topics.\n- W3. A discussion of side-effects is missing. As the hyperbolic space is enforced, the freedom of representing a topic decreases. As such, there can be side-effects, and a discussion of desired properties in topic modeling and of side-effects from this approach would be beneficial. For example, is the tree structure the best choice, or can it be a rather limiting factor? The paper is quite straightforward and I appreciate it. Please take a look at W1-W3 for suggestions to further improve the presentation. I cannot spot any negative societal impact of this work.", " This paper presents a neural hierarchical topic model that uses hyperbolic embeddings. The paper uses a Gamma-Poisson factorization to facilitate optimizing a variational objective. They augment this objective with a taxonomy-aware contrastive loss as a way of addressing hyperbolic manifold learning. Across four datasets, they compare against six baselines. Averaged across multiple runs, they demonstrate small but consistent improvements in their metrics (all variances are fairly small). Update post-response: I thank the authors for their answers to my questions. \n\nI understand there's a limited amount that can be done during the author response. 
That said, while the additional taxonomy experiments are encouraging, for inclusion in a paper they would need additional discussion. \n\nFor whatever revision is next, I suggest that while the proposed clarifications of Poincare vs. Lorentz may be helpful for some readers, the main question is, \"is this level of detail necessary for the main paper?\" If the answer is yes, then this point needs to be made more immediately obvious. \n\n\n=========================\n\nOverall, this paper has a clear mathematical overview. For the singular main contribution (hyperbolic hierarchical topic modeling) this paper presents a decent amount of experimental evidence that the proposed approach is effective on purely quantitative measures. While this paper does not address the impact of the proposed approach on the topic model as a language model (perplexity), the classification results give some evidence as to the effect on document modeling.\n\nHowever, the lack of examples and experimentation when it comes to the taxonomy is noticeable. This is especially the case given the tied nature of the model to the taxonomy (L167-169). This is important not just for overall completeness, but to better understand the results that have been presented. The gains, e.g., in Figure 3, are generally small but consistent. That alone is not a negative. However, those smaller gains are less grounded without additional examples or experimentation.\n\nThis paper has some reorganization and other presentation issues. While these issues could arguably be viewed as \"easily addressable,\" they do impact the ability of a reader to understand what the paper's take-aways are.\n\n* The current organization of the paper suggests that the authors apply hyperbolic embeddings to a Poisson Gamma belief network only. However, as the results show, this is not the case. I recommend reorganizing so that the main aspect of \"hyperbolic embeddings applied to (hierarchical) topic models\" stands out.\n\n* Since Figure 3 only makes distinctions based on color, it is a bit difficult to quickly interpret.\n\n* The mathematical introduction of both the Lorentz and Poincare similarities is confusing and, given the amount of space dedicated to the equivalence explanation (L140), too nuanced. Indeed, even from the appendix this equivalence is not clear (am I missing something obvious)?\n\n* While Fig. 4b is helpful, 4a is too busy. The concept hierarchy is clear (but that's a given), while the lexical hierarchy, except at a very coarse granularity, is not. Q1. What is the impact of changing aspects of the taxonomy (more or less, or regrouping concepts)?\n\nQ2. What is the impact of the positive and negative selection in the contrastive loss?\n\nQ3. Please address my confusion regarding the Lorentz vs. Poincare approaches. No: see taxonomy experiments/Q1." ]
[ -1, -1, -1, -1, -1, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "l6y4rrmFyDD", "HxqTc5I8xSb", "_UAM9yI8Asb", "p5H_zxNobIj", "dY2NVL2AS24", "nips_2022_ITqTRTJ-nAg", "nips_2022_ITqTRTJ-nAg", "nips_2022_ITqTRTJ-nAg" ]
nips_2022_Lvlxq_H96lI
Learning Manifold Dimensions with Conditional Variational Autoencoders
Although the variational autoencoder (VAE) and its conditional extension (CVAE) are capable of state-of-the-art results across multiple domains, their precise behavior is still not fully understood, particularly in the context of data (like images) that lie on or near a low-dimensional manifold. For example, while prior work has suggested that the globally optimal VAE solution can learn the correct manifold dimension, a necessary (but not sufficient) condition for producing samples from the true data distribution, this has never been rigorously proven. Moreover, it remains unclear how such considerations would change when various types of conditioning variables are introduced, or when the data support is extended to a union of manifolds (e.g., as is likely the case for MNIST digits and related). In this work, we address these points by first proving that VAE global minima are indeed capable of recovering the correct manifold dimension. We then extend this result to more general CVAEs, demonstrating practical scenarios whereby the conditioning variables allow the model to adaptively learn manifolds of varying dimension across samples. Our analyses, which have practical implications for various CVAE design choices, are also supported by numerical results on both synthetic and real-world datasets.
Accept
**Summary**: This paper studies the behavior of variational auto-encoders (VAEs) and conditional VAEs (CVAEs) when trained on data that lies on a low-dimensional manifold embedded in a higher-dimensional space. The authors demonstrate that VAEs are able to learn the intrinsic manifold dimension of the data at optimality. They also show that a similar result exists for CVAEs, and that effective conditioning should reduce the loss function at optimality. The paper then examines some common design choices in VAEs and CVAEs, observing that conditioned and unconditioned priors are theoretically equivalent, that learning the decoder variance should result in better performance, and that a common weight-sharing technique in autoregressive models should be avoided. The experiments section directly addresses their claims, particularly on the synthetic dataset with known ground-truth intrinsic manifold dimension. **Strengths**: Reviewers were in agreement that this is fairly original work that yields several interesting and important insights about the ability of VAEs to estimate the intrinsic dimension of a data manifold [R6JP,ZAHe,pxuR]. Reviewer [ZAHe] sees Theorem 1 as a major contribution; it demonstrates that well-optimized VAEs can estimate the intrinsic manifold dimension, under the assumption that we have a reliable method to estimate the number of active latent dimensions. The reviewer also notes that Theorem 2 part (i) is sensible and intuitive in that adding good conditioning variables can reduce the loss, and that Theorem 3 is also an interesting result that appears to shed light on an issue with a common practical technique. Moreover, reviewer [ZAHe] notes that the empirical results are strong, that the introduction and motivations are very clear, and that the overall structure is easy to follow. **Weaknesses**: The main criticisms from reviewers focused on numerous issues with clarity, with many examples given by each reviewer [R6JP,pxuR]. Reviewer [R6JP] notes that the paper would be easier to follow if it included some form of visualization on a toy problem and included some qualitative experiments, and that there are several aspects of the writing that could be improved. The introduction takes too long to explain what the contribution is. Captions could be more informative (examples given). Reviewer [ZAHe] finds that while the theoretical results link the intrinsic dimension of the data to activations in the latent space, there is no corresponding result that links activations in the latent space to the dimensionality of the generated data. This is true in both Theorem 1 and in Theorem 2 / Corollary 2.1. This reviewer also notes a large number of minor issues and/or addressable weaknesses (19 examples given). **Author Reviewer Discussion**: The authors provided clarifications on many points to reviewer [R6JP] and have updated the manuscript accordingly. In response to reviewer [ZAHe] they clarified how the dimensionality of the generated manifold follows from latent activations, fixed the error in Corollary 2.1 pointed out by the reviewer, and provided numerous responses to other questions and comments. Reviewer [pxuR] comments that while the contribution relative to (Dai & Wipf ICML 2019) regarding active dimensions is a little oversold, the paper makes other contributions. Reviewer [R6JP] updated their score 5->6. After extensive discussion, reviewers [ZAHe] and [pxuR] also updated their scores 5->6 and 4->5. **Reviewer AC Discussion**: Reviewers were in consensus that the author responses had improved the paper. 
All reviewers indicate that they consider this paper above the bar for acceptance, but they do think that it is somewhat borderline and could also be rejected. **Overall Recommendation**: The AC is of the opinion that the evaluation and discussion that have taken place for this paper are sufficiently thorough, and will follow the recommendations of the reviewers. This is a paper that is just about above the bar for acceptance, but may also need to be rejected to make room for other papers.
train
[ "6ALF0EWxPJg", "q9N7qz4xiB0", "UqOW-2cG_Q6", "QpiNn56H5rj", "MCmbWia9lG1", "QQwswo4wtB", "pqFMFx2OCoyz", "qjj8Wk0f-he", "YoKN5Oc89qD", "IwIvHXuEUbs", "VDrJiWC5PZe", "ekGIh5H9vta", "uRcCqQaiJs", "UTYDEsYtWpm", "bNFFIb3L0iY", "1BAbEyvjfYN", "FCZg4spwdDb", "BDpk3S0cezf", "JYa-FgO2UDh" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your continued engagement with our paper. And per the reviewer's suggestion, we can certainly update the paper to include all the discussed changes, noting that NeurIPS allows for an additional (10th) page to address reviewer comments in the final version.", " Thanks for your continued engagement with our paper. And per the reviewer's suggestion, we can certainly update the paper to include the discussed changes in the extra (10th) page that NeurIPS allows in the final version.", " Thanks for your continued engagement with our paper. And per the reviewer's suggestion, we can certainly update the paper to include the discussed changes, noting that NeurIPS allows for an additional (10th) page to address reviewer comments in the final version.", " I have updated my score to reflect the other reviews and the author response.", " I thank the authors for their reply. My main concerns about clarity have been adequately addressed and I am increasing my score, as I believe the mathematical presentation is now clear and understandable. I would just like to ask the authors to appropriately modify the camera-ready version to include not only the clarity changes that have mostly already been made, but also the discussion about Theorem 5 in [4].", " Thank you for the clarification, both here and in the second part of my response.\n\nI think the discussion that we've had has made things much more clear for me, and this discussion should easily be accommodated in the extra page that will be available if this paper makes it to publication. From my perspective, the paper has already been improved, but I believe with further improvements to *clarity* as discussed, this paper would be elevated to the level of *Weak Accept*. I will trust that the authors will provide these improvements and will thus raise my score accordingly.", " We appreciate your quick follow ups, and hope our answers below have addressed them:\n\n- Definition 5 is still poorly phrased. The way it is currently phrased implies that for every $x$, there exists a $g$ such that $g(c)=\\varphi(x)_t$, which I do not think is the intended definition, as this would always be trivially satisfied. I believe what is meant is that there should exist a single $g$ such that for every $x$, $g(c)=\\varphi(x)_t$.\n\n**A**: Thanks for pointing out the confusion, and we'll update the main text accordingly. What we really try to say is that for every $x$ and its condition $c$, \n\n1. There exists a **diffeomorphism** $g$ and an integer $t$ such that $g(c)=\\varphi(x)_t$, and\n2. There is no **diffeomorphism** $g$ satisfying $g(c)=\\varphi(x)_{t+1}$. \n\nConsider this case as a visualization of definition 5: for any $x\\in \\mathcal{X}$, we let its $c$ be the first dimensions of $\\varphi(x)$. Obviously, let $g$ be identity mapping and $t=1$, we have $g(c)=\\varphi(x)_t$, and there exists no diffeomorphism $g$ such that $g(c)=\\varphi(x)_2$. In this case the effective dimension is 1.\n\nEssentially effective dimension characterizes the maximum number of dimensions of the manifold for $x$ that $c$ can recover. With $t$ being the effective dimension, we agree that it might be trivial to find a desired $g$, but at the same time it is impossible to have such a $g$ if $t$ goes larger than that. 
In Definition 5, we use the existence of such a $g$ as a tool to introduce the concept of effective dimension, rather than to enumerate possible examples of $g$.\n\n- Could the authors please further clarify?\n\n**A**: Theorem 5 in reference [4] only shows that perfect reconstructions will occur, but it does not at all suggest that this perfect reconstruction will be achieved at global optima with the fewest number of active dimensions. And it is easy to envision underdetermined scenarios whereby there exists a multitude of solutions capable of producing zero reconstruction error with varying numbers of active dimensions (this is analogous to underdetermined linear systems with an infinite number of feasible solutions of varying $L_0$ norm). While it is true that, later, below Theorem 5 in [4], there is some loose reasoning to suggest that unnecessary latent dimensions might be pruned away by the VAE, there is definitely not anything close to a formal proof. In fact, this was part of our original motivation: to formally prove the conjecture first advanced in [4].", " - I appreciate that the theory says you can make this as large as you want, but it still feels very strange! I think it is worth considering or discussing why a latent dimension of 90 would be useful in such a setting, as the intrinsic dimensionality is only 12 and the ambient dimension is only 20.\n\n**A**: Decreasing the latent dimension in practice reduces the decoder capacity, so in order to achieve on-par performance we need to tweak the architecture and model dimensions in other layers. After adding more layers to the decoder, we observe that when $\kappa=20$, the model can learn the correct number of active dimensions with small reconstruction error. The following is a table with $r = 12, d = 20, \kappa = 20$. We'll include this analysis and result in the finalized paper.\n\n| $t$ | True AD | AD with attention | Recon | -ELBO with attention |\n|:---:|:-------:|:-----------------:|:------:|:--------------------:|\n| 2 | 10 | 10 | 0.0066 | -41.487 |\n| 4 | 8 | 8 | 0.0146 | -20.522 |\n| 6 | 6 | 6 | 0.0051 | -73.259 |\n| 8 | 4 | 4 | 0.0057 | -80.642 |\n| 10 | 2 | 2 | 0.0141 | -55.140 |\n\n------\n\n- Yes, I am aware of the intention of the experiment, but my question still stands: the conditional prior seems to do better than the unconditional prior in the difficult case of $\log \gamma = 20$. I think this is worth delving deeper into, and may indicate a disparity between the theory that unconditional priors are equivalent to conditional priors, and the practical result that conditional priors might end up working better in some cases.\n\n**A**: Thanks for digging deeper into this. In short, the reasons for a gap at $\log \gamma = 20$ are:\n\n1. Models with a bad initialization can potentially converge to sub-optimal solutions (note that our theory only applies to globally-optimal solutions).\n2. 
For the unconditional prior in the numerical experiments, we did not construct the corresponding decoder in such a way that it was strictly equivalent to the model with a conditional prior.\n\nIn the proof of Remark 1 we have shown the equivalence between a trained CVAE model with a parameterized prior and an unparameterized one, and this equivalence is strictly achieved by moving the existing computational components of the parameterized one, without any further training.\n\nWe prove the equivalence by construction; thus, if we build the unparameterized CVAE from any parameterized one in numerical experiments, we should observe identical training loss and inference results.\n\nOn the other hand, the proof shows the existence of an equivalent solution without excluding other possible ones. In the numerical experiments, we intend to show that in practice, given enough model capacity and good training parameters, one can directly train an unparameterized CVAE and achieve similar performance, instead of first training a parameterized one. Having $\log \gamma = 20$ means the models are not optimally initialized, which negatively impacts model convergence.\n\nThe following is the result from a CVAE with an unparameterized prior and a decoder with increased capacity, where $d = 20, r = 10, \kappa = 20, t = 5$. Together with Table 7 in the paper, we can conclude that in practice the performance at convergence heavily depends on both model architecture design and hyperparameters. Results with a less careful architecture or hyper-parameters may not reflect the true global optimum.\n\n| Init $\log \gamma$ | AD | -ELBO |\n|:------------------:|:--:|:-------:|\n| -20 | 5 | -41.202 |\n| -10 | 5 | -44.530 |\n| 0 | 5 | -44.375 |\n| 10 | 5 | -43.717 |\n| 20 | 5 | -45.217 |\n\nWe will include the above details to motivate this experiment and emphasize its differences from Remark 1 in the finalized paper.\n\n\n- As with my other reply, it may be worth stating this more plainly in the text?\n\n**A**: Thanks for the suggestion. \n\n\"When the decoder is an L-Lipschitz continuous function and learns the ground-truth diffeomorphism, the number of active dimensions determines the intrinsic dimensionality of the generative model in data space.\"\n\nWe'll include this statement in Section 2.1, following Theorem 1, in the finalized paper.", " We appreciate your quick follow-ups, and hope our answers below have addressed them:\n\n- Is it possible to include such a discussion in the paper?\n\n**A**: Thanks for the suggestion. We'll include the above discussion in the finalized paper.\n\n- Is the point here that the different data manifolds modelled by $c_1$ and $c_2$ will necessarily be of dimension $r-t_1$ and $r-t_2$, respectively?\n\n**A**: Yes, the $t$ of each $c$ represents the maximum number of dimensions of the manifold for $x$ that $c$ can recover. With an $r$-dimensional manifold, the model conditioned on $c_1$ would only need to recover the remaining $r-t_1$ dimensions with $z$, and likewise for $c_2$.\n\n- I see in your other comment you write out what the optimal $\gamma$ is. Is it possible to include this expression in the main text? I still find sentences like \"Specifically, when a VAE model converges to its optimum, γ will go to zero,\" in the manuscript to be confusing. Some more discussion on the behaviour of $\gamma$ would be helpful.\n\n**A**: Thanks for the suggestion. 
As shown in prior work and this paper, a VAE model with sufficient capacity will be such that, as we approach any global minimum, $\gamma \rightarrow 0$. We'll include this discussion in the finalized paper.\n\n- I still think the section is heavily oversold. \"Weight Sharing\" is far more general than just the specific type employed in that section, and people might get the wrong impression that Theorem 3 extends beyond just sequential data. Can you please modify the title of Section 3.3?\n\n**A**: While the weight-sharing issue we raise can impact a broad class of VAE models, we agree with the reviewer that, as things presently stand, only the sequential case has been directly addressed. We can reword to reflect this limitation and avoid any confusion.", " - `A tuned hyper-parameter, for better performance` I appreciate that the theory says you can make this as large as you want, but it still feels very strange! I think it is worth considering or discussing why a latent dimension of 90 would be useful in such a setting, as the intrinsic dimensionality is only 12 and the ambient dimension is only 20. I don't think this should simply be dismissed.\n- `It is to show that an extremely large initial $\gamma$ will have a negative impact on the stochastic optimization during training, which leads to compromised model performance.` Yes, I am aware of the intention of the experiment, but my question still stands: the conditional prior seems to do better than the unconditional prior in the difficult case of $\log \gamma = 20$. I think this is worth delving deeper into, and may indicate a disparity between the theory that unconditional priors are equivalent to conditional priors, and the practical result that conditional priors might end up working better in some cases.\n- `Yes. When the decoder is an L-Lipschitz continuous function and learns the ground-truth diffeomorphism, the number of active dimensions determines the intrinsic dimensionality of the generative model in data space.` As with my other reply, it may be worth stating this more plainly in the text?", " Thanks for your reply and your modifications to the manuscript. I appreciate the discussion here but still have some follow-up questions and points of confusion:\n\n- `Previous work (e.g., [4]) has demonstrated that global minima of VAE models can achieve zero reconstruction error for all samples lying on the data manifold. However, it was not previously demonstrated in a general setting that this perfect reconstruction was possible using a minimal number of active latent dimensions, and hence, it is conceivable for generated samples involving a larger number of active dimensions to stray from this manifold. In contrast, achieving perfect reconstruction using the minimal number of active latent dimensions, as we have done under the stated assumptions, implies that generated samples must also lie on the manifold (the noisy signals from inactive dimensions are blocked by the decoder and therefore cannot produce deviations from the manifold).` Is it possible to include such a discussion in the paper?\n- __On Corollary 2.1__: Is the point here that the different data manifolds modelled by $c_1$ and $c_2$ will necessarily be of dimension $r - t_1$ and $r-t_2$, respectively? \n- Fair enough point about how everybody does modelling of low-dimensional data with high-dimensional methods.\n- I see in your other comment you write out what the optimal $\gamma$ is. Is it possible to include this expression in the main text? 
I still find sentences like \"Specifically, when a VAE model converges to its optimum, γ will go to zero,\" in the manuscript to be confusing. Some more discussion on the behaviour of $\gamma$ would be helpful.\n- `We used sequential data as the representative case for Theorem 3.` I still think the section is heavily oversold. \"Weight Sharing\" is far more general than just the specific type employed in that section, and people might get the wrong impression that Theorem 3 extends beyond just sequential data. Can you please modify the title of Section 3.3?", " I thank the authors for their reply. I have read the updated version of the manuscript and think that the mathematical presentation is much improved. I have some follow-ups, however:\n\n1. Definition 5 is still poorly phrased. The way it is currently phrased implies that for every $x$, there exists a $g$ such that $g(c)=\varphi(x)_t$, which I do not think is the intended definition, as this would always be trivially satisfied. I believe what is meant is that there should exist a single $g$ such that for every $x$, $g(c)=\varphi(x)_t$.\n\n2. I am still not convinced about some differences with [4]. You say that `Just to clarify, reference [2] proves that active dimensions at VAE global minima will estimate the correct/true principal subspace of the data under the assumption of a linear decoder. However, this work does not consider nonlinear manifolds. In contrast, reference [4] does consider general nonlinear manifolds, but does not rigorously prove that, when such manifolds are present and the decoder is nonlinear and suitably matched as well, the number of learned active dimensions of VAE global minima will align with the true data manifold dimension.` I agree about [2], but that is not really what I meant, and the difference with [4] is still unclear to me: Theorem 5 in [4] shows that, as long as $\kappa \geq r$, VAEs can achieve perfect reconstructions. From there, it seems straightforward to argue that only $r$ dimensions will be active, and that the remaining $\kappa - r$ will have their variance set to $1$ so as to minimize their KL against the prior. Indeed, this argument is made in [4]: `Therefore, in the neighborhood of optimal solutions the VAE will naturally seek to produce perfect reconstructions using the fewest number of clean, low-noise latent dimensions`, which, although not a formal statement in the form of a theorem, seems to say exactly the same thing as what is being said here, as a direct consequence of Theorem 5 in [4]. Could the authors please further clarify?", " We thank the reviewer for the detailed review as well as the suggestions for improvement. Also, we appreciate that the reviewer recognizes that the question our paper studies is relevant and understudied.\n\nBased on the review, we have uploaded a new PDF with updated definition and theorem statements. Some text descriptions are also revised. Hope this may provide a clearer presentation of our work.\n\nNext, we answer the questions in the review:\n\n**Answers to Weaknesses**\n\nThanks for the detailed review. Apart from the answers below, we have addressed many points in the updated version.\n\n1. $\gamma$ is a trainable scalar in $\theta$.\n2. We have made it more specific with \"$\mu_z(x)$ and $\mu_x(z)$ are arbitrary $L$-Lipschitz continuous functions\".\n3. We have revised Definition 2 accordingly (Definition 3 in our new version).\n4. We have revised Definition 2 accordingly (Definition 3 in our new version).\n5. Yes, the optimal solution is not unique. 
Thank you for pointing it out. We replace it with \"any global optimal solutions\".\n6. We have revised Theorems 1 and 2.\n7. We have added the prior to both the VAE and CVAE definitions.\n8. We have updated the notation in the definition of *effective dimension*. We also illustrate the $\{x, c\}$ pairs in Line 110 to make them clearer.\n9. We have fixed them according to the reviews.\n10. We have fixed them according to the reviews.\n11. In the updated version, we consider that each $x \in \mathcal{X}$ has a corresponding $c$. In this sense, an integral over $\mathcal{X}$ would also cancel $c$ on the right-hand side. We will consider improving the paper with a more precise formulation in its finalized version.\n12. Fixed by summing over $l$.\n13. Thank you for the suggestion. We will consider this point and improve the paper for its finalized version.\n14. For $p_\theta(z| c)$ we add a footnote noting that we slightly abuse the notation of $\theta$ as the parameters of both the prior and the decoder. For $q_\theta(z|x, c)$, it is a typo and should be $q_\phi(z|x, c)$.\n15. We have added some details of the experiments both in the captions and in the text.\n16. Thank you for catching the typos and grammatical errors. We have fixed them in the updated version.\n\n**Answers to Questions**\n\n- It is claimed that \"this has never been rigorously proven\" in the abstract, and that \"this has only been proven under an assumption that the decoder is linear or approximately linear [2,4]\" when talking about the number of active dimensions estimating the intrinsic dimension. Could you please explain why this is the case? \n\nJust to clarify, reference [2] proves that active dimensions at VAE global minima will estimate the correct/true principal subspace of the data under the assumption of a linear decoder. However, this work does not consider nonlinear manifolds. In contrast, reference [4] does consider general nonlinear manifolds, but does not rigorously prove that, when such manifolds are present and the decoder is nonlinear and suitably matched as well, the number of learned active dimensions of VAE global minima will align with the true data manifold dimension.\n\n- In Table 4, it is not clear how the AD values are obtained: were these averaged over different $c$'s?\n\nYes, they are averaged over the different classes.\n\n- How relevant is the VAE setup here? For example, [B] obtains very similar results about the value of the optimal log-likelihood to those of Theorem 1, in a general context of density estimators rather than just VAEs. Similarly, [C] extends the result of [A], which is quite related to the results in the paper being reviewed, from VAEs to density estimators in general. It feels like the ideas presented here about conditional models might be extended to the general setting of density estimators as well. Could you comment on this? To be clear, I am aware that [B] and [C] are very recent and I do not see them not being discussed or compared against in the submitted version as a problem, although I do believe some discussion should be added when the paper is updated.\n\nThanks, these are very interesting references and we will add them to our related works. However, our paper is about understanding the properties of (C)VAEs and their ability to estimate the ground-truth manifold dimensions. If other techniques are capable of doing this, that is interesting but separate from our purpose.", " 6. We have revised it in the updated version.\n7. 
It can be seen from $\gamma$'s optimal value, i.e., $\gamma^* = \arg \min_\gamma \tilde{\mathcal{L}}(\theta, \phi) = \frac{L^2}{d} \mathbb{E}_{\varepsilon \sim N(0, I)} [\|\sigma_z(x)_{1:r} \varepsilon\|^2]$.\n8. We removed the integration over the data.\n9. $\kappa$ is defined in the $\kappa$-simple VAE model definition, i.e., it is the dimension of the latent variable in the model. In the bounds, as long as $\kappa \geq r$, $\kappa$ does not affect the rate of the bounds, so it is absorbed into the constant $O(1)$.\n10. Yes, they are the same.\n11. It is a typo. Thanks for pointing it out.\n12. Consider this example: the conditioning variable equals two dimensions of the manifold that the data lie on. In this case, the conditioning variable provides the information for those two dimensions, and thus the number of active dimensions would be two fewer. We will improve the statement in the finalized paper.\n13. We intended to describe that the two datasets are well balanced between simple synthetic data and high-resolution natural images. For synthetic data, the mapping between the manifold and the ambient space might be too simple for fitting it with neural networks to be challenging. On the other hand, high-resolution natural images may require non-trivial efforts in architecture design to capture the mapping, which is not the main focus of our paper. We will improve the statement in the finalized paper.\n14. Yes, here we use the identity mapping, and we have clarified it in the updated text.\n15. Yes. The number of dimensions depends on two things: one is the complexity of the dataset, the other is the structure of the decoder. Thus it is a function of the capacity of the decoder, and the decoder could be better aligned with one dataset than with others.\n16. A tuned hyper-parameter, for better performance. Note that $\kappa$ can be arbitrarily large without affecting the model performance, since an optimal model only has $r$ active dimensions.\n17. The ELBO term shows we have learnt the manifolds. For the problem of $t=0$, we need to consider the data. When the data is a union of manifolds, without the condition, the model needs to use the maximal intrinsic dimension among the manifolds for reconstruction, which results in an increase in the loss. On the other hand, if we add a discrete $c$ to indicate the manifold, the difference in the number of active dimensions is $t$.\n18. Thanks for pointing it out. The parameters of the data are now listed in the caption.\n19. It is to show that an extremely large initial $\gamma$ will have a negative impact on the stochastic optimization during training, which leads to compromised model performance.\n\n\n**Answers to Questions**\n \n- Can the authors provide an explanation of the link between active latent dimensions and some notion of intrinsic dimensionality of the generative model in data space?\n\nYes. When the decoder is an L-Lipschitz continuous function and learns the ground-truth diffeomorphism, the number of active dimensions determines the intrinsic dimensionality of the generative model in data space.\n\n- How do we reconcile the simple Riemannian manifold assumption with the idea of using conditioning to learn \"unions of manifolds\"?\n\n**A**: We have updated the paper with more precise definitions and descriptions.\n\n- How can we expect to learn $\gamma$ at all in light of the discussion in Section 3.2?\n\nSee point 3 of our answers to the minor weaknesses. When $\theta$ and $\phi$ are fixed, there exists an optimal $\gamma$, and that optimal $\gamma$ is proportional to the reconstruction loss. 
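To make this concrete, a standard calculation for an isotropic Gaussian decoder $p_\theta(x|z) = \mathcal{N}(\mu_x(z), \gamma I)$ over $d$ dimensions (a simplified sketch with notation of our own choosing, not the exact bound quoted above) gives:

```latex
% gamma-dependent part of the negative ELBO, with R the expected
% reconstruction error (simplified sketch, not the paper's exact bound):
f(\gamma) = \frac{R}{2\gamma} + \frac{d}{2}\log\gamma,
\quad R = \mathbb{E}\,\lVert x - \mu_x(z) \rVert^2,
\quad f'(\gamma^*) = 0 \;\Rightarrow\; \gamma^* = \frac{R}{d},
```

so the optimal $\gamma$ indeed scales linearly with the reconstruction loss.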
If we fix $\\gamma$, when reconstruction error decreases, its weight will be smaller than the optimal value, which prevent the model to optimize further.\n\n- How valid is maximizing likelihood in the presence of manifold-supported data? \n\nIn point 1. This is a really valid question. We agree that it could be problematic to apply maximum likelihood to data lies on a low-dimensional manifold, but this is what everyone is doing because images are on a manifold and people apply likelihood-based generative moldes to this data. Therefore, our work provide a new method of analysis on this scenario. \n\n- How can we say that the results on MNIST and Fashion MNIST are consistent with intuitive visual complexity?\n\nOn the one hand, the number of dimensions depends on two things: one is the complexity of the dataset, the other is the structure of the decoder. Thus it is a function of the capacity of the decoder, and the decoder could be better aligned with one dataset but not the others. On the other hand, in some way Fashion MNIST has been normalized and centered more clearly than MNIST, so we can reasonably expect the active dimensions of MNIST is larger.\n\n- Can we get $t$ in practice for realistic conditioning variables, and if so can we get more practical understanding of intrinsic dimension from this?\n\nYes we can. First train a vanilla VAE to get the intrinsic manifold dimension $r$ of the data, then train a CVAE to get the conditioned manifold dimension $r-t$.", " We thank the reviewer for the detailed review as well as the suggestions for improvement. Also, we appreciate the reviewer for recognizing the originality of our work and considering three theorems as strong, intuitive and interesting.\n\nBased on the review, we have uploaded a new pdf with updated definition and theorem statements. Some text descriptions are also revised. Hope this may provide a clearer presentation of our work.\n\nNext, we answer the questions in the review:\n\n- I cannot easily see such a link between the two quantities, which reduces the significance of the result. \n\n**A**: Previous work (e.g., [4]) has demonstrated that global minima of VAE models can achieve zero reconstruction error for all samples lying on the data manifold. However, it was not previously demonstrated in a general setting that this perfect reconstruction was possible using a minimal number of active latent dimensions, and hence, it is conceivable for generated samples involving a larger number of active dimensions to stray from this manifold. In contrast, to achieve perfect reconstruction using the minimal number of active latent dimensions, as we have done under the stated assumptions, implies that generated samples must also lie on the manifold (the noisy signals from inactive dimensions are blocked by the decoder and therefore cannot produce deviations from the manifold). Very good/insightful question.\n\n\n- It is stated in the text that active latent dimensions are those \"used for reconstruction\"; is it then the case that the \"inactive\" latent dimensions do not provide any dimensionality to the generative model?\n\n**A**: Yes. Inactive dimensions are used to optimize the KL term so they do not provide any dimensionality to minimize the reconstruction error. These inactive dimensions are with a variance that has a positive lower bound even as $\\gamma$ goes to 0. 
- simply stating that the encoder variance goes to zero with $\gamma$ does not provide much insight.\n\n**A**: Active dimensions are the dimensions of $z \sim q(z|x)$ whose variance $\sigma_z(x)_j^2$ goes to zero as $\gamma$ goes to zero, because a small variance enables a smaller reconstruction error. On the contrary, each inactive dimension has its variance $\sigma_z(x)_j^2$ going to the prior variance, as these dimensions do not help reconstruction and are instead used to achieve an optimal KL-divergence with the prior. Please refer to the proof for more details.\n\n\n- Why should changes in effective dimensionality in the latent space correspond to changes in intrinsic dimensionality in the data space?\n\n**A**: Thanks for pointing it out. We have fixed the error in Corollary 2.1 and the definition of effective dimension. The encoder learns the dimensionality in the latent space instead of the data space. \"Different data manifolds\" means that for data with multiple conditions, under different conditions, the subset of data may have different manifold dimensions. And the changes in effective dimensionality in the latent space determine which subset of the manifold the latent variable $z$ will learn.\n\n- How do we reconcile these two seemingly contradictory points?\n\n**A**: We add a new definition of data for the training data (see Definition 1). When the data is unconditioned, it lies exactly on a simple Riemannian manifold, whereas when it is conditioned on some $c$, the data lie on a union of manifolds corresponding to $c$.\n\n**Answers to Minor Weakness, Issues, and Calls for Clarification**\n\nThanks for the detailed review. Apart from the answers below, we have addressed many points in the updated version.\n\n1. We agree that it could be problematic to apply maximum likelihood to data that lies on a low-dimensional manifold, but this is what everyone is doing, because images are on a manifold and people apply likelihood-based generative models to this data. Therefore, our work provides a new method of analysis for this scenario. \n2. Yes, we agree that comparing ELBOs under different manifolds can be problematic. However, here we reported the negative ELBO as a conventionally reported figure, which is not the central part of our analysis and is not for comparison.\n3. When $\theta$ and $\phi$ are fixed, there exists an optimal $\gamma$, and that $\gamma$ is proportional to the reconstruction loss. If we fix $\gamma$, then when the reconstruction error decreases, its weight will be smaller than the optimal value, which prevents the model from optimizing further.\n4. We used sequential data as the representation of Theorem 3.\n5. We have made it more specific with \"$\mu_z(x, c)$ and $\mu_x(z, c)$ are arbitrary $L$-Lipschitz continuous functions\".", " We thank the reviewer for the detailed review as well as the suggestions for improvement. Also, we appreciate that the reviewer recognizes our paper as having \"several interesting and important theoretical insights\".\n\nBased on the review, we have uploaded a new pdf with updated definitions and theorem statements. Some text descriptions are also revised. 
Hope this may provide a clearer presentation of our work.\n\nNext, we answer the questions in the review:\n\n- I am unclear about what it means to find an optimal solution under $\gamma = \gamma_0$.\n\n**A**: $\gamma$ is a trainable parameter via the optimizer. We have revised Theorems 1 and 2.\n\n- What is $\varphi$ here? Is it the same as L52? Are these the first $t$ dimensions of that vector? Some $t$ dimensions? It is a bit unclear to me.\n\n**A**: It means a diffeomorphism between the manifold and the ambient space. Please refer to Definition 1 in the newly uploaded version.\n\n\n- How do you decide that a dimension is active?\n\n**A**: Although in numerical experiments a variance cannot be exactly zero, we can still observe from a converged model that each encoder variance is either close to 1 or close to zero. The encoder variance matrix of the CVAE model on the MNIST dataset is added in the new appendix, which helps show that the variance of the active dimensions approaches zero when the model has converged.\n\n\n- Can the authors comment on the lack of visual toy problems? \n- Further, there are no qualitative experiments in the paper.\n\n**A**: Thanks for the suggestion. Due to bandwidth and space limitations, we didn't include visualizations in the paper during the submission and author response period. We'll improve the paper with visualizations in its finalized version.\n\n\n- In L217, the authors say that the generated results are controlled by parameters $r, d, t$ and $\kappa$. I am unsure how $\kappa$ influences the synthetic data generation process. Isn't $\kappa$ a parameter of the inference?\n\n**A**: Thanks for pointing it out. The data generation process is determined by $r, d$ and $t$. $\kappa$ is a hyper-parameter we set in the experiments. It is a typo and we have revised it.\n\n\n- Further, I am unsure if the authors report ELBO or negative ELBO under the NLL column (L226).\n\n**A**: In the updated version, we have replaced `NLL` with `negative ELBO` and hope it is clearer.\n\n\n- Also, I think the NLL values are not really informative as they cannot be compared across the rows?\n\n**A**: Yes, we agree that comparing ELBOs under different manifolds can be problematic. However, here we reported the negative ELBO as a conventionally reported figure, which is not the central part of our analysis and is not for comparison.\n\n\n- Furthermore, if I am correct, the last block of rows should be comparable with the second block rows. So why is the reconstruction error lower when we have a smaller number of ADs than $r$?\n\n**A**: In Table 1, the reconstruction error in the second block is lower than that of the last block, i.e., the reconstruction error is actually higher when we have fewer ADs than $r$.\n\n\n- In the introduction, L32, the authors mention how the different digits can lie on a different manifold. It would have been interesting if the authors analyzed their learned manifolds in this light.\n\n**A**: Thanks for the suggestion. Our experiment indeed shows that digits may lie on different manifolds. We will improve the paper with the relevant analysis in its finalized version.\n\n\n\n- The writing of the introduction builds a lot of suspense about what is being addressed.\n\n**A**: Thanks for the suggestion. We will improve the paper with an updated introduction in its finalized version.\n\n- The captions to tables should be more informative than they are now.\n\n\n**A**: Thanks for pointing out the missing information in Table 7. 
We have added the data information of each experiment in the captions.\n\n\n**Minor things**\n- L32. \"each could like on.\"\n- L34. \"expand fully in the ambient space but become on a low-dimensional manifold when conditioned.\"\n- Several other typos. Please, do a thorough re-read.\n\n**A**: Thanks for catching them, and we have fixed them in the updated version.\n", " In this work, the authors theoretically analyze the properties of Variational Autoencoders (VAEs) and Conditional VAEs (CVAEs) when the data lies on a low-dimensional manifold. Notably, they demonstrate that, under assumptions, VAEs will converge to a solution that will correspond to using the same number of latent dimensions (active dimensions) as the number of dimensions in the data manifold; therefore, allowing for the identification of the number of manifold dimensions. They also extend the analysis to when the data lies on a mixture of low-dimensional manifolds. Further, the authors analyze several active practices in their framework and provide theoretical insights into why they may or may not work. Finally, the authors offer some experimental support for their claims. # Strengths\n### Theoretical Analysis. \nIn this paper, the authors propose theoretical insights into the behavior of VAE and Conditional VAE when the data lies on low dimensional manifolds. Under assumptions, the converged solutions can also be used to determine the manifold dimensions. The authors also analyze several active practices in their framework and provide theoretical insights into why some of them may or may not work. I think there are several interesting and important theoretical insights in this paper. \n\n\n# Weakness\n### Theorem 1 \nI am unclear about what it means to find an optimal solution under $\gamma = \gamma_0$. From definition 1, it seems that $\kappa$-simple VAE is defined under the assumption of $\gamma$ being a tunable parameter. Can the authors comment on what I am missing here? \n\n### Definition 4\nWhat is $\psi(x)$ here? Is it the same as L52? Are these the first $t$ dimensions of that vector? Some $t$ dimensions? It is a bit unclear to me. \n\n### Choosing active dimensions \nI am a bit confused about how the active dimensions are being determined. What does $O(\gamma_0)$ mean in practice in Theorems 1 and 2? Can't there be a huge constant with big-O notation? How do you decide that a dimension is active? I think this is crucial to the entire paper, and I could not find any discussion on this.\n\n### Qualitative experiments and visualizations \nCan the authors comment on the lack of visual toy problems? It is often easier to argue with a running example for manifold learning papers. However, I believe the authors can exploit this when discussing the different active practices and how they can harm the final learned model; clear visual benefits for hypothesized techniques can be compelling. \n\nFurther, there are no qualitative experiments in the paper. It would be better if there were visual comparisons alongside the numbers. It is difficult to put the numbers by themselves in perspective without a visual yardstick. \n\n### Synthetic Experiments \nIn L217, the authors say that the generated results are controlled by parameters $r, d, t,$ and $\kappa$. I am unsure how $\kappa$ influences the synthetic data generation process. Isn't $\kappa$ a parameter of the inference? \n\nFurther, I am unsure if the authors report ELBO or negative ELBO under the NLL column (L226). Also, why not report it as ELBO? 
If you want to claim NLL, probably use other estimation techniques like the Importance Weighted ELBO or an Annealed-IS-based estimator. \n\nAlso, I think the NLL values are not really informative as they cannot be compared across the rows? In each row, the data changes, and therefore there is a different target NLL value? I think a more informative experiment would be one where you vary $\kappa$ and demonstrate how using more or less capacity changes the NLL, KL, and Recon for the same data? \n\nFurthermore, if I am correct, the last block of rows should be comparable with the second block rows. So why is the reconstruction error lower when we have a smaller number of ADs than $r$? \n\n### Real World experiments\nIn the introduction, L32, the authors mention how the different digits can lie on a different manifold. It would have been interesting if the authors analyzed their learned manifolds in this light. \n### Writing\nThe writing of the introduction builds a lot of suspense about what is being addressed. While this may be nice sometimes, providing a summary of the contribution can also be beneficial without forcing the reader to go through the entire section to understand what was done and how. \n\nThe captions to tables should be more informative than they are now. For instance, in Table 3, r = 10 is implied or hidden in the text. Further, it should be mentioned which results the tables correspond to. For example, it seems to me that Tables 2 and 4 are the only real-world results and the rest are all synthetic experiments. The experimental section goes back and forth, and clear descriptions can be helpful. Another example is in Table 7. What is the true number of manifold dimensions?\n\n# Minor things\n- L32. \"each could **like** on.\" \n- L34. \"expand fully in the ambient space **but become on a** low-dimensional manifold when conditioned.\"\n- Several other typos. Please, do a thorough re-read.\n Please refer to the Strengths and Weaknesses section. I think the authors captured the limitation reasonably well in their last comment in the paper. I agree with the authors and believe their insights must be tested thoroughly in different empirical settings. As is, the take-aways from the paper seem minimal if the experiments do not follow the results. Please refer to the Strengths and Weaknesses section for more detailed comments. ", " This paper studies the behaviour of variational auto-encoders (VAEs) and conditional variational auto-encoders (CVAEs) in the presence of data supported on a low-dimensional manifold.\nThe authors demonstrate that VAEs are able to learn the intrinsic manifold dimension of the data at optimality.\nThey also show that a similar result exists for CVAEs, and that effective conditioning should reduce the loss function at optimality.\n\nThe authors then probe some common design choices in VAEs and CVAEs, noting the following:\n1. Conditioned and unconditioned priors are theoretically equivalent.\n2. Learning the decoder variance should result in better performance.\n3. A common weight-sharing technique in autoregressive models should be avoided.\n\nThe experiments section then directly addresses their claims, particularly on the synthetic dataset with known ground-truth intrinsic manifold dimension.\n\n### EDIT AFTER REBUTTAL\n\nI am happy with the discussion that we've had and believe the clarity of the paper can be easily increased beyond the recent revisions, which have themselves already improved it. Thus, I am raising my score to a *Weak Accept*. 
\n## Overall Assessment\n\nAlong the standard review dimensions of __quality__, __clarity__, and __significance__, we have *both* strengths and weaknesses, while on the dimension of __originality__ we have only strengths.\nAltogether, I believe the weaknesses outweigh the strengths to some degree, and thus I would weakly recommend rejection at this time.\nIf the authors can adequately answer my questions about the paper overall and perhaps address some of the issues with __clarity__, I would be inclined to increase my recommendation.\nI'll elaborate below.\n\n## Strengths\n\n1. I consider part (ii) of __Theorem 1__ to be a major strength of the paper. The utility of this result is clear: VAEs which are well-optimized have the ability to estimate intrinsic manifold dimension, under the assumption that we have a reliable method to estimate the number of active latent dimensions.\n2. I found the paper overall to be a fairly original work. Of course there are quite strong connections to previous work -- namely the _Diagnosing and Enhancing VAEs_ paper -- but the analysis provided here is sufficiently novel and comes with further useful insights.\n3. I found the introduction and motivation of the paper to be very clear, and the general storyline to be straightforward to follow.\n4. __Theorem 2__ part (i) is quite sensible and intuitive: adding good conditioning variables can reduce the loss function.\n5. I find Section 3.1 to be quite interesting.\n6. I think pursuing ideas around \"unions of manifolds\" to be quite intuitive and worthwhile. I appreciate that this paper is bringing this idea to the table.\n7. __Theorem 3__ is also an interesting result that appears to shed light on an issue with a common practical technique.\n8. Table 1 is a strong result - it is nice to see that $r$ = AD in almost all cases where $\\kappa \\geq r$.\n9. Table 3 is also a strength - we can clearly see AD = $r - t$ here.\n10. Table 8 is a strong result.\n\n## Weaknesses\n\nThere are some major weaknesses to which I would first like to devote individual sections, and then I'll comment on minor issues later on.\n\n### On the Relationship Between Active Latent Dimensions and Manifold Dimension\n\nTheorem 1 shows that a VAE trained to optimality will have its number of active latent dimensions exactly equal to the intrinsic manifold dimension of the data.\nWe would ideally also have some result that directly relates the number of active latent dimensions to the intrinsic dimensionality of the overall generative model, so that we could say samples from the generative model have dimensionality approximately equal to the true data dimension.\nYet I cannot easily see such a link between the two quantities, which reduces the significance of the result.\nIt is stated in the text that active latent dimensions are those \"used for reconstruction\"; is it then the case that the \"inactive\" latent dimensions do not provide any dimensionality to the generative model?\n\nAdmittedly, it is possible that I have not fully understood Definition 2 in-depth, but I would also argue that the Definition 2 (and Definition 5 by extension) is itself not very clear.\nI am left lacking intuition on what active dimensions actually _mean_: simply stating that the encoder variance goes to zero with $\\gamma$ does not provide much insight (and by the way, how does $\\gamma_0$ factor in here? 
It appears that we are replacing $\\gamma$ in the $\\kappa$-simple VAE definition with $\\gamma_0$, but then why do we need to say $\\gamma_0 \\leq \\gamma$?).\n\nSome clarity and further discussion would be appreciated around both of these points here.\n\n### Significance of Theorem 2 / Corollary 2.1 and CVAE Analysis\n\nAs far as I understand, Theorem 2 part (ii) and Corollary 2.1 (by the way, I think there is an error in Corollary 2.1: shouldn't the active dimensions be $r-t_1$ and $r-t_2$, respectively?) are statements about the mechanics of the encoder in latent space.\nYet I am again left wondering how this relates to the dimensionality learned by the generative model in data space, as I see no direct correspondence between the two.\nWhy should changes in effective dimensionality in the latent space correspond to changes in intrinsic dimensionality in the data space?\nHow does this result in \"different data manifolds\"?\n\nI am also a bit confused about how the proposed capacity of CVAEs to learn unions of manifolds relates to the original assumption from L61 that states the data lives on a simple Riemannian manifold with constant dimension and a single chart. \nHow do we reconcile these two seemingly contradictory points?\n\n\n### Minor Weakness, Issues, and Calls for Clarification\n\n1. I appreciate that Eq. (1) is still a lower bound on the average log-likelihood of the density $p_\\theta$ over the data distribution $\\omega_{gt}$. However, I am curious about the authors' opinion of the validity of the maximum likelihood objective for a full-dimensional density in the presence of manifold-supported data? In particular, we can no longer interpret the maximum likelihood objective as minimizing the KL divergence from $\\omega_{gt}$ to the distribution induced by $p_\\theta$ over $\\mathbb R^d$, as $\\omega_{gt}$ is not absolutely continuous with respect to $p_\\theta$'s distribution.\n2. Along the same vein, it is inappropriate to compare NLL/ELBO values across data with different manifold dimensions and thus dominating measures; yet the experiment section appears to do this a few times. Some care here would be prudent.\n3. I find Section 3.2 difficult to understand. How can we expect $\\gamma$ to adaptively balance the two terms in the loss function when the optimal learning behaviour is simply to make $\\gamma$ as small as possible? Based on this analysis, I find it remarkable that learning $\\gamma$ can succeed at all.\n4. The headline of Section 3.3 does not accurately represent the scope of Theorem 3, which strictly relates to conditional autoregressive models. While Theorem 3 is indeed interesting, the section purports to make a more general statement about weight sharing that does not appear.\n5. What is meant by \"sufficiently complex\" in L70?\n6. [4] and [5] are the same citation.\n7. L97: It is claimed that \"the ratio of the norm term in reconstruction and $\\gamma$ is constant, i.e. the data dimensions $d$.\" Why wasn't this ever checked in practice?\n8. Parentheses appear to be missing from the integrand in lines 85, 133, and 172\n9. What is the role of $\\kappa$ in Theorems 1 and 2 beyond simply $\\kappa \\geq r$? Why doesn't $\\kappa$ appear in the bounds?\n10. In definition 4, is $\\phi$ the same $\\phi$ as in L62?\n11. Why does the prior variance take $x$ as an argument in definition 5?\n12. L153-155 is written in a very unclear way.\n13. L219-221 is a very weak and nebulous discussion.\n14. 
In L249 it is stated that $c = u_{1:t}$, but earlier we have $c = h'(u_{1:t}) = G'u_{1:t}$. Which one is correct? Did you just take $G' = I_t$? \n15. In L239-240 it is stated that \"the number of active dimensions equals to data's intrinsic dimension\" and L256-257 it is stated that \"This characterization of dataset complexity is consistent with their intuitive visual complexity\". I would have to strongly disagree with this: in my opinion, MNIST is \"less complex\" than Fashion-MNIST, so I don't quite understand why it ends up with lower active dimensions and thus lower intrinsic dimension. Would the authors clarify what was meant here?\n16. Why is $\\kappa = 90$ so large in the Continuous Condition experiment? Is this a typo? Seems very strange.\n17. I generally find the details around Table 5 and Table 6 to be lacking. How many datapoints from each class? How is the reconstruction error for each class, i.e. has the CVAE learned each of the manifolds or just the dimensionalities? Is $t$ considered to be $0$-dimensional in the case of Table 5?\n18. What is the correct number of AD in Section 4.4?\n19. Is there some gap between the theory (Theorem 3) and the result in Table 7 about the equivalence of $p(z)$ and $p(z \\mid c)$? It appears that the conditional prior does a better job in the more difficult case of $\\log \\gamma = 20$ initially. Is it perhaps easier to learn with the conditional prior? These are the main questions I have of the authors. There are others listed above as well but they are more minor.\n1. Can the authors provide an explanation of the link between active latent dimensions and some notion of intrinsic dimensionality of the generative model in data space?\n2. How do we reconcile the simple Riemannian manifold assumption with the idea of using conditioning to learn \"unions of manifolds\"?\n3. How can we expect to learn $\\gamma$ at all in light of the discussion in Section 3.2?\n4. How valid is maximizing likelihood in the presence of manifold-supported data?\n5. How can we say that the results on MNIST and Fashion MNIST are consistent with intuitive visual complexity?\n6. Can we get $t$ in practice for realistic conditioning variables, and if so can we get more practical understanding of intrinsic dimension from this? I think the authors have done a reasonable job of addressing the practical limitations of their work.", " This paper aims to understand the relationship between the true intrinsic dimension $r$ of the data, which is assumed to be smaller than the ambient space dimension $d$ on which the data lives (i.e. the paper assumes the manifold hypothesis holds, $r < d$), and the latent variable dimension $\\kappa$ in Gaussian VAEs. The authors aim to prove that, when $\\kappa \\geq r$, the optimal ELBO achieved by a Gaussian VAE with fixed decoder variance $\\gamma$ behaves like $(d-r)\\log \\gamma$, and that the number of activate latent dimensions (i.e. those whose posterior variance goes to 0) is equal to $r$. The authors argue that this insight can be used to estimate the intrinsic dimension $r$. The authors then aim to extend this result to the setting of conditional VAEs (CVAEs). In a final result, the authors aim to show that the practice of sharing weights between the prior and the approximate posterior, which is commonly used in CVAEs, cannot achieve the same optimal ELBO value as could be achieved without the parameter sharing from their earlier result. 
The authors then claim that this means the practice of sharing these parameters should not be used.\n\nUnfortunately, the paper is very poorly written, to the point where even mathematical definitions cannot be understood in a precise manner. While I detail why I think this is the case below, I highlight that I am giving this paper a soundness score of 2 not because I necessarily think the results are wrong, but presentation is so unclear that it's very hard to judge. Similarly, I give this a contribution score of 2 not because I think the contribution itself would be bad (I actually think a properly re-written version of the paper could be quite strong), but because once again, it's very hard to judge. My confidence score of 5 aims to reflect the fact that I feel very confident that the paper has very poor exposition.\n\n-------------------------------------------------\nUPDATE AFTER REBUTTAL\n-------------------------------------------------\n\nThe authors have significantly addressed the clarity issues present in their original submission. While I still believe the contribution over [4] on the active dimensions being used at optimality is somewhat oversold, this is not the only contribution made by the paper, and I believe the insights presented about conditioning and intrinsic dimension will be valuable to the community. I have thus increased my score accordingly. ## Strengths\n\nI believe this paper considers a relevant question in VAEs, whose answer will be of interest to the community. I also believe that the explicit consideration of the interaction between conditioning and how this affects intrinsic dimension to not only be relevant, but also understudied. The authors have correctly spotted a poorly understood area, and I think this type of research to be valuable, when properly communicated.\n\n## Weaknesses\n\nAs I mentioned in my summary, I believe the biggest weakness of this paper is poor exposition. Below I give a list of points that are not properly explained in the mathematical exposition paper (in order of appearance):\n\n1. In definition 1, $\\gamma$ is called a \"tunable scalar\", this really makes it unclear whether or not it is a parameter of the VAE or not. It only becomes clear later, after continuing to read the paper, that the authors mean a fixed hyperparameter.\n\n2. Again, in definition 1, functions being assumed to be \"sufficiently complex\" is not a precise mathematical assumption and should be dropped from formal definitions.\n\n3. In definition 2, $\\gamma_0$ is assumed to be \"arbitrarily small\" satisfying $\\gamma_0 \\leq \\gamma$. This is again highly confusing, as $\\gamma$ was previously \"defined\" as being part of the definition of a $\\kappa$-simple VAE, so this phrasing makes it seem like the VAE does not depend on $\\gamma_0$. Yet, the requirement that $\\sigma_z(x)^2_j = O(f(\\gamma_0))$ would be nonsensical if $\\sigma_z(x)^2_j$ did not depend on $\\gamma_0$. In other words, either $\\sigma_z(x)^2_j$ depends on $\\gamma_0$, in which case we just have a $\\kappa$-simple VAE with $\\gamma_0$ and the requirement that $\\gamma_0 \\leq \\gamma$ makes no sense; or $\\sigma_z(x)^2_j$ does not depend on $\\gamma_0$, in which case it's $\\sigma_z(x)^2_j = O(f(\\gamma_0))$ that doesn't make sense. The notation from this paper is clearly inspired by that of [A], and I would recommend the authors take this one step further and make the dependence of the involved neural networks on their parameters more explicit, as in [A]. 
For example, changing $\\sigma_z(x)^2$ to $\\sigma_z(x; \\phi)^2$ would make statements much more mathematically precise, e.g. $\\sigma_z(x; \\phi^*_{\\gamma_0})^2_j = O(f(\\gamma_0))$ is much clearer.\n\n4. Again, there is yet more ambiguity in definition 2: the phrasing \"where $f$ is any function that goes to $0$ when $\\gamma_0$ goes to zero\" does not make sense. Requiring that $\\sigma_z(x)^2_j = O(f(\\gamma_0))$ for any such $f$ is equivalent to requiring that $\\sigma_z(x)^2_j = 0$. I believe the authors meant that there exists one such $f$, not that this holds for any $f$. Furthermore, even if I am correct and the authors meant that such an $f$ exists, I believe this would simply be equivalent to requiring that $\\sigma_z(x; \\phi^*_{\\gamma_0})^2_j \\rightarrow 0$ as $\\gamma_0 \\rightarrow 0^+$, which would be a much clearer way to state the requirement (big-O notation is really meant to bound convergence rates, which do not seem to be of relevance in the definition if $f$ is merely required to exist).\n\n5. Theorem 1 says \"the global optimal...\", which implicitly assumes the solution is unique in $\\theta$ and $\\phi$, which I do not believe holds if the neural networks involved are \"sufficiently complex\".\n\n6. Statement $(i)$ in Theorem one provides $\\int_\\mathcal{X} (d-r) \\log \\gamma_0 + O(1) \\omega_{gt}(dx)$, an integral which does not seem to depend on $x$, and which I believe can be written simply as $ (d-r) \\log \\gamma_0 + O(1) $. It would also be good to be more precise as to what is considered a constant and what is not in the $O(1)$ notation. Finally, similarly to the point above, it is not clear what is meant by \"can be uniquely achieved\", and I do not believe this holds: for example, if a particular VAE configuration achieves the optimal ELBO, applying an orthogonal transformation to the input of the decoder would leave $p(x|z)$ unchanged, and thus achieve the exact same optimum using a different function (and changing the encoder appropriately).\n\n7. In definition 3, the prior is missing from the definition of $\\kappa$-simple CVAE. While this is not relevant for VAEs as the prior is just assumed to always be an isotropic Gaussian, this is relevant to the ensuing discussion about the prior being learned as well in the context of CVAEs.\n\n8. Definition 4: This is where it really gets incomprehensible. What is meant by subindexing a vector with $t$? That is, $\\varphi(x)$ is, by definition, a vector in $\\mathbb{R}^r$. In the manuscript, subindexing with an integer has been used before to denote coordinates (e.g. $\\sigma_z(x)^2_j$ is the $j^{th}$ coordinate of $\\sigma_z(x)^2$, which by the way is actually defined as a matrix in definition 1, but at least this abuse is understandable), so it seems that $\\varphi(x)_t$ should be the $t^{th}$ coordinate of $\\varphi(x)$, but the authors go on to say that $\\varphi(x)_t \\in \\mathbb{R}^t$ without explaining exactly what $\\varphi(x)_t$ means. Is it the first $t$ coordinates of $\\varphi(x)$? Furthermore, even if the previous questions were answered, what is $x$ here? The definition involves $g(c) = \\varphi(x)_t$, but it is never explained what $x$ is: the left side seems to depend on $c$ but not $x$, and the right side on $x$ but not $c$.\n\n9. Definition 5: Same issues as in definition 2.\n\n10. Theorem 2: Same issues as in Theorem 1.\n\n11. 
Corollary 2.1: $\\theta^*$ and $\\phi^*$ are defined as optimal solutions to a CVAE, but given the loss defined for CVAEs in equation 2, these optimal solutions should depend on $c$, and they do not. Usually the objective being minimized for CVAEs is not really equation 2, but rather an expectation of equation 2 over some prior $p(c)$, so it is not clear exactly what $\\theta^*$ and $\\phi^*$ mean, as they do not seem to depend on $c$.\n\n12. Theorem 3: Once again, the notation for the loss on equation 3 is sloppy and not too clear, as the actual loss should not really depend on $l$, but the RHS does. Is there a missing summation over $l$ somewhere? Or should $x_l$ be replaced by $x_{\\geq l}$?\n\nNow I move on to other, non-mathematical, issues I have with the presentation of the paper:\n\n13. The authors write down some issues, (i) through (iv) in the introduction and keep referring to them, and the readers keeps having to go back to see what the point was. It would be clearer to avoid this.\n\n14. In line 161, the prior in CVAEs is defined as $p_\\theta(z|c)$, which is rather confusing notation as $\\theta$ is already used to denote the parameters of the decoder. This notation obfuscates the discussion about parameter sharing, and becomes even more confusing when using $q_\\theta(z|x,c)$ in line 162.\n\n15. Several experiments are carried out on simulated data, but not enough details are given about how this was generated. For example, section 4.3 should really refer to a detailed appendix explaining the setup in more detail.\n\nFinally, the paper has several grammatical errors, typos, and minor points which are technically incorrect; all of which I would encourage the authors to try to fix. While I see these points as very minor compared to the other issues, I list some below:\n\n- line 32: \"each could like on a manifold\"\n- line 33: \"manifold dimension of an image of 1 ...\": a single image has intrinsic dimension 0, it is the manifold itself which has intrinsic dimension $>0$.\n- lines 35-36: \"Such an example ... as the conditioning variable\" is pretty hard to understand as a sentence.\n- line 128: \"Theorem.1\"\n- line 182: \"The cost of VAE balances the three modules with the two terms\" is also hard to understand.\n- Citations are also sloppy: [4] and [5] are the same, and at least one paper is cited as an arxiv preprint rather than as a conference paper, namely [13], which appeared in ICLR 2021. Note that [13] is simply the one I noticed, but I did not extensively check this, and would not be surprised if this happened more than once.\n\n\n[A] Diagnosing and Enhancing VAE models; Dai & Wipf (2019)\n I have the following questions:\n\n16. It is claimed that \"this has never been rigorously proven\" in the abstract, and that \"this has only been proven under an assumption that the decoder is linear or approximately linear [2,4]\" when talking about the number of active dimensions estimating the intrinsic dimension. Could you please explain why this is the case? I do not think [4] uses neither a linear nor an approximately linear assumption, and it is also not clear how exactly the results in [4] (particularly the discussion after equation 9 of the arxiv version of [4]) are not rigorous.\n\n17. In Table 4, it is not clear how the AD values are obtained: were these averaged over different $c$'s?\n\n18. How relevant is the VAE setup here? 
For example, [B] obtains very similar results about the value of the optimal log-likelihood to those of Theorem 1, in a general context of density estimators rather than just VAEs. Similarly, [C] extends the result of [A], which is quite related to the results in the paper being reviewed, from VAEs to density estimators in general. It feels like the ideas presented here about conditional models might be extended to the general setting of density estimators as well. Could you comment on this? To be clear, I am aware that [B] and [C] are very recent and I do not see them not being discussed or compared against in the submitted version as a problem, although I do believe some discussion should be added when the paper is updated.\n\n\n[B] LIDL: Local Intrinsic Dimension Estimation Using Approximate Likelihood; Tempczyk, Michaluk, Garncarek, Spurek, Tabor & Golinski (2022)\n\n[C] Diagnosing and Fixing Manifold Overfitting in Deep Generative Models; Loaiza-Ganem, Ross, Cresswell & Caterini (2022) I find it hard to properly assess the limitations of this work in its current state given its poor presentation. If my doubts are clarified during the review process, particularly point 8 about definition 4, and question 16 about the differences with [A], I will update this section later on. That being said, part of the point of this paper is that intrinsic dimension can be estimated with VAEs, yet no comparison is carried out against other methods of estimating intrinsic dimension, e.g. [D].\n\n[D] Maximum Likelihood Estimation of Intrinsic Dimension; Levina & Bickel (2004)" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5 ]
[ "MCmbWia9lG1", "QQwswo4wtB", "QpiNn56H5rj", "1BAbEyvjfYN", "pqFMFx2OCoyz", "YoKN5Oc89qD", "ekGIh5H9vta", "IwIvHXuEUbs", "VDrJiWC5PZe", "UTYDEsYtWpm", "bNFFIb3L0iY", "uRcCqQaiJs", "JYa-FgO2UDh", "BDpk3S0cezf", "BDpk3S0cezf", "FCZg4spwdDb", "nips_2022_Lvlxq_H96lI", "nips_2022_Lvlxq_H96lI", "nips_2022_Lvlxq_H96lI" ]
nips_2022_Sw_zDFDTr4
APG: Adaptive Parameter Generation Network for Click-Through Rate Prediction
In many web applications, deep learning-based CTR prediction models (deep CTR models for short) are widely adopted. Traditional deep CTR models learn patterns in a static manner, i.e., the network parameters are the same across all the instances. However, such a manner can hardly characterize each of the instances, which may have different underlying distributions. It actually limits the representation power of deep CTR models, leading to sub-optimal results. In this paper, we propose an efficient, effective, and universal module, named Adaptive Parameter Generation network (APG), which can dynamically generate parameters for deep CTR models on-the-fly based on different instances. Extensive experimental evaluation results show that APG can be applied to a variety of deep CTR models and significantly improve their performance. Meanwhile, APG can reduce the time cost by 38.7\% and memory usage by 96.6\% compared to a regular deep CTR model. We have deployed APG in an industrial sponsored search system and achieved 3\% CTR gain and 1\% RPM gain, respectively.
Accept
The paper focuses on the application of click-through rate (CTR) prediction, and proposes input-aware model parameters that are dynamically generated in order to boost the representation power of deep CTR prediction models. To reduce time and memory complexity, the method decomposes the parameters and dynamically generates only part of the decomposed parameters. Improved results are shown on three public datasets and in A/B testing on an industrial system. Overall, this is a nice application-focused work that applies the widely studied idea of parameter generation and decomposition to the new problem of CTR prediction.
train
[ "KpguolYzbB0", "qY4KA8Vrod", "ARoG9h9uSHp", "nULeKTP6QGV", "FgW6ok6RD9A", "FKJzQhgnkQ", "4AFod-XUlel", "uecTo0qKW8sI", "-mbAGqRnl5j", "8XEei-0D_pP", "72gezee88WT", "8YuC0fIwpl3", "_v42aLEn2S", "PxO8VzNqYr", "gWqCpdDspSP" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks and I don't have further questions.", " Thanks to the author for the detailed reply. Most of my concerns have been addressed. I keep my original rating.", " Thanks for your valuable comments. Please allow us to further address your concerns below. \n\n**1、[Novelty]**\n\nAs mentioned in points 2 and 3 in the previous response, we have shown:\n* For hypernetworks [1], the pioneer work in CTR prediction can be recognized in our paper.\n* For HyperGrid [2], although similar techniques (e.g., low-rank parameterization) are adopted in HyperGrid and ours, the novelty in motivations behind these techniques are more important and could be appreciated\n* Apart from some similar techniques (mentioned in reviews), we also design a number of novel techniques in this paper (e.g., decomposed feed-forwarding, over-parameterization, and condition strategies).\n\nBesides, thanks for pointing out a new related paper ALBERT [3] mentioned in this feedback. However, the parameter sharing used in [3] is a different concept from our paper. For ALBERT, parameter sharing refers to sharing parameters across different layers. For APG, parameter sharing refers to sharing parameters across different instances in the same layer. Furthermore, ALBERT is also not in the concept of hypernetworks and it focuses on the model reduction of Bert.\n\n**2、[Valuable work]**\n\nApart from the novelty of our paper (mentioned in the above point 1), we also want to highlight that it is a valuable work:\n\n* New direction. In the current stage, most of the existing CTR-related works focus on architecture design and ignore the improvement of the model parameters $\\Theta$. We believe the idea of APG can bring a new direction and encourage many follow-up works.\n\n* Universal module. The proposed APG is a universal module that can be easily applied to most existing deep CTR models and improve the performance in all cases (see Table 2).\n\n* Real industrial application. We have successfully deployed APG in an industrial sponsored search system and achieved significant gains.\n\nAll in all, we show the novelty of the ideas and techniques in this work in the above point 1. At the same time, we hope the valuable parts and further influence of this work could be recognized, which could also play an important role in evaluating whether a paper is good or not.\n\n**3、[Not a simple application]**\n\nWe would like to re-emphasize that APG is not a simple application. There naturally exist gaps between CV/NLP and CTR prediction, which leads to great challenges. More details:\n\n* Time sensitive. Different from CV/NLP, for the practical problem of CTR prediction, users are sensitive to the response time when they are searching in a recommendation system, and a slight increase in inference time might not be tolerated. More seriously, there has a heavy dense matrix product in CTR prediction models, which further aggravates the efficiency problem (see Section 1 in this paper). Hence, settling such a challenge significantly supports the novelty and the valuable idea of this paper.\n* Big vocabulary size. Compared with a limited vocabulary size used in NLP (e.g., ten thousand unique words [4]), for CTR prediction, the data is constructed by a large number of users and items (e.g., 100 million users and 80 million items in this paper). A huge vocabulary size gap drives us to carefully model the correlation and the diversity among different instances. 
Here, we really hope reviewer gHsX could think twice about the difference between CV/NLP and CTR prediction rather than directly noting “application paper”. It's these differences that make our work valuable and inspire the novel techniques in this paper. ", " \n**4、[Deep insight about APG]**\n\nThanks for your advice on giving deeper insight into APG. We agree that a theoretical analysis can better support and explain APG, and make this work stronger.\n\nHere, we give a theoretical analysis of the expressive power of APG and the guidelines of condition design.\nWe will add this part to the revised manuscript.\n\nInspired by Cohen et al. [5], who establish a tensor analysis approach to prove that the expressive power of a deep neural network increases super-exponentially with the network depth relative to the network width, we also characterize the expressive power of APG from this tensor perspective.\n\nTo begin, an instance is defined as a collection of vectors $(x_1,...,x_N)$, where $x_i \in \mathbb{R}^s$ refers to a feature vector of this instance. Then we can represent these different features by a (positive) representation function:\n\n\begin{align}\nf_{d_i}(x_i)\n\end{align}\n\nwhere $d_i \in [1,2,..,M]$ and $M$ is the number of different feature representations. Then, for a classification task, we take a deep neural network as a mapping from an instance to a cost function over the label $y$. Following Cohen et al. [5], such a mapping can be represented by a tensor $\mathcal{A}^y$ operated on the combination of the representation functions:\n\n\begin{align}\nh_y(x_1,...,x_N)=\sum_{d_1,...,d_N=1}^{M} \mathcal{A}^y_{d_1,...,d_N} \prod_{i=1}^{N}f_{d_i}(x_i)\n\end{align}\n\nThen the expressive power of this network is defined as the ability to construct labelings that differentiate input values. More precisely, to be able to distinguish data instances $x$ from $\hat{x}$ ($x \neq \hat{x}$), $h_y(x_1,...,x_N) - h_y(\hat{x}_1,...,\hat{x}_N)$ is required to be nonzero, i.e.,\n\n\begin{align}\n\sum_{d_1,...,d_N=1}^{M} \mathcal{A}^y_{d_1,...,d_N} \left( \prod_{i=1}^{N}f_{d_i}(x_i)-\prod_{i=1}^{N}f_{d_i}(\hat{x}_i) \right) \neq 0 ~~~ (1)\n\end{align}\n\nIt can be seen that the inequality holds when the difference $\prod_{i=1}^{N}f_{d_i}(x_i)-\prod_{i=1}^{N}f_{d_i}(\hat{x}_i)$ is not in the null space of $\mathcal{A}^y_{d_1,...,d_N}$. Hence, the expressive power is equivalent to the rank of the tensor $\mathcal{A}^y$. Then the rank of the tensor $\mathcal{A}^y$ scales as $n^{2^L}$ with measure 1 over the space of all possible network parameters ($n$ is the network width and $L$ is the network depth) [5].
Next, we will show the expression power of APG can be bigger than $n^{2^L}$.\n\nSpecifically, for APG, the difference of $h_y(x_1,...,x_N) - h_y(\\hat{x}_1,...,\\hat{x}_N)$ is:\n\n\\begin{equation}\nh_y(x_1,...,x_N) - h_y(\\hat{x}_1,...,\\hat{x}_N)\n\\end{equation}\n\n\\begin{equation}\n=\\sum_{d_1,...,d_N=1}^{M} \\mathcal{A}^y_{d_1,...,d_N} \\prod_{i=1}^{N}f_{d_i}(x_i) - \\sum_{d_1,...,d_N=1}^{M} \\mathcal{\\hat{A}}^y_{d_1,...,d_N} \\prod_{i=1}^{N}f_{d_i}(\\hat{x}_i)\n\\end{equation}\n\n\\begin{equation}\n=\\sum_{d_1,...,d_N=1}^{M} \\mathcal{A}^y_{d_1,...,d_N} \\left( \\prod_{i=1}^{N}f_{d_i}(x_i)-\\prod_{i=1}^{N}f_{d_i}(\\hat{x}_i) \\right) ~~~ (2)\n\\end{equation} \n\n\\begin{equation}\n+\\sum_{d_1,...,d_N=1}^{M} \\left(\\mathcal{A}^y_{d_1,...,d_N} - \\mathcal{\\hat{A}}^y_{d_1,...,d_N} \\right) \\prod_{i=1}^{N}f_{d_i}(\\hat{x}_i) ~~~ (3)\n\\end{equation} \n\nIf term (2) and term (3) above are both nonzero, having term (2) exactly equal to the negative of term (3) results in zero measure over the space of network parameters, since the parameter generation network is independent from the deep ctr models. Then we can simply consider the cases where term (2) is zero (note it refers to the indistinguishable case in above Eq 1 of the base method) and discuss whether term (3) is also zero. Here we take the group-wise strategy as an example, since other strategies can be taken as special cases of the group-wise strategy by setting a different number of instances in each group. We assume all instances are dividing into $T$ groups (i.e., $\\{G_1,G_2,...,G_T\\}$). Then the probability of choosing exactly the same generated parameters for different instances (i.e., $\\mathcal{A}^y_{d_1,...,d_N} = \\mathcal{\\hat{A}}^y_{d_1,...,d_N}$) is:\n\n\\begin{equation}\n\\frac{|G_1|}{\\sum_{i}|G_i|} \\times \\frac{|G_1|-1}{\\sum_{i}|G_i|-1}+\\frac{|G_2|}{\\sum_{i}|G_i|} \\times \\frac{|G_2|-1}{\\sum_{i}|G_i|-1}+...+\\frac{|G_T|}{\\sum_{i}|G_i|} \\times \\frac{|G_T|-1}{\\sum_{i}|G_i|-1}\n\\end{equation} \n\n\\begin{equation}\n=\\frac{\\sum_{i}(|G_i|(|G_i|-1))}{\\sum_{i}|G_i|(\\sum_{i}|G_i|-1)} \n\\end{equation} \n\n\\begin{equation}\n=\\frac{\\sum_{i}(|G_i|(|G_i|-1))}{D(D-1)} \n\\end{equation}\n\nwhere $D=\\sum_{i}|G_i|$ refers the number of all instances.\nTherefore, with a positivity of the representation functions, the probability of $h_y(x_1,...,x_N) - h_y(\\hat{x}_1,...,\\hat{x}_N) \\neq 0 $ is\n \n$1-\\frac{\\sum_{i}(|G_i|(|G_i|-1))}{D(D-1)} $.\n\nIt means that there is a $1-\\frac{\\sum_{i}(|G_i|(|G_i|-1))}{D(D-1)} $ probability that the expressive power of APG is bigger than $n^{2^L}$ (the base). Furthermore, from $\\frac{\\sum_{i}(|G_i|(|G_i|-1))}{D(D-1)}$, we can find that a more sparse group division can lead to a lower value of $\\frac{\\sum_{i}(|G_i|(|G_i|-1))}{D(D-1)}$ and result in a higher expressive power. Thus, the self-wise strategy is recommended since the value in each dimension of input $z_i$ in Eq 1 of our paper can be different from different instances.\n\n\n**5、[At Last]**\n\nWe really appreciate your valuable and high-quality reviews, and hope to have addressed your concerns. At last, considering the pioneer and valuable work in CTR prediction, the novel techniques, and the theoretical analysis (in the above point 4), we sincerely hope you could re-evaluate our paper.\n\n\n[1] Hypernetwork. ICLR 2017\n\n[2] HyperGrid Transformers: Towards A Single Model for Multiple Tasks. 
**5、[At Last]**\n\nWe really appreciate your valuable and high-quality reviews, and hope to have addressed your concerns. At last, considering the pioneering and valuable work in CTR prediction, the novel techniques, and the theoretical analysis (in point 4 above), we sincerely hope you could re-evaluate our paper.\n\n\n[1] Hypernetwork. ICLR 2017\n\n[2] HyperGrid Transformers: Towards A Single Model for Multiple Tasks. ICLR 2021\n\n[3] ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. ICLR 2020\n\n[4] How Large a Vocabulary Does Text Classification Need? A Variational Approach to Vocabulary Selection. NAACL 2019\n\n[5] On the expressive power of deep learning: A tensor analysis. In Conference on Learning Theory, 2016.", " I really appreciate your responses, very powerful and touching.\n\nBut honestly, I don't really learn many new ideas from this version -- parameters generated by hypernetworks (Hypernetwork. ICLR 2017), parameter sharing (ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, ICLR 2020), and low-rank parameterization (HyperGrid, ICLR 2021) are all existing methods.\n\nAnd you don't seem to have many deep insights about why instance-based learning is better than group-based learning or whole-data-based learning, only an intuitive statement: \"Ideally, besides modeling the common pattern, the parameters should be more adaptive and can be dynamically changed for different instances to capture custom patterns at the same time.\"\n\nLike I mentioned, if this were KDD or RecSys, I would vote for accept without doubt, since you successfully adapt these existing methods to this new domain -- CTR.\n\nPlus, I am not a picky guy; I gave a score of 6 to the other two papers under my review.", " Thanks for your valuable comments. Please allow us to address your concerns below.\n\n**1、[Lack of related works of hypernetwork (W1、W2 and Q1)]**\n\nThe idea of generating weights for different instances was originally proposed in DFN [1], which inspired us and has been cited in our paper. Hypernetworks [2] also claims to be an extension of DFN (see Section 2 in [2]). Considering this, we cited DFN in the current version. Hypernetworks is also an important work that has had a great impact on related areas, and we will include it and other related work thoroughly in the revised manuscript. Thanks for pointing this out.\n\n**2、[Not new idea of low-rank parameterization and Parameter sharing (W1)]**\n\nThanks for pointing out the related paper HyperGrid [3]. We agree that HyperGrid seems to adopt similar techniques, including low-rank parameterization and parameter sharing. But we would like to note that the motivation is not the same.\n\n* For HyperGrid, the key problem is the task conflict in a single model. To address this problem, the authors introduce local-global parameters which lead to a decomposed implementation, and the low parameter cost introduced by low-rank parameterization is the result rather than the motivation. Besides, HyperGrid cares more about the parameter cost difference between multiple models and a single model rather than the cost difference between w/ and w/o low-rank parameterization. \n* For ours, the key problem is the efficiency problem in CTR prediction. Thus low-rank parameterization is proposed to address this problem. A direct justification is that, for the sake of efficiency, only $S_i$ is used as the specific parameter. This is also different from HyperGrid.\n\nAlthough there may be some similar techniques used in different works at first glance, we appreciate that the novelty in the motivations behind these works can be recognized.\n\n**3、[Novelty (W1 and Q1)]**\n\nWe also want to re-highlight some of the novel aspects of this paper and will make the presentation of the novelty more explicit in the revised manuscript.\n\n* It is true that the idea of generating parameters has been widely studied in CV and NLP. 
But they do not diminish the novelty of our work. We believe such a direction in CTR prediction can boost and encourage many follow-up works, and create chances of cooperation among CV, NLP, and CTR prediction. Furthermore, we also would like to share a case: the main contribution of the Swin Transformer (the ICCV 2021 best paper) is successfully introducing transformers from NLP to CV. \n* It is not a simple attempt or application: both efficiency and effectiveness are key in CTR prediction. There is no trivial solution to address these challenges. Therefore, the novel method APG is proposed. More details: \n\t* Apart from the low-rank parameterization and parameter sharing mentioned in point 2 above, we want to emphasize the novelty of the proposed decomposed feed-forwarding. Actually, it is the key to addressing the computational efficiency problem and allowing APG to serve efficiently online. \n\t* We also want to highlight the novelty of over-parameterization, which enriches the model capacity without any additional memory or time cost during inference.\n* We propose novel condition strategies, which are seldom discussed in most related works in CV and NLP. \n\nAll in all, considering the pioneering work in CTR prediction and the novel techniques proposed in this paper (e.g., decomposed feed-forwarding, over-parameterization, and condition strategies), we sincerely hope reviewer gHsX could re-judge the novelty of our paper.", " **4、[First hypernetwork paper in CTR prediction? (Q2)]**\n\nYes. As far as we know, it is the first work to adopt parameter generation for different instances in CTR prediction. We also would like to note that the adoption is not a trivial task. Some special challenges such as computational efficiency and prediction effectiveness need to be addressed to make APG work and successfully serve online in real CTR prediction applications.\n\n\n**5、[Application or Theory or ? (Q3 and Q4)]**\n\nWe appreciate your positive comment \"It is no doubt that this paper is a good application paper\". However, we have some concerns with the concepts of \"application papers\" and \"theory conferences\". More details:\n\n* _Theory conference._ NeurIPS is a comprehensive conference that includes various topics, e.g., application, theory, and so on (see the Call For Papers of NeurIPS 2022). We believe it is such diversity that makes this conference flourish.\n* _Application paper._ It is true that our work focuses on the practical problem of CTR prediction. But it is not a simple application of hypernetworks. On the one hand, a series of novel techniques and architectures are proposed. Novel ideas to address the efficiency and effectiveness problems in CTR prediction are presented in this paper. On the other hand, we also give a theoretical analysis of the complexity of the proposed method.\n\n\n**6、[The improvement is limited on public datasets (W3)]**\n\nThe significant improvement of our method on public datasets can be summarized as follows:\n\n* It is a universal module that can be easily applied to most existing deep CTR models and improves their performance in all cases (see Table 2). \n* It is true that in some cases the improvement seems not that large (as mentioned in W3). On the one hand, a 0.1% absolute AUC gain is regarded as significant for the CTR prediction task [4,5,6]. On the other hand, DIFM can achieve SOTA performance due to computationally expensive operations (see the DIFM paper). 
It is more meaningful that a model with low computational cost (e.g., WDL+APG) can achieve competitive or even better performance than the SOTA DIFM (see Table 2).\n* The significance of APG is reflected not only in its effectiveness (Section 4.2) but also in its efficiency (Section 4.4). \n\n\n[1] Dynamic Filter Networks. NIPS 2016\n\n[2] HyperNetworks. ICLR 2017\n\n[3] HyperGrid Transformers: Towards A Single Model for Multiple Tasks. ICLR 2021\n\n[4] Deep interest network for click-through rate prediction. KDD 2018\n\n[5] AutoInt: Automatic feature interaction learning via self-attentive neural networks. CIKM 2019\n\n[6] Wide & deep learning for recommender systems. RecSys workshop 2016", " Thanks for your valuable comments. Please allow us to address your concerns below.\n\n**1. [Q1: The novelty of such fine-grained input-aware parameter allocation methods. The related work section states that this is almost the first work to apply such a heuristic, right?]**\n\nYes. As far as we know, it is the first work to adopt parameter generation for different instances in CTR prediction. We would also like to note that the adoption is not a trivial task. Special challenges such as computational efficiency and prediction effectiveness need to be addressed to make APG work and successfully serve online in real CTR prediction applications.\n\n\n**2. [Q2: Do we have a benchmark on the effectiveness of various condition designs?]**\n\nAs mentioned in Appendix C, empirically speaking, the self-wise strategy shows the best performance, and it is also easy to apply since it does not need any additional prior knowledge. Of course, you may sometimes have a special purpose. For example, if you want to build different model parameters for different users, you can consider the group-wise strategy.\n\n\n**3. [Q3: The gain from v4 to v5 is huge; why is an over-parameterization trick so powerful? If parameter sharing is beneficial, the base model should not be so poor?]**\n\nActually, from Table 4, the gains come mainly from allowing parameters to be sensitive to different instances rather than from over-parameterization. Specifically, the gains for the different techniques are 0.27 (adding specific parameters, base to v1), 0.13 (adding shared parameters, v2 to v4), and 0.12 (adding over-parameterization, v4 to v5).\n\nThe gains of over-parameterization come from the following:\n\n* Without over-parameterization, the amount of shared parameterization is not enough (due to small K) to model the common patterns, so adding over-parameterization can significantly improve performance. For the base model, in contrast, the shared parameters are already plentiful, and simply increasing the number of parameters may not give much improvement. Besides, from Appendix E.1, we can also see that when P is large enough, the gain converges.\n* Actually, as mentioned in Section 3.2, over-parameterization can result in implicit regularization and thus enhance generalization.\n\n\n**4. [More effort is needed on the ablation study (L1)]**\n\nThanks for your advice on devoting more effort to the ablation study. We will make a more detailed analysis in the revised manuscript. Here, we briefly summarize the key points of the improvement across the different versions:\n\n* base -> v1: as described in Section 1, parameter personalization can enrich the expressive power of deep CTR models.\n* v1 -> v2: since the weight matrix resides on a low intrinsic dimension [1,2], it incurs only a small performance drop when adopting a low-rank-based method. 
Besides, we also conduct a detailed experiment to analyze the impact of the rank k (see Appendix E.2).\n* v2 -> v4: as mentioned in Section 3.2, parameter sharing can capture common and general information, which improves performance.\n* v4 -> v5: see the answer above (i.e., point 3).\n\n**5. [Weak baseline on parameter allocation (W1)]**\n\nAs far as we know, almost all current related works in CTR prediction focus on coarse-grained parameter allocation. Thus, if a strong baseline needs to be established, one possibility is to directly take the basic model (proposed in Section 3.1) as a strong baseline.\n\n[1] Measuring the intrinsic dimension of objective landscapes. arXiv 2018\n\n[2] Intrinsic dimensionality explains the effectiveness of language model fine-tuning. arXiv 2020", " Thanks for your valuable comments. Please allow us to address your concerns below.\n\n**1. [Q1: In a group-wise conditional design, is there any extra effort required to divide instances into different groups?]**\n\nThe group-wise conditional design can be flexible. One can simply divide instances into different groups by item category, since this kind of information is usually directly available. One can also divide instances into different groups with clustering methods.\n\n\n**2. [Q2: S is used to capture custom patterns because of its low rank. Have you tried using U and V for this purpose? Did they achieve similar results?]**\n\nThanks for your suggestion. Here we conduct additional experiments using U and V as specific parameters.\nThe results are as follows: \n\n|Version | MovieLens | Amazon | IAAC | Ave(AUC) | Ave($\\Delta$) |\n| -------- | -------- | -------- |-------- |-------- |-------- |\n|$U_i(S(V_ix_i))$ | 79.64 | 69.27 | 65.80 | 71.57 | +0.39 |\n\nGenerally speaking, although using both U and V can achieve similar performance, it is costly compared to using only S. The reason is that the generation complexity of the specific parameters is sensitive to N and M when using U and V.\n\n**3. [Q3: v5, over-parameterization, appears to be a separate technique. How does it connect to APG?]**\n\nAs mentioned in Section 3.2, although the low-rank design can significantly reduce model complexity, the number of shared parameters is reduced, which may hurt model performance. Thus, over-parameterization is proposed.\n\n**4. [Q4: What is the APG version reported in Table 2?]**\n\nIt is v5. We are sorry for the confusion about the version in Table 2 and will clarify it in the revised manuscript.\n\n**5. [Q5: The training time and memory complexity of v5 are not provided in Table 5. Are they similar to the basic version? What is the trade-off between the efficiency and effectiveness of v5?]**\n\nWe sincerely apologize for the misleading writing around Table 5 in the current version. Actually, the time in Table 5 refers to inference time, and for CTR prediction we care more about online inference efficiency. As mentioned in Section 3.2, over-parameterization does not introduce any additional latency or memory cost at inference, meaning the time and memory costs are similar to v4. It is true that adding over-parameterization will bring additional cost to training. 
But considering the efficiency during inference, it is worthwhile to adopt over-parameterization for CTR prediction tasks.\n\nAgain, we are sorry for the misleading expression and we will make it clear in the revised version.\n\n**6. [L1 and W1: Please provide a comparison between using generated and fixed S in the current v1-v5 versions.]**\n\nHere we provide additional results for the version using a fixed S (i.e., $U(S(Vx_i))$).\n\n|Version | MovieLens | Amazon | IAAC | Ave(AUC) | Ave($\\Delta$) |\n| -------- | -------- | -------- |-------- |-------- |-------- |\n|$U(S(Vx_i))$ | 79.09 | 69.21 | 65.26 | 71.19 | +0.01 |\n\nWe can see that using a fixed S achieves performance similar to v1.\n\n**7. [L2 and W2: Please provide comparisons with existing methods of automatically generating weight matrices (e.g., in the AutoML domain).]**\n\nThank you for the nice idea of considering a comparison with AutoML methods. As far as we know, AutoML methods are mostly used for automatically selecting operations or architectures, as in [1,2], rather than for weight generation. We will also discuss the relationship with AutoML in the revised manuscript. \n\n**8. [Typos (W3)]**\n\nThanks. We will fix them.\n\n[1] AutoGroup: Automatic feature grouping for modeling explicit high-order feature interactions in CTR prediction. SIGIR 2020\n\n[2] Neural input search for large scale recommendation models. KDD 2020 ", " Thanks for your valuable comments. Please allow us to address your concerns below.\n\n**1. [Q1: In Line 283 "Since over parameterization does not introduce any cost, it is not considered here": Why? It seems that over-parameterization would introduce more computational cost during training.]**\n\nWe sincerely apologize for the misleading writing around Table 5 in the current version. Actually, the time in Table 5 refers to inference time, and for CTR prediction we care more about online inference efficiency. As mentioned in Section 3.2, over-parameterization does not introduce any additional latency or memory cost at inference, meaning the time and memory costs are similar to v4. It is true that adding over-parameterization will bring additional cost to training, but considering the efficiency during inference, it is worthwhile to adopt over-parameterization for CTR prediction tasks. Again, we are sorry for the misleading expression and we will fix it in the revised version.\n\n\n**2. [Typos (L1)]**\n\nThanks for your careful review. We will definitely polish the paper, and a thorough proofreading will be conducted.", " The authors of the paper address the problem of click-through rate (CTR) prediction customized to different instances. They propose an adaptive parameter generation (APG) method for generating weight matrices in instance-adapted CTR systems. Generating the entire weight matrix directly is very time- and memory-consuming. To reduce time and memory complexity, they propose to decompose the weight matrix into a product of three matrices, namely S, U, and V. Only S is dynamically generated to capture instance-specific custom patterns, while U and V are shared across instances to capture common patterns. Results from three public datasets and real search systems are shown. Strengths:\n1. By decomposing the matrix and then decomposing the feedforward, the time and memory complexity of the CTR model with APG is greatly reduced.\n2. Improved scalability to large inputs through over-parameterization.\n3. The paper is well written.\n\nWeakness: \n1. Does not provide a comparison between generating the S vector and using a fixed S vector on the current v1-v5 versions. \n2. 
It does not provide a comparison with existing methods that automatically generate weight matrices (e.g., in the AutoML field). \n3. A small typo on line 210. 1. In a group-wise conditional design, is there any extra effort required to divide instances into different groups?\n2. S is used to capture custom patterns because of its low rank. Have you tried using U and V for this purpose? Did they achieve similar results?\n3. v5, over-parameterization, appears to be a separate technique. How does it connect to APG?\n4. What is the APG version reported in Table 2?\n5. The training time and memory complexity of v5 are not provided in Table 5. Are they similar to the basic version? What is the trade-off between the efficiency and effectiveness of v5? 1. Please provide a comparison between using generated and fixed S in the current v1-v5 versions.\n2. Please provide comparisons with existing methods of automatically generating weight matrices (e.g., in the AutoML domain).", " This paper studies the problem of enabling input-aware model parameters, which are dynamically generated in order to boost the representation power of deep CTR (click-through rate) prediction models. A novel and general method, APG (Adaptive Parameter Generation), is proposed, which is able to work together with a variety of existing CTR prediction models. By employing several techniques such as low-rank parameterization, decomposed feed-forwarding, parameter sharing, and over-parameterization, APG generates adaptive parameters in an efficient and effective way. Computational complexity is analyzed carefully; empirical results demonstrate APG's efficiency and effectiveness. Notably, APG is claimed to be deployed in an industrial sponsored search system, achieving performance gains in online A/B testing. * **Originality**: Dynamic neural networks are widely studied in computer vision and natural language processing, in which model parameters and architectures can be input-aware and thus dynamically generated. It is claimed that this paper is the first to bring the idea of dynamic neural networks into deep CTR prediction models.\n* **Clarity**: The paper is well organized and clearly presented. However, typos are widespread throughout the text.\n* **Significance**: The computational complexity of the proposed APG is carefully analyzed, and the efficiency is also validated empirically. A wide variety of baselines are employed to demonstrate the generality of APG. It is noteworthy that APG is claimed to be deployed in an industrial sponsored search system, achieving performance gains in online A/B testing. In Line 283 "Since over parameterization does not introduce any cost, it is not considered here": Why? It seems that over-parameterization would introduce more computational cost during training.\n\n\n The writing has to be polished carefully. Several typos are listed below:\n\n* Line 87: analysis → analyze\n* Line 106: is lack → lacks\n* Line 107: kinds → kind\n* Line 118: General → Generally\n* Line 215: detailed analyze → analyze in detail\n* Line 242: Since APG → APG\n* Line 282: analysis → analyze\n* Line 349: detailed analyzed → analyzed in detail", " This paper proposes to learn input-aware parameters in deep CTR models to boost their representation power. A clear method iteration trace is presented with clear design and solid analysis. The final version achieves both efficiency gains and accuracy improvements. Most impressively, the method is tested in an industrial production system. 
Strengths\nThe paper is well written and quite readable. The method's iteration records are plausible.\n \nWeaknesses\nThe current baseline on parameter allocation is too weak. This is not necessarily a weakness, since an advanced baseline for learning fine-grained model parameters may not exist. I am not an expert in the related literature. What is the novelty of such fine-grained input-aware parameter allocation methods? The related work section states that this is almost the first work to apply such a heuristic, right?\n\nDo we have a benchmark on the effectiveness of various condition designs?\n\nThe gain from v4 to v5 is huge; why is an over-parameterization trick so powerful? If parameter sharing is beneficial, the base model should not be so poor? The method iterations are surprisingly positive, which may imply some deep insights. Let's say \n- base -> v1: parameter personalization improves performance \n- v1 -> v2: low-rank parametrization doesn't change performance too much.\n- v2 -> v4: appropriate parameter sharing improves performance \n- v4 -> v5: over-parameterization (more shared parameters) largely improves performance\n\nIt seems the model favors heavy parameter sharing plus a certain amount of parameter personalization. More effort is needed to study the root factor and to build a more direct approach based on it.", " This paper aims to improve an important application -- CTR prediction.\n\nThe authors claim that a static way of parameterization, where the weights of the CTR network are shared by all instances, is suboptimal.\nInstead, they prefer a dynamic way of parameterization, where a personalized weight is generated for each instance.\n\nSpecifically, they apply a hypernetwork to generate the weights for each instance. Moreover, they apply techniques like low-rank parameterization, decomposed feed-forwarding, parameter sharing, and over-parameterization to improve the efficiency and effectiveness of their method.\n\nThey present experimental results on both public datasets and an online real-world search system. Strengths:\n\n(1) The paper is well-organized. The motivations for using low-rank parameterization, decomposed feed-forwarding, parameter sharing, and over-parameterization are presented in a very clear way. The figures are very clear and beautiful.\n\n(2) The experiments are comprehensive, including public datasets and an online real-world search engine.\n\nWeakness:\n\n(1) The novelty is limited. The core idea of this paper is using a shared hypernetwork to generate weights for each instance or group, which was originally proposed in "Hypernetworks, David Ha, Andrew Dai, and Quoc V Le. 2016." and has been widely applied, e.g., in "Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks, ACL 2021". Techniques like low-rank parameterization and parameter sharing are also not new, and can be found in "HyperGrid Transformers: Towards A Single Model for Multiple Tasks, ICLR 2021."\n\n(2) Related work is missing. Hypernetworks should be an important piece of the related work in the paper, but the authors seem to be completely unaware of the existence of this line of work.\n\n(3) The improvement is limited on public datasets. In Table 2, it seems to me that the improvement of APG is not that large. For example, the average gain of APG is only 0.24 in terms of AUC on the MovieLens dataset. 
In particular, the SOTA CTR architecture DIFM has only been improved by 0.1, 0.06, and 0.66 on the three datasets; without a significance test, it is really hard to be convinced that this method can substantially improve CTR performance.\n May I ask if the authors could read more related papers on hypernetworks, put this work into that context, and re-evaluate the novelty?\n\nIs this the first paper that introduces hypernetworks into CTR prediction?\n\nThe current version seems to be a better fit for application-oriented conferences like SIGIR or KDD, considering that this paper neither provides a deep theoretical analysis of why the same shared parameters for all instances are suboptimal nor presents an original idea about hypernetworks.\n\nThere is no doubt that this paper is a good application paper that applies hypernetworks to CTR prediction. This paper has no potential negative societal impact." ]
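A note for readers of the APG exchange above: the recurring objects U, S, and V form a low-rank factorization in which only the small core S_i is generated per instance while U and V are shared. The PyTorch sketch below illustrates that idea under stated assumptions — the class name, the dimensions, the single-linear hypernetwork, and the generic condition vector z are all hypothetical choices for exposition, not the paper's actual code.

```python
import torch
import torch.nn as nn

class AdaptiveLowRankLinear(nn.Module):
    """Sketch of an APG-style layer: y = U(S_i(V(x))), with only S_i instance-specific."""

    def __init__(self, in_dim: int, out_dim: int, rank: int, cond_dim: int):
        super().__init__()
        self.V = nn.Linear(in_dim, rank, bias=False)    # shared down-projection
        self.U = nn.Linear(rank, out_dim, bias=False)   # shared up-projection
        self.hyper = nn.Linear(cond_dim, rank * rank)   # hypothetical hypernetwork for S_i
        self.rank = rank

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        # Decomposed feed-forwarding: the full out_dim x in_dim weight is never materialized.
        s = self.hyper(z).view(-1, self.rank, self.rank)   # (batch, rank, rank) cores
        h = self.V(x)                                      # (batch, rank)
        h = torch.bmm(s, h.unsqueeze(-1)).squeeze(-1)      # apply the generated core
        return self.U(h)                                   # (batch, out_dim)
```

Because the generated piece is only rank x rank, the hypernetwork output scales with rank^2 rather than with the full weight size, which is what makes per-instance generation affordable.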
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "-mbAGqRnl5j", "8XEei-0D_pP", "FKJzQhgnkQ", "FKJzQhgnkQ", "FKJzQhgnkQ", "uecTo0qKW8sI", "gWqCpdDspSP", "gWqCpdDspSP", "PxO8VzNqYr", "8YuC0fIwpl3", "_v42aLEn2S", "nips_2022_Sw_zDFDTr4", "nips_2022_Sw_zDFDTr4", "nips_2022_Sw_zDFDTr4", "nips_2022_Sw_zDFDTr4" ]
nips_2022_F8UV5CItyRG
On the Effective Number of Linear Regions in Shallow Univariate ReLU Networks: Convergence Guarantees and Implicit Bias
We study the dynamics and implicit bias of gradient flow (GF) on univariate ReLU neural networks with a single hidden layer in a binary classification setting. We show that when the labels are determined by the sign of a target network with $r$ neurons, with high probability over the initialization of the network and the sampling of the dataset, GF converges in direction (suitably defined) to a network achieving perfect training accuracy and having at most $\mathcal{O}(r)$ linear regions, implying a generalization bound. Unlike many other results in the literature, under an additional assumption on the distribution of the data, our result holds even for mild over-parameterization, where the width is $\tilde{\mathcal{O}}(r)$ and independent of the sample size.
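The abstract's central quantity is the number of linear regions a univariate one-hidden-layer ReLU network realizes. The NumPy sketch below counts "effective" regions by probing the exact slope between candidate breakpoints and merging adjacent cells whose slopes coincide; the function names and tolerance are assumptions for illustration, not code from the paper.

```python
import numpy as np

def effective_linear_regions(w, b, v, lo=-5.0, hi=5.0, tol=1e-9):
    """Count effective linear regions of f(x) = sum_j v_j * relu(w_j * x + b_j) on [lo, hi]."""
    # Candidate breakpoints sit where a neuron switches on/off: x = -b_j / w_j.
    bp = np.sort(-b[w != 0] / w[w != 0])
    bp = bp[(bp > lo) & (bp < hi)]
    knots = np.concatenate(([lo], bp, [hi]))
    mids = (knots[:-1] + knots[1:]) / 2                 # one probe point per candidate cell
    active = (np.outer(mids, w) + b) > 0                # which neurons are active in each cell
    slopes = active.astype(float) @ (v * w)             # exact slope of f inside each cell
    # Adjacent cells with equal slopes merge: cancellations reduce the effective count.
    return int(np.sum(np.abs(np.diff(slopes)) > tol)) + 1

rng = np.random.default_rng(0)
k = 50
print(effective_linear_regions(rng.normal(size=k), rng.normal(size=k), rng.normal(size=k)))
```

This also makes the paper's distinction concrete: a width-k network has at most k breakpoints, but the effective region count can be far smaller when neuron contributions align or cancel.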
Accept
Overall: The paper focuses on an end-to-end learning guarantee for gradient flow on shallow univariate neural networks in a binary classification setting. Reviews: The paper received four reviews: two strong accepts (confident and fairly confident), one accept (confident), and one borderline accept (fairly confident). It seems that there are at least three reviewers who will champion the paper for publication. The reviewers found the paper clear, with a clean presentation. The findings are interesting. The authors have provided extensive answers to reviewers' comments, answering most of them successfully. After rebuttal: A subset of the reviewers reached a consensus that the paper should be accepted. Confidence of reviews: Overall, the reviewers are confident. We put more weight on the reviews that engaged in the rebuttal discussion period.
train
[ "_r_J2cXwhy", "bMLdBT2KL3w", "1pdw8prR8aG", "Ju6rLXzKzRM", "AoZAALIASEG", "75GcvPS-ihv", "fDorkePMLi-", "iLeOXmHahX", "38AsjP-_mqZ", "PqiyswGS7p", "kTOkr8zQE36" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their response and clarifications to my questions. As this resolves my questions to the authors, I keep my score unchanged.", " Thank you for your response.", " Thanks for the response!", " Thank you for the positive feedback and support.\n\n\n1. “The results of this paper only apply…” - This is an excellent question. We hope that it might be possible to extend our analysis to homogeneous univariate networks of depth larger than 2, but there are technical difficulties that we currently do not know how to overcome. We believe that it is a tantalizing direction for future research.\n\n2. “This paper gives a nice characterization” - We agree that considering different architectures is an interesting research direction, although there seems to be a limited number of possibilities for univariate depth-2 networks.\n", " Thank you for your review and positive feedback.\n\n\n1. Mulayoff et al. - Thank you for bringing this paper to our attention.\n\n2. \"Can you please clarify the difference between $k$ and $k'$ in Remark 2.1\" - $k$ is the width of the student in the mild over-parameterization regime, namely $k$ does not scale with $n$. $k'$ is the width of the student in the extreme over-parameterization regime, namely $k'$ is larger than $n$. This implies that if $n\\to\\infty$ then $k\\ll k'$ and hence the terminology mild vs. extreme over-parameterization.\n\n3. We will explain the term \"breakpoints\" as suggested.", " Thank you for the positive feedback and thorough review.\n\n- \"The student network is required to be extremely wide (much larger than the teacher network)\" - The student's required width depends mainly on the magnitude of $\\rho$. For example, when the labels are determined by $f_r$ which is defined in Eq. (25), then the student's required width is roughly $\\mathcal{O}(r\\log(r))$, so there are cases where the student doesn't need to be much larger than the teacher.\n\n- \"The statement in the abstract...\" - We will clarify this.\n\n\nQuestions:\n\n- \"In what way would the authors argue...\" - We do not argue that our analysis in the restricted setting necessarily implies insights on the multivariate or deeper setting, however one can hope, as mentioned to a different reviewer, that there is also a certain bias towards networks with a small number of linear regions. While this is merely wishful thinking, it does provide a concrete property and an interesting question to explore in a future work.\n\n- \"Similarly, what are the main challenges...\" - We believe that analyzing the implicit bias in a multivariate setting is the main bottleneck for achieving such generalizations, however we are convinced that generalizing our optimization guarantees (Theorem 3.1) to the multivariate case is certainly possible, if suitable assumptions on the underlying distribution and teacher network are made. We are still not sure if and how our techniques will generalize to deeper architectures.\n\n- \"In the lines 360-363...\" - In our proof we show that if there are too many linear regions in an interval where the labels are similar, then it contradicts the KKT conditions. A more precise argument seems to involve much more technical details, and we will attempt to come up with a more intuitive explanation.\n\n- \"Please clarify line 284:...\" - This paragraph refers to the case of mild over-parameterization as explained in Remark 2.1. We assume that the sample size (i.e. 
number of observations) is sufficiently large and much bigger than the number of parameters in the student (i.e. the trained network), which implies a generalization bound irrespective of our implicit bias result. We will clarify this paragraph.", " Thank you for the feedback and detailed review.\n\n- \"The results of this paper build upon...\" and \"Since the theoretical results are derived...\" - We agree that our results make essential use of the assumptions on the data and the architecture, however we believe that we must first understand the univariate setting which in turn may shed more light on the more complex and interesting multivariate setting. Indeed, the univariate setting has been studied extensively in recent years (see related work section in the paper). As a technical side note, we believe that the study of the univariate case can identify certain properties that facilitate our analysis (such as counting the number of linear regions in the student) which could potentially extend to the multivariate setting, pinpointing concrete and interesting questions to explore in future work.\n\n- \"Compared with existing literature...\" - We believe that our technical contribution is two-fold: (i) We develop a technique which allows one to show that the objective satisfies the PL-condition locally in a neighborhood around the initialization, for network widths that are independent of the sample size. To the best of our knowledge, our paper is the first to provide such a guarantee in a setting where the output neuron's weights are being trained (see the Teacher-student setting and mild over-parameterization paragraph in the paper). (ii) Our characterization of the implicit bias indeed builds on existing works such as Lyu and Li [2019] and Ji and Telgarsky [2020]. However, our technique is based on a careful analysis of the KKT conditions that the point of convergence must satisfy, which elucidates a novel constraint on the number of linear regions such a point can have. We hope that our proof ideas will stimulate more such analyses in the future.\n\n- \"I understand this is a theoretical paper...\" - We agree that additional experimentation can support our theory and we will examine adding such experiments in a revised version.\n", " This paper studies the dynamics and implicit bias of gradient flow (GF) on two-layer univariate ReLU Networks in a binary setting. It shows that under a teacher network (with r neurons), (over-parameterized) GF is guaranteed to achieve perfect training accuracy, which learns a network that has at most O(r) linear regions. This is characterized by the paper as the implicit bias of GF, which leads to a generalization bound. In addition, the paper proves a necessary condition for successful learning with over-parameterization in terms of a lower bound on the number of neurons. Strength:\n\n1. The paper studies an important problem in deep learning theory. The provided characterizations of the underlying dynamics of gradient flow may explain fundamentally the success of gradient-based deep learning.\n\n2. The paper is well-structured, which presents the theoretical results in a coherent and logical way: over-parameterization -> small training error -> implicit bias towards a network with small complexity -> a generalization bound.\n\n3. The result on the sufficiency of over-parameterization for successful learning is new.\n\n\nWeakness:\n\n1. 
The results of this paper build upon a couple of assumptions with respect to the underlying data distribution and the neural network class. In particular, it assumes a univariate neural network setting, where the inputs have a single feature. This seems to be a strong assumption to me, which should be discussed more in the paper. The challenges of extending the presented results to a multivariate setting, such as [1], are not clear to me.\n\n[1] Gradient Descent Maximizes the Margin of Homogeneous Neural Networks, Kaifeng Lyu & Jian Li, ICLR 2020\n\n2. Compared with existing literature that aims to understand the dynamics of gradient descent, the technical contributions of this paper are not clearly presented. It seems that the proof techniques of this paper are based on existing, known techniques.\n\nMinor ones:\n\n1. I understand this is a theoretical paper, but synthetic experiments which test the theoretical claims (or even better, experiments under settings that go beyond your assumptions) will largely strengthen the paper. \n\n2. A Typo: point our that -> point out that in line 193\n 1. Since the theoretical results are derived under a simplified setting, I am wondering what the implications of your results are for more realistic settings that are typically adopted for deep learning? Yes.", " The paper studies two-layer neural networks with a single input dimension for binary classification with exponential or logistic loss in a teacher-student setting. The teacher network is assumed to have a fixed number of hidden neurons (hence a fixed number of linear regions), and the paper explores the type of solution that gradient flow converges to, if the student network has a (much) larger number of hidden neurons. This reveals that, no matter how many neurons the student network has, gradient flow converges to a solution with small training loss and a restricted number of linear regions. \n\nThe proof is achieved in two steps: (i) Overparameterization implies convergence to sufficiently small loss (with high probability over a certain initialization) where all samples are classified correctly (ii) Gradient Flow then converges to a solution with a limited number of linear regions that depends (linearly) on the target network width (and hence the number of linear regions of the target network). \\+ The authors follow a novel approach that depends on the number of linear regions of the teacher network, irrespective of the sample size. The implicit bias found by the authors in their setting is geometrically understandable in the sense that the point of convergence is a network with a limited number of linear regions in relation to the target network function's number of linear regions. \n\n\\+ Since these main results are independent of the sample size, they can then be nicely combined with a generalization bound for finite VC dimension to obtain a generalization bound also for the solution of the gradient flow algorithm.\n\n\\+ While the setting is quite limited (single input dimension, single hidden layer), the results are important for the understanding of the implicit bias of gradient flow (i.e., what solutions are favored by training with gradient flow). The techniques are interesting and novel, and they build upon two recent papers by Lyu & Li and Ji & Telgarsky. 
The paper provides complete, detailed proofs, which look sound and correct to me (I did not check all the proofs in the supplement in complete detail).\n\n\\+ The authors provide an additional result, which shows that overparameterization is necessary for learnability, i.e., that small loss can be achieved with high probability over the weight initialization no matter the input distribution (with labels given by a 2-layer target network). This rounds out the (simplified) take-away that sufficient overparameterization is both necessary and sufficient for learning with gradient flow (in their setting).\n\n\\+ The paper is very well-written. In particular I appreciated that the authors motivate the key ideas of the proofs and clarify their assumptions and limitations. \n\n\\- The main weaknesses come from the very restricted setting and certain assumptions that are (technically) necessary for the results. Firstly, the proof technique seems to strongly depend on the univariate setting (one input dimension) and the non-existence of additional hidden layers. So it remains unclear how and whether the results could generalize to deeper networks in a more general setting. Secondly, while the assumptions in Section 2.1 are indeed very mild, the additional assumptions in the main theorems appear stronger in the sense that the networks under study may not be similar to networks used in practice: The student network is required to be extremely wide (much larger than the teacher network), and the initialization technique is unconventional (hidden weights sampled from a normal distribution with extremely large variance and output weights sampled from a normal distribution with extremely low variance).\n\n*****\nTaken together, I consider this a very nice, carefully composed paper, where the problem of applying only to a restricted setting is largely outweighed by the novel insight.\n*****\n\nMinor additional comments:\n\n- Lines 360-363 explain the idea of the proof, but fail to give an intuition for it: In particular, it would be helpful to (intuitively) explain why(!) the networks converge to a weight configuration with a constant number of kinks in an interval where the labels do not change sign.\n\n- The statement in the abstract that something "may already hold" is hard to follow if no additional context is given for why the result may or may not hold.\n - In what way would the authors argue that their results in the extremely restricted setting (univariate, one hidden layer) can tell us something about deeper networks on more input dimensions? \n\n- Similarly, what are the main challenges to go beyond the univariate setting or to deeper networks? Is it conceivable that the methods developed here can be extended to a more general setting?\n\n- In the lines 360-363: Is it possible to provide an intuitive explanation of why the network converges to a weight configuration with a constant number of kinks in intervals of unchanged label?\n\n- Please clarify line 284: "since the number of observations in the dataset far exceeds the degrees of freedom in the trained model":\nWhat are the "observations" this sentence refers to, and does the "trained model" refer to the student network? I suppose that the "observations" refer to the samples, but since no assumption on the number of samples is made, their number cannot be used as a basis for an argument ("since the number of...."). 
Similarly, in the following sentence ("since the width of the network and thus also the model's capacity scale with them, implying a bound that explicitly depends on them"), does "them" still refer to these observations?\n The paper studies a very restricted setting (see above), but all the necessary assumptions are clearly indicated and discussed.", " The paper studies the optimization dynamics and implicit bias of GF on univariate single-hidden-layer ReLU networks in a binary classification setting. Under some assumptions on the initialization and data distribution, the authors show that sufficiently wide networks attain, with high probability, a training error $\\le \\frac{1}{2n}$, where $n$ is the number of training samples. Then, the authors show that if at some time GF attains a training error smaller than $\\frac{1}{n}$ then it converges to zero loss and converges in direction to a network with at most $O(r)$ linear regions. This implicit bias characterization yields a generalization bound. Finally, the authors combine their results to derive their main result that characterizes GF implicit bias and, with high probability, guarantees convergence to zero loss and a generalization bound for sufficiently wide networks under some assumptions on the initialization and data distribution. To complement these results, the authors demonstrate that without sufficient over-parametrization GF is unable to achieve population loss below some absolute constant. *Strengths:*\nThe paper is clear and well written, and it establishes several important results, detailed above, on GF optimization dynamics and implicit bias for univariate single-hidden-layer ReLU networks in a binary classification setting.\n\n*Weaknesses:* \nThe main weaknesses, in my opinion, are the assumptions regarding GF (in contrast to GD) and the assumption on $\\sigma_h$ in the initialization, which is not trivial. However, the authors discussed these issues in the paper, and despite these limitations, I still think the paper made a significant contribution.\n 1. Missing citation: \nMulayoff, R., Michaeli, T. and Soudry, D., 2021. "The implicit bias of minima stability: A view from function space". NeurIPS 2021.\n2. Can you please clarify the difference between $k$ and $k'$ in Remark 2.1?\n3. I think that adding an explanation of what "breakpoints" are will make the paper clearer. The authors mentioned and justified the limitations of their results. ", " This paper studies the convergence and generalization of two-layer ReLU networks trained by gradient flow (GF). It first gives a convergence guarantee for gradient flow. Then, using this result, the author proves that gradient flow converges to a predictor whose number of linear regions is minimized up to a constant factor. Finally, using this characterization of implicit regularization, the author shows that two-layer ReLU networks trained by gradient flow generalize. The results in this paper are new to the best of my knowledge. The result on implicit regularization of two-layer ReLU networks is very interesting. The result on generalization of two-layer ReLU networks is significant, as it helps us understand why neural networks generalize. The presentation is clear.\n\nOne weakness of this paper is that the results are limited to two-layer ReLU networks. Thus, it does not explain the success of deep neural networks, which occur more frequently in practice. 
However, I think the results in this paper are a good beginning for understanding implicit regularization and generalization of deep neural networks. Another weakness is that this paper studies gradient flow instead of gradient descent, which would be more interesting. However, I still think the results are interesting enough, as it is common to first prove something for gradient flow and then discretize the results. (1) The results of this paper only apply to depth-two ReLU networks, and it seems that the main reason for this restriction is that ReLU networks with bias terms are no longer homogeneous when the depth is greater than two. Does a similar result hold for homogeneous ReLU networks (the ones without bias terms) of general depth? If the results in [1] could be extended to the non-homogeneous case (i.e., neural networks with bias terms), could the results of this paper be extended to ReLU networks of general depth? Indeed, as observed in [1], the generalized results of [1] still hold experimentally even if the network has bias terms.\n\n(2) This paper gives a nice characterization of the implicit bias of ReLU networks, but the same results seem to work for all architectures (I mean the underlying graph of the network). As a result, it does not study how changes in architecture could affect generalization. Is it possible to extend the results in this paper to study the relationship between architectures and generalization? In the linear case, this question has been studied for various architectures [2]. It might be interesting to have some discussion of possible fusions of the results in [2] and the results in this paper.\n\n\nReferences:\n\n[1] K. Lyu and J. Li. Gradient descent maximizes the margin of homogeneous neural networks. arXiv preprint arXiv:1906.05890, 2019.\n\n[2] Zhen Dai, Mina Karzand, and Nathan Srebro. Representation costs of linear neural networks: Analysis and design. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. Wortman Vaughan, editors, Advances in Neural Information Processing Systems, 2021. The authors addressed the limitations of this work but not societal impact. However, I think this is fine since the work is theoretical." ]
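Several exchanges above refer to the KKT conditions of the margin-maximization problem from Lyu & Li [2019] and Ji & Telgarsky [2020]. For readers' reference, the generic form of that problem for a homogeneous network $f(\theta; x)$ is the standard statement below. This is background only: for ReLU networks the gradient is read as a Clarke subdifferential element, and the paper's specific refinement — that such KKT points have few linear regions — is not reproduced here.

```latex
\min_{\theta}\ \tfrac{1}{2}\lVert\theta\rVert^{2}
\quad \text{s.t.} \quad y_i\, f(\theta; x_i) \ge 1, \qquad i = 1,\dots,n,
% KKT conditions: there exist multipliers \lambda_1,\dots,\lambda_n \ge 0 such that
\theta = \sum_{i=1}^{n} \lambda_i\, y_i\, \nabla_{\theta} f(\theta; x_i),
\qquad \lambda_i \bigl(1 - y_i\, f(\theta; x_i)\bigr) = 0 \quad \forall i.
```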
[ -1, -1, -1, -1, -1, -1, -1, 5, 8, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "75GcvPS-ihv", "AoZAALIASEG", "Ju6rLXzKzRM", "kTOkr8zQE36", "PqiyswGS7p", "38AsjP-_mqZ", "iLeOXmHahX", "nips_2022_F8UV5CItyRG", "nips_2022_F8UV5CItyRG", "nips_2022_F8UV5CItyRG", "nips_2022_F8UV5CItyRG" ]
nips_2022_WHFgQLRdKf9
DNA: Proximal Policy Optimization with a Dual Network Architecture
This paper explores the problem of simultaneously learning a value function and policy in deep actor-critic reinforcement learning models. We find that the common practice of learning these functions jointly is sub-optimal due to an order-of-magnitude difference in noise levels between the two tasks. Instead, we show that learning these tasks independently, but with a constrained distillation phase, significantly improves performance. Furthermore, we find that policy gradient noise levels decrease when using a lower \textit{variance} return estimate, whereas the value learning noise level decreases with a lower \textit{bias} estimate. Together, these insights inform an extension to Proximal Policy Optimization we call \textit{Dual Network Architecture} (DNA), which significantly outperforms its predecessor. DNA also exceeds the performance of the popular Rainbow DQN algorithm on four of the five environments tested, even under more difficult stochastic control settings.
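The abstract's variance/bias split maps onto the standard $\lambda$ knob of Generalized Advantage Estimation: small $\lambda$ gives low-variance, high-bias estimates, and large $\lambda$ the reverse. A minimal, self-contained GAE sketch follows as background; the specific $\lambda$ values and estimators DNA uses are given in the paper, not here.

```python
import numpy as np

def gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """GAE(lambda) over one trajectory; values must have length len(rewards) + 1
    (a bootstrap value for the final state is appended)."""
    T = len(rewards)
    adv = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        delta = rewards[t] + gamma * values[t + 1] - values[t]   # one-step TD error
        running = delta + gamma * lam * running
        adv[t] = running
    return adv

# e.g. a low-variance estimate for the policy and a low-bias target for the value:
# adv_pi = gae_advantages(r, v, lam=0.8)
# v_target = gae_advantages(r, v, lam=1.0) + v[:-1]
```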
Accept
The reviewers found this to be a well-executed technical contribution, and all reviewers agree it meets the bar for acceptance. While this paper does not seem to provide a breakthrough novel insight, it does contribute useful information for the field, and I believe sharing with the community is beneficial. I recommend accepting this paper.
test
[ "njXheTdfOuN", "qa-y8k4_kQO", "SATyGHnbux4", "K5wsjC4l3w", "9lYBki-B37", "eEC4T3t3L4c", "W1xL2YV0Hk", "C0Qo8kb4AJ2", "55fSq-JlTBg", "5WZG58YkovP", "s_IRiwZ13w4", "RSVgUSKSYvK", "knm5U764qDf", "AxIeU3lsGyb", "Al6MZAjkI-2" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review and feedback. We have amended the language in the paper to clarify that our results show DNA outperforming PPG specifically on the Atari-5 benchmark rather than the previous (broader and erroneous) claim that it outperforms it in general. This change will appear in the camera-ready version.", " Thank you to the authors for addressing my concerns.", " I appreciate the authors’ effort in revising the paper and conducting more experiments which overall improved the paper. However, from the empirical performance, it is unclear if DNA is better than PPG. Authors speculate that using a replay buffer helps PPG achieve superior performance in Procgen. However, not using a replay buffer is one of the motivating factors of DNA. I suggest toning down the claim of DNA having a better empirical result than its predecessor, for example, in the Abstracts, and throughout the paper.\n\nOverall, empirical evidence is lacking regarding the advantage of DNA over relevant baseline (e.g., PPG). However, I also think that should not be the only criteria to judge the contribution of a paper. Thus, I increase my score from 5 to 6. \n", " We have now added Appendix H, which gives Pseudocode for how the $\\sigma$ values were calculated for Figure 2, along with an expanded discussion on the topic. We hope this clarifies the exact process of going from gradients to an estimate of the noise scale. As is sometimes the case, we were happy to discover a slight improvement in our algorithm while formalizing this procedure, which we have also noted in the appendix and will use in future work.\n\nRegarding the size of the plots, we agree. Where possible, we have adjusted the figures' size to make them more readable but are quite limited by the 9-page limit. We would be happy to make use of the 10th page available after acceptance to resolve this issue by increasing their size further. We will also use the extra page to add short statements to the captions to highlight the relevant takeaways, as we agree this would also improve the paper.\n\nWe have now updated both the main paper and the supplementary material, marking the changes in $\\color{red}{\\text{red}}$ for readability.\n\nIf there are further improvements, you would like, please let us know.", " I appreciate the authors putting in the effort to perform additional experiments! I also appreciate the author's honesty in reporting Procgen results as well even though they are somewhat worse compared to PPG. Considering these revisions, I would like to bump up my score to 7.", " Regarding Eq 4, 5 and the discussion in 3.2, it is still unclear to me how Eq 4, 5 relates to Eq 3. I understand Eq 4, 5 are calculated to obtain an unbiased estimate of the LHS of Eq 3, but how do you obtain this estimate after calculating Eq 4 and 5? I think if you include pseudo-code of the entire procedure to calculate the gradient noise scale, it would be quite helpful. Sorry if I miss this. \n\nRegarding the figures, I think more can be done to help the reader interpret the take-aways. The text in the figures are generally very small, and coupled with my previous point, makes interpreting the figures quite non-trivial.\n\nRegarding referring to [20], I think it would be helpful if the paper is more self-contained. 
It seems okay to me if you include a more detailed discussion to make the paper self-contained in the Appendix, including what you said in the rebuttal.\n\nCan the authors please make the edits to the paper in a different color so it is easier to tell what the changes are? \n\nIf these changes are addressed, I am happy to increase my score to 7, mainly because I think the analysis regarding noise scale is novel in the context of RL and is a re-usable technique for future algorithmic development.", " Thank you very much for everyone's kind suggestions. We have made several changes to the paper based on your feedback. A revised version has now been uploaded with the following modifications.\n\n- Appendix added with MuJoCo results (added to supplementary material only and in the previous update)\n- Appendix added with Procgen results (added to supplementary material only and in the previous update)\n- Fixed typo on line 185 (policy->value) (thanks for spotting this).\n- Added a new section in the literature review, giving a bit more context to PPO and the recent developments.\n- Added an explanation of Atari-5 and why we used it. [section 5, first paragraph]\n- Modified section 4.2 final paragraph to explain why we hypothesised that bootstrapping + bias would be more of a problem than bootstrapping + variance.\n- Made it clearer why our experiments started with a search over training epochs.\n- Included a discussion regarding UTD under future work. (Our guess is that the optimal UTD ratio is environment-dependent, and that, counterintuitively, high-variance environments require fewer epochs, so you don't overfit the noise, but we will leave that for later...)\n\nWe had to cut out a few paragraphs to fit some of the new changes into the 9-page limit. However, we believe an additional page is allowed for the camera-ready version, so we will restore these paragraphs in the final version. \n\nAgain, thank you, everyone, for your reviews and feedback. We enjoyed reading them and believe the paper is much improved as a result.\n", " We have uploaded a revised appendix as part of the supplementary material. This appendix contains two new sections, K and L, detailing our supplementary experiments on the MuJoCo and Procgen benchmarks. We will upload a revised version of the main paper in the next few days that addresses the remaining issues.", " A general theme of the feedback is that the paper would benefit from additional experiments. We wholeheartedly agree. Therefore we have run DNA on the MuJoCo and ProcGen benchmarks. Our results will be uploaded via an update to the supplementary material over the next few days.\n\nThe summary of the results is that DNA outperforms PPO in 5/8 of the MuJoCo environments tested and is roughly equal in performance on the other 3 (DNA and PPO both likely score maximum or close to maximum points on these other environments).\n\nFor ProcGen, DNA outperforms PPO and underperforms PPG. This was expected, as ProcGen's random generation requires a large replay buffer to stabilize training over the 200 variations that must be learned. \n\nWe ran these experiments last week and did not have time to fully tune hyperparameters. Despite this, the results still show a clear improvement of DNA over PPO. This makes DNA better than PPO on 3/3 benchmarks, and, compared to PPG, better on one benchmark and worse on another. \n", " Thank you for your review. I will try to address each point below.\n\n1. Noise scale - We were unsure how much detail to go into regarding the noise scale. 
In the end, we chose to simply refer readers to McCandlish's paper [20], as they do a great job of explaining it. However, we make explicit our interpretation of the ratio between |G|^2 and S, which is the signal-to-noise ratio and is discussed in the Preliminaries section. \n\n2. Sigma - The plot in Figure 2 is log_10(1+\\sigma), where sigma is sqrt(|G|^2/S) and is defined in Section 3.2. We would be happy to make this more explicit in the text.\n\n3. Noise and the direction of the gradient - This is a very interesting point. In their paper, McCandlish et al. [20] provide a noise estimate that does include the direction, which they call B_noise. We chose to use their simplified approximation B_simple, which assumes the Hessian is a scalar multiple of the identity matrix, and which simplifies down to requiring just |G| instead of G. See page 7 of their paper for more details. Practically speaking, calculating the Hessian was not an option.\n\n4. Self-contained description of Atari-5 - Yes, we agree and have now added a paragraph describing and giving justification for Atari-5. Because Atari-5 is new, we also include full results on all 57 Atari games and now also have results on ProcGen and MuJoCo. We hope this will alleviate the issue of introducing a new algorithm and a new benchmark at the same time.\n\n5. Bias and bootstrapping - We agree this was not very clear and have modified the text to explain our intuitive reasoning better here. The idea is that if a value function uses itself to update, then variance errors will average out over time, but bias errors will not. This, of course, may not be true, which is why we tested it empirically.\n\n6. Preliminary experiments - Regarding the first experiment: our algorithm introduces several new hyperparameters, and the first experiments are to establish what values should be selected. A key feature of DNA is the ability to adapt individual settings to the policy and value learning tasks. Noise influences the optimal minibatch size, and the optimal minibatch size generally influences the number of epochs (i.e., larger minibatches typically want more epochs). That is why we needed to check these values. We appreciate the feedback and have adapted our wording to make this point clearer.\n\nQuestions\n\nUTD Paper - Great question. We're glad you referenced this paper. We had not seen it before, and it seems highly relevant given our results, especially now that we have included supplementary results on MuJoCo. We have now included a paragraph discussing it in our paper. One of our more surprising results is that a *low* UTD is critical for PPO and that PPO performs quite well on Atari with just one update. We can only speculate, as others have, that this is due to overfitting (which we discuss briefly in our paper). However, in our additional experiments on MuJoCo (included in an updated supplementary), we also found, like the referenced paper, that *high* UTD was critical for MuJoCo. We are not sure why this is the case for MuJoCo but not for Atari. An investigation into why Atari wants UTD=1-2 and MuJoCo wants 20-40 would be a very interesting future research direction, and we have now included a note regarding that under future work.\n\nWe hope these changes have addressed your concerns.", " Hi,\n\nThank you very much for your review. We will address your questions and weaknesses.\n\n1. Lack of empirical evidence - We agree with this, and other reviewers have made similar comments. 
Because of this, we have now produced results on two additional benchmarks, MuJoCo and ProcGen, both of which show a significant improvement over PPO.\n\n2. Procgen - We have now included results on the ProcGen benchmark. Because of the random procedural generation, ProcGen benefits greatly from a large replay buffer, which we explicitly did not include in DNA. This is because, for many tasks, it is not needed (as shown by our Atari and MuJoCo results). Our results on ProcGen, which we will include in an updated appendix shortly, put DNA between PPO and PPG on this benchmark. Outperforming PPG on this benchmark is very difficult, as it was specifically designed to address the unique challenges of the problem. Despite this, DNA can produce results nearing PPG's performance using only four updates (compared to PPG's 8) and without the need for a large replay buffer.\n\n3. Score - The combined score was generated using a weighted geometric mean. This procedure is described in our accompanying paper Atari-5, but for clarity, we have now included a paragraph explaining the process (and justification) in the paper itself. In terms of performance improvement, DNA outperforms PPO in our experiments on all five games tested. However, not all of these results are statistically significant. When averaged across all games, the difference becomes quite large. PPG's performance was hampered by poor scores on NameThisGame, Qbert, and Phoenix.\n\n4. Other metrics - This is an interesting idea. We like the idea of using measures beyond the expected score. We have some future work that takes a deep dive into this. \n\n5. Seeds - We performed either 3 or 5 seeds. Each figure has a note indicating how many seeds were used to produce it.\n\n6. Procgen - We agree with this point and have now run DNA on the ProcGen benchmark. Results will be uploaded shortly.\n\nWe hope that the addition of these supplementary experiments will address your concerns.", " Hi,\n\nThank you for your review. We agree with the importance of adding experiments on MuJoCo. One of the reasons we had not done this is that PPO already performs quite well on this benchmark but has traditionally performed poorly on Atari. We wanted to address this weakness. However, out of curiosity, we ran some experiments on MuJoCo earlier this week and found that DNA is surprisingly strong on this benchmark, outperforming PPO on 5/8 of the tasks tested and being of roughly equal performance on the remaining 3. \n\nWe have now included these new results in the appendix and will upload a revised copy shortly. Thank you for suggesting this. We are also in the process of adding results on ProcGen, as suggested by another reviewer. We hope these two experimental additions address your concerns and round out the paper nicely.\n\nRegarding the literature review, we have found some additional references that would be useful here and would be happy to modify the literature review to include a broader discussion of PPO. We have quite a long list of papers already, but if you have any specific ones you think should be included, please let us know.", " The paper analyzes various design decisions made by PPO, a popular actor-critic policy gradient method, and proposes various fixes to the existing algorithm/architecture to improve performance. Specifically, they study the impact of the objective noise levels on policy training and how it interacts with various design decisions like the network architecture, return estimation, batch size, etc. 
They use the analysis to propose improvements to the algorithm and the architecture, which result in improvements to policy performance when tested on the Atari-5 benchmark. **Strengths**\n1. The paper provides simple, well-motivated fixes to PPO/PPG that demonstrate improved performance on the Atari benchmark!\n2. The paper is very well written and easy to follow. I also really liked the motivating example in Section 3.1 to demonstrate the impact of noise scale on the architecture.\n3. I liked the careful ablations in Sections 6.1 and 6.2 analyzing the impact of the specific hyperparameters.\n**Weaknesses**\n1. Although the paper mentions that they leave continuous control tasks for future work, I would still have liked some analysis of them in this paper itself to better understand the implications for those settings as well, since PPO is widely used there.\n2. There have been various other papers in the field analyzing various aspects of PPO over the past few years. I would have liked a more thorough review of the literature, especially along those lines, and to see this work placed in the context of those papers.\n\n I think both the points in the Weaknesses section are fixable and would appreciate the authors making amendments along those directions.\n\n**Minor Comments**\n1. I think the following was a typo: Line 185: policy -> value. Yes.", " This paper proposes a dual network architecture extension of PPO. It considers three modifications: separate neural networks for the value function and policy, calibration of the bias-variance trade-off, and constrained distillation. The method is evaluated on the new Atari-5 environment and compared with a similar baseline, Phasic Policy Gradient (PPG).\n ### Strengths:\n- Tackles the important problem of improving the policy gradient method, which has wide applicability\n- Overall, a well-written paper that is easy to follow.\n- Shows some improvement over the baseline PPG on the Atari-5 benchmark.\n\n### Weaknesses (Details in Questions):\n- Lack of empirical evidence that demonstrates DNA's advantage.\n- Missing experiments on the Procgen benchmark, where the baseline PPG was evaluated.\n How is the combined score in Figure 6 achieved? The performance improvement is visible only in the "Phoenix" environment and overlaps in other environments. Aggregated metrics, such as the probability of improvement proposed in [1], might give a better understanding. How many seed runs were used to generate Figure 6? It is mentioned that "multiple seeds run" in line 175.\n\nThe baseline PPG was heavily tested on the Procgen benchmark [2] for sample efficiency. Thus a comparison on the Procgen benchmark seems critical to assess the significance of the proposed method. Therefore, I suggest adding experiments on Procgen to verify how the proposed method performs on existing and well-established benchmarks in addition to the newly proposed Atari-5.\n Yes. Limitations and potential societal impacts are discussed.", " The paper identifies that the noise levels in learning the value function and the policy in reinforcement learning can differ by an order of magnitude. The paper thus proposes to learn these two tasks independently, followed by a constrained distillation phase. The paper also argues that the policy gradient noise level can be reduced by using a lower variance return estimate. 
On the other hand, a lower-bias estimate should be used to reduce the value learning noise level.\n Strengths\n\nThe paper demonstrates the issue of jointly learning two tasks with different noise levels in a simple motivating example. The motivating example clearly illustrates destructive interference and provides a great starting point for the discussion in the remainder of the paper.\n\nThe paper introduces additional modifications on top of PPO, motivated by the previously identified issues, that lead to significant performance improvements.\n\nWeaknesses\n\nI would have liked to see a discussion on how the reader should interpret equations 4 and 5 in the paper. Also, how does the y-axis of Figure 2 relate to the left-hand side of equations 4 and 5? This is not clearly explained in the paper if I am not mistaken. It would have been helpful if the authors had included pseudocode on how the y-axis of Figure 2 is calculated.\n\nAlso, the noise scale as explained in equation 3 does not seem to take into account the directions of the true gradient. Is that true? If that is true, then the quantity used to compute the noise scale seems a bit odd to me, because shouldn't we take into account the direction of the true gradient and the direction of the noise vector as well? (The standard magnitude-only formulation is sketched after this review.)\n\nThe paper also introduces a new benchmark. Even if an anonymous copy is included, I would have liked to see a self-contained description in the main paper on why results on the new benchmark are meaningful or more meaningful compared to the previous version of the arcade learning environment benchmark.\n\nCan the authors also please elaborate on the discussion at the end of section 4.2 – why is it that, because of bootstrapping, the estimate used for V Target benefits from being low bias? \n\nI found the exposition in the experimental results section a bit unclear. If the paper is about noise level, then why is it that the first experimental result presented has to do with the number of training epochs? \n Can the authors also reconcile the results presented in the paper with this paper https://arxiv.org/abs/2101.05982? This paper seems to suggest that we should use a high update-to-data ratio, whereas the current paper seems to suggest that we should perform fewer updates. \n Yes
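For context on the noise-scale questions in the reviews above: the paper's equation 3 is not reproduced in this thread, so the following is an assumption about its form, namely the simple gradient noise scale from the large-batch training literature (McCandlish et al., 2018):

```latex
% Simple gradient noise scale (McCandlish et al., 2018); assumed, not
% confirmed, to be the quantity behind the paper's equation 3.
% G is the true (full-batch) gradient and \Sigma the covariance of
% per-example gradients at the current parameters.
\mathcal{B}_{\mathrm{simple}} \;=\; \frac{\operatorname{tr}(\Sigma)}{\lVert G \rVert^{2}}
```

Under this definition the reviewer's observation holds: only the total noise variance $\operatorname{tr}(\Sigma)$ and the magnitude of the true gradient enter, not their directions. The direction-aware variant, $\mathcal{B}_{\mathrm{noise}} = \operatorname{tr}(H\Sigma)/(G^{\top} H G)$ with Hessian $H$, weights both terms by curvature, but the simple form is the one typically measured in practice.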
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "SATyGHnbux4", "K5wsjC4l3w", "s_IRiwZ13w4", "eEC4T3t3L4c", "RSVgUSKSYvK", "5WZG58YkovP", "55fSq-JlTBg", "55fSq-JlTBg", "nips_2022_WHFgQLRdKf9", "Al6MZAjkI-2", "AxIeU3lsGyb", "knm5U764qDf", "nips_2022_WHFgQLRdKf9", "nips_2022_WHFgQLRdKf9", "nips_2022_WHFgQLRdKf9" ]
nips_2022_hd5KRowT3oB
Self-Organized Group for Cooperative Multi-agent Reinforcement Learning
Centralized training with decentralized execution (CTDE) has achieved great success in cooperative multi-agent reinforcement learning (MARL) in practical applications. However, CTDE-based methods typically suffer from poor zero-shot generalization ability with dynamic team composition and varying partial observability. To tackle these issues, we propose a spontaneous grouping mechanism, termed Self-Organized Group (SOG), which is featured with conductor election (CE) and message summary (MS). In CE, a certain number of conductors are elected every $T$ time-steps to temporally construct groups, each with conductor-follower consensus where the followers are constrained to only communicate with their conductor. In MS, each conductor summarizes and distributes the received messages to all affiliated group members to maintain unified scheduling. SOG provides zero-shot generalization ability to a dynamic number of agents and varying partial observability. Extensive experiments on mainstream multi-agent benchmarks exhibit the superiority of SOG.
Accept
The reviewers carefully analyzed this work and agreed that the topics investigated in this paper are important and relevant to the field. They generally expressed positive views on the proposed method but also pointed out a few possible limitations of this paper. One reviewer argued that the authors properly outlined the downsides of alternative approaches and the importance of communication as a way of dealing with partial observability. This reviewer, however, brought up one limitation: that the conductor selection may benefit from/require prior information. After reading the rebuttal, however, this reviewer said that the authors satisfactorily answered most of their questions, and further argued that the insights from this paper will most likely be interesting to the emergent communication community. Another reviewer claimed that this paper introduced a novel group-based communication scheme for MARL. They argued that the ideas explored here are intuitive and well-motivated. This reviewer initially believed that some of the experimental results were inconclusive (e.g., regarding the claims that RL-based selection of conductors improves performance w.r.t. random selection). The reviewer also commented, in their original review, on the possible lack of novelty: "out of four components that compose this method, two are novel, one may follow from [4], and one may follow from [A]". After carefully analyzing the authors' rebuttal, however, this reviewer increased their score: they believe that the authors' detailed rebuttal helped clarify minor concerns and that the reviewer's initially-voiced major doubts (e.g., regarding the significance of this work) were mostly rectified. Overall, this reviewer believes (post-rebuttal) that this is indeed a novel and interesting paper introducing new ideas toward communication in MARL—all of which were well executed and appropriately studied. Another reviewer expressed concerns that the paper did not discuss important prior work [1-3], but was satisfied with the authors' responses and thanked them for adding more baselines as part of the experiments. Finally, one reviewer argued that even though this is an interesting method, they still had (pre-rebuttal) three main points of concern. After reading the authors' rebuttal, this reviewer said that "the authors gave detailed responses to address my concerns and I appreciate the additional experiments". Overall, thus, it is clear that all reviewers were positively impressed with the quality of this work and look forward to an updated version of the paper that addresses the suggestions mentioned in their reviews and during the discussion phase.
train
[ "Gw8Io6XHZ9H", "4FspeBs-L10", "4KwzlWXNp3d", "Ak9dJfsZswA", "Q1MLcLKgn0P", "MHcP7B4tEw", "1j7PCMvA1LHI", "BRpW2ueYOEv", "HBOGb5xOkK1", "eF5radKObCG", "d0nwoK4hbLY", "uk-w6pBHPG8", "Fo8ZepA2bCe", "mBPLnQ4Vqgo", "He2V3in1Cnq" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " <eom>", " Thank you for your revision and comment! We have included NCC and Gated-ACML in the experiment discussions, as well as the above mentioned paper in the related works.", " Thank you for your comment. We have included MAGIC and HetNet in the related works, and given a guidance of their experiment results and SOG's limitation discussions to the Appendix in the main paper.", " The authors give detailed responses to address my concerns, and I appreciate the additional experiments. Based on the rebuttal, I dicide to raise my score, but recommend the authors to include the responses to the paper to clarify the differences with previous works.", " Thank you for your revision and recommendation. We attach the training return curves of SOG, NCC, MAGIC and Gated_Qatten in Appendix K. To make the curve more clear, we do not show algorithms with relatively low returns, including QMIX_atten, EMP and REFIL. The results show that SOG has no obvious superior performance than NCC or MAGIC in training, which indicates that the advantage of SOG is specific to its generalisation ability.", " I thank the authors for their rebuttal and clarifications.\n\n- The corrected legend in Fig 5b clarifies my misunderstanding of the benefits of the proposed conductor election process and shows benefits in particular for the RL-based election process which consistently outperforms the DPP and random election. Also, the RL-based election process appears the most robust in transferring scenarios as shown in Fig 5b. As stated in my review, I consider the conductor-election process novel, but was previously not convinced of its significance. These changes rectify that and I am now convinced by its novelty and significance.\n- I appreciate the author's clarifications regarding SOG's novelty. In particular the difference to prior MI objectives was helpful and I apologise for my prior misunderstanding.\n- I also thank the authors for running additional seeds and comparisons found in the appendix as stated in other rebuttal messages.\n\nFurther recommendation:\nIt might be helpful to explicitly state that performance plotted in Fig 4 a,b and Fig 5 a are for testing scenarios never trained in. Showing learning curves for training performance would also be interesting to see whether SOG's advantage is already visible during training or specific to its generalisation ability.\n\nGiven the clarifications provided in all rebuttals I will raise my score of my review. I believe this work would be a good contribution to MARL generalisation and communication.", " Thanks to the authors for their rebuttal. The provided information is helpful.\n\nIt would be helpful if the main paper included the limitations section, as asked for by NeurIPS. \n\nThe commentary re: MAGIC and HetNet is reasonable and should be included in the related works (or results) section of the main paper.", " Thank you for your constructive feedback.\n### Q1: Relation between SOG and zero-shot generalization ability.\nThe generalization ability mainly comes from the time-varying group communication mechanism. Many existing proximity-based methods regard the communication between agents as a graph. When transferred to unseen scenarios, the graph's degree and the number of edges may change a lot, i.e., the communication pattern changes a lot, thus causing the performance drop. In contrast, through our SOG mechanism, the agent can maintain a similar communication pattern to the training condition. 
For example, when training a model with a 2-agent team to fulfill a task, it may perform worse in a 4-agent scenario, since the agent is only trained to cooperate with one other agent, and it may be confused by the messages sent by the other 2 agents. However, our proposed SOG mechanism has a high probability of dividing the 4 agents into 2 groups, thus preventing message delivery between the two groups. By doing so, in unseen scenarios, the agent may find that the communication pattern is similar to that in training, and perform a similar 2-agent coordination pattern. Therefore, an organized group under the unified command of a conductor can better adapt to an unseen scenario than individuals.\n\n### Q2: Missing related works.\nSince the related works [1,5] either use actor-critic structures or a fixed number of agents, we cannot directly apply these methods to the environment we used. Therefore, we have implemented the above two methods under our value-based framework with entity-wise input. ~~Due to our limited computing resources, we first show the results of [5] in Appendix J and will attach the results of [1] as soon as possible.~~ 8.8 update: we show the results of [1]\(Gated_Qatten\) and [5]\(NCC and NCC_MN\) in Appendix J. The multi-neighborhood version of NCC is suitable for a short sight range, while the single-neighborhood version is good for a large SR. Gated_Qatten shows relatively better performance than the 2 versions of NCC, and SOG-rl performs better than both. We will also give a more detailed discussion of related works [2,3,4] in the final version. \n\n### Q3: The difference between our Conductor and previous works.\nIn many previous works including [1,2], the conductor is usually designed as a module in end-to-end neural networks, rather than the agent itself. In contrast, we elect conductors from homogeneous agents and wish to introduce heterogeneity to the agents. The advantage of our conductor is that it brings no extra computational cost when building a heterogeneous communication mechanism.\n\n### Q4: The difference between our Message Summarizer and previous works.\nThe message generation in [3,4] is designed to reduce the entropy so as to communicate under a low-bandwidth limitation. However, the target of our Message Summarizer is to discard information unrelated to predicting the future state. Therefore, our optimization objective is different from that in [3,4]. Since our testing environment assumes no bandwidth limitation, the results in Table 1 show the effect of the Message Summarizer on improving performance and zero-shot generalization ability.\n\n### Q5: The difference between SOG and Neighborhood Cognition Consistent.\nNeighborhood Cognition Consistent[5] can be regarded as a soft constraint on the cognition of nearby agents, while SOG's communication mechanism is a hard constraint in the group. Another difference is that SOG has no constraint between groups, which we think is useful for zero-shot generalization ability. As we stated in Q1, NCC regards the constraint between agents as a graph, and the graph may change a lot when transferred to unseen scenarios, causing a performance drop. In contrast, SOG cuts off the connection between groups, and agents may find a more familiar coordination pattern.", " Thank you for your constructive feedback. \n\n### Q1: Comparison to IMAC.\nIMAC provides a regularizer to communicate under limited bandwidth. 
The objective is similar to our implementation of $\mathcal{L}_{FP}$, where SOG minimizes $KL(p(m)\|q(m))$ and IMAC minimizes $KL(p(m)\|z(m))$. The difference between $q(m)$ and $z(m)$ is that $q(m)$ is a variational estimator learned by a neural network, and $z(m)$ is a fixed prior, induced by the bandwidth. We think it is unfair to IMAC to implement its objective in our environment, which has no bandwidth limit, since the target of IMAC is to decrease the bandwidth while that of SOG is to predict the future state. We tried changing our $q(m)$ to a Gaussian prior $z(m)$ and found the performance degrading to the qmix_atten level. Therefore, we think the scheduler in IMAC is necessary for its objective. We will try to implement the scheduler and show IMAC's performance in our final version.\n\n### Q2: Comparison to MAGIC.\nThe original MAGIC uses the actor-critic structure, and cannot handle entity-wise input. But its core structure, the Scheduler and Message Processor, can be incorporated into our structure. So we implement a qmix_atten-based MAGIC according to its original code, adding a one- or two-layer Scheduler and Message Processor to agents with entity-wise input. We show the results in Appendix J. We find that the 2-layer MAGIC is better than the 1-layer one, and SOG-rl shows better zero-shot generalization ability.\n\n### Q3: Comparison to HetNet.\nHetNet aims to communicate between heterogeneous agents, and each kind of agent doesn't share parameters. However, our environment doesn't give each agent a class label, so the heterogeneity part of HetNet can hardly be compared in our setting. However, the binary message part of HetNet can be applied to our structure. Due to our limited computing resources, we ran the MAGIC experiments first. We will attach the results of the binary-code message in our final version.\n\n### Q4: Writing && Color scheme inconsistent.\nWe have fixed the problems you mentioned and uploaded a new version of the paper.\n\n### Q5: Communication with CTDE.\nOur communication condition is the same as that in Ding et al. (NeurIPS’20). As it suggests, SOG is compliant with the CTDE paradigm, since the communication is range-limited.\n\n### Q6: Performance as a function of the communication range.\nWe test the communication range on Resource Collection, sight range=0.5. Each communication range is run for 6 seeds and $8\times 10^6$ time steps. The mean+std results are as follows:\n\n|Communication Range | 0.1| 0.3|0.5|0.7|0.9|\n|:--------------------------------|:-------------|:-------------|:-------------|:-------------|:-------------|\n|Test Return Mean|282.5|355.13|457.6|423.33|402.83|\n|Test Return Std|91.76|102.15|98.02|93.76|124.82|\n\nWe can find that when the communication range is equal to the sight range, the agents perform best. A possible reason is that Resource Collection is a task in which agents need not care about entities far from themselves. Therefore, messages from far-away agents are not useful to the current agent and complicate the learning, causing a performance drop.\n### Q7: Limitations.\nWe are sorry for not presenting limitations directly, to save space. As stated in Section 4.2.2, we only test SOG with a perfect communication channel. Its performance may drop when faced with a broken communication channel. Another limitation of random CE is that the expectation of the size of each group is decided by a pre-defined hyperparameter, i.e., the probability of an agent being elected as a conductor. 
When transferred to unseen scenarios, if the size of each group differs a lot from the training condition, it may cause a performance drop. We try to get rid of the hyperparameter by introducing DPP-based CE and RL-based CE, but they both require a centralized conductor elector, which is not full CTDE. (A toy sketch of the random election-and-grouping procedure is included after the reviews below.)", " Thank you for your constructive feedback. We will first address your concerns about our novelty. Then we will answer the questions.\n\n### Q1: Random CE and RL-based CE.\n\nWe are sorry that we gave the wrong legend in Fig.5(b) by mistake. Actually, the color of each algorithm is in agreement with that in Fig.5(a); therefore, the legend labels "SOG" and "SOG_rl" should be swapped in Fig.5(b). You can also validate this by comparing the performance in Fig.5(a) "3-8sz_symmetric_D" and Fig.5(b) Dispose SR=3 (Training). \n\nAfter fixing the legend bug in the revision version, we can find that SOG_rl performs better than SOG in most dynamic team and varying partial observability settings, except the varying team condition on the SC2 map "3-8MMM_symmetric". A possible reason is that the "MMM" map in SC2 includes more kinds of agents, which requires more exploration of the group formation. The random CE can be regarded as a structural exploration method. We give more discussion of random CE in Appendix G.2.\n\n### Q2: Novelty of our paper.\n\n(1) Effective communication is just a byproduct of our proposed SOG, which mainly focuses on the zero-shot generalization ability. Previous methods usually aim at reducing the communication cost and hardly generalize to unseen scenarios, since a change in the number of neighbors may break the learned communication patterns. On the contrary, our proposed conductor-follower mechanism can maintain a similar communication pattern to the training condition. Taking a 2-agent to 4-agent system generalization task as an instance, the trained 2-agent team may perform worse in a 4-agent scenario, since the agent is only trained to cooperate with one other agent, and it may be confused by the messages sent by the other 2 agents. However, our proposed SOG mechanism has a high probability of dividing the 4 agents into 2 groups, thus preventing message delivery between the two groups. By doing so, in unseen scenarios, the agent may find that the communication pattern is similar to that in training, and perform a similar 2-agent coordination pattern. Therefore, an organized group under the unified command of a conductor can better adapt to an unseen scenario than individuals.\n\n(2) As far as we know, we are the first to introduce a reinforcement learning-based conductor election method for multi-agent systems. As shown in the experiments and Q1, RL-based conductor election shows competitive performance.\n\n(3) Our MI objective is totally different from that in NDQ. We introduce an MI objective with a future-prediction function into the communication message. NDQ maximizes $I(m_i;a|\tau,m_{-i})$. The message in NDQ is expected to affect the action selection, while our objective only cares about the state and future observation trajectories and has nothing to do with the action. NDQ also minimizes $I(m;\tau)$ as an entropy regularizer, while one of our objectives is to maximize $I(m;\tau)$, the complete opposite.\n\n\n\n### Q3: Large variance in predator prey.\n\nThe original Table 1 shows the results of 6 seeds. To give more convincing results, we re-ran each ablation experiment for another 6 seeds. 
We update the result in the revision version by the mean+std of the median 6 of the 12 seeds, i.e., the 4th-9th seeds.\n\n### Q4: Demo depicting downsides of CTDE.\n\nWe are designing a demo in a gridworld environment to depict the downside of CTDE. We will attach the result in the future.\n\n### Q5: Is SOG REFIL-based? && What is the imaginary objective?\nThe imaginary objective is the objective that REFIL used. It is useful in hard environments. We use REFIL as the baseline in SC2, and QMIX in the other two environments.\n\n### Q6: Parameter sharing.\nAll baselines and SOG share parameters across all agents.\n\n### Q7: In Resource Collection, a larger sight range brings worse performance.\nYes, this is a common but strange phenomenon. COPA[D] also reports this phenomenon in experiments, and the authors think it's because the larger sight complicates the learning. We think another reason may be a defect of parameter sharing on entity-wise input: if each agent sees all the entities, they will get the same embedding after the attention layer, and get the same local Q. This may confuse the mixer and is harmful to coordination.\n\n### Q8: Meaning of shading.\nIn all tables and curves, the shading means std. We run each experiment with 6 seeds.\n\n### Q9: Group strategy for single agents.\nEach of these single agents is in a group alone. Agents cannot communicate beyond boundaries of visibility.\n\n### Reference\n[D] Coach-Player Multi-Agent Reinforcement Learning for Dynamic Team Composition.", " Thank you for your constructive feedback.\n\n### Q1: The effect of the entropy regularizer?\nThe entropy regularizer is widely used in many RL/MARL methods, added to the RL objective[A], the agent role[B], or the communication message[C]. The purpose of adding the entropy regularizer is to keep the diversity of the message, preventing it from falling into local optima. We regard it as a way to encourage exploration, while it may affect the accuracy of the message. In the Predator-prey environment where we run ablations, the regularizer seems to have no effect. A possible reason is that the environment is relatively easy, so we need more exploitation than exploration. \n\n### Q2: Message dimension concern.\n\nYes, we have tested the message dimension problem in Table 2. Message dimension 10 performs a little better than message dimension 3, while message dimension 1 has the worst performance. Dimension 10 requires a larger network and more GPU memory. A large message dimension also complicates learning, and its performance improves a little more slowly during training. The most appropriate message dimension is task-specific. For SC2, we need a 5-dimensional message. For Resource Collection and Predator-prey, messages with 3 dimensions are enough. Messages with lower dimensions may be restricted by the information bottleneck and cannot convey enough information.\n\n### Q3: About random conductor election.\n\nIn addition to the random CE, we also design a DPP-based CE and a PG-based CE in Section 4.2.2. In some experiments, PG-based CE performs better. 
For more discussion about random CE, please refer to Appendix G.2.\n\n\n### Reference\n\n\n[A] Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor.\n\n[B] Multi-Agent Reinforcement Learning with Emergent Roles.\n\n[C] Efficient Communication in Multi-Agent Reinforcement Learning via Variance Based Control.", " This paper approaches the problem of allowing members in a team to group together by allowing “conductor agents” to temporally construct groups and then allowing messages to be passed from agents to conductors (with a variational message summariser) in a unified scheduling approach. They show that in the case of having a dynamic number of agents, and also having partial observability, this method has good zero-shot generalisation performance on standard MARL benchmark tasks like predator-prey.\n 1. This paper is clearly written and the motivation of this work is very sound.\n2. The authors spend some amount of time first outlining the downsides of centralised training and decentralised execution and why communication between agents is important and can help handle partial observability.\n3. The motivation for self-organised grouping in order to build communication mechanisms that satisfy properties of being lightweight and robust is also clearly laid out.\n 1. Are there any insights on the differences between the message summariser using an entropy regulariser? It intuitively seems like it should work but from the results stated it is unclear why it does not.\n2. Are there results on increasing the message dimension upwards of 3? Do you have thoughts on why the different message dimensions (and ability to store more bits of information) do not impact performance?\n I think the conductor selection might benefit from some prior information/thought put into selecting the conductors rather than randomly selecting.\n\nNit: some typos throughout the paper, particularly occurrences of “aims->aim” e.g., line 196, 204\n", " This work proposes a novel group-based communication scheme for MARL which is trained under the CTDE paradigm. Agents are periodically grouped together with each group having a single conductor which receives messages from each agent and sends back a summary of all its received messages. Messages are received through a variational encoder-decoder architecture and aim to encode information about the local trajectory of agents. Three approaches to select conductors within groups, based on random selection, a policy gradient RL agent, and a determinantal point process, are proposed. The resulting Self-Organized Group (SOG) algorithm is evaluated in varying environments and demonstrates generalisation ability over varying numbers of agents and varying degrees of partial observability across training and testing tasks without further fine-tuning. # Strengths\n1. Generalisation and communication for cooperative MARL are very relevant and difficult problems.\n2. The idea to group agents and efficiently communicate among agents based on agent proximity to improve coordination is intuitive and well-motivated. The method is clearly outlined and well depicted as three intertwined components.\n3. Generalisation to tasks with a different number of agents and varying degrees of partial observability are considered in the conducted evaluation.\n4. Experiments are conducted in three different environments and comparison is made to several MARL algorithms with similar architecture and communication. 
Message learning components are ablated in Table 1 and further experiments investigating generalisation to varying degrees of partial observability at a constant number of agents are presented in the appendix.\n\n# Weaknesses\n**Major:**\n1. I have concerns regarding the novelty and significance of this work. The core components of SOG appear to be (1) identifying groups of agents to communicate through conductor election, (2) encoding messages through a variational encoder optimized with a mutual information objective, (3) summarising/aggregating messages, and (4) being invariant to the number of agents due to architecture design. Out of these, it appears to me that only (1) and (3) are novel, with (4) following from the proposed architecture of REFIL [12], and (2) being proposed similarly for NDQ [A]. Also, it is worth noting that prior work already considered the problem of identifying which agents should communicate with each other for most effective communication [A, B, C, D]. The aggregation of messages in (3) is simply a sum and the conductor election process (1) was shown to be comparable to just randomly selecting an agent as conductor in the experiments of this work.\n\n2. In l. 325ff the authors state that the RL-based selection of conductors for communication improves generalisation performance compared to the random selection of conductors based on their experimental results, but the results are not conclusive and in some cases even indicate better performance for the random selection process.\n\n**Minor:**\n\n3. For results in the predator prey environment, it is stated that “SOG performs better when transferred to complicated evaluation scenarios” (l. 364) but generalisation returns are very similar to baselines with large variance indicating no significant difference across algorithms.\n4. The downside to CTDE described is mostly a downside of a lack of information (no specific failure/downside of CTDE). Also, the described failure case should be investigated with a simple small experiment of the described task to verify the statements; QMIX and similar algorithms are not designed for these scenarios but agents might implicitly follow a pattern to still effectively coordinate.\n\n[A] Wang, Tonghan, Jianhao Wang, Chongyi Zheng, and Chongjie Zhang. "Learning nearly decomposable value functions via communication minimization." International conference on learning representations, 2020.\n\n[B] Du, Yali, Bo Liu, Vincent Moens, Ziqi Liu, Zhicheng Ren, Jun Wang, Xu Chen, and Haifeng Zhang. "Learning correlated communication topology in multi-agent reinforcement learning." In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, pp. 456-464. 2021.\n\n[C] Jiang, Jiechuan, and Zongqing Lu. "Learning attentional communication for multi-agent cooperation." Advances in neural information processing systems, 2018. 1. Is the evaluated SOG algorithm based on REFIL as the MARL foundation with the additional communication paradigm? This does not quite come across clearly in Section 4.3.\n2. Do all the evaluated baselines also apply parameter sharing across all agents? I would expect that parameter sharing helps significantly, in particular when communication is applied, as agents follow the same communication and therefore already have a “common language” without further coordination. If not, all baselines should be (re-)evaluated with parameter sharing.\n3. 
In Figure 4a, b, it appears that training performance in the Resource Collection task with larger sight range (1.0) is worse than in the task with shorter sight range (0.5)? This appears strange given that inputs are of the same size and in the shorter sight range case more values would simply be masked to 0s. Why do you think this occurs?\n4. What does shading in all training learning curves represent (standard deviation, variance, confidence intervals, …)? Same for tables 1 and 2.\n5. In line 321 regarding the SMAC experiments, it is stated that an “imaginary objective” is applied as in prior work - what does this objective contain and incentivize?\n6. In lines 167ff you state - “For those who receive no group invitation or leave the sight of the constructor during in-group communication, they form a group of themselves.” - it is unclear to me whether this means each of these agents is in a group alone or all these agents are together in a group? The latter would allow agents to communicate beyond boundaries of visibility. The authors have sufficiently discussed limitations of their work.", " This paper develops a multi-agent reinforcement learning approach with attention to enable zero-shot generalization across scales of homogeneous agents. The approach, self-organized groups (SOG), develops mechanisms for a decentralized team to form ad hoc groups. Within each group, a conductor is elected. The conductor then provides messages to each of the other members of the group. A message summary procedure is added to maximize the mutual information between personal messages and future agent trajectories. Experiments across StarCraft and Predator-Prey show promising results. Ablation studies are provided to show the contribution of various conductor election schemes. Zero-shot results are provided for scaling the number of agents in Starcraft. Strengths:\n+ Section 3 provides a helpful background to formally define the problem being addressed.\n+ Figure 3 provides a helpful illustration describing the model architecture.\n+ Figure 4 shows promising results. It might be helpful to choose a color scheme that more clearly reflects that SOG and its ablations are a group, whereas COPA, EMP, etc. are baselines.\n+ The approach (and ablations) are evaluated in computational studies in StarCraft and Predator-Prey. \n+ The evaluations show positive results in answering three key questions: (1) does the approach improve coordination vs. fully-connected communication models, (2) can the approach enable zero-shot generalization, and (3) how does performance vary with the CE selection mechanism?\n\nWeaknesses:\n- The color scheme from Figure 4 to Figure 5 is inconsistent. In one, COPA is red; in the other SOG is red.\n- The paper seems to have accidentally missed some prior work that has addressed related concepts of scalability and message efficiency.\n1) Wang et al. (ICML) develop a MARL approach, called IMAC, that uses variational information reasoning to learn communication protocols, similar to the message summary mechanism in SOG. As such, it would have been helpful to benchmark against IMAC – particularly to compare the ability to learn efficient, information-rich messages.\n2) Niu et al., (AAMAS’21) developed a MARL approach, called MAGIC, with a centralized scheduler and message processor that performs message aggregation. The Niu et al. approach is scalable (using attention), but it has the weakness that it brings some centralization as it relies on an agent to act as a message aggregator and scheduler. 
It would be helpful if this paper would benchmark against MAGIC, as the problem setup is analogous (or provide sufficient justification for why MAGIC is not a suitable baseline).\n3) Seraj et al., (AAMAS’22) developed a CTDE MARL approach (with communication) for heterogeneous agent teams, called HetNet. The approach scales (again, using attention) with the number of agents of various types and compresses messages to binary. It would be helpful if this paper would benchmark against HetNet (or provide sufficient justification for why HetNet is not a suitable baseline).\n\nWang, R., He, X., Yu, R., Qiu, W., An, B. and Rabinovich, Z., 2020, November. Learning efficient multi-agent communication: An information bottleneck approach. In International Conference on Machine Learning (pp. 9908-9918). PMLR.\n\nNiu, Y., Paleja, R.R. and Gombolay, M.C., 2021, May. Multi-Agent Graph-Attention Communication and Teaming. In AAMAS (pp. 964-973).\n\nSeraj, E., Wang, Z., Paleja, R., Martin, D., Sklar, M., Patel, A. and Gombolay, M., 2022, May. Learning efficient diverse communication for cooperative heterogeneous teaming. In Proceedings of the 21st International Conference on Autonomous Agents and Multiagent Systems (pp. 1173-1182).\n\nWriting:\n-“ which is featured with conductor election (CE) and message summary (MS)” is not grammatically quite correct. Perhaps “a message summary (MS) mechanism” would be better.\n-Line 16 should be “have” instead of “has”\n-Line 21, no comma before “while”\n-Line 49 should have an endash before “based”\n-Line 135 “like QMIX [25]” should be offset with commas\n-Line 350 should have a comma after “First”\n-Line 354 should not have a period after “Table”\n-Line 372, it is helpful not to start a sentence with “but”\n-Line 328 should have a comma after “Then”\n-Line 344 – it is best not to have contractions\n\n====== Post-rebuttal ======\n\nI have raised my score to reflect the improvements the authors have made in their paper.\n - In Lines 137-139, the paper seems to suggest that agents that communicate are not part of the CTDE paradigm; however, communication (at least communication that is range-limited) can be compliant with the CTDE paradigm. For example, see Ding et al., (NeurIPS’20). Does the paper mean to suggest otherwise?\n-How well does the approach perform as a function of the communication range (and associated ability of the conductor to communicate with just one or all of the agents)? \n\nDing, Z., Huang, T. and Lu, Z., 2020. Learning individually inferred communication for multi-agent cooperation. Advances in Neural Information Processing Systems, 33, pp.22069-22079. The paper says that limitations were addressed and to see Section 4.2. However, the words “limit” and “limitation” do not appear anywhere in the paper except in regards to limitations of prior work or the ranges of agents tested (Line 319). As such, it is hard to see where limitations are sufficiently addressed as the checklist seems to intend. Note: The paper does state some assumptions in Line 148 (“assuming that all the messages can be delivered accurately without delay and each agent can handle all the received messages at the same time”) and Line 218 (“we assume a perfect communication channel”). However, the reproducibility checklist seems to differentiate between limitations and assumptions (though, they are semantically related). 
It would have been nice if the paper had proactively addressed the shortcomings of the work, when it would fail, when it might be disadvantageous to use the approach, when societal harm might occur (e.g., military applications), etc.", " This paper proposes a Self-Organized Group mechanism for Cooperative MARL, which is claimed to have zero-shot generalization ability with dynamic team composition and varying partial observability. The mechanism first selects a conductor for each group; then the conductor summarizes the messages (by a conditional entropy bottleneck objective) from all agents within this group, and sends the summarized message to all agents; finally, the agents generate actions based on the observation and the summarized message. For the conductor selection, the authors proposed Random, DPP-based and Policy Gradient-based methods. The authors conducted experiments on three environments to show the effectiveness of the proposed methods. Strengths \n* Clear paper writing. In general, this paper is easy to read and follow. \n* Interesting methods for Conductor Election.\n\n\n\n\n\nWeaknesses \n* The methods and the claims are not related. I do not see the direct relation between the proposed Self-Organized Group mechanism (i.e., both Conductor Election and Message Summary) and the claimed zero-shot generalization ability with dynamic team composition and varying partial observability. In my opinion, the zero-shot generalization ability is mostly because of the mean message mixer (i.e., Eq 1) and the MHA-based model structure (i.e., Fig 3). \n* Missing a lot of related works. For example, the Conductor is similar to the centralizer/coordinator used in many MARL communication methods [1,2]. The entropy bottleneck based Message Summary is also related to entropy bottleneck based communication message generation [3, 4]. The group-based multi-agent Message Summary is also similar to the Neighborhood Cognition Consistent proposed in [5], where the summarized message is equal to the consistent cognition. But the authors missed these works. \n* Due to the above reason, I find that the baselines used in the experiments are not strong. For example, the authors mentioned "Since many MARL methods with communication are not suitable for dynamic team composition", but as far as I know, [1, 4, 5] and many other MARL methods can be applied to dynamic team composition.\n\n\n[1] Learning agent communication under limited bandwidth by message pruning. AAAI 2020. \n[2] Learning multi-agent communication with double attentional deep reinforcement learning. AAMAS 2020. \n[3] Deep multi agent reinforcement learning for autonomous driving. 2020. \n[4] Multi-agent Communication with Graph Information Bottleneck under Limited Bandwidth. 2021. \n[5] Neighborhood cognition consistent multi-agent reinforcement learning. 2020. * What is the relation between the proposed Self-Organized Group mechanism (i.e., both Conductor Election and Message Summary) and the claimed zero-shot generalization ability? \n* Could the authors give some results by comparing with [1, 5] or other stronger baselines? Because [1] has a message pruning mechanism, which is like the T-timestep communication in the current paper. Similarly, the group-based multi-agent Message Summary is similar to the Neighborhood Cognition Consistent proposed in [5], and [5] can also be applied to dynamic team composition. Yes
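To make the election-and-grouping mechanism debated above concrete, here is a minimal sketch of the random conductor election (CE) described in the abstract and rebuttals: every T time-steps each agent becomes a conductor with a fixed probability, followers join a conductor visible within their sight range, and agents with no visible conductor form a group of themselves. All names, and the Euclidean visibility test, are illustrative assumptions rather than the authors' implementation:

```python
import random

def dist(a, b):
    # Euclidean distance between two positions (an assumed visibility test).
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def form_groups(agent_positions, p_conductor=0.3, sight_range=0.5):
    """Toy sketch of SOG's random conductor election and group formation."""
    n = len(agent_positions)
    # Each agent independently becomes a conductor with probability
    # p_conductor, the pre-defined hyperparameter discussed in Q7 above;
    # the expected number of groups is therefore p_conductor * n.
    conductors = [i for i in range(n) if random.random() < p_conductor]
    groups = {c: [c] for c in conductors}
    for i in range(n):
        if i in groups:
            continue
        # Followers may only communicate with (and hence join) a conductor
        # inside their sight range.
        visible = [c for c in conductors
                   if dist(agent_positions[i], agent_positions[c]) <= sight_range]
        if visible:
            nearest = min(visible,
                          key=lambda c: dist(agent_positions[i], agent_positions[c]))
            groups[nearest].append(i)
        else:
            groups[i] = [i]  # no visible conductor: a group of itself
    return groups
```

Within each group, the conductor would then summarize the received messages and broadcast the summary to all members (the MS step). Because the expected group size is tied to p_conductor, groups formed at test time on much larger teams can differ in size from those seen in training, which is exactly the limitation the authors acknowledge for random CE.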
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "4KwzlWXNp3d", "Ak9dJfsZswA", "1j7PCMvA1LHI", "BRpW2ueYOEv", "MHcP7B4tEw", "eF5radKObCG", "HBOGb5xOkK1", "He2V3in1Cnq", "mBPLnQ4Vqgo", "Fo8ZepA2bCe", "uk-w6pBHPG8", "nips_2022_hd5KRowT3oB", "nips_2022_hd5KRowT3oB", "nips_2022_hd5KRowT3oB", "nips_2022_hd5KRowT3oB" ]
nips_2022_hMGSz9PNQes
MaskTune: Mitigating Spurious Correlations by Forcing to Explore
A fundamental challenge of over-parameterized deep learning models is learning meaningful data representations that yield good performance on a downstream task without over-fitting spurious input features. This work proposes MaskTune, a masking strategy that prevents over-reliance on spurious (or a limited number of) features. MaskTune forces the trained model to explore new features during a single epoch of finetuning by masking previously discovered features. MaskTune, unlike earlier approaches for mitigating shortcut learning, does not require any supervision, such as annotating spurious features or labels for subgroup samples in a dataset. Our empirical results on biased MNIST, CelebA, Waterbirds, and ImageNet-9L datasets show that MaskTune is effective on tasks that often suffer from the existence of spurious correlations. Finally, we show that MaskTune outperforms or achieves performance similar to competing methods when applied to the selective classification (classification with a rejection option) task. Code for MaskTune is available at https://github.com/aliasgharkhani/Masktune.
Accept
The paper considers the problem of the reliance of NN models on spurious correlations and proposes MaskTune, a method to alleviate this. MaskTune is a masking strategy to prevent over-reliance on spurious correlations. The masking strategy forces the model to reduce reliance on spurious correlations and learn new features via the masked examples. The empirical results show that MaskTune improves model performance on multiple datasets and has applications to the task of selective classification. The paper focuses on a very important current problem with DNN models, and the proposed idea works on a number of benchmarks. The paper is well-written too. There were a number of questions raised by the reviewers and additional experiments requested that were addressed by the authors during the rebuttal period. Therefore, I vote for acceptance and I ask the authors to update the paper accordingly for the camera-ready version.
train
[ "COH2OrZbHp", "Jo3dEKv5okk", "TaPy2AgnX8L", "rDTvn76szzvV", "OAHRQmygdIj", "q0fy5uGEYc3", "4oGdQ8-P8ZZ", "2POs4tIptwA", "u3mjW_Q9QIH", "FOUVKiMGol0" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response, I continue to recommend acceptance. ", " - *What happens in the synthetic MNIST experiment when there are two spurious correlations. E.g., two squares instead of one?* \n\nPlease see the general response section\n\n- *Why only fine-tune for one epoch? What happens when you do more?*\n\nBy finetuning for more epochs, the model starts forgetting the previously learned features which might lead to overall performance drop.\n\n- *Does MaskTune help on natural distribution shifts such as ImageNet -> ImageNetV2.*\n\nThank you for the suggestion. We will test our method on ImageNetV2. An interesting future development will be to investigate how MaskTune can be modified to take into account the natural distribution shifts.\n\n- *What happens when there is no spurious correlations, does MaskTune reduce accuracy? What happens if you run MaskTune on the benchmark ImageNet dataset.*\n\nWe ran MaskTune on ImageNet. The overall test accuracy did not decrease significantly i.e., from 78.862 (ERM) to 78.213 (MaskTune). After applying MaskTune classification accuracy on certain classes improved significantly e.g., by over 50%. For example for class “projectile, missile” the accuracy was improved from 25% to 80%, for class “crane” from 44% to 82%, for class “skunk, polecat, wood pussy” from 49% to 69%, for class “coffeepot” from 48% to 64%, etc.\n", " Thank you for the suggestions on improving Fig 1 and writing. We will fix these in the final version.\n\n- *Error intervals (whether these correspond to standard error or deviation is not stated) are given for only a subset of the methods and datasets (none of the results for selective classification have them, for instance).*\n\nThe paper that we used to report other methods' accuracy, only reports the accuracy (not the error intervals). But we report the error intervals for ours in table 5 in appendix. We will move Table 5 to the paper from the appendix.\n\n- *I found it odd that in explaining the ensembling and threshold-selection procedure for selective classification (paragraph beginning line 241) the authors choose to introduce new notation for the initial and final models when notations different from that established in Section 2 of the paper.*\n\nWe will fix this in the final version\n\n\n- *How robust is the chosen thresholding heuristic? Do you expect it to hold in general and can you suggest a more rigorous approach to setting it for if not (one which is based on something more concrete than avoiding masking too many/too few variables)?*\n\nWe previously tried several different thresholding strategies:\n\n1- mean over all training samples (this is too strong)\n\n2- chose top K number of highly activated pixels (here K needs to be tuned)\n\n3- soft masks (i.e., just normalize the heatmaps between 0 and 1 and do not threshold)\n\n4- mean over each image\n\n5- mean + 2std over each sample\n\n6- mean + 3std over each sample\n\nWe found #5 works in general better across all datasets.\n\n- *Is a two-stage process, whereby spurious features are identified based on only a single round of training, sufficient in general to avoid the learning of spurious correlations? 
It seems that a dataset consisting of multiple sets of features that spuriously correlate with the target (for instance, background colour and background texture) would pose problems for MaskTune.*\n\nWe will expand the conclusion section by elaborating more on the limitations of our work, such as what kinds of spurious features might be missed by our method and running times, as well as future research opportunities.", " Thank you for your constructive feedback.\n\n- *The positions of sections 3 and 4 should be switched. Implementations should be introduced before experimental results.*\n\nThank you for pointing this out. We will make this change in the final version.\n\n- *Experimental evaluation is only performed using toy datasets, such as MNIST and CelebA. It is unclear whether the proposed method could be applied in more challenging real-world datasets such as those from the WILDS benchmark [1].*\n\nWe have tested on the WaterBirds dataset (Table 2) and the Background challenge (Table 3) as well. Our method has significantly improved the baselines on both datasets. We ran our method on ImageNet. The overall test accuracy did not decrease significantly, i.e., from 78.862 (ERM) to 78.213 (MaskTune), but after applying MaskTune, classification accuracy on certain classes improved significantly, e.g., by over 50%. For example, for class “projectile, missile” the accuracy was improved from 25% to 80%, for class “crane” from 44% to 82%, for class “skunk, polecat, wood pussy” from 49% to 69%, for class “coffeepot” from 48% to 64%, etc. \n\n- *The writing can be improved. In the abstract and the introduction section, the authors mentioned the selective classification task and did not give too much introduction to this task. Only after reading Section 2.2 can readers understand what the selective classification task is. This could be unfriendly for readers not familiar with this task.*\n\nWe agree. We will fix this by adding more background info on selective classification in the introduction.\n\n- *On line 107, the authors mention that one major benefit of using xGradCAM is that it could produce dense heatmaps. What would happen if sparse heatmaps are generated?*\n\nThis is an important point. In comparison to masking only the core of a dense map, sparse heatmaps often mask larger areas of the input. Because there isn't much information left in the input after masking a lot of pixels, learning the second-best discriminative features is impossible.\n", " Thank you for your constructive feedback.\n\n- *Baselines: the randmask baseline is interesting but quite underutilized in the experiments on real datasets. Also, the paper does not provide any details about this method (e.g., how aggressive is the random cropping? how many copies of the same image with different random masks are used?). The gap between randmask and masktune seems a bit too significant even for the toy MNIST dataset and I am wondering if a stronger version of randmask can be used in these experiments.*\n\nWe ran another version of a random masking method with different window sizes: on MNIST data, for each image, we randomly selected a window of {2x2, 3x3, 4x4, ..., nxn} pixels where n is the image size divided by 2. As you can see [here](https://imgur.com/Jp2rEB7), MaskTune still outperforms the random masking approach. \n\n\n- *Saliency maps: The masktune approach heavily relies on explanation / saliency maps working “as expected”. 
However, feature attribution methods in general are hard to evaluate, and work on the evaluation of interpretability methods has shown that these methods have major failure modes. There is work [R1, R2, R3] that indicates that saliency maps are ineffective at detecting (and so mitigating) spurious correlations. A discussion of this issue and of the usage of a specific saliency map method (xGradCAM) is missing. In particular, how robust are these results to a different choice of explanation method (on realistic datasets such as CelebA and Waterbirds)?*\n\nThank you for the question. According to our experiments, several explainability (saliency) methods would work well if masking thresholds were properly set, i.e., not all masking methods work with the same threshold. ScoreCAM, for example, would result in similar performance to xGradCAM, but we found ScoreCAM to be very slow because it needs to run on all the 2D feature maps at the end of the network, i.e., the network must run 2048 times per sample in the ResNet50 case. We agree that explainability methods may be ineffective for detecting certain types of spurious correlations (R1, R2, R3). However, by masking initially discovered discriminative features, our MaskTune leads to the exploration of more features, even if they are not spurious.\n\n- *Spurious correlations and locality: it seems like this method would only work in cases where the spurious correlation can be localized in input space. That is, the spurious correlation “region” in the image is different from the “region” of the desirable / core feature. For example, in the MNIST experiment, if instead of adding a patch, I add a red vs. blue tint to the entire image, the model will just learn the color feature. Will MaskTune mitigate shortcut learning in this case? It would be great if the authors could shed light on this.*\n\nPlease see the general response section.\n\n - *Are the numbers in Tables 3 and 4 averaged over multiple runs or is it best-of-K? Are the differences / improvements statistically significant?*\n\nTo be consistent with other works, in Table 4 we reported the accuracy, but in the appendix (Table 5), we have the results with standard deviation over 3 runs. The standard deviation over 3 runs is small. We will replace Table 4 in the paper with Table 5 from the appendix and report standard deviation for Table 3 as well.\n\n", " We value the reviewers' incisive observations and compliments on the novelty of our work and clarity of writing. We begin with three general comments and then address the specific comments made by each reviewer. We will incorporate all of the reviewers' recommendations for minor writing edits, improving/adding figures, and other changes into the final version.\n\n**1- Is it harmful to use MaskTune when we are unsure whether there is a spurious correlation in our data?**\n\nEven if there are no spurious features in the input, or detecting the spurious feature is difficult, MaskTune masks the initially found most discriminative features, resulting in the discovery of the second discriminative set of features. For example, if an ERM model initially finds the cow's head to be the most discriminative feature, masking it (which is not spurious) forces the model to find the second-best feature as well, which could be the texture of the cow's skin. The final model would learn both the cow's head and skin texture as discriminative features, rather than just the head. 
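To make the mechanism in point 1 concrete, the two-stage procedure described in the paper and in the thresholding discussion above can be sketched as follows. Here `xgradcam` is a placeholder for a saliency routine returning per-sample heatmaps at input resolution, and the whole snippet is an illustration of the described pipeline under those assumptions, not the released implementation:

```python
import copy

def masktune(model, train_loader, optimizer, loss_fn):
    """Sketch of MaskTune: mask the features the ERM-trained model relied
    on, then finetune for a single epoch so it must explore new features."""
    initial_model = copy.deepcopy(model)  # kept for the selective-classification ensemble
    for x, y in train_loader:  # one pass over the data = the single finetuning epoch
        heatmap = xgradcam(initial_model, x, y)  # placeholder saliency call, shape (B, H, W)
        # Per-sample threshold: mean + 2 * std of each heatmap (strategy #5 above).
        thr = heatmap.mean(dim=(1, 2), keepdim=True) + 2 * heatmap.std(dim=(1, 2), keepdim=True)
        mask = (heatmap <= thr).float().unsqueeze(1)  # zero out previously discovered features
        optimizer.zero_grad()
        loss = loss_fn(model(x * mask), y)
        loss.backward()
        optimizer.step()
    return initial_model, model

# For selective classification, the responses above describe ensembling the
# probabilities of initial_model and the finetuned model, with the rejection
# threshold tuned on a validation set to meet the desired coverage.
```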
\n\n**2- Does MaskTune handle all types of spurious correlations?**\n\nAlthough MaskTune can help reduce the effect of many types of spurious correlation, such as texture, color, and localized nuisance features, e.g., artifacts added to x-ray images by medical imaging devices, there are some cases where MaskTune may not be effective, such as when a small transformation is applied to all pixel values of some of the images in a dataset. This occurs in medical devices or cameras that add almost imperceptible color (values) to captured images.\n\n**3- What happens if we have more than one spurious feature?**\n\nRunning MaskTune for only one iteration on a dataset with more than one spurious feature still performs better than ERM. We tested on MNIST. Please see the results [here](https://imgur.com/pm13NYc). If we use MaskTune iteratively, the performance improves even more. We ran two iterative versions of MaskTune. We added two coloured patches to MNIST digits as two distinct spurious features. We ran 1, 2, and 3 iterations of masking. The method works well when we do iterative accumulative masking (i.e., add new masks to the previously masked samples), but when we apply masks from each iteration to the raw input (non-accumulative), the results are not as good as the accumulative version. To reduce the running time, we only used one iteration of masking. One way to stop the accumulative masking is to monitor the training accuracy. If the model is not able to fit the data after N masking iterations because there are no useful features left, we can stop.\n", " This paper focuses on the important problem of mitigating reliance on spurious correlations. In particular, the paper proposes MaskTune, a masking strategy to prevent over-reliance on spurious correlations. The masking strategy forces the model to reduce reliance on spurious correlations and learn new features via the masked examples. The empirical results show that MaskTune improves model performance on multiple datasets and has applications to the task of selective classification. **Strengths**\n\n- Well-written: The problem is well-motivated, Figure 1 does a good job of describing the masking approach, and the experiment setup, datasets, and results are easy to follow.\n- The proposed masking approach to spread reliance across multiple features is conceptually simple + effective and does not require knowledge of contextual / background information.\n- The empirical results on mitigating spurious correlations are significantly better than existing unsupervised methods when you do not have access to supervision at model selection time (a more realistic setting than what has been considered previously).\n- The connection between spurious correlations (narrow subset of features) and selective classification is quite interesting; the paper presents initial theoretical results in the overparameterized linear regression setup and the ensemble idea is empirically better than three existing methods on three datasets.\n\n**Weaknesses**\n\n- Baselines: the randmask baseline is interesting but quite underutilized in the experiments on real datasets. Also, the paper does not provide any details about this method (e.g., how aggressive is the random cropping? how many copies of the same image with different random masks are used?). 
The gap between randmask and MaskTune seems a bit too significant even for the toy MNIST dataset and I am wondering if a stronger version of randmask can be used in these experiments.\n- Saliency maps: The MaskTune approach heavily relies on explanation / saliency maps working “as expected”. However, feature attribution methods in general are hard to evaluate, and work on the evaluation of interpretability methods has shown that these methods have major failure modes. There is work [R1, R2, R3] that indicates that saliency maps are ineffective at detecting (and so mitigating) spurious correlations. A discussion of this issue and about the usage of a specific saliency map method (xGradCAM) is missing. In particular, how robust are these results to a different choice of explanation method (on realistic datasets such as CelebA and Waterbirds)?\n- Spurious correlations and locality: it seems like this method would only work in cases where the spurious correlation can be localized in input space. That is, the spurious correlation “region” in the image is different from the “region” of the desirable / core feature. For example, in the MNIST experiment, if instead of adding a patch, I add a red vs. blue tint to the entire image, the model will just learn the color feature. Will MaskTune mitigate shortcut learning in this case? It would be great if the authors can shed light on this.\n- Are the numbers in Tables 3 and 4 averaged over multiple runs or is it best-of-K? Are the differences / improvements statistically significant?\n\n[R1] Post hoc Explanations may be Ineffective for Detecting Unknown Spurious Correlation\n\n[R2] Do Input Gradients Highlight Discriminative Features?\n\n[R3] Sanity Simulations for Saliency Methods Please see strengths and weaknesses Please see strengths and weaknesses", "The spurious learning or shortcut learning problem has received increasing attention from the community recently. This work suggests the masking approach MaskTune, in order to avoid relying too heavily on spurious (or a small number of) features. More specifically, MaskTune forces the trained model to explore new features during a single epoch of finetuning by masking previously discovered features. Experimental results show that the proposed method could improve the worst-group accuracy for spurious learning tasks. In addition, it also achieves decent performance for the selective classification task.\n Advantages:\n1. Spurious correlation is a problem worth studying. The idea of encouraging the model to explore new features is sound.\n2. The paper is overall well written.\n3. Experimental results validate that the proposed method can be used to alleviate the reliance on spurious correlations and be applied in the selective classification task.\n \n\nSome minor concerns:\n1. The positions of sections 3 and 4 should be switched. Implementations should be introduced before experimental results.\n2. Experimental evaluation is only performed using toy datasets, such as MNIST and CelebA. It is unclear whether the proposed method could be applied to more challenging real-world datasets such as those from the WILDS benchmark [1].\n3. The writing can be improved. In the abstract and the introduction section, the authors mentioned the selective classification task and did not give too much introduction to this task. Only after reading Section 2.2 can the readers understand what the selective classification task is. This could be unfriendly for readers not familiar with this task.\n4. 
On line 107, the authors mention that one major benefit of using xGradCAM is that it could produce dense heatmaps. What would happen if sparse heatmaps are generated?\n\n[1] Koh, Pang Wei, Shiori Sagawa, Henrik Marklund, Sang Michael Xie, Marvin Zhang, Akshay Balsubramani, Weihua Hu et al. "Wilds: A benchmark of in-the-wild distribution shifts." In International Conference on Machine Learning, pp. 5637-5664. PMLR, 2021. \n Please refer to the Strengths And Weaknesses part Yes", "The authors propose a two-stage pipeline ('MaskTune') for mitigating spurious correlations where the first stage simply entails training a convolutional model via ERM. GradCAM is then applied to the resulting model to generate saliency maps; these are converted into masks by thresholding and used to mask out the inputs during the second stage, wherein the model is fine-tuned for a single epoch, the goal being to force the model to explore alternative views of the data. Experiments are performed on a range of image datasets for both regular classification and selective classification, in which the option to abstain from predicting is added. The authors propose to ensemble the models obtained from the first and second stages of training to obtain the probabilities used for the latter task, optimising the threshold to meet the desired coverage using a validation set. The proposed method performs competitively with other recent domain generalisation methods without needing the groups/environments/domains to be annotated, as is the case for popular methods such as GDRO.\n # Strengths\n- Despite the similarity in spirit to pre-existing methods (which are appropriately cited) the method nonetheless appears to be novel and is appealing for its relative simplicity compared with other domain generalisation methods.\nThat said, I worry that it might be too brittle to be practically useful, for several reasons:\n1) From what I gather, the efficacy of the method is highly dependent on the choice of masking threshold, $\lambda$ -- choosing a value that is too high would allow for leakage of the spurious features, while, as discussed in the paper, choosing a value that is too low leads to portions of the samples being masked out, including those salient features which we hope to learn during the fine-tuning phase. The authors propose to mask using a heuristic based on (standard) deviations from the mean, however the justification for it feels a bit hand-wavy. Moreover, the method relies heavily on the efficacy of the mask-generation method and it seems artefacts induced by upscaling (which may be significant depending on the choice of architecture and the input resolution) could prove detrimental. \n2) It seems likely that in many cases a single iteration of the method would be insufficient, as -- to continue with the motivating example adduced in the paper -- one can imagine only a portion of the background features being needed to reliably distinguish between cows and camels in the training set, resulting in other background features remaining unmasked during fine-tuning. One could repeat the procedure until some stopping criterion is reached but it's not obvious whether problems such as catastrophic forgetting begin to factor into the optimisation process, hindering the efficacy of MaskTune. 
\n3) If there are no spurious correlations in the training set and the set of unmasked features contains no information predictive of the class label, there may be a risk of an over-parameterised model resorting to memorisation -- that we only fine-tune for a single epoch may mean that this typically isn't a problem, but such a risk nonetheless seems present with the current incarnation of the algorithm.\n4) Dovetailing from the previous point, a single epoch of fine-tuning may not be a sufficient duration for learning more complex sets of features/unlearning the bias learned during the first stage of training, yet training for longer may also carry its share of problems (such as memorisation/catastrophic forgetting).\nThe advantage of the method, however, is that the above problems can, in theory, be readily identified -- this demands an auditing process, however, one that is likely to be time-consuming, especially if multiple iterations are required.\n\n - Figure 1 provides a straightforward illustration of the process, however it would be useful to have a legend \n denoting the meaning of different mask colourings.\n\n- Comparison with other works is sufficiently thorough -- highlighting the distinction between the proposed method\nand Just Train Twice (JtT), which adopts a similar two-stage approach (and how the latter can fail) being particularly pertinent.\n\n- Good range of datasets and a good number of relevant baselines which cover both standard classification and selective classification.\n- While currently listed as an avenue of future work, I think how MaskTune might be concretely adapted to other\ntasks (e.g., segmentation) and modalities warrants some discussion.\n\n- The table indicating what sources of supervision are required for the methods run on the Waterbirds dataset is nice to see,\n but exactly what is meant by model-selection supervision isn't explained ('Train Supervision' should also perhaps be indicated to be w.r.t. the group labels) -- its meaning can be inferred but I don't think it's self-evident enough to forgo explanation (even if only at the caption level).\n\n- The paper is for the most part easily understandable and soundly structured. However there are some areas of the text which could be clearer; the area which perhaps stood out the most in this respect was that relating to the biased MNIST setup, with the sentence \"We place 99% and 1% of all digits labelled \"0-4\" and \"5-9\" in the training set on a background with a small blue square on the top left corner, respectively, and keep the remaining data intact\" being quite difficult to parse. On the topic of this section, it would also be useful to have Figure 2 include a visual description of the two setups considered.\n\n- Error intervals (whether these correspond to standard error or deviation is not stated) are given\nfor only a subset of the methods and datasets (none of the results for selective classification \nhave them, for instance). \n\n- I found it odd that in explaining the ensembling and threshold-selection procedure for selective classification\n(paragraph beginning line 241) the authors choose to introduce new notation for the initial and final models rather than using the notation established in Section 2 of the paper.\n - How robust is the chosen thresholding heuristic? 
Do you expect it to hold in general and can you suggest a more rigorous approach to setting it if not (one which is based on something more concrete than avoiding masking too many/too few variables)?\n- Is a two-stage process, whereby spurious features are identified based on only a single round of training, sufficient in general to avoid the learning of spurious correlations? It seems that a dataset consisting of multiple sets of features that spuriously correlate with the target (for instance, background colour and background texture) would pose problems for MaskTune. I am satisfied with the extent to which the societal impacts of the method are addressed -- the interpretability of the work could be highlighted to further bolster the paper in this respect. The authors address a couple of limitations of the method -- its unimodality and sample inefficiency -- at the end of the conclusion, however I would like to see the discussion on this topic expanded, both with regard to those limitations already noted and in regard to addressing some of those concerns such as those raised in \"Strengths and Weaknesses\"/\"Questions\" (or to see relevant ablations provided if those concerns are in fact unjustified)", "This paper proposes a simple technique to avoid reliance on \"spurious correlations\": 1) train a model, 2) fine-tune for an additional epoch, but mask the part of the data that is deemed important for making the predictions. This forces the classifier to \"explore\" and learn something beyond the initial \"spurious correlation\". On, e.g., Waterbirds and CelebA, the method outperforms other methods which don't use group information. Strengths:\nThe method is straightforward and, for the most part, very intuitive. Moreover, the performance is validated empirically on multiple datasets, where MaskTune (the method) outperforms other methods which do not have access to group information and approaches the performance of methods which do.\n\nWeaknesses:\nThe main concern is that this method is only tested on datasets where spurious correlations exist. E.g., I am not surprised the MNIST case works. My concern is that, if I don't know a priori that spurious correlations exist, should I apply MaskTune or not? Are spurious correlations really a problem on datasets like ImageNet? Currently it is unclear if I should apply this method when training on ImageNet. It is even possible that MaskTune could reduce accuracy when training on datasets without spurious correlations, as the important features could be masked out. I do not expect that all of these questions are answered.\n- What happens in the synthetic MNIST experiment when there are two spurious correlations, e.g., two squares instead of one?\n- Why only fine-tune for one epoch? What happens when you do more?\n- Does MaskTune help on natural distribution shifts such as ImageNet -> ImageNetV2?\n- What happens when there are no spurious correlations, does MaskTune reduce accuracy? What happens if you run MaskTune on the benchmark ImageNet dataset? This is marked as Yes in the checklist but a limitations section could be appreciated in the revision." ]
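The two-stage procedure described in the reviews above can be summarised in a short, framework-agnostic Python sketch. The `train_fn` and `saliency_fn` callables stand in for the paper's ERM trainer and xGradCAM masking; their interfaces, the accumulative option, and the single fine-tuning epoch follow the descriptions in this record, but this is an illustrative reconstruction under those assumptions, not the authors' code.

```python
def apply_mask(x, mask, fill=0.0):
    """Replace the masked (most discriminative) region of a sample."""
    x = x.copy()
    x[mask] = fill
    return x

def masktune(train_fn, saliency_fn, data, epochs, n_iters=1, accumulative=True):
    """Two-stage pipeline from the reviews above.

    Stage 1: fit a model with plain ERM.
    Stage 2: mask each sample's most salient region and fine-tune for a
    single epoch; optionally repeat with accumulated masks, stopping when
    the model can no longer fit the data (per the authors' response).

    train_fn(data, epochs, init=None) -> fitted model  (assumed interface)
    saliency_fn(model, x) -> boolean mask over x       (assumed interface)
    """
    model = train_fn(data, epochs)
    masked = data
    for _ in range(n_iters):  # the paper uses a single iteration
        source = masked if accumulative else data
        masked = [(apply_mask(x, saliency_fn(model, x)), y) for x, y in source]
        model = train_fn(masked, epochs=1, init=model)
    return model
```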
[ -1, -1, -1, -1, -1, -1, 7, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "Jo3dEKv5okk", "FOUVKiMGol0", "u3mjW_Q9QIH", "2POs4tIptwA", "4oGdQ8-P8ZZ", "nips_2022_hMGSz9PNQes", "nips_2022_hMGSz9PNQes", "nips_2022_hMGSz9PNQes", "nips_2022_hMGSz9PNQes", "nips_2022_hMGSz9PNQes" ]
nips_2022_iAktFMVfeff
House of Cans: Covert Transmission of Internal Datasets via Capacity-Aware Neuron Steganography
In this paper, we present a capacity-aware neuron steganography scheme (i.e., Cans) to covertly transmit multiple private machine learning (ML) datasets via a scheduled-to-publish deep neural network (DNN) as the carrier model. Unlike existing steganography schemes which treat the DNN parameters as bit strings, \textit{Cans} for the first time exploits the learning capacity of the carrier model via a novel parameter sharing mechanism. Extensive evaluation shows that Cans is the first working scheme which can covertly transmit over $10000$ real-world data samples within a carrier model which has $220\times$ fewer parameters than the total size of the stolen data, and simultaneously transmit multiple heterogeneous datasets within a single carrier model, under a trivial distortion rate ($<10^{-5}$) and with almost no utility loss on the carrier model ($<1\%$). Besides, Cans implements by-design redundancy to be resilient against common post-processing techniques on the carrier model before publishing.
Accept
The reviewers are generally positive, with Reviewer Lxtr being mostly enthusiastic about the work. Both Reviewer Lxtr and 5E9b believe the work of encrypting and transmitting secret data with DNNs is novel and the work is well executed with thorough experimental results and technical details. The reviewers raised several questions about the related work and baselines, as well as algorithmic clarity. The authors should further address these points by incorporating additional discussions and results from the rebuttal in the revision.
train
[ "5cZMLUlDy0c", "cO5ibES8Zxm", "F4lv_Jk8M4", "0jFiDYh8jwN", "D2z4ZMMSjCW", "eas_tQVjUQi" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nThank you for the encouragement on our work. Below, we reply one by one to the comments.\n\n**W1.** ***The paper is missing an integration of the main algorithmic steps (Fill, Propagate, Decode) with the overarching flow diagram in Fig 1 which creates a gap in the presentation.***\n\n**Re:** We will refine Fig.1 in our paper to integrate the main algorithmic steps in our camera-ready version. \n\n**W2.** ***The abstract and main text make inconsistent claims about the transmission capacity.***\n\n**Re:** We have fixed the typos in the claims of the introductory part in the rebuttal revision. According to the statistics in Section 4, the size of 10000 real-world data samples from FaceScrub is 220x larger than the size of the carrier model. \n\n**W3.** ***Definitions of metrics and illustrations of qualitative results are insufficiently described and included.*** \n\n**Re:** We will add more explanation on the learning objective in Section 3.3, define the performance difference and hiding capacity in equations in Section 7, and refine the presentation of Fig.3 in the camera-ready version. \n\n**W4.** ***The choices and constructions of a secret key and noisy vectors are insufficiently described. What are the requirements on creating the noisy vectors?***\n\n**Re:** We will incorporate more discussion on the choices and the constructions of the secret keys in Section 3 of the camera-ready version. The secret keys in our scheme are more similar to those in symmetric-key cryptography. Concisely, our scheme adopts two groups of secret keys: \n\n(1) The first group contains (C+1) secret integers randomly sampled from [0, |P|) (i.e., |P| is the size of the weight pool, which is at the scale of 10^8 in our experiments, and C is the number of secret tasks). Used in the **Fill** algorithm, the first secret integer determines how the weight pool is filled into the carrier model, while the next C secret integers determine how the weight pool is filled in the secret models. \n\n(2) The second group contains C secret random integers sampled from [0, MAX_INT]. The i-th integer serves as the random seed for the i-th secret task. Specifically, a set of noise vectors are generated under the given random seed and are used in each secret task as a one-to-one correspondence with the victim data. In our current implementation, the noise vectors are sampled from a standard Gaussian under the given random seed.\n\n**Q1.** ***Please, fix the typos including the title (Stegnography), “Under the umberalla of DLPS,”*** \n\n**Re:** Thank you for the careful reviewing. We have fixed the typos in the rebuttal version. \n\n**Q2.** ***How is encoding a secret dataset into a neural network model different from injecting a trojan into a neural network model?***\n\n**Re:** This is a very interesting question. From our perspective, the trojan/backdoor attacks on DNNs (e.g., BadNet, TrojanNN, Composite Backdoor, etc.) mainly modify the victim model so that it can be triggered by attacker-specified inputs to make targeted misclassification. This is mainly for evasive purposes (e.g., evading a face recognition system). Differently, our attack modifies the carrier model to hold a privacy backdoor, i.e., a secret dataset can be decoded from the normal model via an attacker-specified procedure. \n\n**Q3.** ***Why is the method denoted as “capacity-aware” as opposed to “capacity-limited”? 
The encoding of the secret dataset is either successful or not, but it is not probing the carrier model for its capacity, correct?*** \n\n**Re:** We are afraid but our approach indeed exploits the learning capacity of deep neural networks (i.e., both the carrier and the secret models) to hide more samples from the secret datasets. In our scheme (i.e., Section 3.3), the encoding of the secret dataset is converted into a learning objective w.r.t. the weight pool. Therefore, the outcome of the encoding process is not simply successful or not. When the victim dataset is over the learning capacity, the recovered images can still be recognizable despite a certain level of distortion (cf. Fig.4(b)). \n\n**Q4.** ***What is the weight pool restoration algorithm?*** \n\n**Re:** When the weight pool is set to be N times smaller than the carrier model, there would be N-fold redundancy for the weight pool encoded in the carrier model. A general weight pool restoration algorithm works by first extracting the N copies of the weight pool from the carrier model and fusing them into the final weight pool. For example, if the carrier model undergoes pruning, the fusion works by assembling the final weight pool from the non-zero (i.e., unpruned) values from each copy. In this way, the final weight pool can be almost recovered even though each copy may have certain values pruned.\n", " Thank you for the encouragement on our work. We have incorporated the discussion on image steganography via deep learning techniques (including [1], which was kindly pointed out by the reviewer) in the \"*Data Hiding in the Deep Learning Era*\" part of Section 2 of our rebuttal revision. \n\n[1] UDH: Universal Deep Hiding for Steganography, Watermarking, and Light Field Messaging, NeurIPS'20", " Thank you for the encouragement on our work. Below, we reply one by one to the comments.\n\n**W1.** ***There is no related work section. Please include related image steganography models and generative models in the related work.***\n\n**Re:** We have incorporated a brief review on the related image steganography models and generative models in the \"*Data Hiding in the Deep Learning Era*\" part of Section 2 of our rebuttal revision.\n\n**W2.** ***I have a concern that this paper does not include any baselines. It is hard for me to get a sense of how good the proposed model is. [1] [2] use a similar metric to evaluate the steganography performance. Please consider comparing the model with them.***\n\n**Re:** In Section 4, we did incorporate three baseline approaches which also attempt to hide data in a deep neural network via conventional steganography approaches (Ref. [39] of our paper). As shown in Fig.3, our capacity-aware approach outperforms all of the baselines in the hiding capacity. For example, in Fig.3(a), the baseline approaches could not be executed to hide over 4096 images from CIFAR-10, while our approach still works and the recovered images remain close to the original ones in both SSIM (Fig.3(a)) and the perceptual quality (Fig.4(b)). To the best of our knowledge, the covered baselines are the only ones which use deep neural networks as the carrier medium to hide information. We are afraid but [1][2] mainly exploit deep learning techniques (e.g., GAN or encoder-decoder) towards image steganography, i.e., taking an image as the carrier medium and hiding secret bit strings in the image. Therefore, their task is different from ours and could not serve as a baseline. 
Nevertheless, we have incorporated the discussion on such works in Section 2 of our rebuttal revision.\n\n**W3.** ***The algorithms in the supplement are a little hard to understand. Please consider adding some explanation. It is the key concept of the paper. Please also consider moving them to the main paper.***\n\n**Re:** We will also incorporate the explanation of the algorithms from our main text into the supplements to make them self-contained in the camera-ready version. Besides, we will put a simplified version of the algorithms in the supplements in our main text. \n\n**Q1.** ***How many secret keys did the authors use when training the model? I think if we use too few secret keys, it might be easy for the stealer to recover the model by credential stuffing.***\n\n**Re:** We also agree that using few secret keys would expose the encoded training data to the risk of being decoded by irrelevant users. Therefore, our scheme adopts two groups of secret keys, which are rather complicated for credential guessing: \n\n**(1)** The first group contains (C+1) secret integers randomly sampled from [0, |P|) (i.e., |P| is the size of the weight pool, which is at the scale of 10^8 in our experiments, and C is the number of secret tasks). Used in the **Fill** algorithm, the first secret integer determines how the weight pool is filled into the carrier model, while the next C secret integers determine how the weight pool is filled in the secret models. \n\n**(2)** The second group contains C secret random integers sampled from [0, MAX_INT]. The i-th integer serves as the random seed for the i-th secret task. Specifically, a set of noise vectors are generated under the given random seed and are used in each secret task as a one-to-one correspondence with the victim data. \n\nDuring the decoding phase, the adversary first uses the first set of secret integers to decode the weight pool from the carrier model and constructs the C secret models with the weight pool. Then, he/she uses the second set of secret integers to generate the same set of noise vectors used in the encoding phase and feeds them into the secret models to dump the victim data.\n\n**Q2.** ***How long will it take to train the model? Does training with the weight pool make the training process very slow?***\n\n**Re:** To compare the overhead of introducing the weight pool in training, we conduct a demonstrative experiment in the rebuttal phase. Specifically, we consider three comparison groups: (i) Normal Training (i.e., w/o. Weight Pool, w/o. secret task), (ii) Normal Training with Weight Pool (i.e., w/. Weight Pool, w/o. secret task), (iii) Training with both weight pool and one secret task (i.e., w/. Weight Pool, w/. secret task). The following table compares the time of training 100 iterations under the three configurations (with 10 repetitive tests in the same computing environments). As is shown, the introduction of the weight pool and the joint training with secret tasks won’t make the training process very slow. \n\n| | (i) w/o. weight pool, w/o. secret task | (ii) w/. weight pool, w/o. secret task | (iii) w/. weight pool, w/. secret task |\n| --- | --- | --- | --- |\n| Time Cost (sec./100 iterations) | 9.80 ( $\pm$ 0.09) | 12.57 ( $\pm$ 0.04) | 15.08 ( $\pm$ 0.04) |\n", "This paper studies how to use models to hide data. Strengths: I checked the performance, which seems to be very impressive and is the main reason I argue for acceptance. The paper is also well written and organized. 
Reasonable analysis is provided to justify the reported results. I do not see a clear weakness of this paper. Some papers might be worth a citation, such as UDH: universal deep hiding from NeurIPS 2020. To my knowledge, it is not a common practice to publish this topic at NeurIPS. Maybe it is worth a discussion. Since I am not very familiar with this topic, I am willing to adjust my score based on the questions from other reviewers and rebuttal from the authors. N.A.", "This paper presents a new steganography model to recover the images from noise messages. It has two key contributions. First, it proposes a weight pool such that the model is assembled from the weights in this pool with the secret key v. It is hard for people to get the correct model without the key. Second, they design a trainable pipeline to co-train the generated model and classification model with the weight pool. The authors evaluate the model on three different datasets with SSIM, MSE, and Performance Difference metrics. Strengths \n1. The key idea sounds novel to me. The generation model is assembled from a weight pool by using the private key. Therefore, if stealers do not know the key, it is hard for them to get the correct generation model.\n2. The authors proposed a novel training pipeline to train the generation model with the weight pool. \n3. The results look promising.\n4. The authors provide the details to implement and train the model in the supplementary. \n\nWeakness:\n1. There is no related work section. Please include related image steganography models and generative models in the related work.\n2. I have a concern that this paper does not include any baselines. It is hard for me to get a sense of how good the proposed model is. [1] [2] use a similar metric to evaluate the steganography performance. Please consider comparing the model with them.\n3. The algorithms in the supplement are a little hard to understand. Please consider adding some explanation. It is the key concept of the paper. Please also consider moving them to the main paper.\n\n[1] Zhang, Kevin Alex, et al. \"SteganoGAN: High capacity image steganography with GANs.\" arXiv preprint arXiv:1901.03892 (2019).\n[2] Kishore, Varsha, et al. \"Fixed Neural Network Steganography: Train the images, not the network.\" International Conference on Learning Representations. 2021. 1. How many secret keys did the authors use when training the model? I think if we use too few secret keys, it might be easy for the stealer to recover the model by credential stuffing.\n2. How long will it take to train the model? Does training with the weight pool make the training process very slow?\n This paper proposes an image steganography method. One possible negative societal impact is people might use it to steal secret information.", "This paper presents a method called Cans for encoding secret datasets into deep neural networks (DNNs) and transmitting them in an openly shared “carrier” DNN. In contrast to existing steganography methods encoding information into least significant bits, the authors encode the secret dataset into a trained publicly shared DNN model such that the public model will predict weights for secret key inputs (known to covert operatives), the weights are used to populate a secret DNN model and the secret DNN model predicts the secret dataset for noisy inputs (known to covert operatives). 
The main advantage of the Cans encoding is that it can covertly transmit over 10000 real-world data samples within a carrier model which has 100× fewer parameters than the total size of the stolen data, and simultaneously transmit multiple heterogeneous datasets within a single carrier model, under a trivial distortion rate (< 10−5) and with almost no utility loss on the carrier model (< 1%). Strengths: \n•\tThe authors nicely combine cryptographic application and DNN modeling. The idea of hiding the secret dataset in shared model parameters is very interesting.\n•\tThe authors also nicely presented their experimental work with accuracy and robustness evaluations. \n\nWeaknesses: \n•\tThe paper is missing an integration of the main algorithmic steps (Fill, Propagate, Decode) with the overarching flow diagram in Fig 1 which creates a gap in the presentation.\n\n•\tThe abstract and main text make inconsistent claims about the transmission capacity:\no\tAbstract: “.. covertly transmit over 10000 real-world data samples within a carrier model which has 220× less parameters than the total size of the stolen data,”\no\tIntroduction: “… covertly transmit over 10000 real-world data samples within a carrier model which has 100× less parameters than the total size of the stolen data (§4.1),”\n\n•\tDefinitions of metrics and illustrations of qualitative results are insufficiently described and included. \no\tFor example, the equation for a learning objective in section 3.3 should be clearly described. \no\tPage 7: define performance difference and hiding capacity in equations. \no\tFig 3 is too small for the information to be conveyed (at the 200% digital magnification of Fig 3, I can see some differences in image qualities). \n \n•\tThe choices and constructions of a secret key and noisy vectors are insufficiently described, i.e., Are the secret keys similar to the public-private keys used in the current cryptography applications? What are the requirements on creating the noisy vectors? \n •\tPlease, fix the typos including the title (Stegnography), “Under the umberalla of DLPS,”\n•\tHow is encoding a secret dataset into a neural network model different from injecting a trojan into a neural network model, for instance, in the TrojAI program – see https://pages.nist.gov/trojai/docs/about.html?\n•\tWhy is the method denoted as “capacity-aware” as opposed to “capacity-limited”? The encoding of the secret dataset is either successful or not, but it is not probing the carrier model for its capacity, correct?\n•\tWhat is the weight pool restoration algorithm?\no\t“We initialize the weight pool with a customized size 5× smaller than the carrier model, which allows us to implement a weight pool restoration algorithm to recover the pruned parameter by selecting the non-zero value from each weight pool copy, i.e., the fusion mechanism. “ •\tHow is the information redundancy built into the Fill, Propagate, Decode algorithms? \no\tIn reference to the sentence \" Finally, by comparing the performance of the secret model with or without fusion, we conclude that the robustness of Cans largely comes from the information redundancy implemented in our design of the weight pool\" \n" ]
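To make the Fill/restoration mechanics discussed in the responses above easier to follow, here is an illustrative Python/NumPy sketch. The cyclic-offset indexing in `fill`, and all names, are assumptions for illustration -- the responses specify only that secret integers select how the pool is laid out, that noise vectors are standard Gaussian draws under a secret seed, and that restoration fuses the N pool copies by picking non-zero (unpruned) values.

```python
import numpy as np

def fill(pool, shape, offset):
    # Assemble a parameter tensor by reading the shared weight pool
    # cyclically from a secret offset (one of the (C+1) secret integers
    # described in the responses); the cyclic scheme is an assumption.
    n = int(np.prod(shape))
    idx = (offset + np.arange(n)) % pool.size
    return pool[idx].reshape(shape)

def noise_vectors(seed, num, dim):
    # Per-sample decoder inputs: standard Gaussian vectors regenerated
    # from the secret per-task random seed, as stated in the Q1 answer.
    rng = np.random.default_rng(seed)
    return rng.standard_normal((num, dim))

def restore_pool(copies):
    # Fuse N extracted copies of the weight pool after pruning by taking,
    # per position, the first non-zero value among the copies.
    copies = np.stack(copies)                    # shape (N, pool_size)
    first_nonzero = (copies != 0).argmax(axis=0)
    return copies[first_nonzero, np.arange(copies.shape[1])]
```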
[ -1, -1, -1, 5, 5, 8 ]
[ -1, -1, -1, 2, 3, 4 ]
[ "eas_tQVjUQi", "0jFiDYh8jwN", "D2z4ZMMSjCW", "nips_2022_iAktFMVfeff", "nips_2022_iAktFMVfeff", "nips_2022_iAktFMVfeff" ]
nips_2022_0SVOleKNRAU
Mirror Descent Maximizes Generalized Margin and Can Be Implemented Efficiently
Driven by the empirical success and wide use of deep neural networks, understanding the generalization performance of overparameterized models has become an increasingly popular question. To this end, there has been substantial effort to characterize the implicit bias of the optimization algorithms used, such as gradient descent (GD), and the structural properties of their preferred solutions. This paper answers an open question in this literature: For the classification setting, what solution does mirror descent (MD) converge to? Specifically, motivated by its efficient implementation, we consider the family of mirror descent algorithms with potential function chosen as the $p$-th power of the $\ell_p$-norm, which is an important generalization of GD. We call this algorithm $p$-$\textsf{GD}$. For this family, we characterize the solutions it obtains and show that it converges in direction to a generalized maximum-margin solution with respect to the $\ell_p$-norm for linearly separable classification. While the MD update rule is in general expensive to compute and not suitable for deep learning, $p$-$\textsf{GD}$ is fully parallelizable in the same manner as SGD and can be used to train deep neural networks with virtually no additional computational overhead. Using comprehensive experiments with both linear and deep neural network models, we demonstrate that $p$-$\textsf{GD}$ can noticeably affect the structure and the generalization performance of the learned models.
Accept
This paper studies mirror descent in the classification setting with exponential and logistic losses. The reviewers agreed that the problem is important, and the paper is clear and well written.
train
[ "xELAewaRd6", "0BYvWvCZA-", "9rlyzxqsxHv", "pBFAuX2w-F-", "_vRiw-Ero_", "ZtFoRhhV4qL", "jDqygQUjlk5", "vNbcF0mvbcR", "Af58mAVKfF", "tUUAjBY9IZS", "8IGO3iPzdo", "vJp4SeRPOPr", "-YG12kyj4hC", "jtHM-8KbZyH", "rOt8CnU0Klq", "O9DXIixpyAM", "mxq_CkMBun", "ldCobU8Rpm" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Once again, we thank all the reviewers for the insightful discussion, which helped us further improve the paper. We have uploaded a new revision and highlighted all changes in orange.", " Thank you again for your comments and for increasing your score. We agree that a deeper understanding of the effect of $p$ on generalization performance would be interesting future work. We have incorporated this into Section 5 in the new version.", " Thank you again for your valuable feedback and for increasing your score. We have integrated the remaining suggestions into the paper.", " Thank you again for your thoughtful comments. To further reflect some of the points that arose from this discussion, we made another round of updates to the paper (see all the changes in orange, such as line 71 in our contributions). We hope that these changes address the reviewer's main comments.", " Thanks for the answers and clarifications. As a personal preference I would still like to see a structured study regarding an algorithm for choosing p, or at least some case study, but this wouldn't stop me from increasing my score to weak accept. \n\nThanks for the detailed response!", " Thank you for these thoughtful comments. Let us answer your questions one by one:\n\n- We expect the empirical convergence rate of $p$-GD will be very comparable to that of gradient descent. For the final version, we will add the loss curves of $p$-GD with different values of $p$.\n- Training techniques such as weight decay are fully compatible with $p$-GD. We did not include weight decay to isolate the effects of different implicit biases induced by $p$-GD. We mentioned the lack of weight decay only to explain the difference between the performance of GD in our ImageNet experiment vs the standard baseline.\n\nThank you for your suggestion for future experiments. We agree that it will be great to find a task and show that replacing GD with $p$-GD can beat the SOTA, and we will be on the lookout for that.\n\nRe minor question: We currently do not have an explanation on why $p$-GD with $p=3$ performs better, but it certainly merits further investigation in future work.\n", " Thank you for the response. I think that this work should be published in NeurIPS even if the relevance of p-GD remains to be seen. It is likely that other researchers will continue studying the properties of p-GD in future work. ", " I want to thank the authors for the detailed responses. Overall, I am satisfied with the response and lean to accept this paper. However, I have some additional questions regarding the first response. The authors claim that \"one of the goals of our paper is to demonstrate the practicality of $p$-GD for deep learning\". However,\n\n1. How is the (empirical) convergence speed of $p$-GD? An essential evaluation criterion for optimizers is the convergence speed, which is why Adam is still popular while practitioners have found it (usually) generalizes worse than SGD. It will be appreciated if the authors can provide the training loss curves of the experiments in the camera-ready version if accepted.\n\n2. How is the compatibility of $p$-GD with other training techniques? I notice that in Line 707, the authors say \"In particular, we find that not having weight decay costs us around $3\%$ in validation accuracy in the $p = 2$ case\". 
Does weight decay conflict with $p$-GD ($p\ne 2$)?\n\nAll in all, I appreciate the theoretical efforts of this paper (I also find it very interesting (mathematically) if the result can be extended to MD with a larger class of potential functions). However, to attract the practitioners' interest in $p$-GD, the experiments in this paper are still a bit toy (only CIFAR-10 and ImageNet with low training accuracy). It may be a good idea to find a task and show that replacing GD with $p$-GD can achieve SOTA (this is definitely beyond the scope of this paper so please consider this as a suggestion for future work :)). \n\nA minor question: it seems that in the experiments, $p$-GD with $p=3$ outperforms GD consistently. Do the authors have any insight into this (just curious, it is totally fine if you do not know why)?\n", " I find the authors' response and revision regarding the smoothness of the exponential loss to be satisfactory, so as promised, I will increase my score to **accept**.\n\nI am glad to see that the authors have already incorporated some of my suggestions, and I hope that they will incorporate the remainder of them in the camera-ready revision, as I believe that they will improve the clarity of this paper.", " Thank you for your valuable comments and feedback.\n\nAt a high level, you are correct that $p$ can be treated as another hyperparameter. However, we note that one may be able to choose it in a more clever way than through a grid search. In particular, if we have some knowledge about the underlying geometry of the problem, we can pick appropriate values of $p$ as prescribed by our theory. Alternatively, if we desire a certain quality from the learned classifier, e.g., a sparse network or weights with a small dynamic range, we can pick $p$ to induce such properties in the classifier (i.e., $p$ close to 1 or large, respectively).\n\nWe understand that the setting of linear classification may seem restrictive. However, this is already challenging and is a standard setting in the study of implicit regularization, see [1-5]. Additionally, to the best of our knowledge, even in the well-studied case of gradient descent, the current state-of-the-art analysis is concerned with relatively simple models such as two-layer neural networks or linear networks without activation functions, see [6-8], which still require significant effort.\n\n[1] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 2018.\n\n[2] Ziwei Ji, and Matus Telgarsky. \"The implicit bias of gradient descent on nonseparable data.\" In Conference on Learning Theory, pp. 1772-1798. PMLR, 2019.\n\n[3] Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. \"Characterizing implicit bias in terms of optimization geometry.\" In International Conference on Machine Learning. PMLR, 2018.\n\n[4] Muthukumar, Vidya, Adhyyan Narang, Vignesh Subramanian, Mikhail Belkin, Daniel Hsu, and Anant Sahai. \"Classification vs regression in overparameterized regimes: Does the loss function matter?.\" The Journal of Machine Learning Research, 2021.\n\n[5] Ziwei Ji, Nathan Srebro, and Matus Telgarsky. \"Fast margin maximization via dual acceleration.\" In International Conference on Machine Learning. PMLR, 2021.\n\n[6] Suriya Gunasekar, Jason D. Lee, Daniel Soudry, and Nati Srebro. 
"Implicit bias of gradient descent on linear convolutional networks." Advances in Neural Information Processing Systems, 2018.\n\n[7] Lenaic Chizat, and Francis Bach. "Implicit bias of gradient descent for wide two-layer neural networks trained with the logistic loss." In Conference on Learning Theory. PMLR, 2020.\n\n[8] Gal Vardi, and Ohad Shamir. "Implicit regularization in ReLU networks with the square loss." In Conference on Learning Theory. PMLR, 2021.\n", " \nWe thank the reviewer for their valuable comments and suggestions. \n\nLet us first address the reviewer's main concern regarding Lemma 2 (Lemma 3 in the revision). It is true that the exponential loss is not globally smooth whereas the logistic loss has the global smoothness property. However, we'd like to highlight that Lemma 2 only relies on local smoothness at the iterates, and this observation has been made in the literature for the GD case ($p=2$) as well [1, 2]. Below, we briefly explain how this is addressed and argue that the same argument can be applied to our setting as well.\n\nThe key observation is that the exponential loss function is not smooth only when the loss is arbitrarily large. Therefore, based on the value of the loss at the initial iterate, we can upper bound the smoothness of the exponential loss function for all points that have a lower loss value than the initial point. Then, we can apply Lemma 2 to all the iterates of mirror descent. A similar argument has been raised in footnote 2 of [1]. Therefore, the exponential loss function fits under our framework. *And therefore our analysis is general in the sense that it applies to a variety of common loss functions.* To address the reviewer's concern, we have further clarified this point in our first Remark.\n\n\nRegarding the reviewer's suggestions:\n- We made additional remarks about the separability assumption where we introduced our problem setting at line 88.\n- Regarding the comment about the hinge loss, as noted by the reviewer, it attains its minimum at a finite point and does not satisfy the assumption of a monotonically decreasing loss. Therefore, our analysis does not extend to this case, similarly to prior work in the literature [1, 2, 3]. However, the reviewer does raise an interesting point and we would like to investigate this question if it has not already been answered in the literature.\n- We used the notation $u^\textsf{r}_p$ and $u^\textsf{m}_p$ because we feel that the alternative of using hats and overlines is less aesthetically pleasing. We chose to use a different font for the superscripts to reduce the chance of any confusion. If the reviewer deems a different notation more appropriate, we would be happy to change that.\n- The 1-norm is not strictly convex and therefore cannot be used to define a Bregman divergence. This is why we restrict our analysis to the case where $p > 1$.\n- We appreciate the relevant reference mentioned and have added that in the revised version.\n\nRegarding formatting:\n- We have updated the numbering of theorems/definitions/etc as the reviewer recommended.\n- We thank the reviewer for pointing out the typos, and we have corrected them in our revision.\n\nRe limitations: We want to reiterate that, as discussed above, our analysis is quite general and applies to both exponential and logistic losses.\n\n[1] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. 
The Journal of Machine Learning Research, 2018.\n\n[2] Ziwei Ji, Miroslav Dudík, Robert E. Schapire, and Matus Telgarsky. \"Gradient descent follows the regularization path for general losses.\" In Conference on Learning Theory. PMLR, 2020.\n\n[3] Suriya Gunasekar, Jason Lee, Daniel Soudry, and Nathan Srebro. \"Characterizing implicit bias in terms of optimization geometry.\" In International Conference on Machine Learning. PMLR, 2018.\n", " We thank the reviewer for the valuable comments and feedback. \n\nWe would like to first clarify that one of the goals of our paper is to demonstrate the practicality of $p$-GD for deep learning. While mirror descent is a general family of optimization algorithms, it has not been employed in deep learning (other than GD, of course). The primary reason for that is the MD update rule requires applying the inverse of the mirror map, which is computationally expensive, especially for deep neural networks with many parameters. However, the significance of the $p$-GD class of mirror descent for deep learning is that (1) the update rule for $p$-GD becomes fully parallelizable and thus efficiently implementable in the same manner as GD; (2) different $p$ lead to weight vectors with significantly different geometries, and our experiments in Section 4.2 demonstrate that this leads to different generalization performance. \n\nRegarding the synthetic experiment from Soudry et al.: This is exactly what we have done. More specifically, inspired by the experiments in Soudry et al. [1] and Nacson et al. [2], we conducted a set of synthetic experiments in Section 4.1 to verify the convergence to the (generalized) max-margin solution, and those results corroborate with our main theoretical results. \n\nThe biggest challenge in extending the result to a more general potential function is the homogeneity property. With our choice of potential function, if we scale two vectors by the same constants, then their Bregman divergence is also scaled by a constant. However, this property no longer holds for general potential function and therefore it is unclear how to deal with vector normalizations in such cases. \n\n\n[1] Daniel Soudry, Elad Hoffer, Mor Shpigel Nacson, Suriya Gunasekar, and Nathan Srebro. The implicit bias of gradient descent on separable data. The Journal of Machine Learning Research, 2018.\n\n[2] Mor Shpigel Nacson, Jason Lee, Suriya Gunasekar, Pedro Henrique Pamplona Savarese, Nathan Srebro, and Daniel Soudry. \"Convergence of gradient descent on separable data.\" In The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.\n", " We thank the reviewer for their valuable comments and feedback. We would like to first clarify that one of the goals of our paper is to demonstrate the practicality of p-GD for deep learning. While mirror descent is a general family of optimization algorithms, as noted by the reviewer, it has not been adopted in deep learning (other than GD, of course). The primary reason for that is the MD update rule requires applying the inverse of the mirror map, which is computationally expensive, especially for deep neural networks with many parameters. However, the beauty of the special class of p-GD is that (1) the update rule becomes fully parallelizable and thus efficiently implementable in the same manner as GD; (2) different $p$ lead to weight vectors with significantly different geometries, and our experiments in Section 4.2 demonstrate that this leads to different generalization performance. 
Lastly, when we said "out of scope" in Section 4.2, we only meant that our theory does not directly predict the generalization performance of the trained classifier. We have rephrased this statement in the revision.\n\nTo address the minor points:\n- The argmin and argmax in eq (2) and (3), respectively, are unique for $p > 1$. A proof of uniqueness in eq (3) can be found in Appendix B.5. A similar argument works for eq (2).\n- Thank you for spotting the typo in eq. (p-GD). The correct formula should be $|w_t^+[j]|^{\frac{1}{p-1}} \cdot \mathrm{sign}( w_t^+[j])$. So, the output of the update rule is not necessarily positive. We have corrected this in the revised version.\n", " We have uploaded a revision of our paper and highlighted our changes in orange. We hope that our response clarifies any remaining questions and concerns from the reviewers.", " This paper studies the implicit bias of the solution found by mirror descent in the problem of classification with a linear model.\nSimilar problems have been studied for linear regression and/or gradient descent, but not simultaneously for mirror descent and classification.\nTo be fair, mirror descent is a very general optimization method, but here the paper considers a very specific setting, which they call \"p-GD\".\nThis stands for p-norm gradient descent, since (roughly speaking) the (p-1) power of the weights is used to update them.\nPerhaps unsurprisingly, the main theoretical result is that p-GD finds the max-margin solution with respect to the p-norm, thus generalizing the previous finding that GD finds the max-margin wrt the 2-norm.\nThe results of some experiments are shown in support of the theoretical results.\n Strengths:\n- The paper is very clear and well written.\n- The study of the implicit bias of solutions found by optimization is an important problem.\n\nWeaknesses:\n- It is unclear to me how much this \"p-GD\" optimizer is relevant for the machine learning community, I don't know of anyone using it. \nSection 4.2 reports an application of p-GD to deep neural networks and image classification but, as acknowledged by the authors, those results are \"out of the scope of our theoretical results\", so it remains a bit unclear what conclusions we are supposed to draw.\n\nOverall my opinion is that the strengths slightly outweigh the weaknesses, so I vote for a weak accept. \n Minor:\n- How do we know that the argmin in Eq.2 and the argmax in Eq.3 are unique?\n- Eq.(p-GD), first row, w*sign(w) is just the absolute value of w, which implies that the value of the weight w_{t+1} is always positive. \nDoesn't that drastically limit the solutions that can possibly be found by the optimizer?\n OK", " This paper derives the implicit regularization effect for the mirror descent algorithm for linear classification tasks with the exponential-tailed loss and separable data. Specifically, with the potential function being $\Vert * \Vert^p_p$, it is shown that mirror descent converges to the $\ell^p$ max-margin solution. Experiments over linear models and deep neural networks are conducted to support the theoretical findings. Strengths:\n1. This paper is well-written and I enjoy reading the paper.\n2. The theoretical analysis is solid and explicitly explained (the connection with [Ji et al., 2020] is clearly pointed out). The experiment is detailed.\n\nWeakness:\nThe major weakness lies in the motivation of the paper. 
The implicit bias of gradient descent (with momentum) and adaptive first-order optimizers over classification problems [Soudry et al., 2017; Qian et al. 2019; Wang et al., 2021] is studied due to the popularity of these methods in deep learning applications for classification tasks. However, mirror descent is less employed. Therefore, while it is good to know such a result, this problem has limited importance.\n\nReference:\n1. Ji et al. Gradient descent follows the regularization path for general losses, 2020\n2. Soudry et al. The Implicit Bias of Gradient Descent on Separable Data, 2017\n3. Qian et al. The Implicit Bias of AdaGrad on Separable Data, 2019\n4. Wang et al. Does Momentum Change the Implicit Regularization on Separable Data, 2021 1. More discussion on the importance of mirror descent in modern deep learning is appreciated.\n2. In [Soudry et al., 2017], a synthetic dataset is constructed with its max-margin solution known a priori, and the convergence to the max-margin solution can thus be verified directly. It is suggested that the authors apply this methodology to verify the correctness of the theoretical results.\n3. In this paper, special potential functions are considered. What is the difficulty in extending the results to general potential functions? Please refer to the \"weakness\".", " This work studies mirror descent in the classification setting with exponential and logistic losses and $p$-norm (to power $p$) potential functions, proving that when data is linearly separable and the step size is sufficiently small, this method converges to the maximum margin direction under the $p$-norm. The authors also describe rates of convergence as a function of iteration. Lastly, the authors perform numerical experiments demonstrating the claims of the theory for toy linear classification problems, as well as application to common deep network models on CIFAR-10, demonstrating the resulting weights have a distribution that reflects the structure imposed by the implicit norm regularization. **Originality:** \nThis work builds on a rich literature of previous work in theoretical analysis of implicit regularization and mirror descent. The main theoretical contribution appears to be that the authors have extended the analysis of Azizan et al. on mirror descent to the classification case where the loss is minimized only for an infinite-norm solution. While Gunasekar et al. have studied classification previously under exponential loss with steepest descent, this work analyzes mirror descent with separable update rules for the logistic loss, which is a more practically relevant setting. I think this work could do a better job of contrasting itself with previous analyses.\n\n**Quality & Clarity:** \nI believe the theoretical results presented to be technically sound. The biggest issue is that there appears to be some inconsistency with respect to the loss function from result to result, which is a bit confusing to draw conclusions from (I elaborate in the *Questions* section). \n\n**Significance:**\nIt is of critical importance that we understand the implicit biases of our optimization algorithms in this era of overparameterized machine learning, and this work makes an important contribution by demonstrating how $p$-norm regularization can be achieved solely via efficient optimization methods in classification problems. 
In particular, it has been recently shown [1] that fast rates for interpolation in noisy settings require $p < 2$, so it is crucial that we go beyond standard gradient descent, both in theory and in practice. This work is an important step in that direction.\n\n[1] Donhauser et al., \"Fast rates for noisy interpolation require rethinking the effects of inductive bias.\" https://arxiv.org/abs/2203.03597 I have one major question for the authors below. I assume that they will answer it satisfactorily, so I have decided to tentatively give this paper a **weak accept**. Depending on the response, I will increase my score to **accept** or decrease to **reject**.\n\n**Questions:**\n- The authors appear to be somewhat inconsistent regarding which loss function each result applies for. At the start of the paper (line 84), they suggest that most results will apply for exponential loss and logistic loss. However, it appears that the exponential loss cannot satisfy Lemma 2, because $\psi - \eta \exp(\cdot)$ can never be convex. On the other hand, the results on convergence rate are given only for the exponential loss (with proof for the logistic loss left as an exercise for the reader). Can the authors either 1) fix Lemma 2 so that the exponential loss works, or 2) specialize all results to the logistic loss, and omit the exponential loss in result statements?\n\n**Suggestions:**\n- Point out that when $d > n$, linear separability of the data is trivially satisfied. Additionally, comment more on the importance of the separability assumption (allowing interpolation / the loss going to zero).\n- Make a note that while the hinge loss does not satisfy the assumptions (it attains its minimum), it gives the same maximum-margin solution.\n- More clearly define $u_p^\mathsf{m}$ and $u_p^\mathsf{r}$, and make it clear that the superscripts $\mathsf{m}$ and $\mathsf{r}$ are not parameters of the problem such as sample sizes. \n- Explain why the choice of $p = 1$ doesn't work.\n- Add the citation [1] discussed under *Significance* above.\n\n**Formatting/typographical:**\n- Use a unified numbering scheme for theorems/remarks/definitions/etc. With so many, it is difficult to find them.\n- (minor) Tables: use `booktabs` package\n- line 206: \"has exponential tail\" -> \"have exponential tail\"\n- line 251: \"space constraint\" -> \"space constraints\"\n\n***\n\n**Edit after review responses** \n\nThe authors have satisfactorily responded to my main question, so I have increased my score from **weak accept** to **accept**. As mentioned above, the analysis as-is appears to only apply to the logistic loss, even though it is presented in a generic manner that would appear to be able to accommodate other common losses like exponential and hinge. I found no ethical limitations with this work.", " This paper studies the theoretical properties of the convergence point of Mirror Descent algorithms whose potential function is the p-th power of the l_p norm. Moreover, in the case of linearly separable classification, the convergence is shown to be in the direction of the so-called generalized maximum margin. Strengths:\n\n- The paper is theoretically sound, and investigates theoretical characteristics of MD with the more suitable potential function 1/p || . 
||_p^{p}$, which could be applicable to Deep Learning training.\n- The paper studies further characteristics of linearly separable classes, which is an important first-step question to answer.\n- Experiments have been performed to verify the results.\n\n\nWeaknesses:\n\n- This could be interpreted as a question rather than a weakness: how do you choose the best p depending on the problem? Is this a new hyper-parameter to tune, or is there a structured method to find the necessary p? I am mainly raising this issue in terms of experimentation.\n- Studying characteristics of linearly separable problems is interesting, but is there any way that p-GD could be studied for more complicated decision boundaries?\n - To me it seems that if we check multiple values of p, one of them is likely to outperform the p=2 case (SGD), so I was curious if there is a way to do this process better. Also, it would be nice to have accuracy studies in experimentation with other optimization algorithms.\n\n- Is it possible to understand characteristics of p-GD solutions with more complicated decision boundaries?\n\n -" ]
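The reviews above suggest verifying convergence to the $p$-norm max-margin direction numerically on separable data. Below is a minimal sketch of that methodology — not code from the paper under review; the function name, the logistic loss, and all hyperparameters are our own illustrative choices. It runs mirror descent with the potential $\psi(w) = \frac{1}{p}\|w\|_p^p$, whose mirror map is $\nabla\psi(w)_i = \mathrm{sign}(w_i)|w_i|^{p-1}$ with inverse $z_i \mapsto \mathrm{sign}(z_i)|z_i|^{1/(p-1)}$.

```python
import numpy as np

def pgd_step(w, X, y, lr, p):
    """One mirror descent (p-GD) step with potential psi(w) = (1/p) * ||w||_p^p."""
    margins = y * (X @ w)
    neg_sig = np.exp(-np.logaddexp(0.0, margins))      # sigmoid(-margin), numerically stable
    grad = -(X * (y * neg_sig)[:, None]).sum(axis=0)   # gradient of the logistic loss
    z = np.sign(w) * np.abs(w) ** (p - 1) - lr * grad  # step in the dual (mirror) space
    return np.sign(z) * np.abs(z) ** (1.0 / (p - 1))   # map back through (grad psi)^{-1}

rng = np.random.default_rng(0)
p = 3.0
X = rng.normal(size=(50, 2))
y = np.where(X[:, 0] + 0.5 * X[:, 1] > 0, 1.0, -1.0)  # linearly separable by construction

w = np.full(2, 1e-3)  # small nonzero init so the mirror map is well-defined
for _ in range(20000):
    w = pgd_step(w, X, y, lr=0.01, p=p)
print("p-GD direction:", w / np.linalg.norm(w, ord=p))
```

On such a toy problem the normalized iterate can be compared directly against the known $p$-norm max-margin direction, and against the $p=2$ (plain GD) solution, mirroring the check proposed in the review.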
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "jtHM-8KbZyH", "_vRiw-Ero_", "Af58mAVKfF", "jDqygQUjlk5", "tUUAjBY9IZS", "vNbcF0mvbcR", "-YG12kyj4hC", "vJp4SeRPOPr", "8IGO3iPzdo", "ldCobU8Rpm", "mxq_CkMBun", "O9DXIixpyAM", "rOt8CnU0Klq", "nips_2022_0SVOleKNRAU", "nips_2022_0SVOleKNRAU", "nips_2022_0SVOleKNRAU", "nips_2022_0SVOleKNRAU", "nips_2022_0SVOleKNRAU" ]
nips_2022_9u05zr0nhx
DIMES: A Differentiable Meta Solver for Combinatorial Optimization Problems
Recently, deep reinforcement learning (DRL) models have shown promising results in solving NP-hard Combinatorial Optimization (CO) problems. However, most DRL solvers can only scale to a few hundred nodes for combinatorial optimization problems on graphs, such as the Traveling Salesman Problem (TSP). This paper addresses the scalability challenge in large-scale combinatorial optimization by proposing a novel approach, namely, DIMES. Unlike previous DRL methods which suffer from costly autoregressive decoding or iterative refinements of discrete solutions, DIMES introduces a compact continuous space for parameterizing the underlying distribution of candidate solutions. Such a continuous space allows stable REINFORCE-based training and fine-tuning via massively parallel sampling. We further propose a meta-learning framework to enable the effective initialization of model parameters in the fine-tuning stage. Extensive experiments show that DIMES outperforms recent DRL-based methods on large benchmark datasets for Traveling Salesman Problems and Maximal Independent Set problems.
Accept
This paper proposes a differentiable meta-solver applicable to large-scale combinatorial optimization. After a thorough discussion phase, all the reviewers are on the positive side of this paper. The reviewers appreciated the novelty of this paper and the importance of scaling neural combinatorial optimization to large-scale instances. Overall, I recommend acceptance of this paper. However, the reviewers also raised concerns about the presentation of this paper. The gap between generalization and testing performance is not clearly discussed, and the connection to prior works using a continuous latent space should be clearly stated. Since scalability is an important issue, it would be useful to clean up the time/objective comparisons and unify the experimental settings, as suggested by Reviewers fQdp and fe3B.
train
[ "UWYjLP5Axu0", "-Glzk0IaDkB", "AUSaU0GY-a0", "FgHaW-LXrxv", "Od5NVv2NKaB", "tb_7mQqdP9M", "0FC8-ME0apIl", "OWLeMiuivgLR", "OB5icDMhwEp", "PLMwagGUXS_", "PVApuj_Kl5K2", "s3cS73Hhm6k", "1OJx9gpifSDB", "nj5C_rkr1LQ", "RCjO6sksdh", "0dxK52_x6w_", "2J0haKZfP9v", "DtfZJx_gy-c", "H2NQ-lK9GmK" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your further response. \n\nFor point 1, I can understand the authors' viewpoint on the contribution, but still think a new decoding method over the same compact output matrix (and indeed the whole model structure) cannot fully support the claim that it \"introduces a compact continuous space for parameterizing the underlying distribution of candidate solutions\".\n\nFor point 2, thank you for the explanation, and the extremely fast training time is quite surprising and promising. I agree with the authors that adding this discussion and the figure on training dynamic to the revised paper could be very helpful.\n\nI raise my score to 6.", " We thank the reviewer for their insightful follow-up questions, and here are our responses.\n\n**As for point 1:** The main difference between previous “heatmaps” and our “continuous parameterization” is that each value in a previous “heatmap” is the *marginal probability* of an edge to be included in the optimal solution, while in our model $\\theta_{i,j}$ is the *conditional probability* of the next node ($i$) conditioned on the (embedded) partial solution so far (i.e., we use a Markov-chain of order k >= 1). We hope this would make the distinction clearer. Therefore, while the compact output matrix structure is the same, the probability models (which we argue is not simply interpretation) are different.\n\n**As for point 2:** According to our experimental observation, there are three factors that seem to contribute to the fast training of DIMES:\n\na) Meta-learning helps DIMES to have a more stable training process.\n\nAs is shown in Figure 3 in [1], the stability of training seems fairly related to the convergence performance. In our experiments, the loss curve of DIMES is more stable, which may explain its fast convergence. Moreover, the stability of training enables us to use a larger learning rate. (Our learning rate is 1e-3, while that of AM and POMO is 1e-4.) This also accelerates training.\n\nAlthough we do not have a theoretical explanation, we have a conjecture: the inner optimization in meta-learning helps the NN not to produce extreme values of $\\theta$ and hence improves stability. Since the optimal probability distribution should be 1 for the optimal solution and 0 for other solutions, the softmax operation will require the near-optimal parameterization $\\theta$ to have large values. When there is no inner optimization, the model has to produce large values of $\\theta$, so training the model is more likely to be unstable. When using inner optimization, the model does not need to produce large values of $\\theta$, because inner optimization can help push small values of $\\theta$ to large values.\n\nb) The loss function of DIMES seems to have lower variance, making DIMES more sample-efficient.\n\nHere is a comparison among AM, POMO, and DIMES:\n\n| Method | AM | POMO | DIMES |\n|-|-|-|-|\n| Training scale | TSP-100 | TSP-100 | TSP-500/1000/10000 |\n| Total descent steps of GNN | 250,000 | 312,600 | 120/120/50 |\n| Batch size | 512 | 64 | 3 |\n| Total training instances | 128,000,000 | 20,000,000 | 360/360/150 |\n| Training GPUs | 2 | 1 | 1 |\n| Per-step training time | 0.66s | ~0.28s | ~45s/51s/12m |\n| Total training time | ~46h | ~1 day | ~1.5h/1.7h/10h |\n\n(Note: Per-step training time is calculated based on total training time reported in the papers.)\n\nThe table shows that DIMES is more sample-efficient than AM/POMO, achieving stable training using only 3 instances per meta-gradient descent step. 
Hence, its total training time is accordingly much shorter, even though its per-step time is longer.\n\nc) The sampling scheme of DIMES is more scalable, which requires less GNN computation than AM/POMO.\n\nAM/POMO requires $n$ times of GNN computation to sample a solution on TSP-$n$, because it has to update context embedding at each autoregressive step. For DIMES, it needs GNN computation only once to compute $\\theta$ in a non-autoregressive manner, and then it autoregressively samples solutions using $\\theta^{(T)}$ without GNN re-computation. Hence, the sampling scheme of DIMES is more scalable. Besides, we implement the sampling procedures in C++ LibTorch to speed up further.\n\nWe will add a figure on training dynamics to the revised version of the paper to make the information more complete.\n\n[1] Kwon et al. POMO: Policy Optimization with Multiple Optima for Reinforcement Learning\n", " Thank the authors for the detailed feedback. I will retain my borderline accept score.\n\nI have an additional suggestion: the time/objective in your tables seems kind of messy. Is it possible to update the results by adjusting the running time of most methods? I think it will serve as a better benchmark for following research works on TSP.", " We thank the reviewer for their constructive feedback. Here’re our responses to the reviewer’s further comments.\n\nAs for point 1: The main difference between previous “heatmaps” and our “continuous parameterization” is that each value in a previous “heatmap” is the **marginal probability** of an edge to be included in the optimal solution, while in our model $\\theta_{i,j}$ is the **conditional probability** of the next node ($i$) conditioned on the (embedded) partial solution so far (i.e., we use a Markov-chain of order k >= 1). We hope this would make the distinction clearer.\n\nAs for point 2: Thanks for your comment. We will adopt your suggestion.\n\nAs for points 3, 4, and 8: To make our experimental results more informative, we further evaluate the performance of DIMES trained on TSP-100 and evaluated on larger graphs. \n\n| Method | Type | TSP-500 | TSP-1000 | TSP-10000 |\n|-|-|-|-|-|\n| LKH-3 | OR | 16.55\\* | 23.12\\* | 71.77\\* |\n| EAN | RL+S | 28.63 | 50.30 | OOM |\n| EAN | RL+S+2-OPT | 23.75 | 47.73 | OOM |\n| AM | RL+S | 22.64 | 42.80 | 431.58 |\n| AM | RL+G | 20.02 | 31.15 | 141.68 |\n| AM | RL+BS | 19.53 | 29.90 | 129.40 |\n| GCN | SL+G | 29.72 | 48.62 | OOM |\n| GCN | SL+BS | 30.37 | 51.26 | OOM |\n| POMO+EAS-Emb | RL+AS | 19.24 | OOM | OOM |\n| POMO+EAS-Lay | RL+AS | 19.35 | OOM | OOM |\n| POMO+EAS-Tab | RL+AS | 24.54 | 49.56 | OOM |\n| Att-GCN | SL+MCTS | 16.97 | 23.86 | 74.93 |\n| DIMES trained on TSP-n | RL+S | 18.84 | 26.36 | 85.75 |\n| DIMES trained on TSP-100 | RL+S | 19.21 | 27.21 | 86.24 |\n\nWe can see that the performance of DIMES does not drop too much. One of our hypotheses is that the graph sparsification schema in our neural network (See appendix C.1) avoids the explosion of activation values in the graph neural network. Another hypothesis is that meta learning tends to not generate too extreme values in $\\theta$ (see point 9 of our previous response) and hence improve the generalization capability.\n\nOne may wonder how to make the previous methods scale up to TSP-500/1000/10000, and if such scaling were successful, what would be the relative performance of those methods compared to DIMES? Answering these questions requires further research, and is beyond the scope of this paper. 
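To make factor (c) concrete, here is a minimal sketch of the decode-once-then-sample-many scheme described above. This is our illustration, not the authors' released C++ LibTorch code: the function name `sample_tours` and the convention of starting every tour at node 0 are our simplifications. The key point it shows is that the GNN is invoked only once to produce $\theta$, after which tours are sampled autoregressively from $\theta$ alone, masking visited nodes.

```python
import torch

@torch.no_grad()
def sample_tours(theta: torch.Tensor, n_samples: int) -> torch.Tensor:
    """theta: (n, n) continuous parameterization from one GNN pass; returns (n_samples, n) tours."""
    n = theta.size(0)
    tours = torch.zeros(n_samples, n, dtype=torch.long)
    visited = torch.zeros(n_samples, n, dtype=torch.bool)
    cur = torch.zeros(n_samples, dtype=torch.long)   # all tours start at node 0
    visited[:, 0] = True
    for step in range(1, n):
        logits = theta[cur]                          # (n_samples, n); no GNN recomputation here
        logits = logits.masked_fill(visited, float("-inf"))
        probs = torch.softmax(logits, dim=-1)        # renormalize over unvisited nodes
        nxt = torch.multinomial(probs, 1).squeeze(-1)
        tours[:, step] = nxt
        visited.scatter_(1, nxt.unsqueeze(-1), True)
        cur = nxt
    return tours

theta = torch.randn(10, 10)  # stand-in for a GNN output on a TSP-10 instance
print(sample_tours(theta, 4))
```

Because the loop touches only $\theta$, many samples can be drawn in parallel on GPU, which is what makes the per-instance sampling cheap relative to AM/POMO-style decoding that re-runs the network at every step.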
\n", " I appreciate the detailed comments from the authors.\n\nAlthough I was concerned about the generalization ability when I reviewed the paper, I can understand what the authors stated at (1), and I now think that the proposed framework has a kind of generalization ability (it is still not clear which types of problems could fit).\n\nI also appreciate comments and updates (2)-(4) and (6)-(7). They increased the readability of the paper.\n\nIn terms of (5), I think we have two types of generalization abilities for learning-based heuristics in the literature. In some NN-based studies, they discussed a size-related property (learning NNs on TSP-50, and using the learned NNs for different sizes like TSP-100). However, for optimization problems, we have a different type: how to generate instances (for example, TSP-50 with different location distributions). I think the word _distributions_ has wider meanings, so I recommend that the authors clarify them if possible.\n\nAnyway, thanks again for the comments with experimental results, which clarify my concerns.", " Thank you for your thorough response. I've also read other reviewers' comments and have some follow-up questions.\n\n**1. Heatmap v.s. Continuous Parameterization**\n\nThank you for the clarification. However, the proposed method uses exactly the same models proposed in previous works for TSP/MIS, which have already developed the compact n*n output. I can understand your claim that the interpretation of the output matrix could be different due to the different decoding methods. But since the compact output matrix structure is from previous work, I still think the main contribution of \"introduces a compact continuous space for parameterizing the underlying distribution of candidate solutions\" is overclaimed. \n\n**2/5. Run Time and Training Time**\n \nNow the runtime reported in Table 1 makes sense. \n\nA follow-up question is about the extremely fast training time for DIMES (1.5h - 10h). In [1], the authors report that training on TSP200 could be extremely challenging, which did not converge after 500 hours. The POMO paper [2] reports that it needs one week (168 hours) to observe full convergence on TSP100. Why does DIMES only need 1.5h - 10h as the training time for much larger problems such as TSP500/1000/10000? \n\n\nReference\n\n[1] Chaitanya K Joshi, Quentin Cappart, Louis-Martin Rousseau, Thomas Laurent, and Xavier Bresson. Learning TSP requires rethinking generalization. arXiv preprint arXiv:2006.07054, 2020.\n\n[2] Yeong-Dae Kwon, Jinho Choo, Byoungjip Kim, Iljoo Yoon, Youngjune Gwon, and Seungjai Min. POMO: Policy optimization with multiple optima for reinforcement learning. NeurIPS 2020.", " I thank the reviewers for their detailed responses. I’ve commented on these point by point below, but in summary, I still retain some reservations regarding DIMES’ novelty claims regarding “heatmaps” vs “continuous distributions”, and the way in which baselines are chosen/discussed. With that said, the most significant concern I had (that the main results table simply presented unfeasible timings) has been addressed. Therefore I am happy to increase my score to a 5, as the work does present interesting ideas and I now trust the results are valid/presented in good faith.\n\n1. I’m afraid this is still not clear to me. My understanding is that, taking the definition of the sampling distribution for the next node in a TSP tour (eq. 8), if you are at node $i$, the probability of going to node $j$ next is $\propto \mathrm{exp}(\theta_{i,j})$. 
The “autoregression” (dependence on the nodes selected at positions 1,…,i-1) only appears as (i) masking the already visited nodes (invalid actions) and (ii) normalising the overall sampling distribution. Both of these dependencies are trivial and don’t amount to meaningful insights (indeed, surely they must also be used when sampling from heatmaps in prior works, otherwise the tours generated would not be valid/distribution would be un-normalised). Whilst you contrast your approach to heatmaps as \"the values in the heatmap correspond to the estimated probability of each edge to be included in the optimal solution” in the rebuttal, the paper says of DIMES “ the higher valued $\\theta_{i,j}$ means the higher probability for the edge from node $i$ to node $j$ to be sampled”. I’m more than happy for my misunderstanding to be clarified, but right now I stand by my original view.\n\n2. Understood. I would note that perhaps deleting the second comma in: “To our knowledge, we are the first to apply meta-learning over a collection of CO problem instances, where each instance graph is treated as one of a collection tasks a sub-task in a unified framework.” would make this more clear (as Zhang et al are also “applying meta-learning to a collection of problem instances”).\n\n3. I’m glad the extra context will be included in the revision. However, I don’t see why Driori et al is more limited in scope than DIMES as the authors claim. Both are decoded autoregressivley (whereas DIMES uses the masked “compact parameterisation” to encode the action probs given a trajectory + node, Driori et al uses masked dot-product attention). Moreover, without further details on the training costs of baselines, comparing DIMES fine-tuned on larger instances to the generalisation performance of other methods does not strictly show DIMES to be a better approach.\n\n4. Thank you for these results, which are impressive and present DIMES in a positive light. However, again, they conflate generalisation ability with performance. I don’t question that it is great DIMES can be trained on big instances and meta-learn for even more performant policies. However, the paper is comparing DIMES to the performance of models trained on smaller TSP problems without sufficient context for how large of a disadvantage this is. To be concrete, Table 1 has DIMES with some greedy decoding (18.93 on TSP-500) beating larger Transformer-based models (e.g. AM) and even the TSP active search SOTA (POMO-EAS). I would strongly expect that if these models were trained on TSP-500 instances, they would be highly competitive with DIMES. None of this is to criticize DIMES — of course, the feasibility of training is important — but I do think that the current sentence added to address this “Note that baselines are trained on small graphs and evaluated on large graphs, while DIMES can be trained directly on large graphs.” is insufficient as it really could be argued that none of the learning baselines in Table 1 are a fair direct comparison (e.g. could the models be trained for the same amount of time and then evaluated?).\n\n5. I understand the challenge — my primary reason for suspicion was the odd timings discussed in your response to 7, with your clarification on that point I am happy to accept the timings are presented in good faith.\n\n6. Understood — thank you.\n\n7. Ah wow — that makes a big difference but looks much more reasonable now! \n\n8. 
I appreciate the additional results, however my point was (as elaborated in response 4) that table 1 could be mis-leading as it compares generalisation to direct performance.\n\n9. I am grateful to the authors for addressing my curiosity and certainly this is not a critical point with respect evaluating the paper.\n\n10. Understood — thank you.", " > 5. “The authors only rerun one of the presented baselines (Att-GCN) on their hardware, so timing results for other baselines taken from Fu et al are potentially misleading.”\n\nDue to the time limit and unreproducibility of some baselines, we mainly report baseline timing results from Fu et al. However, since Att-GCN (Fu et al.,) significantly outperforms all existing baselines in the large-scale setting, we believe such a comparison can still highlight the value of our work.\n\nBesides, we have tried our best to ensure a fair comparison. Note that only MCTS is evaluated on CPU, while other parts of Att-GCN and DIMES, as well as other learning-based methods, are evaluated on GPU. In our experiments, we used the same GPU (1080 Ti) as Fu et al., so comparing the time of our GPU experiments with those in Fu et al. is already fair. However, we were not able to find the same CPU as that of Fu et al. So, we re-ran the MCTS of Att-GCN on our CPU to ensure a fair comparison between the MCTS times of DIMES and Att-GCN.\n\n> 6. “In Table 3, why is LwD with sampling (RL+S) not included? As LwD is better than DIMES in greedy mode, but DIMES is better with sampling, would LwD with sampling not be expected to be best of all?”\n\nThe LwD method also evaluated their models with sampling, as mentioned in “To this end, we evaluate algorithms on ER-[400, 500] and SATLIB datasets with varying numbers of samples or time limits.” of the LwD paper. We now have fixed its keyword in Table 1, listing LwD as an RL+Sampling method instead of RL+Greedy.\n\n> 7. Regarding the fine-tuning of DIMES in table 1: (i) Is the fine-tuning time included in the run time (I would have expected so, but then I am surprised how, for example, DIMES is faster the AM with equivalent settings if it has to be fine-tuned first)? (ii) Which elements of the model are fine-tuned (as four variants are shown in Table 2(b) it is clear there is some choice to be made)?\n\nWe thank the reviewer for the great question. The runtime of DIMES should be total fine-tuning time + total MCTS time. However, we mistakenly reported average fine-tuning time per instance + total MCTS time. We have fixed it in the revised version of the paper and report the results of both un-finetuned (w/o active search) and finetuned (w/ active) models.\n\n> 8. “If all baselines were taken from Fu et al, can you comment on whether the training of these was a fair comparison to DIMES? For example, DIMES is trained on the target problem sizes but it is common to present results on larger TSP instances of models trained on smaller problems.”\n\nA main advantage of DIMES is that it can scale to large-scale graphs. On the other hand, we agree with you that comparing DIMES with other less-scalable methods on small graphs is also informative. So, we report the additional results of DIMES on TSP-100 in Appendix F.1. We can see that DIMES outperforms the state-of-the-art Att-GCN in terms of the optimality gap (0.0103% v.s. 0.0370%). In fact, the performance of both models are already very close to the optimal solutions.\n \n> 9. 
“In the meta-learning ablations, why would training the network using meta-learning (targeted to give good performance after T steps of fine-tuning on a specific instance), give better performance when T=10 during training and T=0 at inference (i.e why is row 2 of Table 2a better than row 1?)”\n\nThanks for the insightful question. We have a hypothesis but not sure:\n\nIn case I (training T=0 and inference T=0), since the optimal probability distribution should be 1 for the optimal solution and 0 for other solutions, the softmax operation will require the near-optimal parameterization $\\theta$ to have large output values, which is difficult to be learned or produced by a Lipschitz-continuous neural network.\n\nIn case II (training T=10 and inference T=0), the model is using an inner optimization (i.e., T inner steps during training) in the objective and does not need to produce the large values of $\\theta$, because the inner gradient updates will push small values produced by the neural network to large values. Hence, meta updates (with training T=10) help regardless of whether fine-tuning is used in the evaluation.\n\n> 10. Why is S2V-DQN not used as a baseline in Table 3, when it is stated to be a baseline on line 297?\n\nThis is a typo. We have removed it from the paper.\n\n[1] Drori, Iddo, et al. \"Learning to solve combinatorial optimization problems on real-world graphs in linear time.\" 2020 19th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 2020.\n", " We thank the reviewer for their time, insightful comments, and questions. We have provided our responses below.\n\n> 1. “I am not convinced by the authors claim that DIMES “introduces a compact continuous space for parameterizing the underlying distribution of candidate solutions” is novel or significant, nor that this new approach “addresses the scalability challenge in large-scale combinatorial optimization” more so than previous works.”\n\nThe heatmaps in previous works are generated in a fully non-autoregressive manner, where the values in the heatmap correspond to the estimated probability of each edge to be included in the optimal solution. In our work, the n-by-n matrix $\\theta$ is used to parameterize distribution $q$, in an autoregressive manner, in estimating the probability of the next node conditioned on the path (partial solution) so far. That is, the elements in our $\\theta$ matrix are not the estimated probabilities of edges. In other words, although the heatmaps in previous work look like our $\\theta$ syntactically (as both are in the form of an n-by-n matrix), the semantics behind them are fundamentally different. To avoid possible confusion, we have revised our paper to make the distinction between “heatmap” in previous work and our novel approach clearer, by calling our $\\theta$ matrix “continuous parameterization” instead of “heatmap”.\n\n> 2. “Whilst it was also novel to my knowledge, a quick search did show that the authors might want to consider softening their claim (line 51) that \"we are the first to apply meta-learning over a collection of CO problem instances” (e.g. https://arxiv.org/abs/2105.02741).”\n\nWe thank the reviewer for the comment. Notice that Zhang et al. applies meta learning to multiobjective optimization where each objective is treated as an optimization task. 
However, what we wrote in the paper is “To our knowledge, we are the first to apply meta-learning over a collection of CO problem instances, where each instance graph is treated as a sub-task, in a unified framework.” We believe such a claim is still valid.\n\n> 3. “In this vein, another work the authors may consider relevant is that of Drori et al (https://arxiv.org/pdf/2006.03750) who use dot-product attention between graph embeddings as an alternative parameterisation of a heatmap and demonstrate impressive performance on TSP (outperforming Farthest/2-opt heuristics on most “real world” graphs of up to ~125k nodes).”\n\nWe agree with the reviewer that [1] is a work relevant to ours and have included it in the citation list. Notice that one difference between [1] and DIMES is that [1] is only trained on TSP-100 and evaluated on larger graphs, while DIMES can be directly trained on large graphs with the help of meta updates, and that DIMES achieves an optimality gap of 7.61% on TSP-500, while [1] achieved an optimality gap of ~10% on TSP-200. More importantly, [1] is only designed for CO problems that can be transformed into a sequential decoding problem (such as routing), while DIMES can be applied to other general CO problems (such as MIS). We will add a paragraph to discuss those points about [1] vs DIMES in the 10-page version of the paper.\n\n> 4. “The attention model of Kool et al [34] is used as an RL baseline. However, this approach has been significantly improved in POMO (https://arxiv.org/abs/2010.16011) and then subsequently adapted for active search at test time (https://arxiv.org/abs/2106.05126).”\nWe ran the POMO+EAS code by Hottung et al. with the POMO model trained on TSP100. The experiments ran on a 16GB GPU with batch size 1. Here are the results:\n\n| Setting | TSP500 | TSP1000| TSP10000|\n| ----------- | ----------- | ----------- | ----------- |\n| POMO+EAS-Emb | 19.24 | OOM | OOM|\n| POMO+EAS-Lay | 19.35 | OOM | OOM|\n| POMO+EAS-Tab | 24.54 | 49.56 | OOM|\n| DIMES | 16.84 | 23.69 | 74.06|\n\nwhere “OOM” indicates “out of GPU memory”. The results suggest that POMO+EAS may not work well on larger sizes due to distribution shift and high memory consumption. We have added those details in Table 1 of the paper.\n", " > 5. “DIMES needs to use RL to directly train deep GNN for large-scale problem instances with up to 100,000 nodes. Is there any challenge for such training? How long will DIMES take to converge for TSP500/1000/10000 (#training instances and wall-clock time)?”\n\nIn this paper, we only tried on graphs with up to 10,000 nodes, while we do not see obvious challenges in applying DIMES to larger graphs (e.g., 100,000 nodes).\nThe time of 1.5h - 10h for DIMES is training time (i.e., time of meta-updates) on training instances (not on test instances). Please see Appendix D.1 for the details.\n\n> 6. “TSP/MIS is a good testbed for neural combinatorial optimization. But the real-world applications will typically have problems with various structures that can not be solved by classical solvers. This is an important motivation for learning-based solvers without domain knowledge. 
Can DIMES generalize to other routing problems such as those in the AM paper?”\n\nThe generality of our unified framework is based on the assumption that each feasible solution of the CO problem on hand can be represented with a vector of 0/1-valued variables (typically corresponding to the selection of a subset of nodes or edges), which is fairly mild and generally applicable to many CO problems beyond MIS and TSP (see Karp's 21 problems) with few modifications. The design principle of the auxiliary distribution is to design an autoregressive model that can sequentially grow a partial solution toward a valid complete solution. This design principle is also proven to be generic enough for many problems in neural learning, including CO solvers.\nAs for problems beyond this assumption (as discussed in Section 5), Mixed Integer Programming (MIP) is an example, where the variables can take multiple integer values instead of binary ones. However, such a limitation can be addressed by reducing each integer value (x) to a sequence of bits (log x) and by predicting the bits one after another. In other words, a multi-valued MIP problem can be reduced to a binary-valued MIP problem with more variables, as is shown in [1].\n[1]: Solving Mixed Integer Programs Using Neural Networks\n\n> 7. \"Citation [42] and [43] in the paper are for the same AM paper but with two different years (the 2019 one is correct).\"\n\nThanks for pointing it out. We have fixed this.", " We greatly appreciate the enthusiasm and interest the reviewer has shown towards our work. We have provided our responses below.\n\n> 1. \"Inaccurate Contribution: One claimed contribution of this work is the compact continuous parameterization of the solution space. However, as discussed in the paper, DIMES directly uses the widely-used GNN models to generate the solution heatmap for TSP[1,2] and MIS[3] problems, respectively. The credit for compact continuous parameterization should be given to the previous work [1,2,3] but not this work.”\n\nThe heatmaps in previous works are generated in a fully non-autoregressive manner, where the values in the heatmap correspond to the estimated probability of each edge to be included in the optimal solution. In our work, the n-by-n matrix $\theta$ is used to parameterize distribution $q$, in an autoregressive manner, in estimating the probability of the next node conditioned on the path (partial solution) so far. That is, the elements in our $\theta$ matrix are not the estimated probabilities of edges. In other words, although the heatmaps in previous work look like our $\theta$ syntactically (as both are in the form of an n-by-n matrix), the semantics behind them are fundamentally different. To avoid possible confusion, we have revised our paper to make the distinction between “heatmap” in previous work and our novel approach clearer, by calling our $\theta$ matrix “continuous parameterization” instead of “heatmap”.\n\t\n> 2. \"Actual Cost of Meta-Learning: The meta-learning (meta-update/fine-tuning) approach is crucial for the proposed method's promising performance. However, its actual cost has not been clearly discussed in the main paper.”\nThe time of 1.5h - 10h for DIMES is training time (i.e., time of meta-updates) on training instances (not on test examples), which should not be in Table 1. The evaluation runtime of DIMES in Table 1 consists of fine-tuning steps and MCTS on test instances. 
Therefore, it is a fair comparison to previous work.\n\nHowever, we found that we had reported the wrong runtime for our DIMES model by mistake. The runtime of DIMES should be total fine-tuning time + total MCTS time. But we mistakenly reported average fine-tuning time per instance + total MCTS time. We have fixed it in the revised version of the paper and report the results of both un-finetuned (w/o active search) and finetuned (w/ active search) models.\n\n> 3. \"Generalization v.s. Testing Performance: To my understanding, all the other learning-based methods in Table 1 are trained on TSP100 instances but not TSP500-TSP10000 as for DIMES. Therefore, the results reported in Table 1 are actually their out-of-distribution generalization performance.” & “In addition, it is also interesting to see a comparison of DIMES with other methods on TSP100 (in-distribution testing performance) with/without meta-learning.”\n\nYou are right: for other DRL methods in Table 1, they are trained on TSP-100 and evaluated on TSP-500, 1000, and 10000. We agree a direct comparison of DIMES and other DRL methods on TSP-100 is informative. The detailed comparison is included in Appendix F.1 of the paper. We can see that DIMES outperforms the state-of-the-art Att-GCN in terms of the optimality gap (0.0103% v.s. 0.0370%). However, the performance of both models is already very close to the optimal solutions.\n\nBesides, we added into Section 4.1.2 Line 241-243 a clarification that previous methods are trained on small graphs while DIMES is trained on large graphs.\n\n> 4. \"Hottung et al.[5] shows that POMO + Efficient Active Search (EAS) can achieve promising generalization performance for larger TSP instances on TSP and CVRP. The comparison with POMO + EAS could be important to better evaluate the advantage of meta-learning in DIMES.”\n\nWe ran the POMO+EAS code by Hottung et al. with the POMO model trained on TSP100. The experiments ran on a 16GB GPU with batch size 1. Here are the results:\n| Setting | TSP500 | TSP1000| TSP10000|\n| ----------- | ----------- | ----------- | ----------- |\n| POMO+EAS-Emb | 19.24 | OOM | OOM|\n| POMO+EAS-Lay | 19.35 | OOM | OOM|\n| POMO+EAS-Tab | 24.54 | 49.56 | OOM|\n| DIMES | 16.84 | 23.69 | 74.06|\n\nwhere “OOM” indicates “out of GPU memory”. The results suggest that POMO+EAS may not work well on larger sizes due to distribution shift and high memory consumption. We have added those details in Table 1 of the paper.\n
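The responses above repeatedly invoke the 0/1-vector solution representation and the sequential-growth design principle for the auxiliary distribution. The sketch below is our own illustration of that principle instantiated for MIS (the function `grow_mis` and the uniform `theta` are hypothetical, not from the paper): a partial solution is grown one node at a time, and choices that would break feasibility are masked out, so every completed sample is a valid independent set.

```python
import numpy as np

def grow_mis(adj: np.ndarray, theta: np.ndarray, rng) -> np.ndarray:
    """adj: (n, n) 0/1 adjacency; theta: (n,) per-node scores; returns a 0/1 solution vector."""
    n = len(theta)
    x = np.zeros(n, dtype=int)            # the binary solution vector
    feasible = np.ones(n, dtype=bool)     # nodes that can still be added
    while feasible.any():
        logits = np.where(feasible, theta, -np.inf)
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        v = rng.choice(n, p=probs)        # sample the next node among unmasked ones
        x[v] = 1
        feasible[v] = False
        feasible[adj[v] == 1] = False     # neighbors of v can no longer be chosen
    return x

rng = np.random.default_rng(0)
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
theta = np.zeros(4)
print(grow_mis(adj, theta, rng))  # 0/1 indicator of a maximal independent set
```

The same masking-based construction carries over to other subset-selection CO problems that fit the 0/1-variable assumption, with only the feasibility update changing per problem.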
Do you have any ideas or findings?”\n\nThe insight of applying learning algorithms to combinatorial optimization problems is that after training over a set of instance problems (graphs), the learned neural network can achieve better average performance than the algorithms without such learning. In other words, the learned neural network has the functionality of sharing the learned knowledge (search strategies based on the estimated $q$ distribution) across graphs, including those not seen in the training set. Therefore, as long as the training instances and test instances are from the same underlying distribution and share some of factorized parameters, the learning algorithms should work well. Since the training cost is off-line, a well-trained model can be faster (less resource-consuming) than untuned heuristic algorithms in the testing phase (which is the focus in Table 1).\n\n> 6. “The dimension T (of MAML) is fixed. Is this tunable? (or need to be tuned to get better solutions?). In the ablation study (c), it seems that learning more by MAML produces better solutions. Is this interpretation true?”\n\nAs shown in our ablation study (Table 2c), with the number of inner gradient updates $T$ increasing, the testing-phase performance improves accordingly. However, this also consumes more training time. Hence, there is a trade-off between performance and training time in practice.\n\n> 7. “what is DIR-based? solves => solves. Some other english errors should be updated for readability.”\n\nThanks for pointing out the typos. We have fixed them.\n", " We appreciate the in-depth questions and suggestions given by the reviewer. We have provided our responses below.\n> 1. “The generalization ability when designing auxiliary distributions is unclear.” & “I feel that the proposed concept is interesting to investigate, and I'm interested in how to design auxiliary functions (this is also noted in sec.5 by the authors). I also read proofs in the supplemental material. I wonder if we can design such functions for various CO problems…”\n\nThe generality of our unified framework is based on the assumption that each feasible solution of the CO problem on hand can be represented with a vector of 0/1 valued variables (typically corresponding the selection of a subset of nodes or edges), which is fairly mild and generally applicable to many CO problems beyond MIS and TSP (see Karp's 21 problems) with few modifications. The design principle of auxiliary distribution is to design an autoregressive model that can sequentially grow a partial solution toward a valid complete solution. This design principle is also proven to be generic enough for many problems in neural learning, including CO solvers.\nAs for problems beyond this assumption (as discussed in Section 5), Mixed Integer Programming (MIP) is an example, where the variables can take multiple integer values instead of binary. However, such a limitation can be addressed by reducing each integer value (x) to a sequence of bits (log x) and by predicting the bits one after another. In other words, a multi-valued MIP problem can be reduced to a few binary-valued MIP problems, as shown in [1].\n[1]: Solving Mixed Integer Programs Using Neural Networks\n\n> 2. The discussion and background of MAML (e.g., the idea of T gradient updates) are a bit hard to follow.\n\nWe have added a short description of MAML to Section 3.3 (line 170-174) to make the background of MAML clearer.\n\n> 3. 
“LKH-3 seems to solve TSP instances (size from 9847 to 16862) efficiently (from 381[s] to 976[s]). I wonder what is the differences between such traditional benchmark TSP instances (like ja9847) and those generated by Fu et al. [16] (TSP-1000/TSP-10000)...”\n\nYou are right. Following the previous work, we used the default parameters of LKH-3 for all experiments, where the number of max trials is 10000.\n\nIn our new experiments, we find that LKH-3 with fewer trials indeed still achieves strong results. Specifically, when decreasing the number of max trials to 500 for TSP500, 500 for TSP1000, and 250 for TSP10000 to reduce the running times of LKH-3 to match those of DIMES+MCTS, we got the following results of LKH-3:\n| Setting | TSP500 | TSP1000| TSP10000|\n| ----------- | ----------- | ----------- | ----------- |\n| LKH-3 (10000 max trials) | 16.55 | 23.12 | 71.77|\n| LKH-3 (fewer trials) | 16.55 | 23.12 | 71.79|\n| DIMES | 16.84 | 23.69 | 74.06|\n\nWe can see that LKH-3 achieves the same performance with fewer trials except for the large-scale TSP10000 task. This shows that expert-designed solvers with careful parameter tuning can still outperform learning-based methods. We have included these additional results in our paper.", " We thank the reviewer for their time, insightful comments, and questions. We have provided our responses below.\n\n> 1. “How many trials are configured for LKH-3? Can you decrease the number of trials so that LKH-3 can be comparatively faster than DIMES, and report the time and tour lengths?”\n\nFollowing the previous work, we used the default parameters of LKH-3 for all experiments, where the number of max trials is 10000.\nSpecifically, when decreasing the number of max trials to 500 for TSP500, 500 for TSP1000, and 250 for TSP10000 to reduce the running times of LKH-3 to match those of DIMES+MCTS, we got the following results of LKH-3:\n\n| Setting | TSP500 | TSP1000| TSP10000|\n| ----------- | ----------- | ----------- | ----------- |\n| LKH-3 (10000 max trials) | 16.55 | 23.12 | 71.77|\n| LKH-3 (fewer trials) | 16.55 | 23.12 | 71.79|\n| DIMES | 16.84 | 23.69 | 74.06|\n\nWe can see that LKH-3 achieves the same performance with fewer trials except for the large-scale TSP10000 task. This shows that expert-designed solvers with careful parameter tuning can still outperform learning-based methods. We have included these additional results in our paper.\n\n> 2. “In L149 the authors write: \"we also no longer need costly MCMC-based sampling for optimizing our model\". If sampling is not needed, how do you estimate the expected cost value after the neural network predicts q?”\n\nWe meant that we use autoregressive factorization with sampling from the auxiliary distribution, which is faster than sampling with MCMC from the distribution defined by the energy function. We have modified our sentence (page 4 line 149) to make this point non-ambiguous.\n\n> 3. “What is the application scope of the proposed method? Beyond routing and independent set, can you list some other types of problems that the proposed method can tackle and some that may be hard to handle?”\n\nThe generality of our unified framework is based on the assumption that each feasible solution of the CO problem on hand can be represented with a vector of 0/1-valued variables (typically corresponding to the selection of a subset of nodes or edges), which is fairly mild and generally applicable to many CO problems beyond MIS and TSP (see Karp's 21 problems) with few modifications. 
The design principle of the auxiliary distribution is to design an autoregressive model that can sequentially grow a partial solution toward a valid complete solution. This design principle is also proven to be generic enough for many problems in neural learning, including CO solvers.\n\nAs for problems beyond this assumption (as discussed in Section 5), Mixed Integer Programming (MIP) is an example, where the variables can take multiple integer values instead of binary ones. However, such a limitation can be addressed by reducing each integer value (x) to a sequence of bits (log x) and by predicting the bits one after another. In other words, a multi-valued MIP problem can be reduced to a few binary-valued MIP problems, as shown in [1].\n[1]: Solving Mixed Integer Programs Using Neural Networks\n\n> 4. “Is there an ablation study for with/without meta-learning?”\n\nPlease see Table 2 (a) and Section 4.1.3 for our ablation study on meta-learning.\n", " We thank all the reviewers for their precious time and insightful comments. We appreciate that the reviewers recognize our work as well-motivated (Reviewer fQdp, Reviewer fe3B), technically sound (Reviewer fQdp), scalable (Reviewer fQdp, Reviewer LJJE, Reviewer dJ85), and a timely contribution (Reviewer dJ85). To improve the paper quality, we respond to the reviewers' comments by making the following major revisions to the paper:\n\n1. Following the reviewers' suggestions, we include the results of LKH-3 with fewer trials and POMO + EAS as additional baselines in Table 1. We can see that DIMES is still a state-of-the-art learning method.\n\n2. We add a comparison of DIMES and other methods on TSP-100 in Appendix F.1 of the paper. We can see that DIMES still outperforms the state-of-the-art Att-GCN in terms of the optimality gap (0.0103% v.s. 0.0370%). In fact, the performance of both models is already very close to the optimal solutions.\n\n3. “On the novelty of our proposed continuous compact parameterization compared to the heatmap method in previous work.”\n\nThe heatmaps in previous works are generated in a fully non-autoregressive manner, where the values in the heatmap correspond to the estimated probability of each edge to be included in the optimal solution. In our work, the n-by-n matrix $\theta$ is used to parameterize distribution $q$, in an autoregressive manner, in estimating the probability of the next node conditioned on the path (partial solution) so far. That is, the elements in our $\theta$ matrix are not the estimated probabilities of edges. In other words, although the heatmaps in previous work look like our $\theta$ syntactically (as both are in the form of an n-by-n matrix), the semantics behind them are fundamentally different. To avoid possible confusion, we have revised our paper to make the distinction between “heatmap” in previous work and our novel approach clearer, by calling our $\theta$ matrix only “continuous parameterization” instead of “heatmap”.\n\n4. We have added a short description of MAML to Section 3.3 (line 170-174) to make the background of MAML clearer.\n", " This paper improves existing DRL-based CO methods in two respects. Firstly, the authors leverage a continuous probabilistic space for the solutions, leading to a REINFORCE-based training method which is more efficient than previous Q-learning or Actor-Critic methods. Besides, a MAML-based meta-learning framework is proposed. **Strengths**\n1. This paper is well-motivated by several important issues in existing DRL-based CO methods.\n2. 
The proposed REINFORCE-based method and the MAML-based meta-learning method seem sound.\n3. The proposed method can handle larger-sized problems than existing DRL-based CO methods.\n\n**Weaknesses**\n1. This paper aims to develop a new general framework, but the current evaluation is only for two problems. Evaluating more problems of different natures (such as covering/matching problems) will make this paper more concrete and convincing.\n2. There are certain implementation details that seem unclear to me and I am expecting the authors to answer (see \"Questions\").\n\n**Typos**\n1. Line 45: What is \"DIR\"-based CO solver?\n2. Repeated citations: 29/30, 34/35 1. How many trials are configured for LKH-3? Can you decrease the number of trials so that LKH-3 can be comparatively faster than DIMES, and report the time and tour lengths?\n2. In L149 the authors write: \"we also no longer need costly MCMC-based sampling for optimizing our model\". If sampling is not needed, how do you estimate the expected cost value after the neural network predicts q?\n3. What is the application scope of the proposed method? Beyond routing and independent set, can you list some other types of problems that the proposed method can tackle and some that may be hard to handle?\n4. Is there an ablation study for with/without meta-learning? The limitations are addressed.", " This paper studies a differentiable meta solver for CO problems and its performance is demonstrated by using TSP instances (size 500, 1000, and 10000). The difference between the proposed method in this paper and existing DRL-based methods stems from the fact that DIMES focuses on a parameterized continuous space to represent solutions of CO problems. Experimental results indicate that DIMES shows better performance than DRL-based solvers. Ablation studies on DIMES show that the proposed components work effectively. [Strength]\n- Focusing on the continuous space (Eq.2) could widen this research field (rather than the encoding-decoding scheme in DRL-based methods).\n- Large instances (e.g., TSP-10000) are tackled by DIMES (Note that such scalability could be hard for DRL-based methods in their current state).\n\n[Weakness]\n- The generalization ability when designing auxiliary distributions is unclear.\n- The discussion and background of MAML (e.g., the idea of T gradient updates) are a bit hard to follow.\n - I feel that the proposed concept is interesting to investigate, and I'm interested in how to design auxiliary functions (this is also noted in sec.5 by the authors). I also read proofs in the supplemental material. I wonder if we can design such functions for various CO problems. The paper proposed the answer for TSP and MIS, but defining such functions (with proofs) seems to be challenging (as many variations on objective functions and constraints). If this is not straightforward (or this requires some hand-crafted work like parameter tuning), I feel that DIMES requires us to design such functions (this is similar to us designing good encoding-decoding schemes for DRL-based methods, good heuristics in OR, better algorithms, etc.).\n\n- In Table 1, LKH-3 with TSP-1000 requires 38.09[m] (this may be from Fu et al. [16] https://arxiv.org/pdf/2012.10658.pdf). Thus, it seems that DIMES outperforms the famous heuristic solver. 
However, for example in Table 1 [https://arxiv.org/pdf/1402.4699.pdf] (Note that I do not want to say like `this paper is good/bad`, I just cite experimental results from this.), LKH-3 seems to solve TSP instances (size from 9847 to 16862) efficiently (from 381[s] to 976[s]). I wonder what is the differences between such traditional benchmark TSP instances (like ja9847) and those generated by Fu et al. [16] (TSP-1000/TSP-10000). This behavior is also seen in existing papers like AM's paper (LKH3 requires 21[m] to solve TSP-100). Such results (often reported in papers proposing learning-based solvers) seem to be strange to me. Could you please explain them?\n\n- For datasets, the authors mentioned two sources from [35] (train) and [16] (test). The difference and similarities of datasets among them should be explained.\n\n- Related to the previous question, I'm interested in the learned results that can be generalized to other instances. For example, TSP-1000 with MAML requires 1.7[h] learning times + 4.47[m] to generate solution (1.7[h] from supp mat and 4.47[m] from Table 1.), but LKH-3 requires 38.09[m]. So, if the learning results are not shareable with other kinds of datasets, the learning process is a bit resource-consuming trial. Do you have any ideas or findings?\n\n- The dimension T (of MAML) is fixed. Is this tunable? (or need to be tuned to get better solutions?). In the ablation study (c), it seems that learning more by MAML produces better solutions. Is this interpretation true?\n\n- (L.45) Instead of previous DIR-based CO sovles; what is DIR-based? sovles => solves. Some other english errors should be updated for readability.\n After reading the main paper and appendix, I feel that the authors carefully discuss the negative impacts.", " This work proposes a novel DIMES framework (stands for differentiable meta solver) to tackle large-scale learning-based combinatorial optimization problems. The key novelties and contributions are 1) an RL-based approach to train the widely-used GCN model to generate probability heatmap, and 2) a meta-learning approach to further finetune the solutions at inference. Experimental results show that the proposed DIMES can achieve promising performance for large-scale TSP and MIS problems with up to 10,000 nodes.\n **Strengths:**\n\n+ Learning to solve large-scale combinatorial optimization problems is crucial for many real-world applications. This work is a timely contribution to an important research topic.\n\n+ To my understanding, the proposed RL-based approach for training GCN to obtain the probability heatmap is novel. It is quite promising to see a pure end-to-end RL-based approach can tackle TSP with up to 10,000 nodes.\n\n+ The meta-learning based fine-tuning strategy is also new for neural combinatorial optimization.\n\n+ The experimental results on large-scale TSP are good.\n\n**Weaknesses:**\n\nI cannot give a clear acceptance to the current manuscript due to the following concerns:\n\n**1. Inaccurate Contribution:** One claimed contribution of this work is the compact continuous parameterization of the solution space. However, as discussed in the paper, DIMES directly uses the widely-used GNN models to generate the solution heatmap for TSP[1,2] and MIS[3] problems, respectively. The credit for compact continuous parameterization should be given to the previous work [1,2,3] but not this work.\n\nFor TSP, Joshi et al.[1] have systemactilly studied the effect of different solution decoding (e.g., Autoregressive Decoding (AR) v.s. 
Non-autoregressive decoding (NAR, the heatmap approach)) and learning methods (supervised learning (SL) v.s. reinforcement learning (RL)). To my understanding, the combination of AR + SL, AR + RL and NAR(heatmap) + SL have been investigated in Joshi et al. and other work (e.g., PtrNet-SL, PtrNet-RL/AM, GCN), but I am not aware of other work on NAR(heatmap) + RL. The NAR + RL combination could be the novel contribution of this work.\n\n**2. Actual Cost of Meta-Learning:** The meta-learning (meta-update/fine-tuning) approach is crucial for the proposed method's promising performance. However, its actual cost has not been clearly discussed in the main paper. For example, Table 1 reports that DIMES only needs a few minutes to solve 128 TSP500/TSP1000 and 16 TSP10000 instances. However, at inference, DIMES actually needs extra meta-gradient update steps to adapt its model parameters to each problem instance. The costs of the meta-gradient steps are 1.5h - 10h for TSP500 to TSP10000 as reported in Appendix C.1. Since all the other heuristic/learning methods do not require such a meta-update step, it is unfair to report that the runtime of DIMES is only a few minutes (which should be a few hours) in Table 1.\n\n**3. Generalization v.s. Testing Performance:** To my understanding, all the other learning-based methods in Table 1 are trained on TSP100 instances but not TSP500-TSP10000 as for DIMES. Therefore, the results reported in Table 1 are actually their out-of-distribution generalization performance. There are two important generalization gaps compared with DIMES: 1) generalization from TSP100 to TSP10000, 2) generalization to the specific TSP instances (the fine-tuning step in DIMES). I do see that it is DIMES's own advantage (direct RL training for large-scale problems + meta fine-tuning) to overcome these two generalization gaps, but the difference should be clearly clarified in the paper. \n\nIn addition, it is also interesting to see a comparison of DIMES with other methods on TSP100 (in-distribution testing performance) with/without meta-learning.\n\n**4. Advantage of NAR(heatmap) + RL + Meta-Learning:** From Table 1&2, for TSP1000, the generalization performance of AM (G: 31.15, BS: 29.90) trained on TSP100 is not very far from the testing performance of DIMES without meta-learning (27.11) directly trained on TSP1000. It could be helpful to check whether the more powerful POMO approach[4] can have a smaller performance gap. Reporting the results for POMO and DIMES without meta-learning for all instances in Table 1 could make the advantage of the NAR(heatmap) + RL approach in DIMES much clearer.\n\nHottung et al.[5] shows that POMO + Efficient Active Search (EAS) can achieve promising generalization performance for larger TSP instances on TSP and CVRP. The comparison with POMO + EAS could be important to better evaluate the advantage of meta-learning in DIMES.\n\n[1] Chaitanya K Joshi, Quentin Cappart, Louis-Martin Rousseau, Thomas Laurent, and Xavier Bresson. Learning TSP requires rethinking generalization. arXiv preprint arXiv:2006.07054, 2020.\n\n[2] Chaitanya K Joshi, Thomas Laurent, and Xavier Bresson. An efficient graph convolutional network technique for the travelling salesman problem. arXiv preprint arXiv:1906.01227, 2019.\n\n[3] Zhuwen Li, Qifeng Chen, and Vladlen Koltun. Combinatorial optimization with graph convolutional networks and guided tree search. NeurIPS 2018.\n\n[4] Yeong-Dae Kwon, Jinho Choo, Byoungjip Kim, Iljoo Yoon, Youngjune Gwon, and Seungjai Min. 
POMO: Policy optimization with multiple optima for reinforcement learning. NeurIPS 2020.\n\n[5] André Hottung, Yeong-Dae Kwon, and Kevin Tierney. Efficient active search for combinatorial optimization problems. ICLR 2022.\n - DIMES needs to use RL to directly train deep GNN for large-scale problem instances with up to 100,000 nodes. Is there any challenge for such training? How long will DIMES take to converge for TSP500/1000/10000 (#training instances and wall-clock time)? \n\n- TSP/MIS is a good testbed for neural combinatorial optimization. But the real-world applications will typically have problems with various structures that can not be solved by classical solvers. This is an important motivation for learning-based solvers without domain knowledge. Can DIMES generalize to other routing problems such as those in the AM paper?\n\n- Citation [42] and [43] in the paper are for the same AM paper but with two different years (the 2019 one is correct). \n Yes, the limitations have been adequately addressed in Section 5 Concluding Remarks. I do not see any potential negative societal impact of this work.", " The paper proposed DIMES, a framework for tackling graph-based combinatorial optimisation problems with reinforcement learning. DIMES runs a graph neural network to produce a heatmap over the candidate variables (edges for TSP, nodes for MIS) which corresponds to a policy for which variable to select next from any given position. This approach seeks to provide more scalable CO solvers as solutions can be efficiently sampled from a generated heatmap with minimal additional overhead. Compared to prior works, DIMES primary claims of novelty are (i) the process does not require supervised data to generate the heatmaps and (ii) that DIMES can be trained, using meta-learning, on a distribution of instances such that it can quickly and efficiently specialise during active search on a singe instance. In experiments on TSP and MIS, DIMES is shown to perform competitively or better than the considered RL/SL baselines and remain efficient even on large problem sizes.\n *Strengths*\n\nThe problem tacked — scalable RL approaches for CO — is significant and well motivated. Improvements in performance and scalability of are of interest to the community.\n\nAs it is typically intractable to produce optimal solutions to NP-hard CO problems, the efficient sampling of candidate solutions and active search on a target instance at test time important components of many approaches. In this context, the demonstrated efficacy meta-learning good starting points for the agent from which to fine-tune is interesting. Whilst it was also novel to my knowledge, a quick search did show that the authors might want to consider softening their claim (line 51) that \"we are the first to apply meta-learning over a collection of CO problem instances” (e.g. https://arxiv.org/abs/2105.02741).\n\nThe experimental results appear impressive, though I have some points of concern or that require clarification (see below) before I can fairly judge these.\n\n*Weaknesses*\n\nI am not convinced by the authors claim that DIMES “introduces a compact continuous space for parameterizing the underlying distribution of candidate solutions” is novel or significant, nor that this new approach “addresses the scalability challenge in large-scale combinatorial optimization” more so than previous works. 
Ultimately, DIMES is predicting a heatmap with a single pre-processing step of the entire problem instance (graph), which then guides some search process (greedy, sampling, MCTS etc.). Whilst the paper contrasts DIMES to previous works with "we introduce a compact continuous space for parameterizing the underlying distribution of candidate solutions" — it seems to me that this is just different language for talking about a heatmap which, by the authors' admission, many previous works have already used. The authors note that previous heatmap approaches were not pure RL (relying on some labelled data); however, applying RL to TSP is well established, and having that agent output a heatmap instead of, say, embeddings that implicitly define a heatmap via attention, or running a full inference at each step, is a straightforward modification that doesn't appear to rely on any architectural or theoretical insights.\n\nWhilst the paper is well written from a linguistic perspective, I do feel it loses clarity in the way it presents DIMES and draws contrasts with prior work where they may not be justified. For example, "Instead of previous DIR-based CO solvers which rely on construction or improvement heuristics, we introduce a compact continuous space for parameterizing the underlying distribution of candidate solutions which allows massively parallel on-policy sampling". To me, DIMES is a construction heuristic (it starts at a node/edge, and then iteratively picks the next until a solution is found). Moreover, the scalability does not come from any breakthrough regarding the compact continuous parameterisation space, but in essence because as much processing as possible (e.g. the expensive GNN) is restricted to a single preprocessing step. However, this is not a novel idea - even the authors' own RL baselines for TSP - Joshi et al [24] and Kool et al [34] - use an expensive preprocessing step followed by fast action decoding. In this vein, another work the authors may consider relevant is that of Drori et al (https://arxiv.org/pdf/2006.03750) who use dot-product attention between graph embeddings as an alternative parameterisation of a heatmap and demonstrate impressive performance on TSP (outperforming Farthest/2-opt heuristics on most "real world" graphs of up to ~125k nodes).\n\nI am concerned that certain more recent baselines may be missing, and that certain results may be missing from the experimental section. Some points are listed here, but others are left as questions below where I feel I need more clarity.\n\n- The attention model of Kool et al [34] is used as an RL baseline. However, this approach has been significantly improved in POMO (https://arxiv.org/abs/2010.16011) and then subsequently adapted for active search at test time (https://arxiv.org/abs/2106.05126). Drori et al (above) is also a significantly scalable RL TSP solver that would seem to be an important baseline if DIMES is to justify claims that it contributes with regards to scalability.\n\n- The authors only rerun one of the presented baselines (Att-GCN) on their hardware, so timing results for other baselines taken from Fu et al are potentially misleading.\n\n- In Table 3, why is LwD with sampling (RL+S) not included? As LwD is better than DIMES in greedy mode, but DIMES is better with sampling, would LwD with sampling not be expected to be best of all? 
- Regarding the fine-tuning of DIMES in Table 1: (i) Is the fine-tuning time included in the run time (I would have expected so, but then I am surprised how, for example, DIMES is faster than AM with equivalent settings if it has to be fine-tuned first)? (ii) Which elements of the model are fine-tuned (as four variants are shown in Table 2(b) it is clear there is some choice to be made)?\n- If all baselines were taken from Fu et al, can you comment on whether the training of these was a fair comparison to DIMES? For example, DIMES is trained on the target problem sizes, but it is common to present results on larger TSP instances for models trained on smaller problems.\n- In the meta-learning ablations, why would training the network using meta-learning (targeted to give good performance after T steps of fine-tuning on a specific instance) give better performance when T=10 during training and T=0 at inference (i.e. why is row 2 of Table 2a better than row 1)?\n- Why is S2V-DQN not used as a baseline in Table 3, when it is stated to be a baseline on line 297? Yes." ]
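As context for the NAR(heatmap)-vs-AR discussion in the reviews above, here is a minimal sketch of how a tour can be decoded greedily from an edge-score heatmap. This is only an illustration of the general heatmap-decoding idea — the function and variable names are hypothetical, and it is not the DIMES implementation, which also supports sampling and MCTS decoding.

```python
import numpy as np

def greedy_tour_from_heatmap(heatmap: np.ndarray, start: int = 0) -> list:
    """Decode a TSP tour greedily from an n x n edge-score heatmap.

    At each step, move to the unvisited node whose edge score from the
    current node is highest. Because the expensive model inference happens
    once (to produce the heatmap), this decoding step is cheap, which is
    the scalability argument the reviews discuss.
    """
    n = heatmap.shape[0]
    visited = np.zeros(n, dtype=bool)
    tour = [start]
    visited[start] = True
    for _ in range(n - 1):
        # Mask already-visited nodes, then take the best-scoring successor.
        scores = np.where(visited, -np.inf, heatmap[tour[-1]])
        nxt = int(np.argmax(scores))
        tour.append(nxt)
        visited[nxt] = True
    return tour
```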
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "-Glzk0IaDkB", "tb_7mQqdP9M", "nj5C_rkr1LQ", "0FC8-ME0apIl", "s3cS73Hhm6k", "PLMwagGUXS_", "H2NQ-lK9GmK", "OB5icDMhwEp", "H2NQ-lK9GmK", "PVApuj_Kl5K2", "DtfZJx_gy-c", "1OJx9gpifSDB", "2J0haKZfP9v", "0dxK52_x6w_", "nips_2022_9u05zr0nhx", "nips_2022_9u05zr0nhx", "nips_2022_9u05zr0nhx", "nips_2022_9u05zr0nhx", "nips_2022_9u05zr0nhx" ]
nips_2022_VdQWVdT_8v
LOG: Active Model Adaptation for Label-Efficient OOD Generalization
This work discusses how to achieve worst-case Out-Of-Distribution (OOD) generalization for a variety of distributions based on a relatively small labeling cost. The problem has broad applications, especially in non-i.i.d. open-world scenarios. Previous studies either rely on a large labeling cost or lack guarantees about the worst-case generalization. In this work, we show for the first time that active model adaptation could achieve both good performance and robustness based on the invariant risk minimization principle. We propose \textsc{Log}, an interactive model adaptation framework, with two sub-modules: active sample selection and causal invariant learning. Specifically, we formulate the active selection as a mixture distribution separation problem and present an unbiased estimator, which could find the samples that violate the current invariant relationship, with a provable guarantee. The theoretical analysis supports that both sub-modules contribute to generalization. Extensive experimental results confirm the promising performance of the new algorithm.
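As a rough illustration of the interactive loop described in this abstract, the sketch below scores unlabeled target samples by how strongly they appear to violate the current invariant predictor and then picks a representative subset to label. The confidence-based violation score and all helper names are illustrative assumptions of this sketch; the paper's actual selection rule is the unbiased mixture-separation estimator mentioned above.

```python
import numpy as np

def select_queries(model, pool_x: np.ndarray, budget: int) -> np.ndarray:
    """Illustrative active-selection step: rank unlabeled target samples by an
    invariance-violation proxy, then greedily pick a diverse (coreset-style)
    subset of size `budget` to send for labeling."""
    probs = model.predict_proba(pool_x)   # assumed (N, C) class probabilities
    # Proxy score: low confidence suggests the current invariant
    # relationship does not explain these samples well.
    violation = 1.0 - probs.max(axis=1)
    candidates = np.argsort(-violation)[: max(10 * budget, budget)]
    chosen = [int(candidates[0])]         # k-center greedy for coverage
    for _ in range(budget - 1):
        dists = np.min(
            [np.linalg.norm(pool_x[candidates] - pool_x[c], axis=1)
             for c in chosen],
            axis=0,
        )
        chosen.append(int(candidates[int(np.argmax(dists))]))
    return np.asarray(chosen)
```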
Accept
The reviews and the discussions converged on the consensus that the paper contains novel ideas and is theoretically solid. However, a discrepancy between the scores remains after the discussions due to different opinions on the experimental part, especially the lack of comparison with standard adaptation baselines in the computer vision community. I read the manuscript, and I agree with reviewers WaQo and nvJG that the experiments are sufficient to support the claims, considering that the proposed method does not naturally apply to image data. That being said, I kindly ask the authors to take the reviewers' comments into account while preparing the camera-ready version.
train
[ "1acJK91L9Vb", "GIZGl1sXGgd", "Ba4_w6U-PO", "cGzwYsxMg9", "-ob_RAz3H-l", "zJd-Ryvr3FT", "Ib59l4YyOdJ", "je-HM8Gdfpb", "p43xOQCbbk3" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to authors for their reply. \n\nAs the paper tackles active adaptation, and all three active adaptation baselines considered in the paper [37, 28, 6] show results on datasets like DIGITS and OFFICE, I still believe it would make a stronger paper with comparisons on these datasets. I acknowledge that the results on real world tabular data, and distribution shifts on them, validates the novel method to an extent. \n\nThank you for clarification on the mixture proportion. \n ", " Thank you for the advice. \nSemantic variable and intervention variable are terms in causality, that is, the semantic variable causals the instance label $Y$: $Y=f(Z^s)$ in our Assumption 3.3. More specifically, the intervention variable is the latent variable in $X$ that is irrelevant to the task label $Y$. Learning model directly on $(X, Y)$ may overfit the spurious correlation between intervention variables and task label. Under distribution shift, the relation between the intervention and task label will shift, but the relation between semantic variables and task label holds stable. More importantly, for out-of-distribution generalization, our goal is to identify the semantic variable (cause) to predict $Y$ and reduce the overfitting of intervention variables. In our experiments on real-world datasets, we validate the robustness on three types of intervention variables (distribution shifts): region, person, and time. For example, in income prediction, the features “sex” and “race” are the intervention variables. The features “work class”, “education”, “hours-per-week” … are the semantic variables that cause the income $Y$. Our learning model is not dependent on the intervention variables, and obtains a _robust_ predictor across different person groups.\n", " Thank you for the advice. \n1. About the experiments. Except for image data, this paper also studies tabular data with widely real-world applications. These datasets are widely employed in the related out-of-distribution generalization research [1, 2, 3]. Note that our experiments have included different distribution shift types: region, person group, and time. As a result, the presented method saves more than 50% samples to achieve competitive performance compared to the baseline (Figures 4 d and e), showing it has efficiently extracted the invariance and promoted active adaptation. For image data, we also reported the results on the benchmark C-MNIST in the appendix (Section C.6, pages 7-8). It has clearly demonstrated the effectiveness of our framework with representation learning. In addition, we had reported the active learning results on real-world data with varying budgets in Figure 4 (page 8 in the main paper). \n\n2. About the knowledge of mixture proportion. For the knowledge of mixture proportion, we have provided its estimation method (line 202-204, page 5) and implemented it in the experiments. All of our results are reported by the estimated mixture proportion. Notice that the mixture proportion estimation is well studied and not our main contribution. Thus, we omit the details in this paper and refer readers to the corresponding implementation [4]. \n\n[1] Heterogeneous risk minimization. ICML 2021 \n[2] Environment Inference for Invariant Learning. ICML 2021 \n[3] Kernelized heterogeneous risk minimization. NeurIPS 2021 \n[4] Mixture proportion estimation via kernel embeddings of distributions. ICML 2016 \n", " Thank you for the advice. \n1. The proposed framework RA2 is not limited to raw-level feature selection. 
In this paper, we mainly introduce and evaluate our framework based on raw-level features. Through the experiments on C-MNIST (reported in the appendix), the feasibility of combining it with representation learning has been demonstrated. We would like to explore more about representation learning in future work. \n2. About the complexity analysis. Compared with regular active strategies, we introduce two sub-modules: invariance optimization and distribution inference. The theoretical convergence rate of invariant learning is still a challenge in the invariant learning community. Empirically, it could converge within a small constant number of iterations. The latter could achieve a closed-form solution in $O(d^3+d^2 N_T+dN_S)$. Considering that annotation cost is more critical than computation cost in active learning, the overhead of our polynomial cost is acceptable. Empirically, we could derive our solution in 5 seconds (device: RTX 3090) for 380,000 instances. \n", " Thank you for the advice. \n1. About the end-to-end optimization. In this paper, we implement the proposed framework in a two-step scheme. The existing framework is not suitable for end-to-end optimization because it relies on a module $g$ that infers un-generalizable samples. Nevertheless, the interactive connection of the two sub-modules has been verified by theoretical analysis and ablation experiments.\n2. About the resource-constrained perspective. It is interesting to consider out-of-distribution generalization under constrained resources. In this paper, we mainly focus on label-efficient methods and present an active model adaptation framework. In the future, we will explore more perspectives on generalization, such as limited storage or computation. \n", " This paper presents the invariance minimization principle to handle distribution shift at a relatively small labeling cost. For this purpose, this paper designs an interactive model adaptation framework including active sample selection and invariant relationship learning. The authors demonstrate the effectiveness of the method both theoretically and experimentally. On one hand, the strengths include: \n1.\tThe paper is well-written.\n2. The theoretical work of this paper is sufficient, which improves the value of the paper.\n3.\tThe paper has several novelties: \n i) This paper proposes the invariance minimization principle for active model adaptation; ii) For this purpose, the authors present an interactive framework, Robust Active model Adaptation (RA2); iii) This paper demonstrates the effectiveness in an extensive empirical study and theoretical analysis.\n\nOn the other hand, for me, the main weakness mainly lies in that the paper is not easy to follow. I understand that the authors have put necessary proofs and details into the supplementary material, but still Section 3 is not easily understandable. The author explains some symbols unclearly, e.g., what is the physical meaning of the semantic variable and intervention variable and how are these two variables reflected in the experiment? In addition, the use of formulas is not uniform, e.g., why does the formula \Phi(S(X^{e})) use observation S in line 118, but not in line 120?\n What is the physical meaning of the semantic variable and intervention variable and how are these two variables reflected in the experiment? 
The author does not mention the limitations of the work.", " The paper presents a method for active adaptation of a model trained on source data, when target data from a changed distribution is available. It builds on the invariance minimization principle and extends the so-called Maximal Invariant Predictor (MIP) for determining where to sample points from the unlabeled target data. The authors show that increasing the heterogeneity of training data improves generalization, and hence query samples where the current invariant model does not hold. Representative points are sampled from these using a coreset approach. The model is updated by using a new invariance predictor obtained from the updated data. \n\nThe experiments include comparison with relevant baselines. Results are shown on synthetic data (along with the case where the target is an imbalanced mixture), and on three real-world datasets. The paper provides an interesting extension of the invariance minimization principle for active adaptation. The overall idea of the paper makes sense. The results on synthetic and small real data show improvements in overall and worst-case error. Feature importance results and ablation studies further show the efficacy of the method on these datasets. \n\nComparisons on standard datasets used in other active adaptation methods are missing. Discussion/experiments to help understand the effect of different practical choices would be helpful. \n The active adaptation baselines included in the paper show results on DIGITS, OFFICE, etc. datasets -- a comparison on those datasets is imperative. Also, the current results use a fixed budget size. It would be helpful to add the effect of different budget sizes on the method, and the time complexity of querying samples (w.r.t. size/classes/budget of target data). \n \nThe method requires knowledge of the mixture proportion in the unlabeled target data. Does this limit the quality of the approach in different scenarios?\nA more thorough contextualization with related active adaptation methods would be helpful for readers. The limitations of the work have not been described. Potential societal impact of the work has been discussed. ", " This paper studies active learning with Out-of-Distribution (OOD) generalization where the target distribution may differ from the source distribution. To address it, the authors present the invariant minimization principle to guide the active learning, interactively learning the invariant relationship and actively querying the current un-generalizable samples. The authors formulate it as an Active Heterogeneity Expansion process, and propose the RA2 method, with the corresponding theoretical support. Moreover, the experiment results from the benchmark simulation and real-world tasks clearly verify that the proposed RA2 outperforms SOTA active and OOD methods. Overall, the paper is technically sound and well supported by the corresponding theoretical analysis and empirical evaluation. Strengths:\n1.\tThe problem of model adaptation under changed distributions is an important problem and has not been well addressed yet. The invariant minimization principle presented by the authors provides a novel view of active learning to address both overall performance and worst-case guarantees. It may motivate researchers to understand active adaptation from causal views and promote robust deployment in real-world applications. \n2.\tThe proposed method RA2 is well-motivated by solid theoretical analysis. 
Theorem 3.7 presents the generalization conditions under the structural causal model, and indicates that the heterogeneity of labeled data is critical to generalization performance. Theorem 3.10 shows the proposed Active Heterogeneity Expansion process could efficiently improve generalization via active interaction. \n3.\tThe experiments seem comprehensive. In both benchmark simulation and real-world tasks, RA2 outperforms the SOTA active adaptation and OOD methods. The effectiveness of the proposal has been clearly verified. \n4.\tThe paper is well organized and easy to follow. The related active adaptation and OOD literature have been comprehensively reviewed and discussed. The presentation of RA2 is generally clear. \n\nWeaknesses:\n1.\tThe proposed invariant learning module (Sec. 4.2) focuses on mask selection and raw-level features. The former framework (Lines 167-174, Sec. 4) seems not limited to raw-level selection. There is also a discussion about representation learning in the appendix. I think the feature selection, presented in Section 4.2, could be further improved, with consideration of representation learning. \n2.\tThere are two interactive modules in the proposed RA2. Compared to previous active adaptation methods, which are designed on a specific metric, it introduces more computation. How about the complexity compared with previous methods? \n3.\tIllustration: The text in Figures 2 and 4 is too small. It should be adjusted to the same size as in Figures 1 and 3. \n 1. About the combination with representation learning\n2. About the complexity analysis \n N/A", " This paper considers Out-of-Distribution generalization via actively querying samples at a relatively small labeling cost. The authors propose an invariant minimization principle and the corresponding active heterogeneity expansion to implement it in an interactive framework. The proposed RA2 framework is theoretically sound and well-validated in sufficient experiments. More specifically, the authors present the generalization condition and a quantitative dependence on source heterogeneity under the linear causal structure. The convergence of RA2 is provided under the linear assumption. The experiments include both benchmark simulation and a series of real-world tasks, supporting the effectiveness of RA2. Strengths:\n1.\tThe problem is interesting and the novelty is valuable. The authors consider the limitation of the existing OOD research and theoretically justify its dependence on large amounts of labeled data. This paper introduces active learning and addresses OOD generalization via interactive learning. It may inspire some consideration of OOD generalization under limited resources, such as limited label information. \n2.\tThe authors present the generalization analysis and design an active heterogeneity expansion to achieve the ideal maximal invariant predictor. Such a framework is intuitive and theoretically sound. \n3.\tThe experiments include three common distribution shifts: region shift, person shift, and time shift. The experiment results seem sufficient to support the main claim. \n\nWeaknesses:\n1.\tThe invariant learning module in the proposed RA2 is dependent on the previous environment-based invariant learning methods. The connection between the two sub-modules seems weak. Could the whole framework be trained in end-to-end optimization?\n2.\tThe mask-based feature selection may be weak for more complicated data inputs, such as high-dimensional images. 
\n 1.\tCould the whole framework be trained in end-to-end optimization? \n2.\tA resource-constrained perspective on OOD generalization is interesting. This paper mainly considers limited label information. What about limited supervision (such as label noise), limited storage, and limited computation? I think a more comprehensive discussion would further improve this paper and have a wider impact.\n The authors provide a discussion about negative societal impact. I suggest the authors further discuss more powerful feature learning technologies with the proposed RA2 framework.\n\n" ]
[ -1, -1, -1, -1, -1, 5, 5, 8, 8 ]
[ -1, -1, -1, -1, -1, 5, 3, 5, 4 ]
[ "Ba4_w6U-PO", "zJd-Ryvr3FT", "Ib59l4YyOdJ", "je-HM8Gdfpb", "p43xOQCbbk3", "nips_2022_VdQWVdT_8v", "nips_2022_VdQWVdT_8v", "nips_2022_VdQWVdT_8v", "nips_2022_VdQWVdT_8v" ]
nips_2022_9YQPaqVZKP
Neuron with Steady Response Leads to Better Generalization
Regularization can mitigate the generalization gap between training and inference by introducing inductive bias. Existing works have already proposed various inductive biases from diverse perspectives. However, none of them explores inductive bias from the perspective of class-dependent response distribution of individual neurons. In this paper, we conduct a substantial analysis of the characteristics of such distribution. Based on the analysis results, we articulate the Neuron Steadiness Hypothesis: the neuron with similar responses to instances of the same class leads to better generalization. Accordingly, we propose a new regularization method called Neuron Steadiness Regularization (NSR) to reduce neuron intra-class response variance. Based on the Complexity Measure, we theoretically guarantee the effectiveness of NSR for improving generalization. We conduct extensive experiments on Multilayer Perceptron, Convolutional Neural Networks, and Graph Neural Networks with popular benchmark datasets of diverse domains, which show that our Neuron Steadiness Regularization consistently outperforms the vanilla version of models with significant gain and low additional computational overhead.
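One plausible form of the training objective implied by this abstract is the standard task loss plus a per-neuron intra-class variance penalty; the exact weighting scheme in the paper may differ, so the following should be read as a sketch:

```latex
\mathcal{L} \;=\; \mathcal{L}_{\mathrm{task}}
\;+\; \sum_{n} \lambda_{n} \sum_{c=1}^{C}
\operatorname{Var}_{x:\,y(x)=c}\!\big[a_{n}(x)\big],
```

where $a_n(x)$ denotes the response of neuron $n$ to input $x$, $C$ is the number of classes, and $\lambda_n$ is the per-neuron regularization intensity (per the rebuttals below, $\lambda_n$ is shared across samples and the penalty is applied to a single selected layer).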
Accept
This paper measures intra-class neuron response variance, and shows that network performance is better when it is lower. They then use this term as a regularization target, and show that it leads to improved model performance. Reviews were of high quality. Scores were between weak accept and accept, with one reviewer raising their score from weak accept to accept. The most significant concerns were experimental: around ablations, around the diversity and scale of models the technique was tested on, and around the tuning of baselines. However, the experiments seemed fairly strong as is, and of course there are always more experimental conditions that can be requested. Based upon the reviewer consensus, I also recommend acceptance of this paper.
val
[ "09nlXyA_5Vq", "JSvSshk2kh-", "bnwmIeAsrlr", "wLHfVmOksL", "3zoesM5Pg5K", "Y4EI717ycbd", "PjDrZ2vMN1D", "v0sR9hRlHOS", "1R9D7O9_jh", "aOP4iU2d1yI", "zQpNKlUZT0d", "JjyfaUzpsB" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your hard work in the reviewing period of NeurIPS'22 and we feel grateful for your helpful suggestions and questions. We are also very willing to further discuss with you if you have any questions or suggestions.", " Thank you for your response. \n\nWe compare the sensitivity proposed in [1] and the Jacobian regularization in [2] (which is used in Table 4 for comparisons between classical regularization methods). We find that they are both based on the F-norm of the Jacobian matrix between input-output. It is also mentioned in [2] that their proposed Jacobian regularization is \"in line with the observed correlation between the input-output Jacobian and generalization performance [1]\"(in Section 3.1 of [2]).\nConsidering the relation between the Jacobian regularization and the sensitivity metric proposed in [1], we think that our comparison with the Jacobian regularization could provide the desirable answer to your interested question about the comparison with the sensitivity metric, and the results show our superiority. Later, we will include [1] in the Related Work session and may conduct the exact comparison with the sensitivity metric proposed in [1] as you suggested.\n\nOn the other hand, we agree with you that the plot of variance over accuracy can provide additional evidence for our observation. We will carefully consider how to conduct different experiment settings to obtain the wanted figure and add it to the appendix.\n\n[1] Sensitivity and Generalization in Neural Networks: An Empirical Study, Novak et al., ICLR 2018. \n\n[2] Hoffman, Judy, Daniel A. Roberts, and Sho Yaida. Robust learning with jacobian regularization. arXiv preprint arXiv:1908.02729 (2019).", " We would like to thank you for your efforts in reviewing our paper and rebuttal materials. We really appreciate your detailed comments and valuable suggestions. We will conduct further experiments based on your suggestion to investigate the effect of weight decay on controlling intra-class variance. If you have any additional considerations, please let us know to make revisions accordingly.", " Thanks to the authors for their changes!\n\nI've raised my score and I hope the authors generalize their insights and continue to explore this interesting direction, especially from an understanding standpoint. One more interesting experiment perhaps is to see if increasing weight decay just for the last layer classifier weights can approximate reducing intra class variance. [1] observed good results with this.\n\n[1] https://arxiv.org/abs/2106.04560\n", " Thanks for the response and clarifications. \n\nI still think that a comparison with sensitivity or Jacobian is missing in terms of correlation to the classification accuracy. Also, what the suggested plot would bring in addition to Figure 1 is how well the correlation holds between the metric and the classification accuracy when comparing different models/settings. ", " Thanks for the answers and additional experiments. Overall I think the idea of NSR is interesting and insightful, and I've raised my score accordingly.", " Thank you for your valuable comments and feedback. Here are our answers and some improvements based on your comments. \n\n1. **Question about the robustness of models applied with NSR if input label has noise.** \n**Answer:** We follow the setting of this work[1] and conduct an experiment where the label of each image in CIFAR10 is randomly corrupted with a probability of 5\\%. 
The results show that the error rate of vanilla ResNet18 increases from 4.22\% to 5.84\%, and vanilla ResNet18 with NSR can obtain an error rate of 5.25\% (a 10.1\% relative improvement). This result shows a certain degree of robustness of models with NSR.\n\n2. **Comment that \"one needs to choose the layer with the largest aggregated neuron intra-class response variance, which seems not to be directly accessible and requires additional calculations\".** \n**Answer:** Yes, it needs some additional calculations. However, according to our experience, the layer with the largest aggregated neuron intra-class response variance can usually be estimated by running just a few updates (instead of epochs). So the additional calculation is usually quite small. We will add more explanations to the appendix. By the way, it is worth mentioning that even if NSR is applied on another layer instead of the one with the largest aggregated intra-class response variance, the model will still achieve a good performance gain (please refer to Tables 6 \\& 7), as applying NSR on different layers has overlapping effects. \n\nThank you again for your constructive reviews. We hope that our response and additional experiment results can address your concerns, and we would be grateful if you could raise your score.\n\n[1] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, Oriol Vinyals. Understanding deep learning requires rethinking generalization. International Conference on Learning Representations (ICLR), 2017.", " Thank you for your thoughtful feedback. Here are our answers and some improvements based on your comments. \n\n1. **Comment that ''Section 2 has some repetitions of what was already presented in Section 1.''** \n**Answer:** Thank you for your comment. The reason for such repetitions is that we would like to highlight the correlation between neuron intra-class response variance and classification correctness, so a few sentences in Section 1 and Section 2 may present similar points. Besides, such highlighting in Section 2 is the foundation of the following designs and makes the paper more coherent. If you have more detailed suggestions for the writing, we would be glad to adopt them in the revision. \n\n\n\n2. **Question about ''Comparison of the neuron response variance to similar metrics such as input-output sensitivity''.** \n**Answer:** Thanks for providing the detailed survey paper on sensitivity. As described in that paper, sensitivity can be measured by the norm of the input-output Jacobian. In addition, we have compared our NSR with the Jacobian-based regularization method (see Table 4), which shows that our NSR has a much lower error rate. \n\n3. **Suggestion that ''plotting the figure with the y-axis as accuracy and the x-axis as the variance for various experimental settings to show the correlation between intra-class response variance and classification accuracy.''** \n**Answer:** Thanks for providing an alternative way besides Figure 1 to show the correlation between intra-class response variance and classification accuracy. Under the constraint of space, we carefully compare Figure 1 and your suggested figure and choose the former in the revision as it may be easier to explain and understand for general readers. But we will add the new figure according to your suggestion in the appendix. \n\n4. **Question on ''Why not consider the zero response in the calculations''.** \n**Answer:** Thank you for your great question. 
In fact, we tried both strategies (i.e., considering or ignoring the zero responses in the calculations) and found that ignoring zeros during the intra-class response variance computation gives better performance. The intuition is that a zero response means the neuron is inactive (i.e., no response), which is irrelevant to how stable the neuron response is. We can add this discussion in the appendix.\n\n5. **Question on \"the effect of using different regularization intensity values ($\lambda_n$) for each sample\"** \n**Answer:** Thank you for your question. It is worth mentioning that $\lambda_n$ is the same for different samples. Actually, the subscript $n$ refers to neurons instead of samples, and this explanation can be found in line 122. \n\n\n6. **Question on \"For MLPs deeper models observe a higher performance gain. However, this is not observed for CNNs (in Table 2). What could be the explanation for this\"** \n**Answer:** Thank you for this question. In fact, the trend in CNNs is similar to the trend in MLPs. From Table 2 we can see that the gain of NSR on VGG-19 (11.97\%) is indeed higher than its gain on ResNet-18 (11.37\%), which shows that deeper CNN networks can achieve a better gain in performance with NSR. It is worth mentioning that ResNet-50 in Table 2 is for ImageNet, while ResNet-18 and VGG-19 are for CIFAR10. So, the performance gain on ResNet-50 is not directly comparable with the gain on ResNet-18 or VGG-19. \n\n\nThank you again for your constructive reviews. We hope that our response can address your concerns, and we would be grateful if you could raise your score.", " Thank you so much for your great suggestions. We have added several improvements according to your suggestions. \n\n1. **Suggestion that ''the improvements with NSR can be magnified using stronger baselines, such as the one used in SAM (SAM refers to the method in https://arxiv.org/abs/2010.01412) for ImageNet.''** \n**Answer:** Thank you for your great suggestion. We conducted an experiment based on SAM (the mentioned paper). After 100 training epochs, we obtain the following top-5 error rates: 6.96\% (w/o our regularization NSR), and 6.31\% (with NSR). NSR still achieves a **relative improvement of 9.33\% without tuning hyper-parameters**. Note that the ResNet-50 in SAM is trained on **TPU**s with huge memory so that the batch size can be set to an extremely large value, for example, 4096 in their work. Nevertheless, we can only train their model on **GPU**s and the largest batch size we can reach is only 800, 5 times smaller than theirs. Other hyper-parameters may need to be tuned accordingly; however, due to such computational resource constraints and the limited time, the result of our reproduced SAM is 6.96\%, instead of the 6.28\% reported in the original paper.\n\n\n2. **Suggestion that ''in Table 4 and 5, one way to merge these would be to add additional results for maybe at least ResNet-18 where both dropout and BN are typically used and to combine positive regularization methods for all 3 models with NSR and see if they provide orthogonal improvements to NSR.''** \n**Answer**: Thank you for your suggestion. The description of our experiment settings may be unclear, and we apologize for the potential misunderstandings. Actually, following common empirical settings in CV, the vanilla CNN models in Table 2 are trained with most of the widely-used regularization methods, including BN, dropout, learning rate decay, L2 norm and data augmentation. Some of them are mentioned in line 247. 
As a result, ''Vanilla + NSR'' is designed to verify the effect of combining NSR with the aforementioned regularization. Experiment results in Table 2 demonstrate that NSR can indeed provide orthogonal improvements beyond such methods. In Table 4, we remove most of the regularization methods to compare our proposed NSR with conventional regularization methods fairly. We have added more explanation in our revised manuscript (colored blue in line 255).\n\n3. **Question that ''despite adding NSR to multiple layers has overlapping benefits, what happens if we employ it in all the layers?''** \n**Answer:** If we employ NSR on all the layers, the additional gain will be relatively little compared with NSR on a single layer, but the memory costs will be extremely high due to the design of the memory queue. Thus, we only apply the regularization term on one layer to balance the effectiveness with the memory costs.\n\n4. **Question that ''another suggestion I had was to study qualitative effects of training with NSR, for example, does NSR implicitly penalize sharpness? If yes, since SAM (https://arxiv.org/abs/2010.01412) has a larger computational overhead, can NSR be used in lieu of it with similar benefits?''** \n**Answer:** Thank you for pointing out the new perspective of sharpness. Currently, we have not conducted a theoretical analysis of NSR from this perspective, but we can provide an intuitive discussion. The underlying principle of NSR is different from sharpness. Sharpness indicates that a small change in model parameters will lead to a large change in the loss function value, so it concerns the relation between model-parameter change and loss change, while NSR considers the steadiness of intra-class responses, which concerns the relation between response change and input change. Besides, according to the empirical results, we believe NSR can provide some orthogonal improvements apart from penalizing sharpness. As mentioned in the first answer, we conduct experiments based on SAM, and applying our NSR obtains additional gains. We add the discussion of SAM in Related Work (which is colored blue in line 306) and will explore this direction in future work.\n\n5. l311 typo: \"Actiavtion\" \n**Answer:** Thank you for pointing it out. We have revised it in the new version of our manuscript (which is colored blue in line 311).\n\nThank you again for your constructive reviews. We hope that our response and additional experiment results can address your concerns, and we would be grateful if you could raise your score.", " This paper proposes a novel regularization scheme called Neuron Steadiness Regularization (NSR) which aims to reduce the variance between activations belonging to a class. NSR is well motivated and well situated among related work in the paper. It is further verified empirically on a wide variety of benchmark/model combinations. 
Strengths\n- The paper does a good job of clearly explaining the proposed method and even discusses practical concerns like the estimation error accrued by the statistics being used.\n- Research questions and experiments are thorough and well thought out; especially how NSR fares against other regularization methods seems quite pertinent.\n\nWeaknesses\n- My main set of suggestions is related to making the paper stronger by how far we can battle-test NSR in practical settings, more specifically:\n - The improvements with NSR can be magnified using stronger baselines, for instance, this paper (https://arxiv.org/abs/2010.01412) obtained a much lower Top-5 error rate when training ResNet-50 on ImageNet for 200 epochs.\n - In Tables 4 and 5, one way to merge these would be to add additional results for maybe at least ResNet-18 where both dropout and BN are typically used. I would also try to combine positive regularization methods for all 3 models with NSR and see if they provide orthogonal improvements to NSR. In addition to what I mentioned in the Strengths And Weaknesses section, I have the following suggestions/questions:\n\n- I wonder, despite adding NSR to multiple layers having overlapping benefits, what happens if we employ it in *all* the layers? This will likely simplify usage of this method since then it's not required to explore which layer activations to use. \n- Another suggestion I had was to study qualitative effects of training with NSR, for example, does NSR implicitly penalize sharpness? If yes, since SAM (https://arxiv.org/abs/2010.01412) has a larger computational overhead, can NSR be used in lieu of it with similar benefits?\n- l311 typo: \"Actiavtion\" Yes", " This paper studies the intra-class neuron response variance in neural networks. The authors observe that this variance is lower for correctly classified samples, and hence propose to use this as a regularization term. They then show the effectiveness of this regularization compared to the vanilla case and other regularizations such as the Jacobian norm. Strengths:\n\n1- There are enough experiments presented to show the improvement compared to the vanilla case. It would be even better to have the same amount of experiments to compare with other regularization techniques as well (they are only presented in Table 4).\n\n2- The metric is novel and interesting to the community. \n\nWeaknesses:\n\n1- The paper structure needs improvements. Section 2 repeats much of what was already presented in Section 1. Also, the paper needs proofreading to fix typos.\n\n2- Most of the ablation studies (comparing deeper and shallower networks and layer selection for computation) are done on MLPs. Performing these studies on CNNs is more interesting to practitioners.\n\n3- Comparison of the neuron response variance to similar metrics such as input-output sensitivity is missing. 1- It is claimed that neuron intra-class response variance has a high correlation with classification accuracy. However, this has not been shown directly. It is only shown that the variance is lower for correctly classified samples. How would a plot look with the y-axis as accuracy and the x-axis as the variance for various experimental settings? This should then be compared with similar metrics in terms of correlation, such as output sensitivity to input perturbations, which are studied in papers like [1]. 
Then it would be interesting to see the particular information this metric brings as opposed to the previous ones.\n\n2- Why not consider the zero response in the calculations? Even an inactivated state contains information by itself.\n\n3- Have you studied the effect of using different regularization intensity values (lambda_n) for each sample? What is the effect of that?\nIf this has not been studied, then I would suggest not introducing it at all and only presenting a single lambda.\n\n4- For MLPs, deeper models observe a higher performance gain. However, this is not observed for CNNs (in Table 2). What could be the explanation for this?\n\n\n[1] Sensitivity and Generalization in Neural Networks: An Empirical Study, Novak et al., ICLR 2018. They mention \"We also systematically consider the border impact, No risk is found.\"\nThey don't mention the limitations of the method and where it would possibly break.", " This paper suggests that neurons with similar responses to instances of the same class may lead to better generalization. Accordingly, the authors propose a regularization method to reduce neuron intra-class response variance. This method outperforms the vanilla version of the optimization algorithm under the given settings on MLP, CNN, and GNN models. And the computational cost seems acceptable. Pros:\n- The proposed regularization method is consistent with the idea of the Consistency of Representations Complexity and is well-motivated in this sense.\n- The authors propose several practical ways to reduce the computational budget and storage cost, e.g., saving a sequence of summation values of neurons' responses or regularizing only one particular layer. The experimental results show the effectiveness of these adjustments.\n- This paper is well organized and clearly written. Most claims are well supported by theoretical analysis or experimental results.\n\nCons:\n- The experiments are somewhat limited. The generalization performance of models when using NSR under input corruptions is unclear. Given that this method requires the label information, evaluating the robustness to label noise is kind of necessary to verify its practical value.\n- In order to utilize this regularization efficiently, one needs to choose the layer with the largest aggregated neuron intra-class response variance, which seems not to be directly accessible and requires additional calculations. - When labels of training data get corrupted, how will the performance of NSR be affected? Will the Consistency of Representations Complexity become less informative when the given labels are not the ground-truth labels?\n- When selecting the layer to apply NSR, does the model need to be trained for a few epochs, and why?\n Utilizing the NSR method requires well-annotated data, which limits it to only supervised learning problems (though the reviewer doesn't consider this a major flaw and there is no need to work on solving it here)." ]
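To make the details discussed in the rebuttals above concrete (a single regularized layer, a lambda shared across samples, and zero responses ignored), a per-batch sketch of the intra-class variance penalty might look as follows. This is an illustrative PyTorch approximation, not the authors' released code, and the function name is hypothetical.

```python
import torch

def nsr_penalty(acts: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Per-batch sketch of the intra-class response-variance penalty.

    acts:   (batch, num_neurons) responses of the regularized layer
    labels: (batch,) integer class labels
    Zero responses (inactive units) are ignored, as the rebuttal above
    reports this variant works better.
    """
    penalty = acts.new_zeros(())
    for c in labels.unique():
        a = acts[labels == c]                  # responses of class c
        mask = (a != 0).float()                # drop inactive units
        count = mask.sum(dim=0).clamp(min=1.0)
        mean = (a * mask).sum(dim=0) / count   # per-neuron class mean
        var = (((a - mean) ** 2) * mask).sum(dim=0) / count
        penalty = penalty + var.sum()
    return penalty / labels.numel()
```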
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "Y4EI717ycbd", "3zoesM5Pg5K", "wLHfVmOksL", "1R9D7O9_jh", "v0sR9hRlHOS", "PjDrZ2vMN1D", "JjyfaUzpsB", "zQpNKlUZT0d", "aOP4iU2d1yI", "nips_2022_9YQPaqVZKP", "nips_2022_9YQPaqVZKP", "nips_2022_9YQPaqVZKP" ]
nips_2022_nQcc_muJyFB
Improved Feature Distillation via Projector Ensemble
In knowledge distillation, previous feature distillation methods mainly focus on the design of loss functions and the selection of the distilled layers, while the effect of the feature projector between the student and the teacher remains under-explored. In this paper, we first discuss a plausible mechanism of the projector with empirical evidence and then propose a new feature distillation method based on a projector ensemble for further performance improvement. We observe that the student network benefits from a projector even if the feature dimensions of the student and the teacher are the same. Training a student backbone without a projector can be considered as a multi-task learning process, namely achieving discriminative feature extraction for classification and feature matching between the student and the teacher for distillation at the same time. We hypothesize and empirically verify that without a projector, the student network tends to overfit the teacher's feature distributions despite having different architecture and weights initialization. This leads to degradation of the quality of the student's deep features that are eventually used in classification. Adding a projector, on the other hand, disentangles the two learning tasks and helps the student network to focus better on the main feature extraction task while still being able to utilize teacher features as guidance through the projector. Motivated by the positive effect of the projector in feature distillation, we propose an ensemble of projectors to further improve the quality of student features. Experimental results on different datasets with a series of teacher-student pairs illustrate the effectiveness of the proposed method. Code is available at https://github.com/chenyd7/PEFD.
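A minimal PyTorch-style sketch of the projector-ensemble idea described in this abstract is shown below. The dimensions, the averaging scheme, and the cosine-style matching loss are illustrative assumptions of this sketch; the authors' exact implementation is in the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectorEnsemble(nn.Module):
    """Average of several independently initialized single-layer projectors
    mapping student features to the teacher's feature dimension."""
    def __init__(self, s_dim: int, t_dim: int, num_proj: int = 3):
        super().__init__()
        self.projectors = nn.ModuleList(
            nn.Linear(s_dim, t_dim) for _ in range(num_proj)
        )

    def forward(self, f_s: torch.Tensor) -> torch.Tensor:
        # Ensemble by averaging the projected features.
        return torch.stack([p(f_s) for p in self.projectors]).mean(dim=0)

def distill_loss(f_s: torch.Tensor, f_t: torch.Tensor,
                 ensemble: ProjectorEnsemble) -> torch.Tensor:
    # Match the direction of (frozen) teacher features; a normalized-L2 /
    # cosine-similarity matching loss is one common choice.
    p = F.normalize(ensemble(f_s), dim=1)
    t = F.normalize(f_t.detach(), dim=1)
    return (1.0 - (p * t).sum(dim=1)).mean()
```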
Accept
The paper received 5 positive reviews and the reviewers increased/maintained their scores after the rebuttal. All the reviewers agree that the proposed method is simple yet effective, and the experiments are comprehensive. Overall, this work proposes an improved feature distillation method via projector ensemble. But I hope the authors will clearly discuss the computational costs brought by the multiple projectors, as suggested by the reviewers.
train
[ "ACD68DBNYh6", "a2vyYxm6Xz8", "-LViqi-zNa0", "i7L3jl4tZei", "rqVDubYaTzq", "Ixcxy0JR_FX", "Dh8ccs1dM3W", "G2l6y3tmsFv", "GvnUx6oSngl", "8_SK4TNgaBM", "T5OMXV4g7lO", "1qHzafPrddc", "r6cCftIJiXw", "tYV7GxVJQMH", "C4xb4MDiPZ3" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely appreciate your support. In the table for Q2, we use teacher-student pair DenseNet201-ResNet18 on ImageNet for demonstration. In this table, we report the training times of one epoch of different methods and record the peak GPU (an NVIDIA V100 GPU) memory usages of different methods with batch size of 256. Following the reviewer's suggestion, we will clarify these details in the revised manuscript and include the additional experiments in the supplementary materials.", " The authors provided more explanation and experiments in their response to reviewers. This is very helpful for reviewers to understand and evaluate this work. I would suggest the authors try to include those additional materials in the paper or supplementary materials so that the reader can have better understanding. \n\nAnother clarification for response of Q2, how are the training cost and memory usages computed? Is it the time to train 1 epoch? Is it the peak GPU memory with batch size of how many samples?", " We sincerely appreciate your support. Regarding your penetrating question, we train the additional 120 epochs based on the original 240 epochs and report the corresponding L2 distances between two projectors w.r.t different epoch numbers (with a step of 20 epochs), in the following table. In this table, we observe that the parameters of projectors tend to converge after 300 epochs and the L2 distance between them basically remain unchanged from 300 epochs to 360 epochs. \n\nEpoch | 240 Epochs |260 Epochs |280 Epochs |300 Epochs |320 Epochs |340 Epochs |360 Epochs\n| ---- | ---- |---- | ---- |---- | ---- |---- | ---- | \nL2 distance | 206.32 |205.15|203.98 |202.80 |202.68 |202.55 |202.43 \n\nOn the other hand, inspired by the reviewer's comments, we simply add a regularization term to explicitly promote the diversity between projectors. The goal of this regularization term is to maximize the L2 distance between projectors as a basic strategy. The results are shown in the following table. From some preliminary results, adding the regularization term can marginally increase the projectors' diversity after 160 epochs and improve distillation performance (approximately 0.2% top-1 accuracy on CIFAR100) . More sophisticated designs that can leverage the diversity of projectors are worth paying further research attention to, which will be included as a separate analysis in the revised manuscript. We thank the reviewer for providing this inspiring suggestion. Any further discussions, recommendations and support are appreciated very much.\n\nEpoch| 40 Epochs\t| 80 Epochs\t| 160 Epochs\t| 240 Epochs\n| ---- | ---- |---- | ---- |---- |\nw/o regularization |394.57\t|372.82\t|298.86\t|206.32 \nw/ regularization |384.53 |368.17 |301.39 |211.61 \n", " After reading the rebuttal and other reviews, I decide to improve the final rating of the submission. ", " The authors' comments have addressed most of my concerns. \n\nI have one more question regarding Q2. In the above comments, it is shown that the distances between projectors decrease during training. I wonder the corresponding performance changes. In other words, if we continue the training process even the model performance reaches saturation (prone to overfit), what will happen to the distance between projectors? I suggest the authors add some analysis about this.", " The authors are thankful for the reviewer's comments.\n\n**Q1: The novelty is limited. The single-layer projector is widely used in KD. 
Further, a simple extension from single-layer to multi-layer is rather simple. It is easy to come up with this idea.** \nA1: We respectfully disagree with the reviewer's comments. We have to emphasize that this paper **DOES NOT** propose to extend the projector from single-layer to multi-layer. In fact, as shown in Fig. 4 and mentioned in Line 204 of our manuscript, simply increasing the number of layers of the projector tends to degrade the distillation performance. Instead, this paper proposes to ensemble a series of single-layer projectors. The proposed idea is simple but efficient and effective. Besides, to the best of our knowledge, this is the first paper using the projector ensemble strategy to improve feature distillation performance. There are multiple factors that may prevent researchers from using the projector ensemble strategy (please refer to A1 to Reviewer 52Wo).\n\n**Q2: Lack of references in all of the tables. It is recommended to add references for the compared methods in the tables.** \nA2: Thanks for the reviewer's suggestion.", " We thank the reviewer for the positive comments.\n\n**Q1: The idea is simple but the authors should explore more options for an ensemble of projectors, e.g. an ensemble of projectors with various structures.** \nA1: In our experiments, we find that integrating projectors with similar architectures yields better performance. We select two commonly-used projector architectures for demonstration (i.e., 1x1 convolutional kernel and our single-layer MLP). We report the top-1 classification accuracy on CIFAR-100 with the pair ResNet32x4-ResNet8x4 in the following table. Besides, we also investigate the performance of our method using different initialization methods (please refer to A5 to Reviewer 52Wo).\n\nw/o Projector |Conv |MLP |Conv+Conv |MLP+MLP |Conv+MLP \n-|-|-|-|-|-\n73.66 |73.87 |75.14 |74.73 |**76.08** |75.96\n\n\n**Q2: Is there any limitation in the training of an ensemble of projectors? Is the training time/memory usage impacted by the number of projectors?** \nA2: The reviewer raises an interesting question. The following table (similar to Table 3 in our manuscript) evaluates the training costs and memory usages of different distillation methods and of the proposed method with different numbers of projectors. We can see that the training cost and memory usage of our method increase slightly with the number of projectors. On the other hand, in our experiments, the proposed method uses three projectors and outperforms the SOTA methods. With three projectors, the training cost and memory usage of our method are relatively lower than those of existing methods, as shown in the following table. \n\nComplexity|CRD |SRRL |CID |1-Proj |Ours(2-Proj)|Ours(3-Proj) |Ours(4-Proj)\n-|-|-|-|-|-|-|-\nTimes(s) |3,158 |3,026 |3,587 |2,995 |2,988 |2,995 |3,004 \nMemory(MB) |15,687 | 11,991 |12,021 |11,475 |11,523 |11,523 |12,215 \n\n**Q3: Line 251, which table is this referring to? Is it Table 1?** \nA3: Yes, it is Table 1. We will clarify this in the revised manuscript. \n", " The authors are thankful for the reviewer's comments.\n\n**Q1: This is an experiment-driven paper. The authors do lots of experiments and give empirical analyses. However, I still think the contributions are not enough for NeurIPS.** \nA1: The authors would like to argue that most existing distillation methods are experiment-driven since it is difficult to give deeper insight into the mechanism of knowledge distillation as well as deep learning. 
For contributions: (1) Technically and conceptually, differing from existing feature distillation methods that focus on the design of loss functions and the selection of the distilled layer, this paper demonstrates that it is also feasible to achieve promising distillation performance by simply modifying the architecture of the projector, which presents a new perspective for the improvement of feature distillation; (2) Experimentally, we conduct a comprehensive evaluation to illustrate the effectiveness of the proposed projector ensemble and show that the proposed method outperforms SOTA feature distillation methods with less training complexity. Based on the simplicity and effectiveness of the proposed method, we believe that this paper will be beneficial to the related research.\n\n**Q2: The authors should re-organize the Method section. It is not obvious how to get from one equation to the next. And the notations are somewhat confusing.** \nA2: Thanks for the reviewer's suggestion! We will improve the readability of the manuscript.\n \n**Q3: The hypothesis (Lines 125-126) is a little coarse. I think the main contribution of this paper is the analysis of the effectiveness of the projector. Therefore, the authors should give a deeper analysis.** \nA3: We give more analysis about the diversity and behaviors of projectors. Please refer to A2 and A3 to Reviewer n8Zt.\n\n**Q4: In experiments, a table is more suitable and clearer than a figure to demonstrate the performance. I recommend the authors use tables instead of Fig. 3 and Fig. 4.** \nA4: We thank the reviewer for this advice. We will take this recommendation.\n\n**Q5: Considering the feature distillation, some experiments on CUB-200-2011, Cars-196 and SOP should be included.** \nA5: Thanks for providing the related datasets. Due to the time limitation, we only test the performance of different methods on the CUB200 and Cars196 datasets. We transfer the knowledge of MobileNet trained on ImageNet to the CUB200 and Cars196 datasets. We freeze the parameters of the networks and re-train the last linear classifiers. The generalization performance of the networks distilled by different methods is shown in the following table. In this experiment, we report the top-5 classification accuracy (%) of different methods. Experimental results indicate that the proposed distillation method can significantly improve the generalization ability of networks on downstream tasks compared to the SOTA methods.\n\nDatasets |w/o distillation |CRD |CID |SRRL |Ours \n-|-|-|-|-|-\nCUB200 |89.62 |89.61 |90.21 |90.26 |**91.14** \nCars196 |79.33 |79.62 |77.59 |80.41 |**82.27** ", " The authors are thankful for the reviewer's comments. \n\n**Q1: The ensemble strategy is too simple. The authors simply add all the projected features together. In fact, the ensemble strategy plays a critical role in ensemble learning. In many situations, simply averaging the outputs of weak learners cannot bring improvements.** \nA1: We would like to argue that being simple should not be viewed as a weakness of the proposed method. Instead, we believe that simplicity is a strength of our method from the perspective of Occam's Razor. According to the experimental results in the submitted manuscript, the ensemble strategy can effectively improve the feature distillation performance and the proposed distillation framework can consistently outperform SOTA methods.\n\n**Q2: This paper lacks discussions about the projector behaviors, especially their diversity. 
Different initialization cannot guarantee enough variety between projectors. Moreover, the different projected features are simply added together to calculate losses. There is no special architecture and training designs to promote the projector diversity. In experiments, there is also no experimental analysis of the projector behaviors.** \nA2: We thank the reviewer for this insightful comment. We investigate the diversity and behaviors of projectors from the following two perspectives: (1) In the following table, we compute the L2 distance between two different projectors in our ensemble distillation framework. We can see that the diversity of projectors gradually decreases as training progresses. However, the projectors remain different from each other during training even though we do not use special training designs;\nEpoch |40 Epochs | 80 Epochs | 160 Epochs |240 Epochs \n-|-|-|-|-\nL2 distance |394.57|372.82 |298.86 |206.32 \n\n(2) We compute the average cosine similarities between teacher and student features transformed by two different projectors. We report the top-3 categories with the largest average cosine similarities obtained by different projectors in the following table. From this table, we can see that different projectors tend to fit teacher features of different classes (e.g., Proj-1 tends to fit samples of the 4th class, while Proj-2 tends to fit samples of the 22nd class), which indicates the diversity of projectors during and after training. \n\nEpoch |40 Epochs | 80 Epochs | 160 Epochs |240 Epochs \n-|-|-|-|-\nProj-1 |(26,93,4) |(3,4,11)|(4,55,26) |(4,55,26)\nProj-2 |(22,45,44) |(45,98,35) |(45,22,72) |(22,45,44) \n\n\n**Q3: The theoretical analysis in section 3.1 needs to be clearer. Also, the authors need to give explanations of why more projectors are better under their feature gradients perspective.** \nA3: In section 3.1, we hypothesize that the student network may better capture the global feature distribution by introducing a projector to preserve data information during back propagation. However, since projectors with random initialization may contain bias, as discussed in A2, we propose integrating multiple projectors. By taking the average of different projectors, the distribution of the projected features becomes smoother. We empirically verify this hypothesis by computing the standard deviation (the lower the better) of the average cosine similarities between teacher and student features in the following table. From this table, we can see that by introducing a projector to assist distillation, the student network better captures the global data distribution of the teacher, as indicated by its lower standard deviation compared to the student w/o Proj. Furthermore, by using an ensemble of projectors, the performance can be further improved. \n\nw/o Proj. |One Proj. |Two Proj. Ensemble |Three Proj. Ensemble\n-|-|-|-\n0.024 |0.018 |0.017 |0.016\n", " We thank the reviewer for the positive comments.\n\n**Q1: Are you sure that there are no more researchers who have used this method to improve the performance of their model?** \nA1: Yes. There are some factors that may prevent researchers from proposing the idea of projector ensemble. Firstly, as discussed in the manuscript, most existing feature distillation methods pay more attention to the design of loss functions (e.g., CRD, CID and SRRL) and the selection of the distilled layer (e.g., AFD and KR). Therefore, the effect of the projector is largely ignored. 
Secondly, a more common way to modify the projector's architecture is to increase the number of layers. However, results in Fig. 4 of our manuscript show that increasing the number of layers of the projector tends to degrade the distillation performance. \n\n**Q2: The number of projectors in the ensemble seems to be sensitive, how to determine this hyper-parameter appropriately in practice?** \nA2: Similar to the settings of hyper-parameters in previous methods, in our experiments, we determine the number of projectors via grid search and observe that the proposed method generally obtains a good trade-off between distillation performance and training cost by using an ensemble of three projectors. \n\n**Q3: The ablation study does not provide the effect of different numbers of projectors on distillation when the feature dimensions are different.** \nA3: Experimental results in the following table show that the ensemble of projectors can consistently improve the distillation performance when the feature dimensions of the student and teacher are different. In this table, we use the teacher-student pair ResNet50-MobileNet (the teacher outputs 2048-dimensional features and the student outputs 1024-dimensional features) and report the top-1 classification accuracy (%) on ImageNet.\n|1-Proj |2-Proj |3-Proj |4-Proj| \n|:-: | :-: | :-:| :-:|\n|72.75 |73.15(+0.4) |73.16(+0.41) |73.29(+0.54) | \n\n**Q4: ''The student network benefits from a projector even if the feature dimensions of the student and teacher are the same'', is this conclusion first discovered and proposed by the author of this paper? The word used in the abstract is ''observe''.** \nA4: We believe that some related researchers may have noticed this phenomenon. However, to the best of our knowledge, this paper makes the first attempt to comprehensively study the effect of projectors in feature distillation.\n\n**Q5: Initialization of different projectors is not clear, do different initialization methods have a big impact on the experimental results?** \nA5: In our experiments, we find that simply initializing different projectors with different seeds and the default initialization method for linear layers in PyTorch is sufficient to yield good performance. Therefore, we stick to this strategy to keep the proposed method as simple as possible. We also compare the distillation performance obtained by using different initialization methods in the following table. Experimental results show that mixing different initialization methods has a slight impact on the performance and is a potential way to further improve the distillation performance. We thank the reviewer for providing this suggestion and will explore it in future work. The top-1 classification accuracy on CIFAR-100 with the pair ResNet32x4-ResNet8x4 is as follows: \n\nKaiming Ini. |Orthogonal Ini. |Ours(Default Ini.) |Mixing Different Ini. 
\n:-: | :-: | :-:| :-:\n75.78 |76.12 |76.08 |76.27\n", " The authors discuss the phenomenon that using a projector on a student’s feature can improve the performance of distillation when the feature dimensions are the same.\nThey focus on the projector between teachers and students, which is unnoticed in the past.\nThen the authors propose a simple ensemble method to further improve the performance.\nThey conduct comprehensive experiments in this paper and the authors claim the superiority of their proposal.\nA complete story, from the phenomenon to the essence, which is simple and effective.\n Strengths\n1.\tThe method is simple and easy to implement.\n2.\tExperiments are detailed and code is provided.\n3.\tThis paper is well written.\nWeakness\n1.\tThe novelty of the proposed method is limited. Are you sure that there are no more researchers who have used this method to improve the performance of their model?\n2.\tthe number of projectors in the ensemble seems to be sensitive, how to determine this hyperparameter appropriately in practice?\n3.\tThe ablation study does not provide the effect of different numbers of projectors on distillation when the feature dimensions are different. \n“the student network benefits from a projector even if the feature dimensions of the student and teacher are the same” , Is this conclusion first discovered and proposed by the author of this paper? The word used in the abstract is “observe”.\n\nInitialization of different projectors is not clear, do different initialization methods have a big impact on the experimental results?\n As mentioned above in the Weakness.", " This paper proposes a feature distillation method based on an ensemble of multiple projectors. The model uses multiple projectors to project the features of a student model, and the Direction Alignment loss are calculated between the multiple projected features and the teacher features. Some analysis are given to explain why projector is helpful. Experimental results show that the proposed method outperforms previous competitors. Strengths:\n\n1. This paper focuses on the projectors in feature distillation models and proposes a simple but effective ensemble method. This angle is interesting and the proposed model can be widely applicated.\n\n2. The writing is clear and easy to understand. \n\n3. Experimental results show that the proposed model obtains promising results.\n\nWeaknesses:\n\n1. The ensemble strategy is too simple. The authors simply add all the projected features together. In fact, ensemble strategy plays a critical role in ensemble learning. In many situations, simply averaging the outputs of weak learners cannot bring improvements.\n\n2. This paper lacks discussions about the projector behaviors, especially their diversity. The authors only give a simple description of \"projectors with different initialization would provide different transformed features, which is beneficial to the generalizability of the student\". However, different initialization cannot guarantee enough variety between projectors. Moreover, the different projected features are simply added together to calculate losses. There is no special architecture and training designs to promote the projector diversity. In experiments, there is also no experimental analysis of the projector behaviors.\n\n3. The theoretical analysis is not clear enough. In section 3.1, the authors try to analyze why projector is helpful by comparing the feature gradients with and without projector. 
The explanation is that the non-linear transformation updated from previous data helps to better capture the global feature distribution. This is not convincing enough to me. Furthermore, this explanation cannot explain why multiple projectors are better. 1. I suggest the authors give some visualization results or experimental analysis of the projector behaviors during and after training. If different projectors show significant differences, it would be better to give some theoretical analysis about what brings them variety. \n2. The theoretical analysis in section 3.1 needs to be clearer. Also, the authors need to give explanations of why more projectors are better under their feature gradients perspective. The authors have addressed the limitations and potential negative societal impacts.", " This is an experiment-driven paper. The authors do lots of experiments and comparisons to demonstrate the effectiveness of feature projectors in feature distillation. Strengths:\n\nThe authors do lots of experiments and comparisons to show the superiority of feature projectors in feature distillation. \n\nWeaknesses:\n\n1. This is an experiment-driven paper. The authors do lots of experiments and give empirical analyses. However, I still think the contributions are not enough for NeurIPS. \n2. The authors should re-organize the Method section. It is not obvious from one equation to the next. And the notations are somewhat confused. \n - The notations $s^p$ and $W^p$ is confused. I suggest to use $t$ as the $t$-th iterations and use $W_p$ to represent the parameters of projector. It is more clear. \n - The Eq. (2), Eq. (3) and Eq. (4) should be more clear. It is not easy to understand, especially for the junior researchers. For example, Eq. (3) can be denoted as $\\frac{\\partial L_{DA}}{\\partial s_i^p}=\\frac{\\partial L_{DA}}{\\partial g(s_i^p)}\\frac{\\partial g(s_i^p)}{\\partial s_i^p}$. The last term of Eq. (4) can be denoted as $\\frac{\\partial L_{DA}}{\\partial g(s_i^p)}\\frac{\\partial g(s_i^p)}{\\partial W^p}$. \n3. The hypothesis (Line 125-126) is a little coarse. I think the main contribution in this paper is the analysis about the effectiveness of the projector. Therefore, the authors should give deeper analyses. \n4. In experiments, table is more suitable and clear to demonstrate the performance than figure. I recommend the authors use tables instead of Fig. 3 and Fig. 4. \n5. Considering the feature distillation, some experiments about CUB-200-2011, Cars-196 and SOP should be included. See weaknesses. See weaknesses. ", " The paper presents a feature matching-based distillation method that makes use of a set of feature projectors to better align the features of the student and teacher network. \nThe authors show that such an ensemble of projectors can improve the distillation performance further even if the student and teacher have the same feature dimension.\nSome analysis is conducted to show the reason of introducing a projector can benefit the student network learning.\nExperiments were performed on benchmark datasets using various teacher-student network structure pairs to compare with different methods. Strengths:\n- The paper is well written with comprehensive experiments.\nWeakness:\n- The idea is simple but the authors should explore more options for an ensemble of projectors, e.g. ensemble of various structure projectors. 1. Line 251, which table is this referring to? Is it table 1?\n2. Is there any limitation in the training of an ensemble of projectors? 
Is the training time impacted by the number of projectors and/or the depth of the projectors? The authors have a section discussing limitations and future work. However, it does not directly address the limitations of the multiple projectors, e.g., training time, memory usage, etc.", " This work proposed a new feature distillation method via projector ensemble. In particular, different from the traditional single-projector method, it introduces multiple projectors and computes the average output to perform KD. Further, two different designs, including a single-layer projector and an MLP projector, are discussed and compared. Experiments on popular datasets are conducted to verify the effectiveness of the method. The strengths are listed below:\n(1) The proposed method achieves significant performance improvement. \n(2) The proposed method is simple and easy to reproduce.\n\nThe weaknesses are listed below:\n(1) The novelty is limited. The single-layer projector is widely used in KD. Further, a simple extension from single-layer to multi-layer is rather simple. It is easy to come up with this idea. \n(2) Lack of references in all of the tables. It is recommended to add references of comparing methods in tables.\n Please refer to the weaknesses mentioned above. As mentioned in the paper, the proposed method is only verified on classification. Its effectiveness on other tasks can be further explored. " ]
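For readers following this thread, below is a minimal sketch of the projector-ensemble distillation discussed in the reviews and rebuttals above. It is an illustration under stated assumptions rather than the authors' released code: the module and function names and the feature dimensions are invented here, and plain cosine alignment is used as a stand-in for the paper's direction-alignment loss $L_{DA}$.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectorEnsemble(nn.Module):
    """Ensemble of K independently initialized single-layer projectors.

    As described in the replies above, each projector is a single
    linear layer; PyTorch's default initialization with different
    random seeds is enough to diversify them, and the projected
    student features are simply averaged.
    """
    def __init__(self, s_dim: int, t_dim: int, num_projectors: int = 3):
        super().__init__()
        self.projectors = nn.ModuleList(
            [nn.Linear(s_dim, t_dim) for _ in range(num_projectors)]
        )

    def forward(self, s_feat: torch.Tensor) -> torch.Tensor:
        # Average the K projected feature vectors.
        return torch.stack([p(s_feat) for p in self.projectors]).mean(dim=0)

def direction_alignment_loss(s_proj, t_feat):
    # Stand-in for the paper's L_DA: align the directions of the
    # projected student features with the teacher features.
    return (1.0 - F.cosine_similarity(s_proj, t_feat, dim=-1)).mean()

# Toy usage with illustrative 256-d penultimate features.
ensemble = ProjectorEnsemble(s_dim=256, t_dim=256, num_projectors=3)
s_feat, t_feat = torch.randn(8, 256), torch.randn(8, 256)
loss = direction_alignment_loss(ensemble(s_feat), t_feat.detach())
loss.backward()
```

Switching to the Conv+Conv or mixed Conv+MLP variants compared in the tables above would only change the per-projector module, not the averaging or the loss.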
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5, 5 ]
[ "a2vyYxm6Xz8", "Dh8ccs1dM3W", "rqVDubYaTzq", "G2l6y3tmsFv", "GvnUx6oSngl", "C4xb4MDiPZ3", "tYV7GxVJQMH", "r6cCftIJiXw", "1qHzafPrddc", "T5OMXV4g7lO", "nips_2022_nQcc_muJyFB", "nips_2022_nQcc_muJyFB", "nips_2022_nQcc_muJyFB", "nips_2022_nQcc_muJyFB", "nips_2022_nQcc_muJyFB" ]
nips_2022_XxmOKCt8dO9
ConfounderGAN: Protecting Image Data Privacy with Causal Confounder
The success of deep learning is partly attributed to the availability of massive data downloaded freely from the Internet. However, it also means that users' private data may be collected by commercial organizations without consent and used to train their models. Therefore, it's important and necessary to develop a method or tool to prevent unauthorized data exploitation. In this paper, we propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners. Specifically, the noise produced by the generator for each image has the confounder property. It can build spurious correlations between images and labels, so that the model cannot learn the correct mapping from images to labels in this noise-added dataset. Meanwhile, the discriminator is used to ensure that the generated noise is small and imperceptible, thereby remaining the normal utility of the encrypted image for humans. The experiments are conducted in six image classification datasets, including three natural object datasets and three medical datasets. The results demonstrate that our method not only outperforms state-of-the-art methods in standard settings, but can also be applied to fast encryption scenarios. Moreover, we show a series of transferability and stability experiments to further illustrate the effectiveness and superiority of our method.
Accept
All the reviewers were excited by the idea and the efficient method for solving a very critical problem, backed by rigorous experimental support. They all agreed that the paper is above the bar for publication. We hope the authors will further improve the paper for the camera-ready submission.
train
[ "5rTgAXT-4g", "8KliZsQTWsp", "ngcy_vaXJf", "vUE0LQV5sV8", "SUx0d-M1Kl", "8peBcxPt5gm", "BV9lH6mXQQ0", "J0bm0vzitKH", "RbM7os-79DKE", "xiXVb-8GP_", "tJUJl49fWu", "ZUrDSk7uP5F", "rcJgz2rPQnh" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your suggestion, although we believe that the current version of ConfounderGAN can be applied to most practical scenarios (i.e. as data owners usually don't reveal which encryption tool they use), we also note that it is important to design an encryption tool that strictly satisfies the Kerckhoffs's principle [14]. In future work, we will further improve ConfounderGAN so that it can work optimally under this principle.\n\nMeanwhile, we also add the above discussion to Appendix E (Line 573 – Line 576).\n\n[14] La cryptographie militaire. Petitcolas F. J. des Sci. Militaires, 1883.\n", " Thank you for the update.\n\nBased on the additional experiments performed, the paper update, and the discussion, I am willing to update my score to weak accept.\n\nI am still unsatisfied with the discussion on the adaptive settings. \n\nFirst, no matter how well ConfounderGAN is designed it should be assumed that the adversary can have access to it as well. Second, the adaptive attack is posterior to the release of ConfounderGAN. Assuming that data owners won't reveal the encryption tools they use in most scenarios is a form of security through obscurity, which violates the Kerckhoffs's principle that any good \"encryption\" system should follow.\n\n \n", " Dear Reviewer. Have you had a chance to look at our rebuttal and updated paper? We're eagerly awaiting your response to better address your concerns.", " We thank all reviewers for the detailed comments and constructive suggestions, which have undoubtedly improved the quality of our manuscript. We have uploaded the revised manuscript and appendix based on the reviewers’ feedback, and have highlighted changes from the original submission in red. We summarize the notable changes below, and refer to minor changes in the individual responses.\n\n* As requested by Reviewer oeZM and rw45, we have discussed the relation between our method and **data poisoning in related work**. \n* As requested by Reviewer oeZM and Bo5j, we have conducted the experiment to investigate the effectiveness of our method under the adaptive setting and given a detailed discussion about this setting in **Appendix D**. \n* As requested by Reviewer Bo5j, we add the performance comparison of our method and DeepConfuse under the out-of-distribution encryption setting in **Appendix E**.", " Thanks for the reviewer’s constructive feedback. We answer the questions in the Weaknesses and Questions:\n\n**Weaknesses: There are some typos throughout the paper.**\n\n**A.** Thank you for your careful review. For the typos in Line 12, Line 14 and Line 237 in the original manuscript, we have corrected them in the revised version.\n\n**Q1. Section 2, Related work reviews data privacy, data poisoning and causal confounder in DL. However, only data privacy and confounder in deep learning have been discussed. Has data poisoning been combined with data privacy? There was no clear demarcation in the discussion of the two concepts under Data Privacy.**\n\n**A1.** The relation between our method and traditional data poisoning is discussed in the related work of the revised manuscript. Note that some privacy protection methods and traditional data poisoning methods both deal with the training-time dataset. However, traditional data poisoning methods usually carry the purpose of malicious attack, while the motivations of our work, EMN [1], and Fawkes [2] are to protect the privacy of data owners. Therefore, we classify the latter into Data Privacy in the related work. 
The added details of data poisoning are as follows:\n\n**Data poisoning:** The goal of traditional data poisoning is to reduce the test accuracy of the model by modifying the training set. Biggio et al. [3] first introduce this type of attack for support vector machines [4]. Then Muñoz-González et al. [5] propose a deep learning version that poisons the most representative samples in the training examples. Although data poisoning attacks seem to share the same goal as ours, these methods have a limited impact on DNNs and are not suitable for data protection tasks, e.g., the model trained on poisoned examples can still achieve acceptable performance [5], and the modified image can be easily distinguished from the original one [6]. In addition, the backdoor attack is another type of attack that poisons training data with a trigger pattern [7,8,9], but this attack does not prevent the model from learning useful knowledge from the natural data. Therefore, traditional data poisoning methods cannot be used for data protection, while our proposed method can produce unlearnable examples with imperceptible noise. \n\n[1] Unlearnable Examples: Making Personal Data Unexploitable. Huang et al. ICLR, 2021. \n[2] Fawkes: Protecting privacy against unauthorized deep learning models. Shan et al. USENIX Security, 2020. \n[3] Poisoning attacks against support vector machines. Biggio et al. ICML, 2012. \n[4] Support vector machines. Hearst et al. IEEE Intelligent Systems and their Applications, 1998. \n[5] Towards poisoning of deep learning algorithms with back-gradient optimization. Muñoz-González et al. ACM Workshop on Artificial Intelligence and Security, 2017. \n[6] Generative poisoning attack method against neural networks. Yang et al. arXiv preprint arXiv:1703.01340, 2017. \n[7] Targeted backdoor attacks on deep learning systems using data poisoning. Chen et al. arXiv preprint arXiv:1712.05526, 2017. \n[8] Poison frogs! Targeted clean-label poisoning attacks on neural networks. Shafahi et al. NeurIPS, 2018. \n[9] Reflection backdoor: A natural backdoor attack on deep neural networks. Liu et al. ECCV, 2020.\n\n**Q2. Section 5.2.1 could be better structured for ease of reading.**\n\n**A2.** In the revised manuscript, we have reorganized the paragraphs of Section 5.2.1 to make their structure clearer.", " We thank the reviewer for the constructive feedback and answer each question below:\n\n**Q1. 1) For experiments on the different proportions of unlearnable examples, it does not make sense to compare Polyline 5 with others. Polyline 5 is still regarded as the 100% unlearnable case. 2) It would be interesting to see $D_{out,en}$ with $D_{nat}$.** \n\n**A1. 1)** Each point of Polyline 5 uses only p% of the dataset for training the ConfounderGAN. During the evaluation phase, the confounder property of our method needs to generalize to the (1-p)% out-of-distribution (OOD) data. If ConfounderGAN cannot achieve this, then Polyline 5 (p% $D_{in,en}$ + (1-p)% $D_{out,en}$) and Polyline 4 (p% $D_{in,en}$ + (1-p)% $D_{nat}$) will be close. The results show that Polyline 5 is significantly lower than Polyline 4, indicating that our ConfounderGAN is still effective for OOD data. We elaborate on the above statement in Section 5.2.2 of the manuscript. \n\n**A1. 2)** In this paper, our purpose is consistent with Huang et al. [1], i.e., to use ConfounderGAN to encrypt the original images as unlearnable examples, so that the model trainer cannot exploit useful knowledge from these processed images. 
When the training dataset is a mixed dataset consisting of encrypted images $D_{en}$ and natural images $D_{nat}$, it is an interesting question whether these encrypted images $D_{en}$ are still unlearnable. We explore this problem in Figures 4(c)~4(d) of the manuscript (the corresponding description is Line 274 - Line 283). Take bloodMNIST (Figure 4(d)) as an example.\nA quick glance at Curve 2 (EMN: 90% $D_{en}$+10% $D_{nat}$) and Curve 3 (Ours: 90% $D_{en}$+10% $D_{nat}$) tells us that the effectiveness of both EMN and our method drops quickly when the data are not made 100% unlearnable (encrypted).\nHowever, the result of Curve 4 (EMN: only 90% $D_{en}$) and Curve 5 (Ours: only 90% $D_{en}$) demonstrates that the 90% $D_{en}$ is still unlearnable, and that our method is more effective.\nMeanwhile, Curve 1 (only 10% $D_{nat}$), Curve 2 and Curve 3 are almost the same, showing that the exploitable knowledge in Curve 2 and Curve 3 mainly comes from the 10% natural data.\nTherefore, we conclude that images encrypted by our method or by EMN retain their unlearnable property even as part of a mixed dataset, and that our method maintains its superiority to some extent.\nAlthough this investigation is conducted under in-distribution encryption, it naturally generalizes to the out-of-distribution setting.\n\n[1] Unlearnable Examples: Making Personal Data Unexploitable. Huang et al. ICLR, 2021.\n\n**Q2. Legends on Fig 5 (c-d) color is not clear for EMN v.s. the proposed method for only $\mathcal{D}_{in}$.** \n\n**A2.** Thank you for your careful review. We have updated the color of Curve 5 in the legend to make it easier to distinguish from Curve 4.\n\n**Q3. Line 215, \"When the model trainer downloads these images...\" I believe the goal of unlearnable examples is to make the model unable to predict the protected classes/users rather than high-performing models.**\n\n**A3.** Thank you for your careful review. In the revised manuscript (Line 233), we have revised this description as follows: When the model trainers download these images, they cannot exploit useful knowledge from these processed images, thus protecting the data privacy of the data owner.\n\n**Q4. Line 237 \"we first\" -> \"We first\"** \n\n**A4.** Thank you for your careful review. We have corrected this typo in the revised manuscript.", " Meanwhile, for the adaptive setting proposed by Radiya-Dixit et al. [10], we think there are two important points that need to be clarified.\n\n**1) We believe that this adaptive setting makes unbalanced assumptions about the strength of the data owner's capability and the model trainer's capability, leaving the data owner (or crypto tool designer) on the weaker side.** \nRadiya-Dixit et al. [10] assume that the model trainer has full access to the encryption tool, while the crypto tool designer has no knowledge of the model trainer's decryption method. For example, in our paper, ConfounderGAN's designer does not know that the model trainer will use a denoiser to remove encryption noises. However, if we knew this information in advance, we might be able to introduce the knowledge of the denoiser into the training process of the ConfounderGAN, making it robust to the denoiser. 
An intuitive idea is to change the existing training architecture from \"original image -> generator -> confounder noise -> pretrain classifier\" to \"original image -> generator -> confounder noise -> **pretrain denoiser** -> pretrain classifier\", so that the confounder property can be preserved even if the generated noises encounter a denoiser in the future. Of course, this solution is very rudimentary. We will explore the optimal solution in future work, thus giving the data owner (or crypto tool designer) an edge in the arms race with the model trainer.\n\n**2) Since data owners usually don't reveal which encryption tool they use, we believe that the non-adaptive setting may be more practical than the adaptive setting in real-world scenarios.** \nRadiya-Dixit et al. [10] believe that the adaptive setting is practical in the real-world, and they give the following argument: encryption tools are usually publicly accessible applications, thus model trainers can adaptively train a feature extractor that resists these encryption noises. We agree that encryption tools are generally publicly accessible, but disagree that model trainers can adaptively train feature extractors. This is because multiple encryption methods will be proposed in the future. When data owners publish encrypted data, they won't reveal the encryption tools they use in most scenarios. Thus it is difficult for the model trainers to determine which encryption method should be used when adaptively training decryption feature extractors. In fact, referring to the community of adversarial examples [12,13], one encryption method can derive multiple instantiations by modifying the encryption constraints. For example, data owners can replace small pixel-wise perturbation with watermark [12], color channel perturbation [13], etc., according to their preferences. As long as the data owner does not expose the information of the encryption tool, the model trainer cannot decrypt it in an adaptive manner. Based on these analyses, we believe that the non-adaptive setting may be more practical than the adaptive setting in real-world scenarios.\n\nWe have added experiments and discussions about the adaptive setting in the **Appendix D** of the revised manuscript.\n\n[10] Data Poisoning Won't Save You From Facial Recognition. Radiya-Dixit et al. arXiv preprint arXiv:2106.14851, 2021. \n[11] Beyond a gaussian denoiser: Residual learning of deep cnn for image denoising. Zhang et al. TIP, 2017. \n[12] Adv-watermark: A novel watermark perturbation for adversarial examples. Jia et al. Proceedings of the 28th ACM International Conference on Multimedia. 2020. \n[13] Color channel perturbation attacks for fooling convolutional neural networks and a defense against such attacks. Kantipudi J et al. IEEE Transactions on Artificial Intelligence, 2020.", " **Q3. The current discussion on how attackers can bypass the defence is a bit limited. What happens when some users have both their perturbed and un-perturbed data online? Since ConfounderGAN can be obtained by any user, the attacker can also use it to train a denoiser. Overall, more discussion on the adaptive setting could help.**\n\n**A3.** We supplement an experiment to explore this problem. 
To better understand the adaptive setting proposed by [10], we first illustrate the assumption on the data owner’s capability and the model trainer’s capability under this setting:\n\n**Assumption on data owner’s capability:** A data owner processes some natural images $D_{nat}$ into encrypted images $D_{en}$ via ConfounderGAN and uploads them to social media. \n**Assumption on model trainer’s capability:** A model trainer knows that these images $D_{en}$ have been processed by ConfounderGAN and can directly access the generator of ConfounderGAN. The model trainer wishes to train a denoiser against the noise generated by the ConfounderGAN. However, the trainer cannot obtain the original images $D_{nat}$ corresponding to the encrypted images $D_{en}$, otherwise he/she can directly use these original images $D_{nat}$ to train the model. Therefore, in order to denoise the encrypted images $D_{en}$, the model trainer needs to do the following steps: 1) collect additional public images $D_{nat}^\\prime$. 2) feed the surrogate images $D_{nat}^\\prime$ into the generator of ConfounderGAN to build the encrypted images $D_{en}^\\prime$. 3) use $D_{nat}^\\prime$ and $D_{en}^\\prime$ to train a denoiser. 4) remove the noise of the encrypted images $D_{en}$ by the trained denoiser.\n\nWe conduct the experiment on CIFAR10 to investigate whether the adaptive denoiser can completely remove the effect of the encrypted noises. In practice, we divide the training set of CIFAR10 into two equally as $D_{nat}$ and $D_{nat}^\\prime$, and then use the above steps to obtain the denoised images, where the training of the denoiser follows DnCNN [11]. The experimental results are shown in the table below.\n\n| Training dataset | natural images $D_{nat}$ |denoised images |encrypted images $D_{en}$ |\n| :----: | :----: |:----: |:----: |\n| Test accuracy | 87.4\\% |77.9\\% |11.9\\% |\n\nThe result shows that although the denoiser can resist our encryption method to a certain extent, the model’s performance can still be significantly compromised by our confounder noises, which shows that our method retains its effectiveness under the adaptive setting.", " **Q2. I suggest, using \"perturbed\" images instead of \"encrypted\" images.**\n\n**A2.** In this paper, we name the noise-enhanced images as the ''encrypted'' images according to the effect of noises. This nomenclature is common in the field of security machine learning. For example, in adversarial attacks, the \"perturbed\" images are called ''adversarial'' images (examples), and in traditional data poisoning, the \"perturbed\" images are called ''poisoned'' images. Therefore, we believe that the term ''encrypted'' images can emphasize that the model trainer cannot exploit useful knowledge from these \"perturbed\" images. i.e., highlight the unlearnable property of images [1]. We also understand the reviewer's concern. To avoid ambiguity or misleading, we have made two changes in the revised manuscript: 1) Clarify what the term \"encryption\" means when it first appears in the main text (i.e., Line 37). 2) In the problem statement section (i.e., Line 131), we explicitly illustrate that the ''encrypted'' images in this paper are obtained by adding imperceptible perturbations to the original images.\n\n[1] Unlearnable Examples: Making Personal Data Unexploitable. Huang et al. ICLR, 2021.", " Thanks for the reviewer’s constructive feedback. 
We find that the comments of Weaknesses and Questions are consistent, thus we mainly focus on solving the three questions below.\n\n**Q1. Please consider discussing the relation between this solution and data poisoning in more detail.**\n\n**A1.** The relation between our method and traditional data poisoning is discussed in the related work of the revised manuscript. Note that some privacy protection methods and traditional data poisoning methods both deal with the training-time dataset. However, traditional data poisoning methods usually carry the purpose of malicious attack, while the motivations of our work, EMN [1], and Fawkes [2] are to protect the privacy of data owners. Therefore, we classify the latter into Data Privacy in the related work. The added details of data poisoning are as follows:\n\n**Data poisoning:** The goal of traditional data poisoning is to reduce the test accuracy of the model by modifying the training set. Biggio et al. [3] first introduce this type of attack in support vector machines [4]. Then Muñoz-González et al. [5] propose a deep learning version by poisoning the most representative samples in the training examples. Although data poisoning attacks seem to share the same goal as ours, these methods have a limited impact on DNNs and are not suitable for data protection tasks. e.g., the model trained on poisoned examples can still achieve acceptable performance [5], and the modified image can be easily distinguished from the original one [6]. In addition, the backdoor attack is another type of attack that poisons training data with a trigger pattern [7,8,9], but this attack does not prevent the model from learning useful knowledge in the natural data. Therefore, traditional data poisoning methods cannot be used for data protection, while our proposed method can produce unlearnable examples with imperceptible noise. \n\n[1] Unlearnable Examples: Making Personal Data Unexploitable. Huang et al. ICLR, 2021. \n[2] Fawkes: Protecting privacy against unauthorized deep learning models. Shan et al. USENIX Security. 2020. \n[3] Poisoning attacks against support vector machines. Biggio et al. ICML, 2012. \n[4] Support vector machines. Hearst et al. IEEE Intelligent Systems and their applications, 1998. \n[5] Towards poisoning of deep learning algorithms with back-gradient optimization. Muñoz-González et al. ACM workshop on artificial intelligence and security, 2017. \n[6] Generative poisoning attack method against neural networks. Yang et al. arXiv preprint arXiv:1703.01340, 2017. \n[7] Targeted backdoor attacks on deep learning systems using data poisoning. Chen et al. arXiv preprint arXiv:1712.05526, 2017. \n[8] Poison frogs! targeted clean-label poisoning attacks on neural networks. Shafahi et al. NeurIPS, 2018. \n[9] Reflection backdoor: A natural backdoor attack on deep neural networks. Liu et al. ECCV, 2020.", " This paper proposes ConfounderGAN, a GAN whose generator can be used to create a noise to an image to make it *unlearnable*, by creating a spurious correlation between the image and the label. The proposed approach has been evaluated on several image classification tasks and the results show it can help reduce the accuracy of a model trained on noisy data in the *non-adaptive* setting. \n**Strengths** \n\nThe paper tackles an important issue. The results seem promising. However, several issues need to be addressed to make the paper more convincing.\n\n**Weaknesses** \n\n- The proposed approach seems a lot like a data poisoning attack. 
However, discussions on data poisoning and its relation to this solution are missing. Instead, terms like encryption are used, which can be misleading. \n- The paper did not discuss the asymmetry between users and attackers as discussed in recent literature (e.g., [a]), which may give a false sense of security to users as these types of countermeasures have been proven to be ineffective.\n\n[a]- Data Poisoning Won't Save You From Facial Recognition. (Radiya-Dixit et al., 2021) \n- Please consider discussing the relation between this solution and data poisoning in more detail. \n- I suggest, using \"perturbed\" images instead of \"encrypted\" images\n- The current discussion on how attackers can bypass the defence is a bit limited. What happens when some users have both their perturbed and un-perturbed data online? Since ConfounderGAN can be obtained by any user, the attacker can also use it to train a denoiser. Overall, more discussion on the adaptive setting could help. Nothing to report.\n", " This paper proposed using GAN to produce confounder noise for unlearnable examples. The proposed method address an important issue that personal data is being used for unauthorized machine learning training. Unlike existing methods that require bi-level optimizations with multiple backward passes, the proposed method can generate confounder noise in a forward pass after training, making it very practical in a real-world application. Empirically, the proposed method outperforms existing methods. \n Strengths\n- Well-motivated method for efficiently generating unlearnable examples, and the context of unauthorized machine learning training is well explained. \n- The proposed method is efficient and technically sound. Existing works rely on optimizations that may not be practical for a user to generate unlearnable examples on the fly. Using GAN, the unlearnable version of the image can be generated in a forward pass, which improves usability in a practical setting. \n- Comprehensive empirical evaluations of different datasets and models are appreciated. Results demonstrated the proposed method consistently outperforms existing methods.\n\n---\nWeaknesses/Limitations:\n- For experiments on the different proportions of unlearnable examples, it does not make sense to compare Polyline 5 with others. Polyline 5 is still regarded as the 100% unlearnable case. It would be interesting to see $ D_{out,en} $ with $D_{nat}$. \n- Legends on Fig 5 (c-d) color is not clear for EMN v.s. the proposed method for only $ D_{in,en} $\n- Line 215, \"When the model trainer downloads these images...\" I believe the goal of unlearnable examples is to make the model unable to predict the protected classes/users rather than high-performing models. \n- Line 237 \"we first\" -> \"We first\"\n- Once the data is released, the defender may not modifies the data anymore, and the model trainer can retroactively apply new models/methods [1]. An adaptive case should be carefully examined.\n- Comparison with DeepConfuse [2], which also able to generate unleranable samples for $ D_{out,en} $\n\n[1] Data Poisoning Won’t Save You From Facial Recognition, ICML 2021 Workshop AML\n[2] Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder. NeurIPS 2019\n\n\n---\nAfter the author's response, I increased my rating score to 7. \n\nA detailed analysis of the adaptive method to reverse the original image is well explained and discusses the potential limitation of the proposed method. 
Based on the author's response, in practice, the owner should keep the parameters of ConfounderGAN private to prevent the model trainer from reversing the unlearnable data. Please address the questions in the Strengths And Weaknesses section.\n\n Please address the potential limitations in the Strengths And Weaknesses section.", " This paper proposes a GAN that makes personal image data unlearnable by DL methods for data protection. The authors utilize the confounder property present in the noise produced by the generator. This property builds spurious correlations between images and labels, preventing the model from learning the correct mappings. The discriminator is used to ensure that this generated noise is undetectable. The authors conduct experiments using six image classification datasets, 3 of which are natural object datasets and 3 are medical datasets. Specifically, a confounder-based framework has been proposed for image data encryption. The paper is very well written, the introduction to concepts is well laid out, and the structure of the paper is clear. The authors have done a thorough study of the past works in this particular domain. There are, however, some typos throughout the paper. For example: Line 12 -> \"thereby, remaining the normal utility\", Line 14 -> \"The experiments are conducted in six image classification datasets, including three natural object datasets and three medical datasets\" (this reads slightly ambiguously, so a better choice of word than including might be consisting/comprising), Line 237 -> sentence capitalization. Section 2, Related work reviews data privacy, data poisoning and causal confounder in DL. However, only data privacy and confounder in deep learning have been discussed. Has data poisoning been combined with data privacy? There was no clear demarcation in the discussion of the two concepts under Data Privacy.\nSection 5.2.1 could be better structured for ease of reading. There are very few possible negative societal impacts, and those are not straightforward in nature. The authors have, however, discussed the positive impact, which is data privacy in DL." ]
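To make the four-step adaptive attack described in the authors' response (collect surrogate images, encrypt them with the public generator, fit a denoiser on the pairs, then denoise $D_{en}$) concrete, here is a minimal sketch. It is an assumption-laden illustration, not the released code: the network depth/width, the optimizer settings, and the interface `generator(x)` returning a bounded noise tensor are all invented here; the residual architecture loosely follows DnCNN [11].

```python
import torch
import torch.nn as nn

class SimpleDnCNN(nn.Module):
    """DnCNN-style residual denoiser [11]; depth and width are assumptions."""
    def __init__(self, channels=3, depth=8, width=64):
        super().__init__()
        layers = [nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1),
                       nn.BatchNorm2d(width), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.net(x)  # residual learning: the net predicts the noise

def train_adaptive_denoiser(generator, surrogate_loader, epochs=10, lr=1e-3):
    """Steps 1-3 of the adaptive attack: encrypt the surrogate images
    D'_nat with the public generator, then fit a denoiser on the
    (encrypted, clean) pairs. Step 4 applies the returned denoiser
    to the target encrypted set D_en."""
    denoiser = SimpleDnCNN()
    opt = torch.optim.Adam(denoiser.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for clean, _ in surrogate_loader:
            with torch.no_grad():
                # Assumed interface: generator(x) returns bounded noise.
                encrypted = (clean + generator(clean)).clamp(0.0, 1.0)
            opt.zero_grad()
            loss = mse(denoiser(encrypted), clean)
            loss.backward()
            opt.step()
    return denoiser
```

Under this protocol, the table in the authors' response (87.4% natural vs. 77.9% denoised vs. 11.9% encrypted on CIFAR10) quantifies how much of the protection such a denoiser can and cannot undo.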
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "8KliZsQTWsp", "BV9lH6mXQQ0", "BV9lH6mXQQ0", "nips_2022_XxmOKCt8dO9", "rcJgz2rPQnh", "ZUrDSk7uP5F", "J0bm0vzitKH", "RbM7os-79DKE", "xiXVb-8GP_", "tJUJl49fWu", "nips_2022_XxmOKCt8dO9", "nips_2022_XxmOKCt8dO9", "nips_2022_XxmOKCt8dO9" ]
nips_2022_0GRBKLBjJE
A Fast Post-Training Pruning Framework for Transformers
Pruning is an effective way to reduce the huge inference cost of Transformer models. However, prior work on pruning Transformers requires retraining the models. This can add high training cost and high complexity to model deployment, making it difficult to use in many practical situations. To address this, we propose a fast post-training pruning framework for Transformers that does not require any retraining. Given a resource constraint and a sample dataset, our framework automatically prunes the Transformer model using structured sparsity methods. To retain high accuracy without retraining, we introduce three novel techniques: (i) a lightweight mask search algorithm that finds which heads and filters to prune based on the Fisher information; (ii) mask rearrangement that complements the search algorithm; and (iii) mask tuning that reconstructs the output activations for each layer. We apply our method to BERT-base and DistilBERT, and we evaluate its effectiveness on GLUE and SQuAD benchmarks. Our framework achieves up to 2.0x reduction in FLOPs and 1.56x speedup in inference latency, while maintaining < 1% loss in accuracy. Importantly, our framework prunes Transformers in less than 3 minutes on a single GPU, which is over two orders of magnitude faster than existing pruning approaches that retrain the models.
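Read alongside the reviews and responses below, here is a minimal sketch of the diagonal-Fisher importance score behind the mask search summarized in this abstract. It is a deliberate simplification under stated assumptions: the `mask` tensor is assumed to be wired into the model so that it gates attention heads and FFN filters (that wiring is not shown), and the resource-constrained search of the actual framework is reduced to a plain top-k selection.

```python
import torch

def fisher_importance(model, mask, loss_fn, sample_loader):
    """Accumulate the diagonal of the Fisher information for each
    head/filter mask variable: the squared gradient of the loss
    w.r.t. the mask, summed over the sample dataset."""
    importance = torch.zeros_like(mask)
    for inputs, targets in sample_loader:
        loss = loss_fn(model(inputs), targets)
        # Assumes `mask` requires grad and participates in the graph.
        (grad,) = torch.autograd.grad(loss, mask)
        importance += grad.detach().pow(2)  # Fisher ~ E[g g^T]; keep the diagonal
    return importance

def warmup_mask(importance, num_keep):
    # Greedy warmup mask under the diagonal assumption: keep the
    # num_keep most important heads/filters and prune the rest.
    mask = torch.zeros_like(importance)
    mask[importance.topk(num_keep).indices] = 1.0
    return mask
```

As the author responses below explain, this diagonal warmup mask is only the first stage; the rearrangement stage then revises it using block-diagonal (off-diagonal) Fisher information, and mask tuning reconstructs layer outputs.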
Accept
The authors deliver on what they promise: a fast post-training pruning framework for transformers. It reduces the inference costs of deploying transformers while preserving much or all of their accuracy on the standard range of academic downstream tasks. Moreover, it does so without the hefty costs that typically come with prune-and-retrain cycles. The paper is clearly written and well-presented, and the technique seems to work quite well. The authors seemed to satisfactorily address all reviewer concerns, and those concerns were minor at best. What more can you ask for? I look forward to visiting the poster at NeurIPS and trying this technique myself. The authors are to be especially commended for focusing on real-world speedup on real hardware. That's (sadly) still a rarity in pruning papers. This is something that appears genuinely useful, today, by practitioners.
train
[ "5XV3yhzitRi", "f07FbpOaNC_", "8jh5nAdISi_o", "8HsqMR-Xh-S", "P30SE6-eFQH", "dcRfHPda7sg", "Pw1aLo_ErZX", "vWG-YAjxQJL", "iQB9-BELUb", "sSV1Y2960CD", "7e5_6VgnVW", "rO9dYgiVg5P" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the author for the rebuttal. The author solves my concern.", " ==================post rebuttal============ \n\nMy concerns are addressed. I raise up my score. Thanks.", " Thanks for providing the feedback. The rebuttal addresses my concerns. I would like to keep my acceptance recommendation. ", " Thank you for your detailed review and suggestions. We have added the new ablation experiments in our paper. Here we address your questions in detail:\n\n> **[Question1]** Figure 3 (Left) only shows that the actual speed is linear to the number of filters, which does not support the claim in line206-207. I suggest authors re-plot Figure 3 (Left) with fewer filter number.\n\nThat is an excellent point. We have re-plotted Figure 3 with fewer number of filters so that the non-linearity of LAT is more clearly shown. The threshold numbers of filters that the latency starts increasing are 1536, 672, 288 for the batch size 8 (blue line), 16 (green line), 32 (yellow line), respectively.\n\n> **[Question2]** Technical merits are limited. Each component used in the three-stage pruning has been used in prior works. For example, taking Fisher Information Matrix as an importance score and tuning the mask variables toward minimizing the layer-wise reconstruction error.\n\nWhile the individual methods might not be regarded as new in isolation, we are the first attempt to bring and carefully adjust these ideas to the re-training free setting of Transformers. For instance, using the Fisher information matrix as a sensitivity matrix to guide pruning (or other model compression methods) has been explored in prior literature; however, our mask search and re-arrangement are a novel approach to find an accurate pruning pattern under a *block*-diagonal assumption of the Fisher information matrix by bringing down the approximation problem into a tractable scale. This was particularly critical for the re-training free setting where (i) the pruning pattern should be accurate enough to retain critical information without retraining and (ii) the search time should be fast. Taken together, our result proves the potential and efficacy of re-training free pruning for Transformer architectures for the first time, which we expect to have a substantial academic and industrial impact.\n\n> **[Question3]** I can not understand why the mask by Eq. (4) and Eq. (11) is different because the importance score is the same. Line 237 claims that $||m_l||_0$ is equal to $||m_l^\\ast||$. Line 242 claims that $\\mathcal{I}_l$ is the $l$-th diagonal block of $\\mathcal{I}$. In such situation, $m_l^\\ast$ selected by $\\mathcal{I}$ and $m_l$ selected by $\\mathcal{I}_l$ should be the same. This can be proved by contradiction.\n\nSince the Fisher information matrix is not in fact diagonal nor block-diagonal, the masks obtained from the diagonal/block-diagonal/full Fisher information matrix can be different. Below we provide an example to illustrate this. Assume that we have a two-layer NN where each layer has 3 filters (i.e., 6 filters in total), and we have to prune 3 out of the 6 filters. 
Say the full Fisher information matrix $\\mathcal{I}$ is as follows:\n\n$\\mathcal{I} = \n\\begin{pmatrix}\n0 & -1 & 2 & 0 & 0 & 1 \\\\\\\\\n0 & 4 & -3 & 1 & 0 & 0 \\\\\\\\\n1 & 2 & 5 & 0 & -1 & 0 \\\\\\\\\n0 & 0 & 0 & 2 & 5 & 3 \\\\\\\\\n0 & 1 & 0 & 1 & 1 & -7 \\\\\\\\\n-1 & 0 & 0 & 4 & -4 & 3 \\\\\\\\\n\\end{pmatrix}$\n\nUnder the diagonal assumption, we can first obtain the warmup mask $\\text{m}^{\\ast}$ as $(m_1, m_2, m_3, m_4, m_5, m_6) = (0, 1, 1, 0, 0, 1)$ using Algorithm 1. Then, by the block-diagonal approximation, we decompose $\\mathcal{I}$ into two sub-matrices $\\mathcal{I}_1$ and $\\mathcal{I}_2$:\n\n$\\mathcal{I}_1 = \n\\begin{pmatrix}\n0 & -1 & 2 \\\\\\\\\n0 & 4 & -3 \\\\\\\\\n1 & 2 & 5 \\\\\\\\\n\\end{pmatrix}, \\quad\n\\mathcal{I}_2 = \n\\begin{pmatrix}\n2 & 5 & 3 \\\\\\\\\n1 & 1 & -7 \\\\\\\\\n4 & -4 & 3 \\\\\\\\\n\\end{pmatrix}$\n\nBy the warmup constraint, $|m_1| + |m_2| + |m_3| = 2$ and $|m_4| + |m_5| + |m_6| = 1$. Here, the optimal mask for $\\mathcal{I}_1$ is $(m_1, m_2, m_3) = (0, 1, 1)$ (i.e., the mask variables are not re-arranged). However, as the non-diagonal elements are considered, the optimal mask for $\\mathcal{I}_2$ changes to $(m_4, m_5, m_6) = (1, 0, 0)$. Thus, the re-arranged mask $\\hat{\\text{m}} = (0, 1, 1, 1, 0, 0)$ is different from the warmup mask $\\text{m}^\\ast = (0, 1, 1, 0, 0, 1)$.", " > **[Question4-1]** The ablation experiments are not clear enough. There are two ablative modules in Figure 6, including importance score and mask search algorithm. Such an ablation study with one more ablative module can not support the claim of Line 319. The ablation experiment in Figure 7 should be divided into two experiments. 1) The importance score. Keeping the three-stage pruning pipeline and just modifying the importance score, which can reflect the efficacy of the importance score.\n\nIn Appendix A.12, we conduct experiments to demonstrate the efficacy of our Fisher-based importance score. We compare the performance when the three different importance metrics (i.e., weight magnitude, gradient-based, Fisher) are plugged into our pruning pipeline. However, we note that while our mask search algorithm can be used with any importance metric, the mask re-arrangement technique requires the Fisher information matrix because the algorithm requires signals that capture the interaction between mask variables. Hence, we designed the following two ablation experiments. First, we skip the mask re-arrangement stage for all importance metrics (Figure 9, added). Second, we include the Fisher-based mask re-arrangement stage for all importance metrics (Figure 10, added). In both experiments, our Fisher-based importance score consistently leads to the highest accuracy.\n\n> **[Question4-2]** 2) Mask search and rearrangement. Keeping the importance score and using uniformly prune, can reflect the efficacy of mask search and re-arrangement.\n\nTo show the effectiveness of mask search and re-arrangement, we compare its performance with uniform Fisher pruning, which prunes every layer with the same sparsity level. Mask tuning is applied to both methods. Figure 11 in Appendix A.13 shows that the accuracy of our method significantly outperforms that of uniform pruning by up to 8.6\\%. The result demonstrates the necessity of our mask search and re-arrangement techniques in finding quality binary masks.\n\n**[Typos]** We thank the reviewer for letting us know the typos. They are fixed in the updated paper.\n", " Thank you for your review and thoughtful comments. 
Here we address your questions:\n\n> **[Weakness1]** Firstly, it seems that the proposed method is not limited to pruning transformers; it can be also applied to other models like CNNs by just using the channel mask. Is there any consideration why the work is limited to transformers? Will the algorithm perform well on other models like CNNs?\n\n> **[Question1]** The authors may discuss if the proposed method is only restricted to transformers. Can it be applied to other models like CNNs? Is it possible that the post-training method only works well here due to the large redundancy of BERT models on a specific downstream dataset?\n\nIn this paper, we focused on Transformer pruning because the paper was motivated by our initial observation that existing post-training CNN pruning methods cannot be applied to Transformers, which we elaborate in [Weakness2]. However, we think our method can be extended to CNNs as well with a minor modification in Algorithm 1. The current version of Algorithm 1 leverages the fact that the input tensor shape is the same across every layer in a Transformer, which is not the case for CNNs. With a further consideration for this difference, our method can be extended to CNNs, which we will clarify in the final version of the paper.\n\nMoreover, we acknowledge the reviewer's concern that our post-training pruning method might have benefited from the large redundancy of BERT models; however, we would also like to highlight that our framework was effective for pruning DistilBERT, which is already a compressed model with significantly less redundancy. \nFurthermore, our extensive experiments over 8 different tasks (including classification, regression, and question answering) demonstrate that our framework can consistently achieve good performance across different tasks as well.\n\n> **[Weakness2]** There are some other works on post-training channel pruning (e.g., [a]). It would be better if the authors can show the proposed method can also outperform general data-free pruning methods when applied to transformers. But I also agree the current experimental results are already solid.\n\nWhile there exist post-training pruning methods for CNNs, we find it difficult to extend those techniques to Transformer pruning because their underlying ideas are often tightly coupled with the architectural characteristics of CNNs. For example, Neuron Merging [1] exploits the equation $\\text{ReLU}(ax) = a\\text{ReLU}(x)$ if $a \\geq 0$, which does not hold for GELU. For another example, RED [2] requires a model to be a repeating structure of linear layers and element-wise activations, and thus cannot be applied to MHA layers.\n\nThe proposed method in [3] ([a] in the reviewer's comment) can be applied to Transformer pruning. However, we did not consider it as our baseline or competitor method as it is an unstructured pruning method whereas our main focus in the paper is on structured pruning. Moreover, we show in Figure 6 that magnitude pruning of Transformers as in [3] leads to significant accuracy drop without re-training.\n\n[1] Kim, Woojeong, et al. \"Neuron merging: Compensating for pruned neurons.\" Advances in Neural Information Processing Systems 33 (2020): 585-595.\n\n[2] Yvinec, Edouard, et al. \"RED: Looking for Redundancies for Data-FreeStructured Compression of Deep Neural Networks.\" Advances in Neural Information Processing Systems 34 (2021): 20863-20873.\n\n[3] Lazarevich, Ivan, Alexander Kozlov, and Nikita Malinin. 
\"Post-training deep neural network pruning via layer-wise calibration.\" 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). IEEE, 2021.\n\n> **[Societal Impacts]** There lacks a discussion of potential negative societal impacts. It might be acceptable due to the nature of the work; but a discussion is still encouraged.\n\nThank you for your suggestion. We have added a brief discussion on the potential societal impacts of this paper in Appendix A.14. Due to the space limit, we did not include the section in the main paper. ", " Thank you for your review and positive feedback. We have added performance comparison with CoFi in the updated version of the paper. Here we address your comments in detail:\n\n> **[Weakness1]** This paper should compare their algorithms with some SOTA pruning methods on transformer such as COFI.\n\nWe thank the reviewer for introducing the new related work. In Figure 5, we have added the performance of CoFi on SST-2 and MRPC datasets without knowledge distillation and data augmentation. \nWe will add more data points including the results on the QQP and QNLI datasets in the final version of the paper.\n\nThe results align with our previous experiments in which our framework exhibits comparable or better accuracy than the prior works even without re-training of the pruned model. On the MRPC dataset, CoFi marginally outperforms our results. However, on the SST-2 dataset, CoFi shows larger accuracy drop than ours. We would also like to note that CoFi requires at least 40 epochs for pruning and re-training, which amounts to **7 GPU hours** for SST-2. In contrast, the end-to-end time of our pruning method is **only 39 seconds**. Pruning time comparison on the MNLI dataset is also provided in Table 1.\n\n> **[Question1]** How to obtain latency exactly? Is the latency become different when obtain them from different hardware and inference engines.\n\nFor latency objective pruning, our framework takes a latency lookup table as an input. The lookup table can be obtained by measuring the MHA/FFN layer latencies with different numbers of heads/filters, using the target hardware and inference engine. In this way, our framework adapts to diverse hardware and software backends. For the results in Table 3 in Appendix, we used PyTorch with a V100 GPU to generate the lookup table and to measure the latencies of the pruned models.", " Thank you for your review and positive feedback. Here we address your question:\n\n> **[Question1]** The reason to use second-order Taylor. It seems that first-order Taylor can be\nfaster and doesn’t lose much accuracy? Some other works use first-order.\n\nThis is a good question. Many of the previous works used first-order Taylor expansion and ignored the second order term, mostly because of its computational cost. However, the Hessian matrix can be efficiently approximated by Fisher information matrix, which can be computed using gradients. Thus, our Fisher-based importance score can be considered as an efficient and more accurate substitute for first-order based methods. Our empirical results and end-to-end timings also support this.", " The paper proposed a post-training pruning framework for Transformers, without retraining. It prunes of both heads in MHA and filters in FFN layers in a structured way. The process is done by applying a lightweight Fisher based mask search along with a Fisher mask rearrangement and mask tuning. The results are comparable or even better FLOPs-accuracy trade-off than prior methods. 
", " The paper proposed a post-training pruning framework for Transformers, without retraining. It prunes both heads in MHA and filters in FFN layers in a structured way. The process is done by applying a lightweight Fisher-based mask search along with Fisher mask rearrangement and mask tuning. The results show a comparable or even better FLOPs-accuracy trade-off than prior methods. Strengths: The paper is well written, and the authors presented the method in quite some detail. The post-training pruning framework does not require retraining, which is very good. Taking latency constraints into consideration is also good.\nThe experiments are quite sufficient and are able to support their claims and conclusions. The results show the effectiveness of the proposed methods. The paper also compared existing structured pruning works on GLUE.\n\n 1. The reason to use second-order Taylor. It seems that first-order Taylor can be faster and doesn't lose much accuracy? Some other works use first-order [1][2].\n\n\n[1] Accelerating Sparse DNN Models without Hardware Support via Tile-Wise Sparsity\n[2] Chasing sparsity in vision transformers: an end-to-end exploration Please see the above comments. I will consider changing my ratings based on the authors' rebuttal.\n\n", " This paper proposes three techniques to obtain a high-accuracy Transformer without retraining: a search algorithm to find which heads and filters need to be pruned based on the Fisher information; an algorithm that rearranges the mask and complements the search algorithm; and a mask-tuning step that reconstructs the output activations for each layer. The experiments show the authors can get better results than the compared methods. Strengths:\n\nThis paper is well-written, well-motivated, and clearly presented.\n\nThe proposed algorithm improves Transformer throughput efficiency with competitive accuracy and small latency, outperforming prior pruning and distillation approaches.\n\nFormulating the hardware-aware structural pruning as a knapsack problem is interesting.\n\nWeaknesses:\n\nThis paper should compare their algorithms with some SOTA pruning methods on Transformers, such as CoFi [1].\n\n[1] Xia M, Zhong Z, Chen D. Structured pruning learns compact and accurate models. arXiv preprint arXiv:2204.00408, 2022. 1. How is latency obtained exactly? Does the latency differ when obtained from different hardware and inference engines?\n\n Please see my weaknesses", " In this paper, the authors proposed a fast post-training pruning framework for transformer-based language models which does not require retraining. The algorithm takes a model, a sample dataset, and a compression constraint to generate the compressed model. It introduces three techniques to retain high accuracy: 1. mask search; 2. mask rearrangement; 3. mask tuning. Experiments show that the proposed method achieves a 2x FLOPs reduction and a 1.6x speedup within a 1% accuracy drop. Strengths:\n1. Firstly, the paper is well written and easy to follow. \n2. The proposed method solves a complex optimization problem by introducing several approximations and using a multi-step approach. The process is introduced clearly. The effectiveness of each step is demonstrated by ablation studies. \n3. The experimental results are solid. The proposed method achieves similar performance compared to existing work without a large training cost, which could be valuable to real-life applications. \n4. I like the discussion of latency-aware compression, where the authors used a piece-wise linear function to approximate the latency LUT, which is integrated into the optimization objective. It is a smart design to fit both settings under the same optimization framework. \n\nWeakness:\n1. Firstly, it seems that the proposed method is not limited to pruning transformers; it can also be applied to other models like CNNs by just using the channel mask. 
Is there any consideration why the work is limited to transformers? Will the algorithm perform well on other models like CNNs? \n2. There are some other works on post-training channel pruning (e.g., [a]). It would be better if the authors can show that the proposed method can also outperform general data-free pruning methods when applied to transformers. But I also agree the current experimental results are already solid.\n\n[a] Lazarevich et al., Post-training deep neural network pruning via layer-wise calibration, ICCVW. \n 1. The authors may discuss if the proposed method is only restricted to transformers. Can it be applied to other models like CNNs? Is it possible that the post-training method only works well here due to the large redundancy of BERT models on a specific downstream dataset?\n There lacks a discussion of potential negative societal impacts. It might be acceptable due to the nature of the work; but a discussion is still encouraged. ", " This paper proposes a three-stage post-training pruning framework for Transformers. It first uses Fisher information to search the binary mask, i.e., the layer-wise pruning rate. Then, the framework modifies the binary mask in a layer-wise manner. Lastly, it tunes the non-zero-valued mask to minimize the layer-wise reconstruction error.\n\nSuch a framework could retain the performance of the model without retraining, thus it can finish the pruning in less than 3 minutes on a single GPU and can obtain an actual speedup in inference because of structured pruning.\n\nExtensive experiments show that the proposed post-training pruning framework has comparable performance with prior methods.\n 1. To my knowledge, this is the first work to use post-training pruning in Transformers. I recognize the contribution of applying technologies to new areas.\n\n2. Proposing simple mask search solutions based on FLOPs and latency, which avoids user intervention.\n\n3. Experiments show the efficacy of the proposed framework, which could retain high accuracy without retraining. 1. Figure 3 (Left) only shows that the actual speed is linear in the number of filters, which does not support the claim in lines 206-207. I suggest the authors re-plot Figure 3 (Left) with smaller filter numbers.\n\n2. Technical merits are limited. Each component used in the three-stage pruning has been used in prior works. For example, taking the Fisher information matrix as an importance score and tuning the mask variables toward minimizing the layer-wise reconstruction error.\n\n3. I cannot understand why the masks given by Eq. (4) and Eq. (11) are different, because the importance score is the same. Line 237 claims that $||m_l||_0$ is equal to $||m_l^*||_0$. Line 242 claims that $\mathcal{I}_l$ is the $l$-th diagonal block of $\mathcal{I}$. In such a situation, $m_l^*$ selected by $\mathcal{I}$ and $m_l$ selected by $\mathcal{I}_l$ should be the same. This can be proved by contradiction.\n\n4. The ablation experiments are not clear enough. There are two ablative modules in Figure 6, including the importance score and the mask search algorithm. An ablation study that changes more than one module at a time cannot support the claim of Line 319. The ablation experiment in Figure 7 should be divided into two experiments. 1) The importance score: keeping the three-stage pruning pipeline and just modifying the importance score, which can reflect the efficacy of the importance score. 2) Mask search and rearrangement: 
keeping the importance score fixed and using uniform pruning, which can reflect the efficacy of mask search and re-arrangement.\n\n\ntypos:\n\n- Line 613: Eq. ?? -> Eq. 9\n- Algorithm 2.2 pf->of\n\n==================post rebuttal============\nMy concerns are addressed. I raise my score. Thanks. No societal impact discussion needed, in my opinion." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "vWG-YAjxQJL", "rO9dYgiVg5P", "dcRfHPda7sg", "rO9dYgiVg5P", "rO9dYgiVg5P", "7e5_6VgnVW", "sSV1Y2960CD", "iQB9-BELUb", "nips_2022_0GRBKLBjJE", "nips_2022_0GRBKLBjJE", "nips_2022_0GRBKLBjJE", "nips_2022_0GRBKLBjJE" ]
nips_2022_yI7i9yc3Upr
Controllable Text Generation with Neurally-Decomposed Oracle
We propose a general and efficient framework to control auto-regressive generation models with NeurAlly-Decomposed Oracle (NADO). Given a pre-trained base language model and a sequence-level boolean oracle function, we aim to decompose the oracle function into token-level guidance to steer the base model in text generation. Specifically, the token-level guidance is provided by NADO, a neural model trained with examples sampled from the base model, demanding no additional auxiliary labeled data. Based on posterior regularization, we present the closed-form optimal solution to incorporate the decomposed token-level guidance into the base model for controllable generation. We further discuss how the neural approximation affects the quality of the solution. Experiments conducted on two different applications: (1) text generation with lexical constraints and (2) machine translation with formality control demonstrate that our framework efficiently guides the base model towards the given oracle while keeping high generation quality.
Accept
All three reviewers sided with accepting the paper. The method of the paper is formulated as an optimization problem based on posterior regularization, and as such is quite different from existing paradigms in controllable NLG (e.g., lexically constrained beam search or modified probability sampling). The work's theoretical basis also offers a nice contrast with established methods in this area, as the existing methods are often applied in post-hoc manners and without theoretical guarantees. The only significant downside of this paper is that its evaluation is not very standard and lacks human evaluation, and its model-based automated evaluation of attributes such as formality could have been affected by spurious correlations (note: the latter concern affects only one of the two tasks of the paper). As the paper achieves substantial gains on two very different tasks, the reviewers generally considered the method of the paper to be quite effective.
train
[ "lACN1pY0iw", "XBDHcf-mM3l", "XHt-lEZuacg", "T5jxIk4EvtU", "tiRY0_bvmGn", "o3rRcZnYKoy", "nsdAZHrWcW-", "LIQu9Dz999h", "oSED9QLbM-x" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed response, revision, and additional results. The authors have addressed most of my questions, and I am happy to increase my score (also apologize for this last-minute response)", " Dear Reviewer 2y5z,\n\nThanks for your valuable comments and we believe they help a lot in our revision. To address your concern, we provided further empirical results about a smaller base model (GPT2-base) and continuous oracle (partially satisfied), including related analysis and discussion. We clarified our assumptions and key challenges in application scenarios that we target. Some qualitative generation results were also provided in the appendix of the revision to help better understand the effect and generation quality of NADO. Basically, we can control the model generation with respect to the given constraints, while **keeping the overall generation quality comparable to the base model**.\n\nWe hope our responses sufficiently addressed your concerns such that you will raise your rating as you mentioned in your original review, and we are more than happy to have a discussion if you have any follow-up questions or comments. ", " Dear Reviewers,\n\nWe appreciate your valuable comments and feedback. In our response we managed to address all of your concerns and provide related experimental results for some of them. Considering the end of author-reviewer discussion period is approaching (Aug. 9, next Tue), we would like to know if our responses are clear enough to resolve your concerns and we are open to further discussion.", " Thanks for your comments. \n\n_\"Hyperparameters in NADO”:_ Our key innovation is on how to incorporate constraints in the decoding, and for that stage, there is no hyper-parameter needed as we derive an optimal closed-form solution based on posterior regularization. This is in contrast to neural logic decoding, which requires some hyper-parameters to control the strength of constraints, and FUDGE/GeDI, which requires hyper-parameters to balance the logits between the base and the auxiliary model. \n\nHowever, there are hyper-parameters involved in 1) sampling strategy and 2) model design for R_\\theta. For the former, we show that the results are not sensitive to the temperature in Sec 3.5, and we are able to select it by evaluation on the development set. For the latter, we follow the standard design of neural models to select the model architecture, number of layers, etc. The same process is required in earlier works leveraging auxiliary models like FUDGE.\n", " Thanks for the comments and questions. Please see the response below:\n\n_\"Number of samples needed for the approximation ”:_ The number of samples required to approximate the search space depends on the complexity of the constraints. However, in our experiments, we found that a reasonably large number of samples is sufficient to train a good NADO to capture the **inherent correlation between generated token and the sequence-level oracle**. In both experiments, we show that with the same small number of samples (80,000 samples for MT, and 35141 * 16 samples for CommenGen) as the auxiliary data, our approach is more effective than existing approaches such as FUDGE (see Sec. 4.2). We also discuss how to generate effective samples by important sampling in Sec 3.5.\n\n_“Requirements for base model quality / diverse”:_ Indeed, if the base models perform suboptimal, the quality of control will be influenced. This will be an issue for all other approaches. 
In practice, we consider the setting where the base model is a large pre-trained language model and has decent quality, as shown in the experiments. We also updated some results using GPT2-base on CommonGen; please refer to the revision. Basically, the generation quality (evaluated by BLEU scores) drops a little bit compared to GPT2-large. Please refer to the response below for further discussion about controllability.\n\n\n_“What if the base distribution is far from constraints”:_ This is a naturally challenging setting, and it is exactly the scenario where NADO contributes the most compared with the existing alternatives. Please refer to Table 1, the p (Domain Adaptation pretrain) row. This is an example where the base model is far from the constraints and can hardly hit them (very low constraint coverage). With NADO we are able to boost the coverage to 96.1% by leveraging importance sampling and warmup. \n\n\n_“Model scale vs. controllability”:_ Our preliminary results show that controllability is not correlated with model scale. Controllability is much more sensitive to the complexity of the oracle than to the base model distribution. Imagine a simple constraint “never output the token ‘rejection’”: it is easy for NADO to learn no matter how large the base model is. What the base model scale/quality affects most is the generation quality after controlling. We include new results with GPT2-small on CommonGen; please refer to the revision. Basically, the GPT-2 base model has lower scores with and without NADO compared with GPT-2 large, while the coverage improvements are similar. \n\n\n_“Partially meeting the condition”:_ It is possible to extend C to a real-valued function in [0,1] by reformulating our approach. Empirically, in the LCG experiment, we conducted a preliminary experiment defining C as the keyword coverage, and the results are a little bit worse than using a binary C: for example, in the CommonGen experiment, under the q (Seq2seq pretrained p + NADO) setting (the second row from last), if we change C from binary to continuous, we get a coverage drop from 97.1 to 96.6, and a BLEU-3/4 score drop from 40.9/30.8 to 40.1/29.9.\n\n_“What the sequences look like, compared to other models”:_ We added some generation samples in Appendix D.\n\n_“Certain spurious correlations”_: Our analysis and derivation are based on the theoretical assumption that we have an oracle C; how to obtain such an oracle is orthogonal to our contributions. However, in practice, in LCG the oracle is a simple rule-based keyword checker and it is almost perfect. In MT the oracle is a neural network that may leverage some superficial features, for example, informal little words like “hmm” and “uh”, abbreviations like “ 'cause ” and “gonna”, and capital letters. We find that NADO tends to fix them (please refer to Appendix D). Generally it makes the sentences more fluent and formal, so we believe the oracle is good.\n\n_“Human Evaluation”_: We agree that human evaluation can better reflect the performance of language generation applications. However, our goal is to control the model generation with respect to the given oracle, while keeping the overall generation quality comparable to the base model. Thus, we evaluated the approaches by automatic metrics (BLEU scores, oracle scores). This follows the setting of related work like neural logic decoding. We also provide some qualitative examples in Appendix D.\n\n_“Improper language generation”_: Preventing improper language generation is a great application of our method. Our approach can be applied when we have a high-quality oracle that classifies improper sentences (for example, PerspectiveAPI) or a blocklist of toxic phrases. 
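As a schematic illustration (toy components and hypothetical names throughout; the paper derives the exact closed form), the sketch below labels base-model samples with such an oracle, trains a token-level approximator, and reweights decoding accordingly:

```python
# Schematic NADO-style sketch. Oracle C here is "the sequence contains no banned
# token". We (1) sample sequences from a fixed toy base model p, (2) train
# R_theta(prefix) to approximate E[C | prefix] under p, and (3) decode from
# q(x_t | prefix) proportional to p(x_t | prefix) * R(prefix + x_t) / R(prefix).
import torch
import torch.nn as nn

torch.manual_seed(0)
V, T, BANNED = 8, 6, 3                     # vocab size, sequence length, banned id

base_logits = torch.randn(V)               # toy base LM (a fixed unigram, for brevity)
def p_next(prefix):
    return torch.softmax(base_logits, dim=0)

def oracle(seq):                           # sequence-level boolean oracle C
    return float(BANNED not in seq)

def encode(prefix):                        # crude bag-of-tokens prefix encoding
    v = torch.zeros(V)
    for tok in prefix:
        v[tok] += 1.0
    return v

r_theta = nn.Sequential(nn.Linear(V, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(r_theta.parameters(), lr=1e-2)

for step in range(300):                    # train only on samples from the base model
    seq = [torch.multinomial(p_next(None), 1).item() for _ in range(T)]
    label = torch.tensor(oracle(seq))
    prefixes = torch.stack([encode(seq[:t]) for t in range(T + 1)])
    pred = r_theta(prefixes).squeeze(-1)   # every prefix gets the sequence-level label
    loss = nn.functional.binary_cross_entropy(pred, label.expand(T + 1))
    opt.zero_grad()
    loss.backward()
    opt.step()

def controlled_decode():
    prefix = []
    for _ in range(T):
        probs = p_next(prefix)
        r_prefix = r_theta(encode(prefix)).item()
        r_next = torch.tensor([r_theta(encode(prefix + [x])).item() for x in range(V)])
        q = probs * r_next / max(r_prefix, 1e-6)
        prefix.append(torch.multinomial(q / q.sum(), 1).item())
    return prefix

sample = controlled_decode()
print(sample, "satisfies oracle:", bool(oracle(sample)))
```

In the real setting, p is the pre-trained LM, $R_\theta$ is a neural model over token prefixes, and the sample distribution is further shaped by temperature and importance sampling as discussed in Sec. 3.5.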
", " Thanks for your comments. We have fixed the typos and added the limitations section in the revision, as follows.\n\nIn this work, we assume a practical setting, where we have a base model with decent quality (e.g., large pretrained language models) and an oracle for the controlled attributes. We also assume we have access to the probability distribution of generated tokens at each step of the base model, so that we can learn an auxiliary model to reweight the distribution. However, retraining the base model is not required. \n\nWe also note that, similar to other language generation approaches, there is a risk that malicious users may use NADO to generate improper or toxic texts, and the generated texts may contain societal biases inherited from data. However, on the other hand, NADO could be a powerful weapon against toxicity and biases by incorporating a blocklist of toxic phrases. We refer readers to the discussion in Sheng et al. (2019); Zellers et al. (2019); Bender et al. (2021); Radford et al. (2019); Brown et al. (2020).\n", " **What is the task?**\nA general, flexible and efficient framework to control auto-regressive generation models with NeurAlly-Decomposed Oracle (NADO).\n\n\n**What has been done before?**\nPPLM, GeDI and FUDGE also aim to guide the base model with an auxiliary model. However, they either shift the base model distribution in a post-hoc manner without theoretical guarantee, or/and require external labeled data to train the auxiliary model. Instead, this work derives a closed-form solution for the optimal way to incorporate the oracle, without requiring external labeled data or token-level guidance.\n\n**What are the main contributions of the paper?**\n\n* Given a pre-trained base language model and a sequence-level boolean oracle function, the authors propose to decompose the oracle function (indicating whether an attribute is satisfied) into token-level guidance to steer the base model in text generation.\n\n* They present the closed-form optimal solution to incorporate the token-level guidance into the base model for controllable generation.\n\n* They provide a theoretical analysis of how the approximation quality of NADO affects the controllable generation results.\n\n\n**What are the main results?**\nExperiments conducted on two applications: (1) text generation with lexical constraints and (2) machine translation with formality control demonstrate that the framework efficiently guides the base model towards the given oracle while maintaining high generation quality.\n\n Strengths\n\n* Generally, post-processing methods are considered expensive in inference and low quality in generated texts. However, the proposed framework, as a kind of post-processing method, achieves high generation quality, as demonstrated in the experiments, and is efficient at inference time.\n\n* Since NADO is trained on data sampled from the base models, it aligns better with the base model and thus can achieve better control.\n\n* The token-level guidance is approximated by a neural model trained with examples sampled from the base model, demanding no additional auxiliary labeled data.\n Typo: Line 41 aims → aim. There is no limitations section in the paper.", " This work proposes controllable autoregressive generation by decomposing the control signal into token-level constraints. 
The method is formulated as an optimization problem based on posterior regularization and approximated by a neural network. Experiments on lexically controlled generation and machine translation with formality control demonstrate the effectiveness of the method. Strengths \n\n- This is indeed a novel method for controlling autoregressive generation. The method is different from existing paradigms like lexically controlled beam search or modified sampling probabilities. The authors also provide a certain level of theoretical guarantee.\n- This method is demonstrated to be effective in the experiments, especially the machine translation with formality control.\n\nWeakness \n\n- There are important missing details, specifically:\n - How many samples are needed for the approximation (Section 3.3)? Intuitively, as the approximation is performed over the full autoregressive decoding space, one would expect the number of samples to be large, and thus the computational complexity may also be large.\n - How good and diverse should the base model be? If the base model is not good enough for generation, then the controlled q will consequently not be good. If the base model is not diverse enough, then one may require more sampling to hit the condition (note that controlling the temperature cannot alleviate the mode collapse problem, if it is a problem of the base model).\n - How does the performance relate to model scale? Can one expect a larger model to be easier to control (possibly because of better language modeling) or harder to control (possibly because of harder optimization)?\n - What if the base distribution is far from the constraints (but may still be good per se) such that the sampled sentences cannot hit the constraints? This could happen in settings where the constraint distribution is far from the model distribution.\n - What happens if a sampled sequence partially meets the constraint? How would the method behave, and can it still learn from partial satisfaction?\n- My other concern is that the experiments are not grounded in linguistic explanations. Specifically:\n - What do the generated sequences look like? I would encourage the authors to include examples of the intermediate samples of q and show how q starts from not following the constraints and gradually follows them more.\n - How does the model differ from other models in terms of generated sequences?\n- My final concern is that there is no human evaluation. Generally, classifier-based evaluation (Table 2) may be fooled by certain spurious correlations and is thus not strongly reliable. Can the authors include human evaluation, or at least put generated examples for the reviewers to get an impression?\n\nI will be happy to increase my score accordingly if the above concerns can be properly addressed. See above comments. How does the method relate to, or how can it help with, improper language generation? It would be nice if the authors added some discussion about how improper sentences can be prevented with this method. ", " This paper proposes to decompose sequence-level attributes into token-level guidance for controllable text generation, where the token-level guidance is approximated by a neural model trained with examples sampled from the base model. Both theoretical analysis and experimental results demonstrate the effectiveness of the proposed method on different tasks. Pros:\n- This paper is well-written and easy to read.\n- The proposed method is novel and well supported by theoretical analysis.\n- Experiments are exhaustive. 
The authors conduct experiments on both constrained text generation and machine translation, and show performance improvements on various datasets.\n\nCons:\n- The proposed NADO model introduces some additional hyperparameters that need to be tuned. It would be better if the authors could provide a deeper analysis of the effect/sensitivity of the hyperparameters. Refer to comments above. Yes. I do not see any limitations." ]
[ -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "XBDHcf-mM3l", "LIQu9Dz999h", "nips_2022_yI7i9yc3Upr", "oSED9QLbM-x", "LIQu9Dz999h", "nsdAZHrWcW-", "nips_2022_yI7i9yc3Upr", "nips_2022_yI7i9yc3Upr", "nips_2022_yI7i9yc3Upr" ]
nips_2022_C7cv9fh8m-b
Moderate-fitting as a Natural Backdoor Defender for Pre-trained Language Models
Despite the great success of pre-trained language models (PLMs) in a large set of natural language processing (NLP) tasks, there has been a growing concern about their security in real-world applications. Backdoor attack, which poisons a small number of training samples by inserting backdoor triggers, is a typical threat to security. Trained on the poisoned dataset, a victim model would perform normally on benign samples but predict the attacker-chosen label on samples containing pre-defined triggers. The vulnerability of PLMs under backdoor attacks has been proved with increasing evidence in the literature. In this paper, we present several simple yet effective training strategies that could effectively defend against such attacks. To the best of our knowledge, this is the first work to explore the possibility of backdoor-free adaptation for PLMs. Our motivation is based on the observation that, when trained on the poisoned dataset, the PLM's adaptation follows a strict order of two stages: (1) a moderate-fitting stage, where the model mainly learns the major features corresponding to the original task instead of subsidiary features of backdoor triggers, and (2) an overfitting stage, where both features are learned adequately. Therefore, if we could properly restrict the PLM's adaptation to the moderate-fitting stage, the model would neglect the backdoor triggers but still achieve satisfying performance on the original task. To this end, we design three methods to defend against backdoor attacks by reducing the model capacity, training epochs, and learning rate, respectively. Experimental results demonstrate the effectiveness of our methods in defending against several representative NLP backdoor attacks. We also perform visualization-based analysis to attain a deeper understanding of how the model learns different features, and explore the effect of the poisoning ratio. Finally, we explore whether our methods could defend against backdoor attacks for the pre-trained CV model. The codes are publicly available at https://github.com/thunlp/Moderate-fitting.
Accept
The paper proposed an approach to defend against backdoor triggers by restricting the language model fine-tuning to the moderate-fitting stage. The paper also provides a nice analysis to demonstrate the factors that impact the models' vulnerability to backdoors. Overall, the paper is well-written and provides sufficient analyses to support the claims. The revision and rebuttal address the comments from the reviewers.
train
[ "BcLGRO2DOgT", "r8GS_mbsmh_", "XhgFaWGrx_2", "0DPAmRrnnA", "mCU71Fhdj8", "-C2TZ3XGqH", "amBUVhoQYcG", "cFy2WsGTCQ", "D4JBEjnwoqpd", "b-HT8cAxVsg", "vOXPvrXRUJl", "q4HiSW6jK0Q", "617vJbXmW6", "YM9fTYlktQM", "GVDTXlJxZBV", "iAp-SEi32uz", "j2NIHxIz82Z", "J7mnQuxkhEs", "_b-c_tGRUoO", "ENWdHMoQDz", "yZgGlsKGa4G", "zaLyVXc_uop", "oFxrJi66KAQ", "m794oB8K21H", "32f8j0pCkn8", "9BRKB87G2I", "3r-vIguI-vW", "Kex7yor8r3Y", "u5GdGIz64u0", "_bBFi2OVyoy" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We sincerely thank you for your effort to improve this paper and your decision.\n\n", " I appreciate the authors' response. I am glad to see that my questions are well answered, and the quality of the paper is improved. \n\nI maintain my original score.", " We sincerely thank you for your effort to improve this paper and your decision. DBS [1] is indeed an important related work. Following your suggestion, we will cite the paper [1] in the revision.\n\nReferences:\n\n[1] Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense. Proceedings of the 39th International Conference on Machine Learning (ICML 2022)\n\n", " Thank authors for their comprehensive comments and add-on experimental results, specially for adapting DBD and ABL on NLP backdoor defense. Since most of my concerns are addressed, I will raise my score from 4 to 6. Beside PICCOLO[1], [2] is another related work which should also be cited in the revision. \n\n\n[1] Piccolo: Exposing complex backdoors in nlp transformer models. In 2022 IEEE Symposium on Security and Privacy (SP).\n\n[2] Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense. Proceedings of the 39th International Conference on Machine Learning (ICML 2022)\n\n\n", " We sincerely thank you for your effort to improve this paper and your decision.", " Thanks for answering our question. After reading the author's response and other reviewers' comments. I am willing to improve my score from 4 to 5.", " Thank you for your inspiring comments and valuable suggestions. \n\nThe low-rank structures of the vanilla LoRA are individually distributed in different Transformer layers. Even if each local structure has a low-rank restriction, the overall rank of all the local structures can still be high. This is the reason why the vanilla LoRA cannot successfully constrain the overall model capacity; instead, we argue that the model capacity of a PLM is determined by the global intrinsic rank of the tunable parameters. Therefore, it is essential to constrain the overall intrinsic rank of the weight updates to be lower than a threshold (as our proposed global low-rank method does).\n\nWe have performed experiments to demonstrate this viewpoint. The experimental results of reducing the local rank *r* of LoRA against the word-level attack are shown in Table 1 (in Figure 1 in the manuscript). The experimental results of reducing the global rank of our proposed reparameterized LoRA against the word-level attack are shown in Table 2 (in Table 5 in the appendix). \n\nFrom the experimental results in Table 1, we can see that even if the local rank *r* of LoRA is extremely low (reduced to 1), the ASR is still very high (96.82\\%). Also, we can find that reducing local rank $r$ of vanilla LoRA does not have much influence on the ACC and ASR, which demonstrates the defects of the local low-rank architecture of vanilla LoRA when defending against backdoor attacks. \n\nHowever, if we reduce the global rank of our proposed reparameterized LoRA, the ASR drops sharply. As shown in Table 2, when the bottleneck dimension changes from 256 to 1, the ASR decreases from 98.79\\% to 10.96\\%. The experimental results demonstrate the effectiveness of our proposed global low-rank reparameterization network. 
We will explain it more explicitly in the revision.\n\n| local rank *r* | 16 | 8 | 4 | 1 |\n| :------------: | :---: | :---: | :---: | :---: |\n| ACC (SST-2) | 94.45 | 94.29 | 94.34 | 94.56 |\n| ASR (SST-2) | 96.05 | 96.16 | 95.50 | **96.82** |\n\nTable 1: Results of the vanilla LoRA when using different local ranks *r*\n\n| global rank (*bottleneck dimension*) | 256 | 32 | 4 | 2 | 1 |\n| :------------: | :---: | :---: | :---: | :---: | :---: |\n| ACC (SST-2) | 94.78 | 95.00 | 93.96 | 92.59 | 92.64 |\n| ASR (SST-2) | 98.79 | 98.36 | 11.95 | 11.07 | **10.96** |\n\nTable 2: Results of our proposed reparameterized LoRA when using different global ranks (*bottleneck dimension*)\n\n", " Thanks for the very detailed responses. I agree that simplicity is an advantage of your method, and thanks for adding the relevant baselines for comparison. But I am still wondering why vanilla LoRA and Adapter fail to defend against backdoor attacks. Currently, the explanation for why Adapter fails concerns the low-rank constraint, but when adding the low rank (i.e., LoRA), it still fails, maybe due to the global rank. Can you provide more insights on that to convince me of the effectiveness of the global rank? ", " We thank the reviewers for their valuable suggestions and constructive comments. Following the reviewers' suggestions, we have revised our manuscript and submitted a new revised version in the \"rebuttal revision\" field. In the following, we summarize the primary changes we have made for your convenience in checking our revised manuscript. The revised parts are highlighted in blue color for easier review.\n\n(1) We have further clarified our threat model in Lines 40-45 of the revised manuscript.\n\n(2) We have added the comparisons with other defense methods in Lines 279-288 of the revised manuscript.\n\n(3) We have cited PICCOLO [1] and DBS [2] in the related work (Lines 94-96) and analyzed the differences between our work and theirs.\n\n----------\n\nReferences:\n\n[1] Piccolo: Exposing complex backdoors in NLP transformer models. In 2022 IEEE Symposium on Security and Privacy (SP).\n\n[2] Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense. Proceedings of the 39th International Conference on Machine Learning (ICML 2022)", " Dear reviewer,\n\nThank you for your valuable suggestions and constructive comments. We have provided a point-to-point response to your comments. Could you please let us know whether our responses address your concerns? We would really appreciate it if you could let us know before the reviewer-author discussion period ends.", " Dear reviewer,\n\nThank you for your valuable suggestions and constructive comments. We have provided a point-to-point response to your comments. Could you please let us know whether our responses address your concerns? We would really appreciate it if you could let us know before the reviewer-author discussion period ends.", " Thank you for your valuable suggestions and feedback. We will add those baseline performances in the revision. We also admit that it is interesting to discuss how the truncated training steps influence the model's performance on other dimensions. We will further explore this problem in future work. By the way, following your suggestion, we have updated and corrected the table header of Table 8. ", " Thanks to the authors for the very detailed responses; they successfully answered all my questions and addressed my concerns. I raise my score to 6. 
I hope the authors will add those baseline performances to the revision soon. As important future work, I think it is important to discuss how the truncated training steps undermine the model's performance on dimensions other than backdoor attacks. For example, will a model trained with fewer steps be less robust to OOD samples? Or will the model perform worse on those samples falling in the long-tail part of the data distribution? Or will the model be less prone to synonym substitution attacks? Still, I think those topics I mentioned above are out of the scope of the paper, and need not be addressed in this paper.\n\nA very minor point is in Table 8 of response 5: the SST-2 should be QNLI.", " We thank the reviewer for the insightful and constructive feedback for improving this paper. Please find below our point-to-point response to your comments.\n\n**Comment 1:** ***\"The technical novelty is not enough; the proposed second and third approaches (reducing training epochs and learning rate) are too simple to take as a methodology. Although they have proved their effectiveness in preventing the backdoor attack in the paper, they seem more like hyper-parameter tuning, and both are tricks for model training.\"***\n\n**Response:** \n\n**We believe simplicity is actually an advantage of our method.** Following the Occam's razor principle, we do not pursue designing a sophisticated algorithm; instead, we propose a **simple and effective** one.\n\nThe novelty of this paper is as follows. \n\n(1) To the best of our knowledge, this is the **first work** to explore the possibility of backdoor-free adaptation for PLMs.\n\n(2) We revealed the mechanism of distinct learning phases for the backdoor task and the original task during the PLM's adaptation on a poisoned dataset.\n\n(3) We design three methods to defend against backdoor attacks by reducing the model capacity, training epochs, and learning rate, respectively. For reducing the model capacity, we propose a novel global low-rank architecture which is applied to PET (parameter-efficient tuning) algorithms. \n\n(4) We also analyze the reason why our method works and show the visualization of the PLM's learning dynamics in Section 4.2. \n\n", " **Comment 2:** ***\"This paper validates the proposed approach comprehensively; however, it lacks the comparison with some state-of-the-art backdoor defender techniques to prove that they can achieve the new state-of-the-art performance.\"***\n\n**Response:** Thank you for the valuable suggestion. (1) Firstly, following your suggestion, we have compared the defense performance of our method with other backdoor defense methods, including ONION [1], Backdoor Keyword Identification (BKI) [2], STRIP [3] and RAP [4]. For a brief introduction, Backdoor Keyword Identification (BKI) is a training-time defense method that identifies and filters out poisoned samples from the training data. ONION, STRIP and RAP are inference-time defense methods. We adapt them to training-time defense for comparison. The experimental results are shown in Table 1 and Table 2. From the experimental results, we can see that the defense performance of our proposed methods is **better** than other defense methods. The **ASR** after applying our proposed defense method is **lower** than those after applying other defense methods. 
For the syntactic-level attack, the defense performance of other methods is much poorer than ours.\n\n| Defender | ONION | BKI | STRIP | RAP | Our Method |\n| ---------------------- | ------ | ----- | ------ | ------ | ---------- |\n| Word-level (ACC) | 92.42 | 94.29 | 94.07 | 94.29 | 94.23 |\n| Word-level (ASR) | 10.20 | 76.75 | 99.12 | 82.89 | **7.89** |\n| Syntactic (ACC) | 92.75 | 93.74 | 93.85 | 93.52 | 91.98 |\n| Syntactic (ASR) | 86.29 | 93.09 | 89.47 | 91.67 | **42.11** |\n| Add-sentence (ACC) | 93.68 | 94.56 | 94.34 | 93.74 | 92.81 |\n| Add-sentence (ASR) | 99.89 | 100.00 | 100.00 | 87.61 | **42.21** |\n| Style-Transfer (ACC) | 93.47 | 94.18 | 94.07 | 86.00 | 91.76 |\n| Style-Transfer (ASR) | 81.58 | 80.48 | 85.09 | 85.53 | **42.21** |\n\nTable 1: Comparisons of the defense performance between our proposed method and other defense methods against the word-level, syntactic, add-sentence, and style transfer attacks on SST-2.\n\n| Defender | ACC | ASR |\n| :--------: | :--------------: | :--------------: |\n| ONION | 90.28 | 99.35 |\n| BKI | 90.76 | 99.47 |\n| STRIP | 91.11 | 99.44 |\n| RAP | 90.45 | 99.67 |\n| Our Method | 88.45 | **67.14** |\n\nTable 2: Comparisons of the defense performance between our proposed method and other defense methods against the syntactic attack on AG News.\n\n(2) Secondly, our defense methods are **orthogonal** to the other defense methods, and can be used together with them. Other defense methods filter either training samples or testing samples. For the training-time defense, the victim can first filter the training samples and then use our backdoor-free training method to train the model on the filtered training dataset. For the inference-time defense, after training the model using our backdoor-free training method, the victim can further perform inference on the filtered testing dataset. We leave the combination of our proposed backdoor-free training method and other defense methods for future work. \n\n----------\n\n**References:**\n\n[1] Onion: A simple and effective defense against textual backdoor attacks. In EMNLP 2021.\n\n[2] Mitigating backdoor attacks in lstm-based text classification systems by backdoor keyword identification. Neurocomputing, 2021.\n\n[3] Design and evaluation of a multi-domain trojan detection method on deep neural networks. IEEE Transactions on Dependable and Secure Computing, 2021.\n\n[4] Rap: Robustness-aware perturbations for defending against backdoor attacks on nlp models. In EMNLP 2021.", " **Comment 3:** ***\"Line 205 arranges the comparison with the vanilla LoRA and Adapter in Appendix B.4 and B.5; why not put these comparisons in the main body, since they are important to prove the proposed global low-rank approach? Furthermore, can you provide more explanation about why both compared approaches completely fail to defend against backdoor attacks? If they work for reducing the model capacity, why are they not working as backdoor defenders?\"***\n\n**Response:** Thank you for the valuable suggestion. We will put these comparisons in the main body to prove the proposed global low-rank approach. Though vanilla LoRA and Adapter reduce the model capacity, the extent of such reduction is still far from causing moderate-fitting, as stated in Lines 133-134 of the manuscript. \n\n----------\n\n**Comment 4:** ***\"Why not compare the proposed approach with other backdoor defender techniques? In the section Backdoor Defense in NLP, there are some different techniques; why not take them as the baselines? 
There are also some related works that are missed in this paper, for example PICCOLO [1].\"***\n\n**Response:** Thank you for the constructive suggestion. (1) Following your suggestion, we have compared our proposed method with other backdoor defense techniques [2,3,4,5]. The experimental results are shown in Table 3 and Table 4. From the experimental results, we can see that the defense performance of our proposed method is **better** than other defense methods. The **ASR** after applying our proposed defense method is **lower** than those after applying other defense methods.\n\n| Defender | ONION | BKI | STRIP | RAP | Our Method |\n| ---------------------- | ------ | ----- | ------ | ------ | ---------- |\n| Word-level (ACC) | 92.42 | 94.29 | 94.07 | 94.29 | 94.23 |\n| Word-level (ASR) | 10.20 | 76.75 | 99.12 | 82.89 | **7.89** |\n| Syntactic (ACC) | 92.75 | 93.74 | 93.85 | 93.52 | 91.98 |\n| Syntactic (ASR) | 86.29 | 93.09 | 89.47 | 91.67 | **42.11** |\n| Add-sentence (ACC) | 93.68 | 94.56 | 94.34 | 93.74 | 92.81 |\n| Add-sentence (ASR) | 99.89 | 100.00 | 100.00 | 87.61 | **42.21** |\n| Style-Transfer (ACC) | 93.47 | 94.18 | 94.07 | 86.00 | 91.76 |\n| Style-Transfer (ASR) | 81.58 | 80.48 | 85.09 | 85.53 | **42.21** |\n\nTable 3: Comparisons of the defense performance between our proposed method and other defense methods against the word-level, syntactic, add-sentence, and style transfer attacks on SST-2.\n\n| Defender | ACC | ASR |\n| :--------: | :--------------: | :--------------: |\n| ONION | 90.28 | 99.35 |\n| BKI | 90.76 | 99.47 |\n| STRIP | 91.11 | 99.44 |\n| RAP | 90.45 | 99.67 |\n| Our Method | 88.45 | **67.14** |\n\nTable 4: Comparisons of the defense performance between our proposed method and other defense methods against the syntactic attack on AG News.\n\n(2) PICCOLO targets distinguishing trojaned models from clean models. PICCOLO's threat model is different from ours. In PICCOLO, they assume the attacker has full control of the training process. The defender is given a model and a few clean sentences to determine if the model contains a backdoor. In our setting, the attacker only poisons the training data and cannot control the model training process. The defender aims to train a backdoor-free model with the downloaded third-party poisoned dataset. Although our threat model is different from PICCOLO's, PICCOLO is indeed an important related work. We will cite PICCOLO in the related work (Backdoor Defense in NLP) and illustrate the difference between our threat model and theirs in the revised paper.\n\n----------\n\n**References:**\n\n[1] Piccolo: Exposing complex backdoors in NLP transformer models. In 2022 IEEE Symposium on Security and Privacy (SP).\n\n[2] Onion: A simple and effective defense against textual backdoor attacks. In EMNLP 2021.\n\n[3] Mitigating backdoor attacks in lstm-based text classification systems by backdoor keyword identification. Neurocomputing, 2021.\n\n[4] Design and evaluation of a multi-domain trojan detection method on deep neural networks. IEEE Transactions on Dependable and Secure Computing, 2021.\n\n[5] Rap: Robustness-aware perturbations for defending against backdoor attacks on nlp models. In EMNLP 2021.\n", " We thank the reviewer for the insightful and constructive feedback for improving this paper. 
Please find below our point-to-point response to your comments.\n\n**Comment 1.1:** ***\"No baseline methods are included for comparison, and it is hard to understand how effective the proposed method is compared to other defense methods.\"***\n\n**Response:** Thank you for the valuable suggestion. (1) Firstly, following your suggestion, we have compared the defense performance of our method with other backdoor defense methods, including ONION [1], Backdoor Keyword Identification (BKI) [2], STRIP [3] and RAP [4]. For a brief introduction, Backdoor Keyword Identification (BKI) is a training-time defense method that identifies and filters out poisoned samples from the training data. ONION, STRIP and RAP are inference-time defense methods. We adapt them to training-time defense for comparison. The experimental results are shown in Table 1 and Table 2. From the experimental results, we can see that the defense performance of our proposed methods is **better** than other defense methods. The **ASR** after applying our proposed defense method is **lower** than those after applying other defense methods. For the syntactic-level attack, the defense performance of other methods is much poorer than ours.\n\n| Defender | ONION | BKI | STRIP | RAP | Our Method |\n| ---------------------- | ------ | ----- | ------ | ------ | ---------- |\n| Word-level (ACC) | 92.42 | 94.29 | 94.07 | 94.29 | 94.23 |\n| Word-level (ASR) | 10.20 | 76.75 | 99.12 | 82.89 | **7.89** |\n| Syntactic (ACC) | 92.75 | 93.74 | 93.85 | 93.52 | 91.98 |\n| Syntactic (ASR) | 86.29 | 93.09 | 89.47 | 91.67 | **42.11** |\n| Add-sentence (ACC) | 93.68 | 94.56 | 94.34 | 93.74 | 92.81 |\n| Add-sentence (ASR) | 99.89 | 100.00 | 100.00 | 87.61 | **42.21** |\n| Style-Transfer (ACC) | 93.47 | 94.18 | 94.07 | 86.00 | 91.76 |\n| Style-Transfer (ASR) | 81.58 | 80.48 | 85.09 | 85.53 | **42.21** |\n\nTable 1: Comparisons of the defense performance between our proposed method and other defense methods against the word-level, syntactic, add-sentence, and style transfer attacks on SST-2.\n\n| Defender | ACC | ASR |\n| :--------: | :--------------: | :--------------: |\n| ONION | 90.28 | 99.35 |\n| BKI | 90.76 | 99.47 |\n| STRIP | 91.11 | 99.44 |\n| RAP | 90.45 | 99.67 |\n| Our Method | 88.45 | **67.14** |\n\nTable 2: Comparisons of the defense performance between our proposed method and other defense methods against the syntactic attack on AG News.\n\n(2) Secondly, our defense methods are **orthogonal** to other defense methods, and can be used together with them. Other defense methods filter either training samples or testing samples. For the training-time defense, the victim can first filter the training samples and then use our backdoor-free training method to train the model on the filtered training dataset. For the inference-time defense, after training the model using our backdoor-free training method, the victim can further perform inference on the filtered testing dataset. We leave the combination of our proposed backdoor-free training method and other defense methods for future work. \n\n----------\n\n**References:**\n\n[1] Onion: A simple and effective defense against textual backdoor attacks. In EMNLP 2021.\n\n[2] Mitigating backdoor attacks in lstm-based text classification systems by backdoor keyword identification. Neurocomputing, 2021.\n\n[3] Design and evaluation of a multi-domain trojan detection method on deep neural networks. 
IEEE Transactions on Dependable and Secure Computing, 2021.\n\n[4] Rap: Robustness-aware perturbations for defending against backdoor attacks on nlp models. In EMNLP 2021.\n\n", " **Comment 1.2:** ***\"When defending against the syntactic-level backdoor attack, the attack success rate (ASR) of the proposed method is not very low.\"***\n\n**Response:** Firstly, the high ASR of the backdoor attacks using syntactic triggers may be attributed to the reason that such triggers **change the semantic information of the text sample dramatically** [1]. It is even possible that the syntactic paraphrase **changes the ground-truth label of texts**. For example, one original sentence from SST-2 is \"neither funny nor suspenseful nor particularly well-drawn\", which is labeled as negative. After using the syntactic paraphrase in [2], it is transformed into \"when it 's funny , it 's nice and tight\", which is near positive. Therefore, it is reasonable for the model to \"misclassify\" samples with such triggers, which is irrelevant to backdoor attacks.\n\nTo further prove this viewpoint, we have performed experiments to see the ASR of the syntactic and word-level triggers on clean models trained under two settings. We fine-tune the RoBERTa-BASE model on SST-2 with 10 epochs. We also use reparameterized LoRA with bottleneck dimension 1 to train the model. The experimental results are shown in Table 3. From the experimental results, we can see that even on clean models, the ASR of the syntactic trigger may be above 20\\%, which is significantly higher than the word-level trigger. To sum up, the success of syntactic triggers comes at the cost of significantly changing or even flipping the semantic information of the samples. Thus, the high ASR value after defense does not indicate that our defense method is ineffective.\n\n| Training Method | Word-level ACC | Word-level ASR | Syntactic ACC | Syntactic ASR |\n| :-----------------: | :---------------: | :---------------: | :--------------: | :--------------: |\n| Finetune | 94.23 | **7.13** | 94.23 | **19.96** |\n| LoRA | 92.97 | **9.76** | 92.97 | **20.83** |\n\nTable 3: The ACC and ASR after training the model with clean training data on SST-2 for word-level triggers and syntactic triggers.\n\n----------\n\n**References:**\n\n[1] Rethink the evaluation for attack strength of backdoor attacks in natural language processing.\n\n[2] Hidden killer: Invisible textual backdoor attacks with syntactic trigger. In ACL 2021.", " **Comment 2:** ***\"The performance of all the three proposed methods on AG-News when defending against syntactic attacks is not very convincing. This makes me doubt whether the proposed methods are general enough for other datasets.\"***\n\n**Response:** (1) Our proposed method is general for various datasets. We have performed experiments on three representative datasets, i.e., SST-2 [1], AG News [2] and Hate Speech and Offensive Language (HSOL) [3], which are commonly used datasets in the field of backdoor attack/defense in NLP [4,5,6]. To address your concerns, we further perform new experiments on the rotten tomatoes [7] dataset. Specifically, we defend against the syntactic attack using reparameterized LoRA tuning with a small bottleneck dimension. The experimental results are shown in Table 4. From the experimental results, we can see that the ASR declines from **87.80\\%** to **42.96\\%** when the bottleneck dimension changes from 32 to 1. However, the ACC does not change much. 
These results demonstrate that our proposed method can be applied to various datasets.\n\n| Bottleneck Dimension | 32 | 4 | 2 | 1 |\n| :------------------: | :---: | :---: | :---: | :---: |\n| Rotten Tomatoes (ACC) | 86.96 | 86.59 | 87.24 | 86.87 |\n| Rotten Tomatoes (ASR) | 87.80 | 69.79 | 51.97 | 42.96 |\n\nTable 4: Results of reducing the model capacity using reparameterized LoRA against the syntactic attack on the rotten tomatoes dataset.\n\n(2) The syntactic attack on AG News is also difficult for other defense methods to defend against, as shown in Table 5. The performance of our proposed defense method is **better** than other defense methods when defending against syntactic attacks on AG News.\n\n| Defender | ACC | ASR |\n| :--------: | :--------------: | :--------------: |\n| ONION | 90.28 | 99.35 |\n| BKI | 90.76 | 99.47 |\n| STRIP | 91.11 | 99.44 |\n| RAP | 90.45 | 99.67 |\n| Our Method | 88.45 | **67.14** |\n\nTable 5: Comparisons of the defense performance between our proposed method and other defense methods against the syntactic attack on AG News.\n\n----------\n\n**References:**\n\n[1] Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP 2013.\n\n[2] Character-level convolutional networks for text classification. In NIPS 2015.\n\n[3] Automated hate speech detection and the problem of offensive language. In Proceedings of the 11th International AAAI Conference on Web and Social Media, ICWSM '17, 2017.\n\n[4] Mind the style of text! adversarial and backdoor attacks based on text style transfer. In EMNLP 2021.\n\n[5] Hidden killer: Invisible textual backdoor attacks with syntactic trigger. In ACL 2021.\n\n[6] Onion: A simple and effective defense against textual backdoor attacks. In EMNLP 2021.\n\n[7] Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In ACL 2005.\n\n", " **Comment 3:** ***\"Some experiment settings are not very clear. For example: (1) For SST-2, how is the data split? Is the ASR shown for SST-2 the performance on the official testing set or the development set? In Appendix A.1, it says the accuracy of SST-2 is calculated based on the official testing dataset. But since the labels of the testing dataset of SST-2 are not publicly available, it is unclear how the ASR on SST-2 is calculated. (2) How many samples are used when training for each of the three datasets? This matters since 1 epoch for a large dataset is different from 1 epoch for a small dataset.\"***\n\n**Response:** (1) Following [1], we use the data split of the original version of the SST-2 dataset in [2], instead of the GLUE-version [3] SST-2 dataset. The original version of the SST-2 dataset contains labeled testing samples, while the GLUE version of SST-2 does not contain publicly available labels for testing samples.\n\n(2) For SST-2, HSOL, and AG News, 6920, 5832, and 11106 samples are used for training the model, respectively. We will add the description of the number of training samples for each of the three datasets in the revised paper.\n\n----------\n\n**Comment 4:** ***\"In Figure 5, is the ASR/ACC and tSNE the result of the training set or the testing set? I have this question because if we want to know how well the model \"fits\", we normally mean \"how well the model fits on the training data\".\"***\n\n**Response:** In Figure 5, the ASR/ACC is the result of the testing set. The tSNE is the result of the development set. 
\nThe development set and test set follow the same data distribution as the training set.\n\nFollowing your suggestion, we have performed new experiments to see the ASR/ACC of the training set. The ACC is tested on the clean training dataset part. The ASR is tested on the poisoned training dataset part. The results are shown in the new figure at https://www.dropbox.com/s/5j2j8vsbfz32fia/Visualization%20of%20the%20changes%20of%20ACC%20and%20ASR%20on%20the%20training%20dataset..png?dl=0. \n\nFrom the new figure and the original Figure 5 in the manuscript, we can see that the trend of changes of ACC/ASR on the training dataset and testing dataset are similar. We will add these results to the revised paper.\n\n----------\n\n**Comment 5:** ***\"In the experiment of the synthetic dataset, why not use the whole dataset of SST2? In the experiment in Appendix B.7, why not use the whole AG-News dataset?''***\n\n**Response:** In the experiment of the synthetic dataset in the main paper, we have used the whole dataset of SST-2. We use the original version of SST-2 [2], which follows the dataset used in the previous work [1]. In the experiment in Appendix B.7, we take all samples whose labels are “World” or “Sports” from our originally used AG News training dataset. Our originally used AG News training dataset follows the dataset used in the previous work [1].\n\n----------\n\n**References:**\n\n[1] Mind the style of text! adversarial and backdoor attacks based on text style transfer. In EMNLP 2021.\n\n[2] Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP 2013.\n\n[3] Glue: A multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461, 2018.", " **Comment 6:** ***What does Line 10 in the Appendix mean? Does it mean that only the 11106 samples are used for fine-tuning, and 10\\% of the 11106 samples are poisoned? If this is the case, why is this so?''***\n\n**Response:** Yes, it means that only the 11106 samples are used for fine-tuning, and 10\\% of the 11106 samples are poisoned. The sampled AG News dataset we used follows the sampled AG News dataset used in the previous work [1].\n\n----------\n\n**Comment 7:** ***\"I am not convinced by using training epochs as early stopping. One epoch for a large dataset and one epoch for a small dataset is very different, and it is also possible that for a rather easy but large dataset, the over-fitting phase will occur in less than one epoch.''***\n\n**Response:** Thank you for the constructive suggestion. For large datasets, we can use training steps as the criterion for early stopping, as PLMs generally require some training steps to fit a large dataset. Specifically, we can see the clean accuracy on the validation dataset with a fixed interval of training steps to determine when to do the early stop. If the accuracy on the validation dataset is high enough at a certain training step, we can early stop the training process.\n\n----------\n\n**Comment 8:** ***\"I would recommend first showing that the two phases (moderate-fitting and over-fitting) exist during fine-tuning by plotting something like Figure 5 (but with standard fine-tuning) before introducing the method in Section 3.''***\n\n**Response:** Thank you for the constructive suggestion. 
Following your suggestion, we have plotted the PLM's learning dynamics during standard fine-tuning and put the figure at https://www.dropbox.com/s/o6t1ob8f26wteqj/Visualization%20of%20PLM%E2%80%99s%20learning%20dynamics%20when%20using%20the%20standard%20fine-tuning%20method.png?dl=0. \n\nIt shows that the two phases (moderate-fitting and over-fitting) exist during standard fine-tuning. We will put the figure before introducing the method in Section 3 in the revised paper.\n\n----------\n\n**Comment 9:** ***\"I suggest testing the proposed method on different sizes and types of PLMs, instead of only using RoBERTa-BASE.''***\n\n**Response:** Thank you for the constructive suggestion. Following your suggestion, we have performed experiments on RoBERTa-LARGE and BERT-BASE models. We fine-tune the RoBERTa-LARGE and BERT-BASE models with the learning rate of $2\\times10^{-5}$ under different training epochs on SST-2. The results are shown in Table 6 and Table 7, respectively. From the experimental results, we can see that our proposed method is effective for different sizes and types of PLMs. These results demonstrate that our moderate-fitting can be applied to various PLMs. We will add these results to the revised paper.\n\n| Epochs | 10 | 2 | 1 |\n| :--------: | :---: | :---: | :---: |\n| SST-2 (ACC) | 95.55 | 95.83 | 95.50 |\n| SST-2 (ASR) | 99.67 | 72.48 | 7.35 |\n\nTable 6: Results of reducing the training epochs against word-level attack when fine-tuning the RoBERTa-LARGE model.\n\n\n\n| Epochs | 10 | 2 | 1 |\n| :--------: | :---: | :---: | :---: |\n| SST-2 (ACC) | 91.93 | 91.32 | 90.66 |\n| SST-2 (ASR) | 99.67 | 90.57 | 18.86 |\n\nTable 7: Results of reducing the training epochs against word-level attack when fine-tuning the BERT-BASE model.\n\n----------\n\n**Comment 10:** ***\"I suggest using more diverse datasets. The three datasets used in this paper are quite simple classification tasks that can be categorized by superficial word-level clues. I would like to know whether the proposed method will still be effective for tasks such as natural language understanding, e.g., QNLI, MNLI, and RTE.''***\n\n**Response:** We thank the reviewer for the valuable suggestion. Following your suggestion, we have performed new experiments on QNLI against the word-level attack. From the original QNLI training dataset, we sample 80\\% as our training dataset and sample 10\\% as the testing dataset. The results are shown in Table 8. Our proposed method can be applied to different NLP tasks, including the natural language understanding task. \n\n| Bottleneck Dimension | 4 | 2 | 1 |\n| :------------------: | :---: | :---: | :---: |\n| QNLI (ACC) | 86.66 | 84.89 | 82.68 |\n| QNLI (ASR) | 97.01 | 28.33 | 19.94 |\n\nTable 8: Results of reducing the model capacity using reparameterized LoRA against word-level attack on QNLI.\n\n----------\n\n**References:**\n\n[1] Mind the style of text! adversarial and backdoor attacks based on text style transfer. In EMNLP 2021.\n\n\n", " We thank the reviewer for the insightful and constructive feedback for improving this paper. Please find below our point-to-point response to your comments.\n\n**Comment 1.1:** ***\"The proposed method shares similarity with several existing works in vision domains. The overreaching goal of this work is to train a clean model even if the dataset is poisoned. There are several similar works for the vision model backdoor defense[1-2].''***\n\n**Response:** Thank you for the valuable comment. 
Overall, our method is different from DBD [1] and ABL [2]. DBD and ABL are both multi-stage methods, and they both contain one stage to select potentially poisoned samples. However, our method does not contain multiple stages, and we do not need to perform additional operations on the training samples.\n\n----------\n\n**Comment 1.2:** ***\"Although they were originally proposed for vision models. I would like to see more discussion about the possibility to extend such methods for PLMs and compare them with the proposed method. It will be more convincing if the proposed method outperforms them.''***\n\n**Response:** Thank you for the constructive suggestion. It is possible to extend DBD [1] and ABL [2] to the NLP domain. Following your suggestions, we have adapted DBD and ABL to the NLP domain, and experimented with the BERT-BASE model under our setting on the SST-2 dataset. For the implementation of DBD, we use MixText [3] to replace MixMatch [4]. As shown in Table 1, after applying our defense method, the ASR decreases to **18.86\\%**, which is significantly lower compared with DBD (**94.63\\%**) and ABL (**99.45\\%**). Furthermore, our method has a minor effect on the accuracy of the original task (denoted as ACC). As shown in Table 1, on the BERT-BASE model, after applying our defense method, the ACC is 90.66\\%, which is higher than DBD (87.10\\%) and ABL (90.12\\%). The above results demonstrate that our method significantly **outperforms** both baselines. \n\nThe reasons why ABL and DBD cannot work well for PLMs may be as follows. ABL is designed for non-pre-trained CNN models and is not appropriate for pre-trained models. Specifically, after early training, they select examples with the lowest loss values as potentially poisoned samples, which may not be applicable to the PLM. For DBD, only training the classifier to select highly credible samples may be insufficient for the PLM. Also, DBD uses semi-supervised learning in the last stage, which may cause the final accuracy to decline. \n\n| Defense Method | DBD | ABL | Our Method |\n| :------------: | :---: | :---: | :--------: |\n| SST-2 (ACC) | 87.10 | 90.12 | 90.66 |\n| SST-2 (ASR) | 94.63 | 99.45 | **18.86** |\n\nTable 1: Comparisons with adapted defense methods from the vision domain.\n\n----------\n\n**Comment 2.1:** ***\"The paper organization should be further improved. For example, the authors did not provide the threat model explicitly to describe the attacker and defender’s capability. ''***\n\n**Response:** We thank the reviewer for the constructive comment. We clarify our threat model as follows:\n\n(1) The attacker poisons the training data and releases the poisoned training dataset on open-source platforms. The attacker does not control the model training process. \n\n(2) The victim downloads the poisoned training dataset from the open-source platform to train the model. If no defense is applied, the victim will get a model injected with backdoors. However, with our proposed defense method, the victim will get a backdoor-free model even when using the poisoned dataset to train the model.\n\nFollowing your suggestion, we will modify the paper organization and illustrate the threat model more explicitly in the revised paper.\n\n----------\n\n**References:**\n\n[1] Backdoor defense via decoupling the training process. In ICLR 2021.\n\n[2] Anti-backdoor learning: Training clean models on poisoned data. 
In NIPS 2021.\n\n[3] MixText: Linguistically-informed interpolation of hidden space for semi-supervised text classification. In ACL 2020.\n\n[4] Mixmatch: A holistic approach to semi-supervised learning. In NIPS 2019.\n\n", " **Comment 2.2:** ***\"Based on my understanding, the authors assumed the poisoning happens during the fine-tuning stage, i.e. the attackers poisoned the dataset used in the fine-tuning. If so, such a scenario seems too narrow for me.''***\n\n**Response:** Defending against the poisoning attack in the fine-tuning stage is a critical problem, and training data poisoning is the mainstream backdoor attack in the NLP community. The reasons are as follows:\n\n(1) It has become a **routine** for NLP practitioners to **outsource the curation of training data** to obtain large-scale training datasets [1], and there are many platforms (such as the Huggingface Datasets Library) releasing such training datasets, with millions of downloads (https://huggingface.co/datasets). The released datasets may be poisoned by the attacker and raise serious security concerns. Therefore, building secure machine learning systems against data poisoning attacks is an important problem from the industry perspective [2].\n\n(2) The \"Pre-train and then fine-tune'' paradigm has boosted the performance of many downstream AI tasks [3] and become a **mainstream paradigm** for NLP tasks [4]. Thus, defending against the poisoning attack in the fine-tuning stage is a critical problem in the NLP community, which is the main focus of our paper.\n\nMoreover, our proposed defense methods can be **widely applied to many real-world scenarios**. As stated in ABL [5], the defense methods under such a setting could benefit companies, research institutes, or government agencies who have the resources to train their own models but rely on outsourced training data. It also benefits MLaaS (Machine Learning as a Service) providers such as Amazon ML and SageMaker, Microsoft Azure AI Platform, Google AI Platform and IBM Watson Machine Learning to help users train backdoor-free models. \n\n----------\n\n**Comment 2.3:** ***\"What if the attack happens during the pre-training stage? Or users want to train their models from scratch. Under such situations, it’s unknown whether the model’s training still strictly follows moderate-fitting and overfitting stages. If not, please further clarify the threat model in the revision.\"***\n\n**Response:** The main focus of this paper is the backdoor-free training during fine-tuning a PLM towards downstream tasks, rather than training from scratch. Pre-trained models have been widely used to **boost the performance** of downstream AI tasks [3] and become the **foundation models** for NLP tasks [4]. Although training from scratch is not our focus, we have also performed experiments to see the phenomenon when users train their models from scratch. The results are shown in appendix B.2 in the supplementary material. We perform experiments with a randomly initialized model whose architecture is the same as RoBERTa-BASE on SST-2. From the experiments shown in Table 2 in the appendix, we can see that if users train their models from scratch, the model's training may not follow moderate-fitting and overfitting stages. This demonstrates that pre-training may be an important factor for the defense performance. 
Following your suggestions, we will further clarify the threat model in the revised paper.\n\n----------\n\n**References:**\n\n[1] Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022.\n\n[2] Adversarial machine learning-industry perspectives. In 2020 IEEE Security and Privacy Workshops.\n\n[3] On the opportunities and risks of foundation models. ArXiv, abs/2108.07258, 2021.\n\n[4] Pre-trained models: Past, present and future. ArXiv preprint, abs/2106.07139, 2021.\n\n[5] Anti-backdoor learning: Training clean models on poisoned data. In NIPS 2021.", " **Comment 3:** ***\"The proposed method seems not very effective against several advanced NLP backdoor attacks. As shown in Fig.2-4, Table.1,3. The syntactic, sentence and style-based triggers remain effective after the defense (over 50\\% ASR after defense), which might indicate the proposed method is not general and can not handle more stealthy triggers than simple word-level triggers.\"***\n\n**Response:** (1) Firstly, the high ASR of the backdoor attacks using the so-called \"advanced\" triggers, e.g., the syntactic trigger, may be attributed to the fact that such triggers **change the semantic information of the text sample dramatically** [1]. It is even possible that the syntactic paraphrase **changes the ground-truth label of texts**. For example, one original sentence from SST-2 is \"neither funny nor suspenseful nor particularly well-drawn\", which is labeled as negative. After using the syntactic paraphrase in [2], it is transformed into \"when it 's funny , it 's nice and tight\", which is near positive. Therefore, it is reasonable for the model to \"misclassify\" samples with such triggers, which is irrelevant to backdoor attacks.\n\nTo further prove this viewpoint, we have performed experiments to see the ASR of the syntactic and word-level triggers on clean models trained under two settings. We fine-tune the RoBERTa-BASE model on SST-2 with 10 epochs. We also use reparameterized LoRA with the bottleneck dimension 1 to train the model. The experimental results are shown in Table 2. From the experimental results, we can see that even on clean models, the ASR of the syntactic trigger may be above 20\\%, which is significantly higher than that of the word-level trigger. To sum up, the success of syntactic triggers comes at the cost of significantly changing or even flipping the semantic information of the samples. Thus, the high ASR value after defense does not indicate that our defense method is ineffective.\n\n| Trigger Type | Word-level ACC | Word-level ASR | Syntactic ACC | Syntactic ASR |\n| :-----------------: | :---------------: | :---------------: | :--------------: | :--------------: |\n| Finetune | 94.23 | **7.13** | 94.23 | **19.96** |\n| LoRA | 92.97 | **9.76** | 92.97 | **20.83** |\n\nTable 2: The ACC and ASR after training the model with clean training data on SST-2 for word-level triggers and syntactic triggers.\n\n(2) Secondly, even for the advanced NLP backdoor attacks, **our defense method outperforms other backdoor defense methods**, including Backdoor Keyword Identification (BKI) [3], ONION [4], STRIP [5] and RAP [6]. 
As shown in Table 3, our proposed method achieves a **significantly lower ASR** with a very small degradation in ACC.\n\n| Defender | ONION | BKI | STRIP | RAP | Our Method |\n| ---------------------- | ------ | ----- | ------ | ------ | ---------- |\n| Word-level (ACC) | 92.42 | 94.29 | 94.07 | 94.29 | 94.23 |\n| Word-level (ASR) | 10.20 | 76.75 | 99.12 | 82.89 | **7.89** |\n| Syntactic (ACC) | 92.75 | 93.74 | 93.85 | 93.52 | 91.98 |\n| Syntactic (ASR) | 86.29 | 93.09 | 89.47 | 91.67 | **42.11** |\n| Add-sentence (ACC) | 93.68 | 94.56 | 94.34 | 93.74 | 92.81 |\n| Add-sentence (ASR) | 99.89 | 100.00 | 100.00 | 87.61 | **42.21** |\n| Style-Transfer (ACC) | 93.47 | 94.18 | 94.07 | 86.00 | 91.76 |\n| Style-Transfer (ASR) | 81.58 | 80.48 | 85.09 | 85.53 | **42.21** |\n\nTable 3: Comparisons of the defense performance between our proposed method and other defense methods against the word-level, syntactic, add-sentence, and style transfer attacks on SST-2.\n\n----------\n\n**References:**\n\n[1] Rethink the evaluation for attack strength of backdoor attacks in natural language processing.\n\n[2] Hidden killer: Invisible textual backdoor attacks with syntactic trigger. In ACL 2021.\n\n[3] Mitigating backdoor attacks in lstm-based text classification systems by backdoor keyword identification. Neurocomputing, 2021.\n\n[4] Onion: A simple and effective defense against textual backdoor attacks. In EMNLP 2021.\n\n[5] Design and evaluation of a multi-domain trojan detection method on deep neural networks. IEEE Transactions on Dependable and Secure Computing, 2021.\n\n[6] Rap: Robustness-aware perturbations for defending against backdoor attacks on nlp models. In EMNLP 2021.", " **Comment 4:** ***\"When evaluating the proposed method on several vision backdoor attacks, why only report the evaluation results of altering learning rate and training epochs? What’s the performance of the proposed reparameterized PET on such vision models?'\"***\n\n**Response:** PET methods (tuning a few parameters while keeping other parameters frozen) are bound to pre-trained models, and thus cannot be applied to non-pre-trained vision models. For vision experiments, we experiment on pre-trained CV models. However, up to now, PET methods are mainly applied to pre-trained language models, and are seldom applied to pre-trained CV models. Therefore, in this paper, we only report the evaluation results of altering the learning rate and training epochs for vision experiments. To address your concerns, we will add the experiments of reparameterized PET on pre-trained vision models in the revised paper. \n", " We thank the reviewer for the insightful and constructive feedback for improving this paper. Please find below our point-to-point response to your comments.\n\n**Comment 1:** ***\"It would be better to illustrate the setting of the backdoor attacks more explicitly. It seems that the backdoor is not injected in the pretraining stage, but injected in the task-specific backdoor fine-tuning stage, which follows the so-called “clean fine-tuning” setting of Qi et al. [1]\"***\n\n**Response:** Thank you for the constructive suggestion. In our scenario, the attacker poisons the training dataset and releases it on an open-source platform. Then the victim downloads the poisoned training dataset and uses it to fine-tune a PLM (pre-trained language model). Our defense is applied in the fine-tuning stage, enabling the victim to train a backdoor-free model on a poisoned dataset. 
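Concretely, the victim-side defense amounts to capacity-restricted fine-tuning. Below is a minimal sketch of the low-rank module at its core (hedged: an illustrative LoRA-style adapter rather than our exact implementation; the rank `r` plays the role of the bottleneck dimension discussed in this thread):

```python
import torch
import torch.nn as nn

class LowRankAdapter(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable low-rank
    residual: W_0 x + (alpha / r) * B A x. Shrinking the rank r (the
    bottleneck dimension) is the capacity knob used by the defense."""
    def __init__(self, base: nn.Linear, r: int = 1, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False               # PLM weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

# Training only A and B with a small rank (e.g., r = 1), a small learning
# rate, and few epochs keeps the model in the moderate-fitting regime.
```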
Following your suggestion, we will modify the paper to illustrate our setting more explicitly in the revised paper.\n\nWe also want to point out that there are some differences between our setting and the \"clean fine-tuning\" setting of Qi et al. [1]. In their setting, the attacker first uses a poisoned dataset of the target task to fine-tune the pre-trained model and obtain a backdoored model. After that, the backdoored model is further fine-tuned using a clean dataset. Differently, under our setting, through the proposed backdoor-free training method, the victim can train a backdoor-free model even using the poisoned dataset. Our defense does not require access to a clean dataset.\n\n----------\n**Comment 2:** ***\"This leads to the second question of whether the proposed method can defend against the backdoors injected in the pre-training stage like BadPre [2] or weight poisoning attack [3]. Such attacks manipulate the backdoored representations dramatically when trigger words appear, which makes me a bit unsure about whether the proposed method will work.\"***\n\n**Response:** Our defense is not designed for the settings of BadPre [2] and weight poisoning attack [3]. BadPre [2] and weight poisoning attack [3] are model-level attacks. However, our defense is designed for data-level attacks. For model-level attacks like BadPre [2] and weight poisoning attack [3], the attacker can control the model training process to train a backdoored model and release it to open-source platforms like huggingface. However, for the data-level attack, the attacker poisons the training data and releases the poisoned training dataset to open-source platforms. The attacker does not control the model training process. The victim downloads the poisoned training dataset from the open-source platform to train their models. If no defense is applied, the victim will get a backdoored model. However, with our proposed backdoor-free fine-tuning method, the victim will get a backdoor-free model.\n\nNote that data-level attacks are real-world threats to machine learning models. As the training data requirements grow, practitioners have to outsource the curation of training data to obtain large enough training datasets [4] for training their models. In this real-world scenario, the attacker can poison the training dataset but cannot control the model training process. Our proposed defense method is not designed for the setting of BadPre [2] and weight poisoning attack [3]. We expect future works to take inspiration from our findings and design corresponding defense methods for the setting of BadPre [2] and weight poisoning attack [3].\n\n----------\n**References:**\n\n[1] Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, and Maosong Sun. Hidden killer: Invisible textual backdoor attacks with syntactic trigger. In ACL 2021\n\n[2] Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan. BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models. In ICLR 2022\n\n[3] Keita Kurita, Paul Michel, and Graham Neubig. Weight poisoning attacks on pre-trained models. In ACL 2020\n\n[4] Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Dawn Song, Aleksander Madry,\nBo Li, and Tom Goldstein. Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses. 
In IEEE Transactions on Pattern Analysis and Machine Intelligence 2022", " In this paper, the authors investigate an interesting phenomenon that when the models are doing a moderate fitting with parameter-efficient training methods, the models are likely to ignore the backdoored features, as those features are ill-trained. Based on this observation, the authors suggest restricting the language model fine-tuning to the moderate-fitting stage to naturally improve the robustness of language models against backdoor triggers. Furthermore, the authors find that (1) parameter capacity, (2) training epochs, and (3) learning rate are key factors that can impact the models’ vulnerability to backdoors. Reducing those hyper-parameters can help models fail to adapt to backdoor features. The authors also make several ablation studies including the visualizations of the training dynamics given different hyper-parameters, the poisoning ratio, and the experiments on the cv models, and draw several interesting conclusions. **Strengths**:\n- The paper is well written and easy to follow\n- The paper draws several interesting observations and insights into the robustness of parameter-efficient training.\n- Though simple, the paper provides an easy yet efficient way to improve model robustness against backdoor attacks.\n\n**Weaknesses**:\n- It would be better to illustrate the setting of the backdoor attacks more explicitly. It seems that the backdoor is not injected in the pretraining stage, but injected in the task-specific backdoor fine-tuning stage, which follows the so-called “clean fine-tuning” setting of Qi et al. [1]\n- This leads to the second question of whether the proposed method can defend against the backdoors injected in the pre-training stage like BadPre [2] or weight poisoning attack [3]. Such attacks manipulate the backdoored representations dramatically when trigger words appear, which makes me a bit unsure about whether the proposed method will work.\n\n[1] Fanchao Qi, Mukai Li, Yangyi Chen, Zhengyan Zhang, Zhiyuan Liu, Yasheng Wang, and Maosong Sun. Hidden killer: Invisible textual backdoor attacks with syntactic trigger. In ACL 2021\n[2] Kangjie Chen, Yuxian Meng, Xiaofei Sun, Shangwei Guo, Tianwei Zhang, Jiwei Li, Chun Fan. BadPre: Task-agnostic Backdoor Attacks to Pre-trained NLP Foundation Models. In ICLR 2022\n[3] Keita Kurita, Paul Michel, and Graham Neubig. Weight poisoning attacks on pre-trained models. In ACL 2020\n\n Could you please make it clear about the setup of the backdoor attacks considered in the paper?\n Please refer to the weakness of the paper.\n", " This paper proposed a defense method named moderate-fitting against NLP backdoor attacks for pre-trained language models. The key observation is that during fine-tuning on the poisoned dataset, the PLM follows two learning stages: the moderate-fitting stage, which mainly focuses on learning major features(i.e. clean samples), and an overfitting stage, which learns subsidiary features(i.e. trigger features). Based on the observation, the authors proposed 3 simple training strategies to reduce models’ capacities, hence they will stay in the moderate-fitting stage and not learn the backdoor features. Evaluations on 4 different NLP attacks and 3 text classification datasets demonstrated the effectiveness of the proposed method. - Strengths \n - The topic is very interesting and critical for the community.\n - The idea is simple and easy to understand.\n - The evaluation is overall comprehensive. 
Authors evaluated their proposed methods under various types of NLP attacks on several NLP datasets. The evaluation results demonstrated the effectiveness of their proposed method on certain types of attacks. \n\n- Weaknesses\n - The proposed method shares similarity with several existing works in vision domains. The overreaching goal of this work is to train a clean model even if the dataset is poisoned. There are several similar works for the vision model backdoor defense[1-2]. Although they were originally proposed for vision models, I would like to see more discussion about the possibility to extend such methods for PLMs and compare them with the proposed method. It will be more convincing if the proposed method outperforms them.\n - The paper organization should be further improved. For example, the authors did not provide the threat model explicitly to describe the attacker and defender’s capability. Based on my understanding, the authors assumed the poisoning happens during the fine-tuning stage, i.e. the attackers poisoned the dataset used in the fine-tuning. If so, such a scenario seems too narrow for me. What if the attack happens during the pre-training stage? Or users may want to train their models from scratch. Under such situations, it’s unknown whether the model’s training still strictly follows the moderate-fitting and overfitting stages. If not, please further clarify the threat model in the revision. \n - The proposed method seems not very effective against several advanced NLP backdoor attacks. As shown in Fig. 2-4 and Tables 1 and 3, the syntactic, sentence and style-based triggers remain effective after the defense (over 50% ASR after defense), which might indicate the proposed method is not general and cannot handle more stealthy triggers than simple word-level triggers.\n - When evaluating the proposed method on several vision backdoor attacks, why only report the evaluation results of altering learning rate and training epochs? What’s the performance of the proposed reparameterized PET on such vision models? \n\n------------------\nUpdate: missing references\n\n[1] Huang, Kunzhe, et al. \"Backdoor defense via decoupling the training process.\" arXiv preprint arXiv:2202.03423 (2022).\n[2] Li, Yige, et al. \"Anti-backdoor learning: Training clean models on poisoned data.\" Advances in Neural Information Processing Systems 34 (2021): 14900-14912.\n Please refer to Strengths and Weaknesses. No", " This paper proposes a method to defend against training data poisoning attacks on pre-trained language models (PLMs). The proposed method, moderate-fitting, is based on the observation that during fine-tuning, the PLM first learns the major features in the training dataset and then starts to fit the poisoned samples. They propose to (1) reduce the model capacity using low-rank parameter-efficient tuning (PET), (2) reduce the training epochs, and (3) lower the learning rate to prevent the PLM from fitting the poisoned dataset.\nThe proposed method is shown to be highly effective against word-level backdoor triggers on three different text classification datasets. Update during author/reviewer discussion period\n===\nSince all my questions and concerns are properly addressed, I raise the score to 6. \n***\n***\n\n\n### Strength\n1. The paper is overall well-written and easy to follow.\n2. The proposed method can effectively defend against word-level attacks on three different datasets.\n\n### Weakness\nPlease correct me if I am misunderstanding any part of the paper.\n1. 
No baseline methods are included for comparison, and it is hard to understand how effective the proposed method is compared to other defense methods. *(This is addressed during rebuttal)*\n 1. For the word-level backdoor attack, the performance of the proposed method should be compared with other baseline methods, such as ONION.\n 2. When defending against the syntactic-level backdoor attack, the attack success rate (ASR) of the proposed method is not very low. Without comparing it with other defense methods, it is hard to tell how effective the proposed method is.\n2. The performance of all three proposed methods on AG-News when defending against syntactic attacks is not very convincing. This makes me doubt whether the proposed methods are general enough for other datasets. \n3. Some experiment settings are not very clear. *(These are addressed during rebuttal)* For example: \n 1. For SST-2, how is the data split? Is the ASR shown for SST-2 the performance on the official testing set or the development set? In Appendix A.1, it says the accuracy of SST-2 is calculated based on the official testing dataset. But since the label of the testing dataset of SST-2 is not publicly available, it is unclear how the ASR on SST-2 is calculated. \n 2. How many samples are used when training for each of the three datasets? This matters since 1 epoch for a large dataset is different from 1 epoch for a small dataset. ### Questions\n1. In Figure 5, is the ASR/ACC and tSNE the result of the training set or the testing set? I have this question because if we want to know how well the model \"fits\", we normally mean \"how well the model fits on the training data\".\n2. In the experiment of the synthetic dataset, why not use the whole dataset of SST2? In the experiment in Appendix B.7, why not use the whole AG-News dataset?\n3. What does Line 10 in the Appendix mean? Does it mean that only the 11106 samples are used for fine-tuning, and 10% of the 11106 samples are poisoned? If this is the case, why is this so?\n4. I am not convinced by using training epochs as early stopping. One epoch for a large dataset and one epoch for a small dataset is very different, and it is also possible that for a rather easy but large dataset, the over-fitting phase will occur in less than one epoch.\n\n### Suggestions\n1. I would recommend first showing that the two phases (moderate-fitting and over-fitting) exist during fine-tuning by plotting something like Figure 5 (but with standard fine-tuning) before introducing the method in Section 3.\n2. I suggest testing the proposed method on different sizes and types of PLMs, instead of only using RoBERTa-BASE.\n3. I suggest using more diverse datasets. The three datasets used in this paper are quite simple classification tasks that can be categorized by superficial word-level clues. I would like to know whether the proposed method will still be effective for tasks such as natural language understanding, e.g., QNLI, MNLI, and RTE. The authors have addressed the limitations and the ethical concerns.", " This paper proposes to restrict the adaptation of Pre-trained Language Models (PLMs) to the moderate-fitting stage to neglect the backdoor triggers for the backdoor defender. Specifically, three methods, i.e., reducing the model capacity, the number of training epochs, and the learning rate, are proposed to defend against backdoor attacks. 
Extensive experiments are conducted on three datasets for two representative backdoor attacks to illustrate the effectiveness of the proposed approach in reducing the impact of common backdoor attacks against PLMs without sacrificing the model's performance on the original data. Furthermore, they also conduct some experiments to confirm the effectiveness of the proposed approach against other NLP backdoor attacks and on pre-trained CV models. Strengths: \n- The paper is well-written and clear in its purpose and objectives, and it is easy to follow for a non-specialist. \n- The PET algorithm of global low-rank proposed in this paper is interesting, and reasonable and effective results are obtained in the experiment. There have been plenty of experiments and analyses to validate the proposed approach. \n\nWeakness: \n- The technical novelty is not enough: the proposed second and third approaches (reducing training epochs and learning rate) are too simple to take as a methodology. Although their effectiveness in preventing the backdoor attack is shown in the paper, they seem more like hyper-parameter tuning, and both are tricks for model training.\n- This paper validates the proposed approach comprehensively; however, it lacks the comparison with some state-of-the-art backdoor defense techniques to prove that it can achieve new state-of-the-art performance. - Line 205 arranges the comparison with the vanilla LoRA and Adapter in Appendix B.4 and B.5; why not put these comparisons in the main body, since they are important to prove the proposed global low-rank approach? Furthermore, can you provide more explanations about why both compared approaches completely fail to defend against backdoor attacks? If they are working for reducing the model capacity, why are they not working for backdoor defense? \n- Why not compare the proposed approach with other backdoor defense techniques? In the Backdoor Defense in NLP section, there are some different techniques; why not take them as the baselines? There are also some related works that are missing in this paper, for example PICCOLO [1].\n\n[1] PICCOLO: Exposing Complex Backdoors in NLP Transformer Models. Liu et al. \n\n - The technical novelty is not enough, and the paper lacks the comparison with some state-of-the-art backdoor defense techniques.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 3 ]
[ "r8GS_mbsmh_", "9BRKB87G2I", "0DPAmRrnnA", "vOXPvrXRUJl", "-C2TZ3XGqH", "amBUVhoQYcG", "cFy2WsGTCQ", "b-HT8cAxVsg", "nips_2022_C7cv9fh8m-b", "_bBFi2OVyoy", "Kex7yor8r3Y", "617vJbXmW6", "yZgGlsKGa4G", "_bBFi2OVyoy", "_bBFi2OVyoy", "_bBFi2OVyoy", "u5GdGIz64u0", "u5GdGIz64u0", "u5GdGIz64u0", "u5GdGIz64u0", "u5GdGIz64u0", "Kex7yor8r3Y", "Kex7yor8r3Y", "Kex7yor8r3Y", "Kex7yor8r3Y", "3r-vIguI-vW", "nips_2022_C7cv9fh8m-b", "nips_2022_C7cv9fh8m-b", "nips_2022_C7cv9fh8m-b", "nips_2022_C7cv9fh8m-b" ]
nips_2022_EENzpzcs4Vy
Unsupervised Learning of Shape Programs with Repeatable Implicit Parts
Shape programs encode shape structures by representing object parts as subroutines and constructing the overall shape by composing these subroutines. This usually involves the reuse of subroutines for repeatable parts, enabling the modeling of correlations among shape elements such as geometric similarity. However, existing learning-based shape programs suffer from limited representation capacity, because they use coarse geometry representations such as geometric primitives and low-resolution voxel grids. Further, their training requires manually annotated ground-truth programs, which are expensive to attain. We address these limitations by proposing Shape Programs with Repeatable Implicit Parts (ProGRIP). Using implicit functions to represent parts, ProGRIP greatly boosts the representation capacity of shape programs while preserving the higher-level structure of repetitions and symmetry. Meanwhile, we free ProGRIP from any inaccessible supervised training via devising a matching-based unsupervised training objective. Our empirical studies show that ProGRIP outperforms existing structured representations in both shape reconstruction fidelity and segmentation accuracy of semantic parts.
Accept
All reviewers recommend acceptance of this paper. They find the approach of repeatable parts innovative and the paper well written. The AC concurs.
train
[ "MnUAzCOqmlk", "g5E28o-af5X", "KtQb9WXAPG", "HhpiZnGRtT", "EWWMJug1u-r", "dxXbA56d-n", "zhE1qVkHCKU", "e2UiPf1CRdy", "gEYlckzKVF2", "Sb_z-g6MqKn", "jzWxZAOFlKw", "VN3Ex1064qy" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the answers. I believe that most concerns have been properly addressed. I am inclined to stick to my original rating.\n\n", " Thank you again for taking the time and care to consider both our original manuscript and our responses! This has certainly made the work stronger and clearer. We will make the adjustments as you suggested in the revised version, at least to Figure 3 and Table 1 of the supplementary and the introduction. ", " Thanks to the authors for the clarifications, and sorry for my late response. I am slightly more positive about the paper after the authors have clarified some misunderstandings on my part about Tab.1 in the supplementary. I think overall, between the added edit results and the clearer description of Table 1, which actually contains a measure of part-reuse I was asking for in its semantic segmentation results, the paper shows the benefits of working with parts and re-using them convincingly enough. I therefore raise my rating to a borderline accept.\n\nThere are still many things that should be improved, and I would condition acceptance on the following changes:\n- Changing Figure 3 of the supplementary to either show the F-score of ProGRIP (box) instead of ProGRIP (implicit), or changing the x-axis of the plots to include the z-latents in the measure of compactness (for example by counting the number of float parameters required to represent an object). In its current form Figure 3 is misleading. When using ProGRIP (box) for the F-score, for example, ProGRIP should still show a reasonable advantage in the figure.\n- Adding SIF (and ideally also OccNets) to the semantic segmentation experiment in Table 1 of the supplementary. The main advantage over methods like SIF, NeuralParts and LDIF seems to be that ProGRIP uses repeatable parts (as also argued by the authors in the related work section). Currently, the semantic segmentation experiment seems to be the main experiment showing the advantage of using repeatable parts quantitatively. Therefore, adding SIF and OccNets to the semantic segmentation experiment would be a good way of showing the main strength of ProGRIP over these two methods.\n- Clarifying in the introduction to what extent the method is unsupervised, by clearly mentioning that an existing unsupervised decomposition method is used to generate part ground truth. Currently, this does not seem clear enough to me; for example, 'we propose an unsupervised learning objective' is a bit misleading, since it is supervised with an existing part decomposition; also, 'which allows learning from unannotated shapes' could easily be misinterpreted by the reader in this context.\n\nThe following changes would greatly improve the paper, but I would not consider them strictly necessary for acceptance:\n- Adding an ablation that shows the effect of using/not using repeatable parts on reconstruction, instance segmentation, and semantic segmentation. Previously I thought that Table 1 of the supplementary provided part of this ablation, but was corrected by the authors. This ablation seems quite relevant for understanding what the advantages of repeatable parts are.\n- Comparing to Neural Parts and LDIF.\n", " Dear Reviewers,\n\nThank you again for your constructive reviews, which have helped us improve the quality and clarity of the paper. We have updated the individual response to each of your reviews under your thread, and have supplied a list that summarizes the changes. We hope that we have been able to address your concerns. 
As we approach the end of the discussion period, please don’t hesitate to let us know if you have any additional questions or comments. We look forward to the discussion!\n\nThanks for your time,\n\nAuthors\n\n", " * Main Paper:\n * Added all suggested references to the main paper.\n* Supplementary Materials:\n * Added a video of interpolation (.mp4 in the zip file).\n * Added additional editing examples as a figure and a sentence inline referring to the figure.\n * Fixed a typo in Supplementary Tab. 1.\n * Clarified the instance segmentation vs. semantic segmentation experiment in supplementary materials (Tab. 1 and Sec. D).\n * Clarified the use of the z latent feature in our compactness experiment (Sec. C) as suggested by R#2.", " - **Compactness computation**: Thank you for pointing out that the shape latent should be considered in the compactness measures. Our primary motivation with the original Supp. Fig. 2 (or updated Supp. Fig. 3) was to show that ProGRIP has only a small number of primitives (~5-6) while having decent reconstructions. Having a compact representation aids interpretability and editing. But we agree that a significant portion of the gains in the y-axis (F-Score) is by virtue of our z-latents. For a fair comparison, in Tab. 2 and Tab. 3 in our original supplementary materials, we report the F-score of ProGRIP (box, without implicit z-latents) and CubeSeg (w/ implicit), respectively. We observe that ProGRIP (w/ implicit) is a method with high compactness as well as good shape expressivity. We have also updated our supplementary text to clarify that our method has implicit functions contributing to the F-Score while others do not (Sec. C).\n\n- **Additional Citations**: Thank you for the relevant pointers! We have updated the main paper citing these works (including concurrent work) in appropriate sections.\n\n- **Distinguishing fine-grained geometric difference with the same box**: In the ShapeNet dataset, we didn't notice such issues. Moreover, distinguishing fine geometry can be done by relaxing our hard shape copies (repeating strictly the same shape) into an \"as geometrically similar as possible\" loss term, e.g., encouraging similar occupancies from the implicit functions, to guarantee such separation in fine details.\n\n- **Transformers**: Transformers are also applicable to our unordered set prediction problem. This is inspired by prior works such as DeTR [7] that also use transformers similarly. We leverage the fact that the self-attention mechanism is permutation invariant and have a decoder that decodes the transformed tokens independently (and in parallel) to predict a set of candidate parts and their existence, trained with a matching loss.\n\n- **Reflection transformation**: We consider the same transformations used in Shape2Prog. The challenge of adding reflection is making it differentiable for backpropagation, which can be an interesting future direction.\n\n- **Threshold for F-score computation**: Please see Tab. 1 of our original paper and L43 of our original supplementary materials for the threshold value (0.01).\n\n[1] R Kenny Jones, Theresa Barton, Xianghao Xu, Kai Wang, Ellen Jiang, Paul Guerrero, Niloy J Mitra, and Daniel Ritchie. Shapeassembly: Learning to generate programs for 3d shape structure synthesis. In ACM TOG, 2020. \n[2] R Kenny Jones, David Charatan, Paul Guerrero, Niloy J Mitra, and Daniel Ritchie. Shapemod: Macro operation discovery for 3d shape programs. In SIGGRAPH, 2021. \n[3] Yonglong Tian, Andrew Luo, Xingyuan Sun, Kevin Ellis, William T. 
Freeman, Joshua B. Tenenbaum, and Jiajun Wu. Learning to Infer and Execute 3D Shape Programs. In ICLR, 2019. \n[4] Zhiqin Chen, Andrea Tagliasacchi, and Hao Zhang. Bsp-net: Generating compact meshes via binary space partitioning. In CVPR, 2020. \n[5] Chuhang Zou, Ersin Yumer, Jimei Yang, Duygu Ceylan, and Derek Hoiem. 3D-PRNN: Generating Shape Primitives with Recurrent Neural Networks. In ICCV, 2017. \n[6] Thomas O. Binford. Visual Perception by Computer. Invited talk at IEEE Conf. on Systems and Control, 1971. \n[7] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. End-to-end object detection with transformers. In ECCV, 2020.", " - **Benefit of having repeatable parts**: Please note that we don’t claim the formulation of repeatable parts as our contribution. Instead, it’s a line of established research [1, 2, 3] that we advance by introducing implicit shape representation and our training strategy. As for the value of repeatable parts, at a high level, learning to understand repeatable geometric structures is a big step forward towards intelligent shape understanding. More practically, in addition to showing promising reconstruction and segmentation, the advantages have been demonstrated in more semantic shape editing and compactness (Sec. B & C of our supplementary). As per the reviewers’ proposals, we added more shape editing examples and a shape interpolation application in our updated supplementary materials. We’d also like to acknowledge the applications of shape programs with repeatable parts illustrated in prior works, such as shape completion [3], novel shape generation, and directed shape manipulation [2].\n\n- **Use of repeatable parts _hurts_ in segmentation? (Tab. 1 in supp)**: Using repeatable parts **helps** and does not hurt in segmentation. We’d like to clarify any factual misunderstanding here.\n * In Tab. 1 (right) of the main paper, we conducted an \"instance segmentation\" evaluation. This means that in the ground-truth shape part labels, we treat copies of the same part (e.g. legs of a chair) as different label predictions. This is the standard protocol adopted by prior works [4, 5]. We observe that ProGRIP performs comparably or better across all classes against all baselines, including CubeSeg, which can be thought of as a non-repeatable abstract shape representation.\n * In Supp. Tab. 1, we also show \"semantic segmentation\" results. This means that in the ground-truth shape part labels, we treat all copies of the same part as a single label. This measures the extent to which the detected repeatable parts are semantically similar. This is generally a harder task (one can see it as instance segmentation + classification), hence the lower performance than \"instance segmentation\" alone. To avoid similar confusion in the future, we update the table headers to “instance segmentation” and “semantic segmentation”.\n\n- **Additional edit examples?**: We have added a few more editing examples in our updated supplementary materials. We may also release a demo interface in the future supporting semantic-level editing, as we show in our examples. It will only be enabled by our repeatable parts.\n\n- **Measure of part re-use**: While there are no established metrics for this, we made our best effort by computing the portion of labeled part surface points that are within a threshold (0.02) of our reconstructed repeatable part surface. This is indicative of how well a semantic part is reconstructed by our repeatable part. 
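A minimal sketch of this measurement (hedged: it assumes both surfaces are given as sampled `(N, 3)` point arrays, and the names are illustrative):

```python
import numpy as np
from scipy.spatial import cKDTree

def part_reuse_coverage(gt_part_points, pred_part_points, tau=0.02):
    """Fraction of labeled (ground-truth) part surface points lying within
    distance tau of the reconstructed repeatable-part surface; both surfaces
    are represented by (N, 3) arrays of sampled points."""
    nn_dist, _ = cKDTree(pred_part_points).query(gt_part_points, k=1)
    return float((nn_dist <= tau).mean())
```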
Results are (higher is better):\n * | Method | chair | table | airplane |\n| --------- | ------- | ------ | -------- |\n| CubeSeg | 46.02 | 36.36 | 66.33 |\n| ProGRIP | **85.35** | **77.49** | **93.47** |\n * We see from the results that ProGRIP’s repeatable part structure is closer to the semantic decomposition of the object than CubeSeg’s.\n\n- **OccNet/LDIF baseline**: Please note that we don’t argue that having local feature encoding is detrimental to ProGRIP. However, due to its architectural complexity, we leave it as an open direction for future research. While there are several methods like OccNet/LDIF that focus solely on high-fidelity reconstructions, the key contribution of ProGRIP is a structured shape representation that has favorable properties (such as repetitions, symmetry, interpolation, editing, etc.) while maintaining decent reconstruction fidelity. This is not achievable by prior methods that propose similar representations (Shape2Prog, CubeSeg, BSP-Net). To compensate for the missing comparison with representations similar to LDIF (gaussian-based and non-repeatable), we have a comparison with SIF [6] in our original supplementary materials, Tab. 4. We find that our geometric fidelity is overall slightly better than SIF's while also possessing the property of modeling repeatable structure.\n\n- **Clarify the extent to which the method is unsupervised**: We clarify that unsupervised refers to “matching the oriented bounding boxes of predicted repeatable parts to a non-repeatable box-based shape decomposition” (original or updated main paper, L48–49), which is also obtained without annotations.
This is a novel training strategy and is critical to the success of our shape modeling (as shown in Fig. 7).\n\n- **Why is ProGRIP called a shape program?**: We follow standard terminology from the literature to classify ProGRIP as a shape program. In particular, we’d like to remark on the connection between our representation and the program in Shape2Prog [5], where the major difference is that our program is unordered while Shape2Prog’s program is sequential. For instance, all the occurrences of a repeatable part can be regarded as a `for loop drawing` in Shape2Prog where the `drawing` commands are executed in parallel.\n\n- **Varying the scale s_i for different posed parts from the same repeatable part?**: This is indeed an interesting idea. Currently, our formulation requires different posed occurrences of the same repeatable part to have identical shapes. One can relax such constraints by instead using a loss during the implicit shape training stage to encourage the shapes to be as close as possible. We list this as a good future extension of our work.\n\n- **Clarify NM in L193**: Please note that in Sec. 3.1 (L141–L149) we formally define `N` as the number of repeatable parts and `M` as the number of posed occurrences for each part. `NM` is the product of these two values.\n\n- **Discussion on limitations**: Please see Sec. I in the original supplementary materials, where we discussed our limitations and particularly visualized one example where our model is misled by a mistake in CubeSeg [1] (Fig. 3 in the original supplementary, or Fig. 4 in the updated supplementary).\n\n[1] Kaizhi Yang and Xuejin Chen. Unsupervised learning for cuboid shape abstraction via joint segmentation from point clouds. In SIGGRAPH, 2021. \n[2] Lars Mescheder, Michael Oechsle, Michael Niemeyer, Sebastian Nowozin, and Andreas Geiger. Occupancy networks: Learning 3d reconstruction in function space. In CVPR, 2019. \n[3] Jeong Joon Park, Peter Florence, Julian Straub, Richard Newcombe, and Steven Lovegrove. Deepsdf: Learning continuous signed distance functions for shape representation. In CVPR, 2019. \n[4] Tao Chen, Saurabh Gupta, and Abhinav Gupta. Learning Exploration Policies for Navigation. In ICLR, 2019. \n[5] Yonglong Tian, Andrew Luo, Xingyuan Sun, Kevin Ellis, William T. Freeman, Joshua B. Tenenbaum, and Jiajun Wu. Learning to Infer and Execute 3D Shape Programs. In ICLR, 2019.", " * **Comparing with BSP-NET on training and testing complexity**: Below are the statistics of training and testing complexity for ProGRIP and BSP-Net: \n * | Method | BSP-Net | ProGRIP |\n | ----------- | ----------- | ----------- |\n | Training | ~7d10hr | ~2d1hr |\n | Test | ~1.27s/mesh | ~1.35s/mesh |\n * We train BSP-Net as specified in the paper using the authors’ open-source code release. They use hierarchical training (16^3 → 32^3 → 64^3) for the continuous domain, which takes a total of ~4d15h, followed by a 64^3 discrete stage which takes an additional ~2d20h. All experiments were run on a Titan RTX GPU, similar hardware to that used in the ProGRIP experiments.\n * From the statistics, we can see that ProGRIP is faster than BSP-Net during the training stage and comparable to BSP-Net in test-time inference.\n\n- **Canonical Orientation Requirement**: Our method doesn’t require any canonicalization of object shapes in the data itself.\n\n- **Experiments on more categories**: Please note that we select the Chair and Table classes to compare with Shape2Prog, which can only work on these classes. 
We further select the airplane class as a representative demonstration that our ProGRIP method applies to any class. Due to the limited time of the rebuttal period, we are not able to finish the experiments for categories such as cars and lamps, in particular due to the data processing and the training of baselines (e.g., 7+ days for BSP-Net). We will include the results for these classes as soon as possible once the experiments finish.\n\n- **Expressiveness of the new shape representation**:\n * Interpolation: We thank the reviewer for the proposal. Please see the interpolation video (.mp4) in our updated supplementary materials.\n * Single-view 3D reconstruction: Our ProGRIP can be adapted to single-view 3D reconstruction when combined with a proper single-view encoder. This is quite an interesting future direction, yet beyond the scope of our current work.\n\n- **Is ProGRIP category-specific?**: Yes, ProGRIP is focused on modeling a single class.\n\n- **How to ensure fair comparison in Tab. 1?**: We use the exact same task setup to ensure this, including using the same train/eval data split, using the same set of sample points for training all different implicit-shape-based methods, and setting identical hyperparameters (specifically, the number of samples for computing the metrics and the threshold used in the F-Score computation) for evaluation.\n", " The paper introduces a novel unsupervised framework for representing repeatable parts as shape programs with part-based implicit functions. The authors propose to utilize implicit functions to represent parts, which improves the representation capacity of shape programs. Meanwhile, a matching-based unsupervised learning objective is devised to learn the shape program from unannotated shapes. Experimental results show the effectiveness of the proposed method in shape reconstruction fidelity and semantic segmentation for chairs, tables and airplanes of the ShapeNet dataset. Strengths: \n+ Both the problem addressed in the paper and its proposed solutions are innovative and interesting.\n+ The proposed matching-based unsupervised training objective is sound.\n+ The experiment shows quite marginal performance improvement.\n+ The paper is well-organized and easy to follow.\n\nWeaknesses:\nI don't really have any major concerns; some minor concerns are as follows: \n1. Since part-based implicit shape representation is used, it would be better to compare the training and testing complexity between the proposed method and the baselines, such as BSP-NET.\n2. Does the proposed method require a canonical orientation of shapes?\n3. The authors should provide qualitative and quantitative results on more categories, such as car and lamp.\n4. It would be good if the authors could conduct more experiments to show the expressiveness of the new shape representation. For example:\ni) It would be interesting to explore the feature space to validate if the method can accomplish smooth interpolation and extrapolation of shapes. \nii) Can the new shape programs be adapted for single-view 3D reconstruction?\n5. Are the trained models category-specific, or is a single universal model trained for these 3 categories? How do you make sure the comparison in Tab. 1 is fair? Are all baselines trained in the same experimental setting?\n\n Please see the weaknesses. Please see the weaknesses.", " The authors propose a method to reconstruct part-based shape models from point clouds, where each part has a local neural representation of the part's geometry. 
Additionally, multiple instances of the same geometry can be re-used by different parts, thereby explicitly modelling translational and rotational symmetries. The point cloud is encoded into a global feature vector, which is then used in two transformer-based steps to first generate part geometries and then translated and rotated instances of the part geometries. The method needs a part decomposition of the training shapes as supervision, although this decomposition can be obtained using existing unsupervised methods. A novel loss allows for efficient training of parts without requiring explicit correspondences with the ground truth parts. The authors show that this approach allows for part-aware shape reconstruction, unsupervised segmentation, and shape editing.\n\nThe two-step generation approach with geometries and instances and the loss with shape-based part assignments both seem like interesting contributions to me. Using neural representations of part geometries seems a bit less novel, since it has been used before in prior works (for example, Neural Parts, Local Deep Implicit Functions). Strengths:\n- The separation of part generation into geometry and instances seems like an interesting idea to make use of the compositional nature of many shapes, and seems to be relatively novel in the context of part-based shape generation.\n- Implementation seems technically solid.\n- The paper is generally well-written and easy to read.\n- The evaluation shows some advantages in unsupervised shape segmentation.\n\nWeaknesses:\n- The authors do not clearly demonstrate the advantages of their contributions. What is the advantage of having a representation with repeatable parts over current work with non-repeatable parts or current work without parts? A more thorough evaluation of some applications that are only possible with repeatable parts, or that clearly perform better than the state of the art, would be necessary.\n- The evaluation focuses on shape reconstruction; however, existing methods like Occupancy Networks perform significantly better on shape reconstruction (Occupancy Networks should be included in the left part of Table 1). There could be other applications where working with repeatable parts is necessary or beneficial (like part editing, part mixing, inductive biases for generation, interpolation, etc.), but these are not demonstrated thoroughly enough. Only one example of shape editing is provided.\n- The segmentation experiments are reasonable, but by themselves not sufficient for acceptance, and Table 1 in the supplementary seems to suggest that using repeatable parts actually hurts segmentation performance compared to using non-repeatable parts.\n\nIn summary, I like the idea of using repeatable parts to represent shapes, but the evaluation does not currently convincingly demonstrate the advantage of such a representation.\n\nDetails:\n\n- Using a part-based representation with part re-use is described as a major advantage of the proposed method in the introduction, and it is a major part of the technical contribution. However, from the evaluation it is unclear what the advantage of such a representation is over the state-of-the-art. For reconstruction accuracy, non-part based representations like OccNet seem to perform better, so that does not seem to be a strong suit of the proposed representation, or possibly of part-based representations in general. 
The segmentation comparison does show some advantage in Table 1, even if the results are a bit mixed, but Table 1 of the supplementary shows that the segmentation performance is lower with part re-use. There is one edit example in the supplementary that shows some down-stream advantages of part re-use, but additional examples, ideally with more repeated parts (storage furniture?) and possibly mirrored parts (usually the chair arms are mirrored, not related by only translations and rotations) would be more convincing. Also, metrics that specifically measure the success of the part re-use are missing, for example measuring how often symmetric parts in a shape are correctly modeled with the same geometry and how often parts that have different geometry are incorrectly modeled as instances of the same geometry.\n\n- Comparisons to Neural Parts and Local Deep Implicit Functions (LDIF) both seem relevant, since they both perform unsupervised shape decomposition and use neural representations of part geometries. For LDIF, the authors argue that it is not included due to using a local feature encoding compared to the global feature encoding of ProGRIP. What are the specific applications or tasks that LDIF can't do due to its local feature encoding, but that ProGRIP can do? This needs to be discussed and clarified if the goal is to show that an empirical comparison to LDIF is not needed since there is a clear theoretical advantage. I can imagine, for example, that a single global latent space enables interpolation or generation, which may be more difficult with local latent spaces. Although ideally such an argument is backed up by empirical evidence.\n\n- It might be good to clarify the extent to which the method is unsupervised in the introduction. It is true that the method can be trained on an unannotated dataset, but only if a part annotation is generated for the dataset as a pre-processing step, possibly using existing unsupervised methods. Calling it a fully unsupervised approach in the introduction without further clarification may incorrectly create the expectation that the method is trained end-to-end with only the final occupancy loss as supervision.\n\n- To a lesser extent, calling the representations a shape program may also create the unrealistic expectation of a more complex program. The current representation may better be described as a structural representation rather than a program, since only two types of 'operators' are effectively used, always in the same order: first box creation and then instancing. But I would consider fixing this point (for example by changing the title and introduction) optional.\n\n- I agree with the authors that representation compactness is a good measure for a learning-based model. However, it seems like the F-score the authors use measures the reconstruction quality with geometry (the information stored in the z-vectors), but only measures space requirements without geometry (i.e. without counting the information stored in the z-vectors). 
This seems like an unfair comparison, as the other methods do not make use of the z-vectors to store additional information that is useful in improving the F-score.\n\n- The following papers could be added to the related work:\n\t- SPAGHETTI: Editing Implicit Shapes Through Part Aware Generation, Hertz et al., ArXiv 2022 (concurrent work)\n\t- SDM-NET: Deep generative network for structured deformable mesh, Yang et al., Siggraph Asia 2019\n\t- PQ-NET: A Generative Part Seq2Seq Network for 3D Shapes, Wu et al., CVPR 2020\n\t- DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation, Yang et al., Siggraph 2022 (concurrent work)\n\t- Generative 3D Part Assembly via Dynamic Graph Learning, Huang et al., NeurIPS 2020\n\t- Write, Execute, Assess: Program Synthesis with a REPL, Ellis et al., NeurIPS 2019\n\t- InverseCSG: Automatic Conversion of 3D Models to CSG Trees, Du et al., TOG 2018\n\t- Engineering Sketch Generation for Computer-Aided Design, Willis et al., CVPR 2021 Workshop paper\n\t- Learning Adaptive Hierarchical Cuboid Abstractions of 3D Shape Collections, Sun et al., Siggraph Asia 2019\n\t- ParSeNet: A Parametric Surface Fitting Network for 3D Point Clouds, Sharma et al., ECCV 2020\n\n- Since the box generation model is pre-trained without knowledge of the geometry, it should create two instances of the same box in cases where two boxes have the same shape. But same box shape does not necessarily mean same geometry (for example, two boxes may contain mirrored geometry, or just geometry that has slightly different details, etc.). Did you observe the model successfully splitting two instances of the same box into two different boxes in the refinement step if the two boxes are the same but not their geometry? Showing such an example might be good to confirm that this is not a problem.\n\n- Transformers typically work on ordered sequences (step i takes the output of step i-1 as input); some details should be given about how the transformer is set up here to be order-invariant. For example, does each step still output a probability distribution that is then sampled probabilistically (the standard setup for transformers), or does each step regress/classify output values deterministically (given the input)? If each step samples a potentially multi-modal probability distribution, how are the samples of different steps coordinated to make sure only compatible modes are selected when sampling?\n \n- Allowing for a reflection transformation in the instances seems like it could enable significantly more part re-use, since parts are often mirrored. Was there a specific reason this was not used? Or can the rotation R also be used to model a reflection? This could be discussed in the future work for example, or a clarification is needed that R can also describe reflections.\n\n- The threshold used to compute the F-score metric should be mentioned. Providing a clear description of what the authors consider to be the main practical contributions of their representation with repeatable parts over existing shape generation methods, and how these contributions are demonstrated in the paper, would be good. The authors show some limitations of their method.", " The paper proposes a new method for geometric and structure reconstruction of composite 3D shapes. The main contribution consists of automatically detecting and encoding the repeated parts. This leads to a relevant contribution in comparison with existing work. 
Moreover, the single parts are accurately reconstructed using neural implicit functions. The proposed approach is fully unsupervised. An exhaustive experimental section shows the improvement of the proposed approach in comparison with other methods for both shape reconstruction and shape segmentation. *Strengths*\n-the paper focuses on a relevant problem that is the automatic encoding of composite objects. The capability of detecting repeatable parts is very important, bringing a significant step toward the semantic understanding of objects and scenes. \n-the use of deep implicit functions enables the method to reconstruct more accurate details of the observed object, improving the geometric representation of the observed parts (differently from other methods that capture only a coarse approximation of shape such as 3D boxes).\n-the proposed composite encoding pipeline is well designed by properly combining existing methods with novel parts. \n- experiments are well organized and show promising results in comparison with other methods. \n\n*Weaknesses*\n-It seems that the proposed work strongly relies on the results of [55]. In this sense the proposed work can be considered as an extension or a refinement of [55]. Authors should better explain this point.\n-The proposed approach seems computationally demanding; authors should discuss and evaluate the computational complexity of their work.\n-The learning part is not fully novel since the main parts are based on already available neural architectures (for box estimation [55], and implicit function estimation [33, 38, 7]).\n -A better explanation of the relation between the proposed approach and [55] is required to understand the impact of the proposed method (i.e., what happens if the method [55] fails).\n-What is the computational complexity of the work in comparison with other methods? \n-Authors emphasized that they proposed a shape program approach but it seems that the ‘program’ part is not considered in the methodological section and results. Authors should clarify why they consider their work as ‘program’-like (or clarify better the meaning of shape program). \n\nMinor points:\nIs it possible that the scale s_i changes for the same part? (i.e., two repeated parts with different scales).\nThe meaning of NM on line 193 is not clear. Authors wrote that primitives are estimated using [55], but NM is predicted by ProGRIP; please clarify. \n Limitations are not discussed. For instance authors should declare their strong dependency on [55]." ]
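To summarize the representation discussed in the ProGRIP thread above (`N` repeatable parts, each instanced as `M` posed occurrences, giving the `NM` posed parts referenced in the responses), here is a minimal schematic in Python. This is a reader's illustration only, not the authors' code; all class and field names are assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PosedOccurrence:
    # One rigid placement of a repeatable part: scale s, rotation R, translation t.
    scale: float
    rotation: List[List[float]]  # 3x3 rotation matrix
    translation: List[float]     # 3-vector

@dataclass
class RepeatablePart:
    # A latent shape code z (decoded by an implicit network) shared by all occurrences.
    shape_code: List[float]
    occurrences: List[PosedOccurrence] = field(default_factory=list)

@dataclass
class ShapeProgram:
    parts: List[RepeatablePart] = field(default_factory=list)

    def num_posed_parts(self) -> int:
        # With N parts and M occurrences each, this is the NM discussed above.
        return sum(len(p.occurrences) for p in self.parts)

# Toy usage: one repeatable part (e.g., a chair leg) placed twice.
identity = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
leg = RepeatablePart(
    shape_code=[0.0] * 8,
    occurrences=[PosedOccurrence(1.0, identity, [x, 0.0, 0.0]) for x in (-0.5, 0.5)],
)
print(ShapeProgram(parts=[leg]).num_posed_parts())  # -> 2
```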
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "gEYlckzKVF2", "KtQb9WXAPG", "dxXbA56d-n", "nips_2022_EENzpzcs4Vy", "nips_2022_EENzpzcs4Vy", "jzWxZAOFlKw", "jzWxZAOFlKw", "VN3Ex1064qy", "Sb_z-g6MqKn", "nips_2022_EENzpzcs4Vy", "nips_2022_EENzpzcs4Vy", "nips_2022_EENzpzcs4Vy" ]
nips_2022_4u-oGqB4Lf6
Efficient Active Learning with Abstention
The goal of active learning is to achieve the same accuracy achievable by passive learning, while using much fewer labels. Exponential savings in terms of label complexity have been proved in very special cases, but fundamental lower bounds show that such improvements are impossible in general. This suggests a need to explore alternative goals for active learning. Learning with abstention is one such alternative. In this setting, the active learning algorithm may abstain from prediction and incur an error that is marginally smaller than random guessing. We develop the first computationally efficient active learning algorithm with abstention. Our algorithm provably achieves $\mathsf{polylog}(\frac{1}{\varepsilon})$ label complexity, without any low noise conditions. Such performance guarantee reduces the label complexity by an exponential factor, relative to passive learning and active learning that is not allowed to abstain. Furthermore, our algorithm is guaranteed to only abstain on hard examples (where the true label distribution is close to a fair coin), a novel property we term \emph{proper abstention} that also leads to a host of other desirable characteristics (e.g., recovering minimax guarantees in the standard setting, and avoiding the undesirable ``noise-seeking'' behavior often seen in active learning). We also provide novel extensions of our algorithm that achieve \emph{constant} label complexity and deal with model misspecification.
Accept
In this paper, the authors develop the first computationally efficient active learning algorithm with abstention, while maintaining the exponential savings in terms of label complexity. Furthermore, the proposed algorithm enjoys other nice properties, such as recovering minimax rates in the standard setting. The algorithm is based on novel applications of techniques from contextual bandits, and the analysis is nontrivial. On the other hand, the authors should improve their paper by addressing the concerns of reviewers, especially the realizable assumption.
train
[ "ERNnqqxssc", "4XZa2aCbsPt", "OjMhucNTMhl", "4yu5LDpBI3CR", "cLzTR24zjT", "yNaSPhHjDWY", "FQABQiNb6C", "edAwltDxlA7", "M8lRHkb8DUP", "80QUogWmHmK", "9ZJcTEYLLyD", "FDVTnCl_hAM", "QiZFKf9wJs" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply. \n\nWe believe it's hard to find *a single function class* that contains all possible true conditional probability (under *arbitrary* ${\\cal D_{XY}}$) and has a non-trivial disagreement coefficient (we can trivially bound $\\theta \\leq \\gamma/\\epsilon$ for any ${\\cal F}$; see Definition 1 and footnote 3).\nOur guarantees hold under realizability and bounded disagreement coefficient; we discuss example function classes in the paper (which strictly generalizes previous results focusing on linear models).\nWe see two directions to further strengthen our results:\n\n - Relax the realizability assumption. We develop preliminary results when the assumption is only approximately achieved (see Section 4.2). We believe a comprehensive understanding of the problem without realizability is an important future direction.\n\n - Bound disagreement coefficient (or eluder dimension/star number) for richer function classes. Since our algorithms are designed for general function classes, any future developments on these complexity measures directly lead to broader applications of our algorithms.\n \nWe are happy to add more discussions on this issue in the revision.", " Thank you for the detailed response. The rebuttal has addressed most parts of my concerns. But I am still a little bit confused by the boundedness of $\\theta$.\n\nIt seems that the main theorems of this paper rely on the crucial realizability assumption, where the $\\mathcal{F}$ should contain a regressor characterizing the true conditional probability. But it is unclear to me whether the listed examples (such as linear function and generalized linear function) meet the condition in general. Does there exists a function class containing all possible true conditional probability but has bounded $\\theta$. I believe a clearer discussion on this issue would make this paper more competitive.", " Thank you for your prompt response. We'll definitely incorporate suggestions from reviewers into the revision of the paper.", " I would like to thank the authors for the detailed responses provided in the rebuttal. While the manuscript has not been updated yet on openreview, I hope that the authors will incorporate the suggestions from reviewers for the camera ready version of the paper. All things considered, I believe this paper would be a good addition to the conference, and hence, I keep my score and recommend acceptance.", " We thank Reviewer 8Wbo for spending their time reviewing our paper. We first notice that Reviewer 8Wbo has clearly stated the following in their review: \n\n> [note confidence score of 1 (@authors + @AC). this is largely outside my comfort zone with many proofs, of which I did not study in details. I would recommend discarding my opinion.]\n\nWe next respond to Reviewer 8Wbo's comments to help the reviewer resolve their concerns. \n\n\n\n1. > Reviewer 8Wbo wrote ''I think lines 35-38 already starts this paper off on the wrong foot. The objective of active learning is to the learn the decision boundary with minimal labels; what the model does with points close to the decision boundary or with high uncertainty once this is learnt could be considered another problem. If the model can the learn the decision boundary with high accuracy given more queries around the decision boundary then this is justified.'', and ''Scenario proposed does not reasonable.''\n\t\n**Response:** Active learning aims to learn good classifiers with low label complexity (i.e., at least better than passive learning). 
**However, such a goal cannot be achieved without additional low noise assumptions due to fundamental lower bounds established in active learning (Kaariainen, 2006)**. As a result, to apply active learning in real-world scenarios (i.e., cases without low noise assumptions), it's necessary to consider a refinement of the label complexity goal (as suggested by Kaariainen (2006)). We consider one such refinement with abstention and Chow's excess error and provide the first computationally efficient active learning algorithm that achieves exponential label savings without any low noise assumptions. We believe our problem setup is reasonable, and our results are important to the active learning community (see Section 1.3 for a summary of other main contributions of our paper).\n\t\n2. > Reviewer 8Wbo wrote ''Certainly seems new, but a more directed related work would be appreciated.'', ''I find the theoretical exposition somewhat lacking in places-- e.g. for proposition 3 and the statement follows of the algorithms superiority over any uncertainty-based AL method.'', and ''Parts of the paper make strong statements without backing.''\n\t\n**Response:** As clearly stated in line 163, we discuss additional related work and provide complete proofs in the Appendix (see the supplementary material) due to lack of space.\n\n3. > Reviewer 8Wbo wrote ''No experimental results whatsoever.''\n\t\n**Response:** We agree that empirically examining the proposed algorithms is an interesting future direction. However, as it stands, our contributions are theoretical, and we would like them to be viewed as such.\n\n4. > Reviewer 8Wbo wrote ''The paper is not easy to read.'', and ''Neither well written nor well organised. Language is imprecise and unclear.''\n\t\n**Response:** We hope Reviewer 8Wbo can explicitly point out where they find the paper hard to follow. We can further polish/re-organize the paper to make it more readable. ", " Thank you for your positive review. We hope our responses below can help resolve your concerns.\n\n\n1. > The computational efficiency is phased in terms of number of calls to an oracle, yet leaving the runtime of that oracle unsettled. Please provide concrete computational cost analysis to justify the main contribution.\n\n**Response:** The implementation of the regression oracle should be viewed as an efficient operation since the regression oracle solves a convex optimization problem with respect to $f$, which even admits closed-form solutions for certain function classes. A concrete example is when $f$ is a linear function in $\\mathbb{R}^{d}$: With a set of $N$ data points, we can implement the regression oracle with runtime $O((N+d)\\cdot d^{2})$ using the standard least squares method.\n\n2. > It is true that [PT21] runs with minimizing an empirical 0/1 loss which is NP-hard. Can you give more intuition on why the 0/1 loss is vital for their analysis, and why the regression oracle approach in the paper works as well?\n\n**Response:** The analysis of Puchkin and Zhivotovskiy (2021) follows the standard active learning analysis (as their authors agree, see Appendix B of their paper), which relies on connecting the empirical error on the labeled dataset and the whole dataset (e.g., Step 0 in their Appendix B). Such connection is established using the 0/1 loss. Besides, 0/1 loss is an unbiased estimator of the true error, which can be easily analyzed using standard concentration results. 
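As a concrete aside on point 1 above (the oracle runtime claim): a minimal sketch of the closed-form least-squares oracle for the linear case might look as follows. This snippet is illustrative only and assumes NumPy; the function name and toy data are ours, not from the paper:

```python
import numpy as np

def linear_regression_oracle(X, y, weights=None):
    # (Weighted) square-loss regression oracle for the linear class:
    # returns argmin_w sum_i w_i * (x_i^T w - y_i)^2 in closed form.
    if weights is not None:
        s = np.sqrt(weights)
        X, y = X * s[:, None], y * s
    # Standard least squares; runtime on the order of O((N + d) * d^2),
    # matching the claim in the response above.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Toy usage with synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.normal(size=200)
print(linear_regression_oracle(X, y))
```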
\n\nTo get computational efficiency, we need to consider regression over the square loss (since ERM on 0/1 loss is known to be NP-hard). Our analysis (e.g., proof of Theorem 4 on page 18 of the supplementary material) shows that the classifier induced from a regression function with small square loss enjoys low error; such guarantee is developed with concentration results regarding the square loss (Lemmas 1-4 in Appendix D; pages 16-17 of supplementary material) and properties of the uneliminated set of regression functions (Lemmas 5-9 in Appendix E.1; pages 20-22 of supplementary material).", " Thank you for your thorough review and positive comments. Please see our responses to specific comments below.\n\n### **Responses to Weaknesses:**\n\n1. > about the realizable assumption: although the realizable has frequently appeared in the active learning literature, the most related work on active learning with abstention seems not to require such an assumption (Theorem 1.1 of [Puchkin and Zhivotovskiy, 2021]). It seems to me that the realizable assumption is the price for the efficient algorithm since Algorithm 1 requires to approximate $\\eta(y = +1 \\vert x)$ with the function $f(x)$ (by ucb and lcb). So, I think it would be necessary to make a more clear comparison with the previous work.\n\n**Response:** Yes, the work of Puchkin and Zhivotovskiy (2021) doesn't require realizability (but instead requires the NP-hard ERM oracle). We tend to believe that (approximate) realizability is the price for computational efficiency. We are happy to add clarifications in the revision. \n\n2. > issue on the parameter setting: the algorithm takes the disagreement coefficient $\\theta$ as the input. I am not sure whether such a coefficient can be calculated efficiently in general? (maybe an upper bound for $\\theta$ is enough in special cases, but the realizable assumption could be violated.)\n\n**Response:** This is a great question. We answer your question from the following two aspects.\n\n(1) Our algorithm only requires an upper bound on the disagreement coefficient $\\theta$. We know that $\\theta \\leq d$ for linear functions and $\\theta \\leq C_{\\textsf{link}}\\cdot d$ for generalized linear functions ( $C_{\\textsf{link}}$ is a constant depending on the link function). More generally, we can upper bound $\\theta$ by the eluder dimension or the (squared) star number, which are well-known complexity measures that have been previously studied in the literature (see page 15 of supplementary material for detailed discussion). That being said, once the eluder dimension/star number of a function class has been theoretically analyzed (there is no need to compute it empirically), we can use it as an upper bound for $\\theta$.\n\n(2) We don't necessarily need to take $\\theta$ as an input to the algorithm. Instead, we can simply run a modified version of Algorithm 1 with $T = \\widetilde O (\\frac{\\textsf{Pdim}({\\cal F})}{\\epsilon \\gamma})$ and achieve excess error $O(\\epsilon \\cdot \\theta)$. In this case, $\\theta$ is only a theoretical quantity that our algorithm is agnostic of.\n\n3. > issue on the estimation of lcb and ucb: although the authors have referred to [Krishnamurthy et al. 2017] for the calculation of the lcb and ucb, I think it would be nice to discuss their computational costs since the efficiency is one of the main contributions of this paper. 
(for example how hard it is to compute lcb or ucb for a ${\\cal F}$ containing the $f^{\\star}$?).\n\t\n**Response:** The computational cost is in terms of the number of calls to the regression oracle (defined above line 84), which solves a convex optimization problem wrt $f$ (and even admits a closed-form solution when $f$ is linear). Previous results (Proposition 8 on page 19 of supplementary material) show that one can achieve $\\alpha$ approximation error with $O(\\frac{1}{\\alpha^{2}} \\log \\frac{1}{\\alpha})$ (or $O(\\log \\frac{1}{\\alpha})$ when ${\\cal F}$ is convex) calls to the regression oracle. Our analysis (Theorem 5, with proofs on page 19 of supplementary material) shows that it suffices to approximate lcb/ucb with approximation error $\\alpha = O(\\frac{\\gamma}{\\log T})$ to achieve the same theoretical guarantees shown in Theorem 4 up to changes in constant terms. This leads to the computational guarantees stated in Theorem 5.\n\n---\n\n### **Responses to Questions:**\n\n1. > can the proposed method achieves similar Chow's excess risk without the realizable assumption when comparing with the best model in the hypothesis space ${\\cal F}$.\n\t\n**Response:** We currently managed to provide guarantees when the approximation error $\\kappa$ is small (see Theorem 9 in Section 4.2). We provide preliminary evidence (see Section 4.2 and Appendix H.2) on why such a requirement is needed in our analysis. We believe a comprehensive understanding of the problem without realizability is an important future direction.\n\n2. > how to compute the parameter $\\theta$ efficiently (please refer to the second point of the weakness for more details)\n\n**Response:** Please see our response to point 2 in the **Responses to Weaknesses** section above.\n\n3. > what is the computational cost of ucb and lcb for a hypothesis space ${\\cal F}$ containing $f^{\\star}$\n\n**Response:** Please see our response to point 3 in the **Responses to Weaknesses** section above.", " ### **Responses to Questions (Cont'd)**\n\n3. > Algorithm 1 enjoys strong guarantees at arbitrary noise levels. This is captured in the bounds via $\\gamma$ which needs to be chosen as a function of the noise level (indeed this is explicit in the proof of Theorem 6) and $\\theta$ (which captures the disagreement compared to $f^{\\star}$ at a fixed level $\\gamma$). How do $\\theta$ and the bound on sample complexity change when $\\gamma$ is chosen inappropriately in Theorem 6, without knowledge of the noise level?\n\n**Response:** We first clarify two points: (1) Under Chow's excess error (defined at lines 87-97), $\\gamma$ is a given parameter that is not chosen based on other quantities (it is associated with the definition of Chow's excess error). (2) Under standard excess error, $\\gamma$ is chosen based on the noise level but not $\\theta$. Instead, $\\theta$ (or an upper bound of $\\theta$) is a function of $\\gamma$, and its value is not affected even if $\\gamma$ is chosen inappropriately regarding the noise level.\n\nWe now discuss the effect of the unknown noise level. Our current analysis of Theorem 6 requires the knowledge of noise level (also previously assumed in active learning literature, e.g., Balcan et al. (2007), Hanneke (2014)). If $\\gamma$ is chosen inappropriately (when the noise level is unknown), we can still upper bound the standard excess error using Eq. (4) (below line 247). However, the excess error bound may no longer match the minimax lower bound. 
Designing minimax optimal algorithms that can automatically adapt to the noise level is left as an interesting future direction.\n\n4. > Can the noise-seeking noise condition be relaxed for Proposition 3? The negative result for uncertainty-based AL is similar in spirit to the one in “On the relationship between data efficiency and error for uncertainty sampling” Mussman \\& Liang, 2018. While the analysis here is significantly different, the result in Mussman et al only requires non-vanishing Bayes error to reveal the failure of uncertainty sampling.\n\n**Response:** Thank you for pointing out the paper by Mussmann and Liang (2018). We answer this question from the following three aspects.\n\n(1) Proposition 3 states, ''there exists a learning problem satisfying Definition 5/Definition 6 (the noise-seeking conditions) such that any 'uncertainty-based' active learner doesn't perform well''. This is essentially a lower bound statement, and the fact of satisfying noise-seeking conditions only makes the lower bound stronger: It automatically ensures that ''there exists a learning problem such that any 'uncertainty-based' active learner doesn't perform well''.\n\n(2) To strengthen Proposition 3, one needs to change ''there exists a learning problem'' to ''for any learning problem''. However, we don't believe such a statement is true: There certainly exist easy learning problems where ''uncertainty-based'' active learners perform well.\n\n(3) Nevertheless, there is no conflict with the statements proved in Mussmann and Liang (2018): They analyze a special two-stage uncertainty sampling algorithm (see Section 4.1 of their paper), and our ''uncertainty-based'' active learner is formally defined in Definition 10 (page 24 of supplementary material). Also, they study the relative performance of uncertainty sampling with respect to random sampling, yet we consider the absolute performance of ''uncertainty-based'' active learners. \n\n5. > Minor remark: “Noise-seeking Massart/Tsybakov noise” can be a bit confusing. Perhaps something like “Random flip-allowing Massart noise” could make it easier to grasp (although it’s a bit of a mouthful)?\n\n**Response:** Thank you for your suggestion. We will take your suggestion into consideration in the revision. \nWe currently use the term ''noise-seeking'' to refer to the noise-seeking behavior (i.e., over-sampling from the high-noise regions) of standard ''uncertainty-based'' algorithms under the proposed noise assumptions.", " Thank you for your thorough review and positive comments. Please see our responses to specific comments below.\n\n### **Responses to Weaknesses:**\n\n1. > While the paper is generally easy to follow, certain details regarding the algorithm or the analysis could be discussed in more detail in the main text (see the Questions section)\n\n**Response:** Thank you for your suggestion. We are quite space-limited in the submission, but we are happy to add more details to the main content in the revision.\n\n2. > Minor remarks: There are a few typos in the paper (e.g. lines 259, 266, 274 etc). Also the pseudocode of Algorithm 1 can be made a bit more precise and easier to follow (perhaps add a notation for the labeled set; $Q_t, x_t, y_t$ are not defined when they appears first in step 4; unspecified how $\\widehat f_1$ is selected etc).\n\n**Response:** Thank you for catching these typos; we'll correct them in the revision. We are also happy to adjust the pseudocode of Algorithm 1. 
Regarding your concerns: $Q_t, x_t, y_t$ are defined at lines 10 - 12; and $\\widehat f_1$ can be selected arbitrarily since the first epoch is of length $2$ (and thus, the total number of labels queried in the first epoch is never larger than $2$).\n\n---\n\n### **Responses to Questions:**\n\n1. > How does the analysis of Algorithm 1 change if we assume a finite unlabeled set? Can the bound of Theorem 4 be changed to factor in the size of the unlabeled set?\n\t\n**Response:** Our analysis works as long as one can randomly sample $(x,y)$ from the underlying distribution ${\\cal D_{XY}} = {\\cal D_{X}} \\times {\\cal D_{{Y} \\vert {X}}}$ ($y$ is observed only after label query). When ${\\cal X}$ is finite, we can take ${\\cal D_{\\cal X}}$ as the uniform distribution over ${\\cal X}$, and ${\\cal D_{{\\cal Y} \\vert {\\cal X}}}$ as the labeling distribution. If ${\\cal D_{{\\cal Y} \\vert {\\cal X}}}$ is stochastic (with fresh randomness each time), our analyses/guarantees are the same as before. If ${\\cal D_{{\\cal Y} \\vert {\\cal X}}}$ is deterministic, the sample complexity is trivially upper bounded by the cardinality of ${\\cal X}$: There is no need to query the label of a previously queried data point. Obtaining better dependence on the cardinality of ${\\cal X}$ may require additional structural assumptions.\n\n2. > Steps 5-6 of Algorithm 1 approximate the lcb/ucb over the set of functions ${\\cal F_{\\text{m}}}$. For what function classes ${\\cal F}$ is this step tractable? How can the approximation error of the lcb/ucb influence the result of Theorem 4? For what function classes is the approximation of the lcb/ucb “reasonable”?\n\n**Response:** We can efficiently approximate lcb/ucb as long as one has access to a regression oracle that solves the (weighted) square loss optimization problem (defined above line 84) with respect to the function class ${\\cal F}$. The optimization problem is reasonably easy to solve since it is convex with respect to the regression function $f \\in {\\cal F}$. It even admits closed-form solutions in many cases (e.g., it is reduced to standard least square when $f$ is linear). \n\nWe carefully dealt with the approximation error so that the guarantees in Theorem 4 still hold (up to changes in constant terms). Previous results (Proposition 8 on page 19 of supplementary material) show that one can achieve $\\alpha$ approximation error with $O(\\frac{1}{\\alpha^{2}} \\log \\frac{1}{\\alpha})$ (or $O(\\log \\frac{1}{\\alpha})$ when ${\\cal F}$ is convex) calls to the regression oracle. Our analysis (Theorem 5, with proofs shown on page 19 of supplementary material) shows that it suffices to approximate lcb/ucb with approximation error $\\alpha = O(\\frac{\\gamma}{\\log T})$ to achieve the same theoretical guarantees shown in Theorem 4 up to changes only in constant terms.\n\n**Continued in the next response.**", " Paper proposes an active learning algorithm in which labels are not acquired when the model chooses to abstain from predicting. [note confidence score of 1 (@authors + @AC). this is largely outside my comfort zone with many proofs, of which I did not study in details. I would recommend discarding my opinion.]\n\nI think lines 35-38 already starts this paper off on the wrong foot. The objective of active learning is to the learn the decision boundary with minimal labels; what the model does with points close to the decision boundary or with high uncertainty once this is learnt could be considered another problem. 
If the model can the learn the decision boundary with high accuracy given more queries around the decision boundary then this is justified.\n\nStrengths\n- The introduction makes clear what the paper is trying to achieve.\n\nWeaknesses\n- The paper is not easy to read.\n- Scenario proposed does not reasonable.\n- No experimental results whatsoever.\n- Lots of propositions and theorems are stated in the paper, but all the proofs in the appendix. I have but skimmed these. \n\nOriginality:\nCertainly seems new, but a more directed related work would be appreciated.\n\nQuality:\nI find the theoretical exposition somewhat lacking in places-- e.g. for proposition 3 and the statement follows of the algorithms superiority over any uncertainty-based AL method. No experimental results.\n\nClarity:\nNeither well written nor well organised. Language is imprecise and unclear. Parts of the paper make strong statements without backing.\n\nSignificance:\nDifficult to assess the impact of the method. The impact of the paper however will be small, given the problems above. N/A N/A", " The paper proposes an active learning algorithm that can avoid sampling from regions of the input space with high label noise. The algorithm satisfies two important properties: 1) it achieves exponential improvements compared to passive learning with respect to an evaluation metric that penalizes abstentions (Chow’s excess error); and 2) the algorithm is computationally tractable for finite pseudo dimension function classes.\n Strengths:\n\n- The paper employs in a creative way techniques from contextual bandit literature to extend the idea of Puchkin et al and propose a computationally tractable algorithm. Moreover, the result is particularly remarkable since it does not require a condition constraining the amount of label noise, but rather captures it in the bound.\n\n- The analysis for deriving the results is non-trivial and some of the connections to quantities from the contextual bandit literature (e.g. eluder dimension, disagreement coefficient) may be of independent interest for the active learning community.\n\n- The paper contains several results that help to position the proposed algorithm in the broader active learning literature. For instance, the analysis of Section 3 confirms that the algorithm is minimax optimal (albeit not substantially better than passive learning) with respect to the standard excess risk.\n\nWeaknesses:\n\n- While the paper is generally easy to follow, certain details regarding the algorithm or the analysis could be discussed in more detail in the main text (see the Questions section)\n\n- Minor remarks: There are a few typos in the paper (e.g. lines 259, 266, 274 etc). Also the pseudocode of Algorithm 1 can be made a bit more precise and easier to follow (perhaps add a notation for the labeled set; Q_t, x_t, y_t are not defined when they appears first in step 4; unspecified how $\\hat{f}_1$ is selected etc).\n - How does the analysis of Algorithm 1 change if we assume a finite unlabeled set? Can the bound of Theorem 4 be changed to factor in the size of the unlabeled set?\n\n- Steps 5-6 of Algorithm 1 approximate the lcb/ucb over the set of functions $F_m$. For what function classes $F$ is this step tractable? How can the approximation error of the lcb/ucb influence the result of Theorem 4? For what function classes is the approximation of the lcb/ucb “reasonable”?\n\n- Algorithm 1 enjoys strong guarantees at arbitrary noise levels. 
This is captured in the bounds via $\\gamma$ which needs to be chosen as a function of the noise level (indeed this is explicit in the proof of Theorem 6) and $\\theta$ (which captures the disagreement compared to $f^\\star$ at a fixed level $\\gamma$). How do $\\theta$ and the bound on sample complexity change when $\\gamma$ is chosen inappropriately in Theorem 6, without knowledge of the noise level?\n\n- Can the noise-seeking noise condition be relaxed for Proposition 3? The negative result for uncertainty-based AL is similar in spirit to the one in “On the relationship between data efficiency and error for uncertainty sampling” Mussman & Liang, 2018. While the analysis here is significantly different, the result in Mussman et al only requires non-vanishing Bayes error to reveal the failure of uncertainty sampling.\n\n- Minor remark: “Noise-seeking Massart/Tsybakov noise” can be a bit confusing. Perhaps something like “Random flip-allowing Massart noise” could make it easier to grasp (although it’s a bit of a mouthful)?\n The paper generally addresses some of the poignant limitations of the analysis (e.g. focus on finite pseudo dimensions function classes, realizable vs agnostic case etc). See the Questions section for other limitations that could also be discussed in the paper.\n", " This paper studies the pool-based active learning problem. The main contribution is to propose a computationally efficient algorithm to train a rejection model. Under the realizable case, the model enjoys $\\epsilon$ chow's excess risk with $\\widetilde{O}(\\mathrm{polylog}(1/\\epsilon))$ label complexity. The guarantee is achieved without any low noise assumption commonly used to achieve the exponential savings label complexity in literature. Although a similar rate (for learning with abstentions) has already appeared in the literature, the proposed method is more efficient (or practical) than the previous one. Besides the main result, the authors also show that (a slight modification of) the proposed method enjoys minimax optimal label complexity for the standard excess risk with the low noise assumption. Furthermore, this paper has shown a constant label complexity in a special case (with a finite hypothesis set) and presented the guarantees with model misspecification. ### Strength:\nOverall, I think this is a nice paper with fruitful results. Specifically, the strengths of this paper are listed as follows,\n+ novelty& significance: although the algorithm framework shares a similar spirit as the previous work [Krishnamurthy et al. 2017] in standard active learning with abstention, the new criterion for label querying is interesting to me. A similar rate for active learning with abstention has been achieved by [Puchkin and Zhivotovskiy, 2021], but a computationally efficient algorithm is always what we desire. \n\n+ clarity: this paper is well written and clearly structured for the most part. 
Although there are fruitful results regarding Chow's excess risk and the standard excess risk (under different conditions), the authors have clearly organized them to make the results easy to follow.\n\n### Weakness:\nIn general, I like the results of the paper, but I still have some reservations about the assumption and the computational cost as follows,\n- about the realizable assumption: although the realizable has frequently appeared in the active learning literature, the most related work on active learning with abstention seems not to require such an assumption (Theorem 1.1 of [Puchkin and Zhivotovskiy, 2021]). It seems to me that the realizable assumption is the price for the efficient algorithm since Algorithm 1 requires to approximate $\\eta(y=+1|\\mathbf{x})$ with the function $f(\\mathbf{x})$ (by ucb and lcb). So, I think it would be necessary to make a more clear comparison with the previous work.\n\n- about the efficient algorithm:\n\t- issue on the parameter setting: the algorithm takes the disagreement coefficient $\\theta$ as the input. I am not sure whether such a coefficient can be calculated efficiently in general? (maybe an upper bound for $\\theta$ is enough in special cases, but the realizable assumption could be violated.)\n\t- issue on the estimation of lcb and ucb: although the authors have referred to [Krishnamurthy et al. 2017] for the calculation of the lcb and ucb, I think it would be nice to discuss their computational costs since the efficiency is one of the main contributions of this paper. (for example how hard it is to compute lcb or ucb for a $\\mathcal{F}$ containing the $f_\\star$?).\n Q1: can the proposed method achieves similar Chow's excess risk without the realizable assumption when comparing with the best model in the hypothesis space $\\mathcal{F}$.\n\nQ2: how to compute the parameter $\\theta$ efficiently (please refer to the second point of the weakness for more details)\n\nQ3: what is the computational cost of ucb and lcb for a hypothesis space $\\mathcal{F}$ containing $f_\\star$ This paper has discussed its limitation on the realizable assumption in Section 4.2. It has shown that the same exponential saving label complexity is achieved with a misspecified model space as long as $\\epsilon$ is less than the approximation error $\\kappa$. The results partially address the limitation on the assumption, but I think the paper would become even stronger if the authors could show a similar convergence rate for Chow's excess risk compared with the best model in the hypothesis when $f_*$ is not in $\\mathcal{F}$.", " The paper studies active learning of general concept classes. Lower bound is known in this regime to rule out savings in label complexity over passive learning. However, [PT21] showed that with the additional action of abstention, active learning does provide exponential savings in terms of the error rate. This work follows the research line, and the main contribution falls into a computationally efficient algorithm that achieves label complexity comparable to [PT21]. 
The main algorithm relies on efficient implementation of regression oracles, which has been developed in prior works.\n Strengths:\n+ Active learning is a very useful tool to reduce labeling cost, and this paper studies an interesting and practical extension.\n+ The core contribution on efficient learning paradigm is important.\n+ The paper is well written and easy to follow, with right amount of reminders and pointers.\n\nWeakness:\n- The computational efficiency is phased in terms of number of calls to an oracle, yet leaving the runtime of that oracle unsettled. Please provide concrete computational cost analysis to justify the main contribution.\n- It is true that [PT21] runs with minimizing an empirical 0/1 loss which is NP-hard. Can you give more intuition on why the 0/1 loss is vital for their analysis, and why the regression oracle approach in the paper works as well?\n \nIt is true that [PT21] runs with minimizing an empirical 0/1 loss which is NP-hard. Can you give more intuition on why the 0/1 loss is vital for their analysis, and why the regression oracle approach in the paper works as well? Yes." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 1, 3, 3, 3 ]
[ "4XZa2aCbsPt", "FQABQiNb6C", "4yu5LDpBI3CR", "edAwltDxlA7", "80QUogWmHmK", "QiZFKf9wJs", "FDVTnCl_hAM", "M8lRHkb8DUP", "9ZJcTEYLLyD", "nips_2022_4u-oGqB4Lf6", "nips_2022_4u-oGqB4Lf6", "nips_2022_4u-oGqB4Lf6", "nips_2022_4u-oGqB4Lf6" ]
nips_2022_J-IZQLQZdYu
Brownian Noise Reduction: Maximizing Privacy Subject to Accuracy Constraints
There is a disconnect between how researchers and practitioners handle privacy-utility tradeoffs. Researchers primarily operate from a privacy first perspective, setting strict privacy requirements and minimizing risk subject to these constraints. Practitioners often desire an accuracy first perspective, possibly satisfied with the greatest privacy they can get subject to obtaining sufficiently small error. Ligett et al. have introduced a "noise reduction" algorithm to address the latter perspective. The authors show that by adding correlated Laplace noise and progressively reducing it on demand, it is possible to produce a sequence of increasingly accurate estimates of a private parameter and only pay a privacy cost for the least noisy iterate released. In this work, we generalize noise reduction to the setting of Gaussian noise, introducing the Brownian mechanism. The Brownian mechanism works by first adding Gaussian noise of high variance corresponding to the final point of a simulated Brownian motion. Then, at the practitioner's discretion, noise is gradually decreased by tracing back along the Brownian path to an earlier time. Our mechanism is more naturally applicable to the common setting of bounded $\ell_2$-sensitivity, empirically outperforms existing work on common statistical tasks, and provides customizable control of privacy loss over the entire interaction with the practitioner. We complement our Brownian mechanism with ReducedAboveThreshold, a generalization of the classical AboveThreshold algorithm that provides adaptive privacy guarantees. Overall, our results demonstrate that one can meet utility constraints while still maintaining strong levels of privacy.
Accept
The reviewers unanimously agreed that the paper is well-motivated and the theoretical results surrounding the proposed Brownian mechanism are interesting. Initial concerns regarding presentation and clarity were assuaged after the authors' responses to the reviews. Overall, the paper is a non-trivial and valuable extension of [Ligget et al., 2017] and should be presented at the conference.
test
[ "aNPB3djtXz", "lGvoS0YDHx-", "Yt8xyhmAkgxg", "a18_FK-mlub", "dql8zcrcXz6", "fecGNrQTpqx", "5aQdOF3GKo8", "NRqUWCBBDIt", "4GD4YmSkIZe", "QOynWzqWGgy", "1TMOCNZkudH", "9KOQudDLoiv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the clarifications and edits. I encourage you to give the paper one more round of editing to correct minor typos, e.g., $(Z_t)_{t \\ge 0}$ in Line 236, and note that the probabilities sum to 1 in the proof of Proposition C.1 so the total sum is $\\gamma/2$. Also, it could be helpful to increase the font sizes in the plots.\n\n", " Thank you for the detailed response and the clarifications, and apologies for the shifted section numbering.", " Thank you for your detailed response, and in particular, the response to my first question has addressed my main concern. I will increase my score accordingly.", " Thanks you for the clarifications", " Response to Questions:\n\n> 1. Definition 3 does not make sense. What does the symbol $\\sigma$...\n\nThe notation $\\sigma(X)$ denotes the sigma algebra generated by the random variable $X$. Intuitively, $\\sigma(X)$ denotes the set of all possible events that can involve the random variable $X$. Likewise, if is $(X_n)$ is a sequence of random variables (or equivalently a sequence of algorithm outputs), $N$ is said to be a stopping time with respect to $(X_n)_{n \\geq 1}$ if the event $(N \\leq n) \\in \\sigma(X_m : m \\leq n)$ for each $n$. Intuitively, this means whether or not one has stopped by time n can be determined just from the random variables or algorithm outputs observed up to and including time $n$. We have added a formal definition of these objects to aid the reader.\n\n> 2. Is it efficient to produce samples from the Brownian motion algorithm or...\n\nIt is efficient to produce samples from both mechanisms. To produce a sample from the Brownian mechanism, one first samples a normal random variable with mean 0 and variance $T_1$ (since the first time is deterministic). Given the first $n - 1$ samples, the time $T_n$ is computed. One can then sample a point at the corresponding time $T_n$ using a Brownian bridge. To sample from the Laplace Noise Reduction technique, one first generates all of the points of jump for the inhomogeneous poisson process, which can be done using standard python packages. Then, one generates independent Laplace random variables with variances according to the sampled times from the point process. That is, if a jump occurs at time $t$, then one generates a $\\mathrm{Lap}(t)$ random variable. Given these variables, one can compute the value of $Z_s$ for any $s \\geq \\eta$. The condition $\\eta > 0$ ensures that the number of jumps of the Poisson Process is finite almost surely. We have added a short description to the appendix summarizing the above information.\n", " Response to Weaknesses:\n\n> 1. In my opinion, the presentation can be improved by providing more high-level descriptions...\n\nWe have added justification for the importance of adaptivity in LNR in Section 4. Extending LNR to the continuous case is beneficial as it allows a data analyst to adaptively select how much noise to remove from the perturbed optimal parameter. The original Laplace-based mechanism, as leveraged by Liggett et al., requires prefixing privacy/noise levels (choosing a sequence of constants before interacting with the data), which is very restrictive. In particular, our generalization is needed for Algorithm 1, which allows the data analyst to adaptively specify privacy levels based on observed parameter iterates.\n\n> 2. 
As the privacy of the Brownian Mechanism is ex-post and is determined by the privacy functions...\n\nWe have included a more intuitive description of privacy boundaries before the definition. To use the Brownian mechanism, a data analyst first selects a family of privacy boundaries — either linear or mixture. If the analyst has a target approximate privacy level in mind, then they should use a linear boundary. Otherwise, a mixture boundary may be preferable as it offers greater overall tightness at the cost of tightness at any specific point in time. Then, the data analyst selects the parameters for said boundaries. This can be done heuristically, or the data analyst can select the parameters to optimize the privacy boundary for tightness at a prespecified privacy level (as is done in the experiments). This optimization can be done with a simple function call. Then, the user can either (1) evaluate the privacy boundary at a given time function to observe the privacy loss, or (2) specify a privacy function (i.e. target privacy level), computing the corresponding time as given in Corollary 3.7.\n\nResponse to Questions:\n\n> 1. As in [LNRWW 2017], Lemma 3.2 builds on the Markov property of the Brownian mechanism. However, as in general, the time functions may depend on the future of the Brownian noise...\n\nThe strong Markov property is not actually needed to prove Lemma 3.2. To address the reviewers concerns, we have provided a more direct proof that directly examines the joint density of $(B_{T_1}, \\dots, B_{T_n})$ at an arbitrary starting point $\\mu$. This direct proof can be found in Appendix B of the rebuttal submission, in the stead of the original proof. First, it is clear that $B_{T_1} \\sim \\mathcal{N}(\\mu, T_1)$, as $T_1$ is just a constant function. Then, given $B_{T_1}, \\dots, B_{T_{m - 1}}$, we know using a Brownian bridge that the conditional distribution of $B_{T_m}$ is $B_{T_m} \\sim \\mathcal{N}\\left(\\mu + \\frac{T_m}{T_{m - 1}}(B_{T_{m - 1}} -\\mu), \\frac{(T_{m - 1} - T_m)T_m}{T_{m - 1}}\\right)$. In short, one computes that $$p_{1:n}^\\mu\\left(B_{T_n}, \\dots, B_{T_1}\\right) \\propto \\exp\\left( - \\frac{(B_{T_1} - \\mu)^2}{2T_1}\\right)\\prod_{m = 2}^n \\exp\\left(\\frac{-(B_{T_m} - \\mu - \\frac{T_m}{T_{m - 1}}(B_{T_{m -1}} - \\mu))^2}{2(T_{m - 1} - T_m)}\\cdot \\frac{T_{m -1}}{T_m}\\right).$$ \n\nOne can then check the equivalence $p_{1:n}^\\mu\\left(B_{T_n}, \\dots, B_{T_1}\\right) \\propto \\exp\\left(\\frac{-(B_{T_n} - \\mu)^2}{2T_n}\\right)\\prod_{m = 2}^n \\exp\\left(\\frac{-(B_{T_{m - 1}} - B_{T_m})^2}{2(T_{m - 1} - T_m)}\\right)$. Given this, it is clear that, for $\\mu, \\mu'$, the ratio of densities is $\\frac{p^\\mu(B_{T_n}, \\dots, B_{T_1})}{p^{\\mu'}(B_{T_n}, \\dots, B_{T_1})} = \\frac{\\exp\\left(\\frac{-(B_{T_n} - \\mu)^2}{2T_n}\\right)}{\\exp\\left(\\frac{-(B_{T_n} - \\mu')^2}{2T_n}\\right)}$, as desired.\n\n> 2. The notation of the stopping function is a little confusing, as it gives a false impression...\n\nWe have chosen the notation $N(x)$ as to indicate that the stopping time $N$ does indeed depend on the underlying dataset, but have chosen not to write $N(M_{1:n}(x))$ for notational ease. In the rebuttal submission, when introducing time functions (the first instance of this indexing notation), we now explicitly establish that $T_n$ depends just on the dataset $x$ through observed iterates $M_{1:n}(x)$. We hope this will ease confusion for the reader. \n\n\n", " Response to Weaknesses:\n\n> 1. 
The main issue with the paper is clarity...\n\nThe definitions of the aforementioned objects are intended to be fully rigorous, and hence ensure the correctness of the proofs found in the paper. After each definition, there are intuitive written explanations. We had not provided such an intuitive explanation for privacy boundaries, and thus have now introduced one into the rebuttal revision of the paper. \n\n> 2. Corollary 2.7, if I understand the intention correctly, would be...\n\nWe believe the reviewer means Corollary 3.7. Both usage scenarios for corollary 3.7 are intended for the accuracy-first regime. The first situation (mentioned in the first part of the corollary) provides ex-post privacy guarantees for a fixed privacy boundary. We have included the second part to show how a data analyst can adaptively select target privacy levels while using the Brownian mechanism. \n\n> 3. Section 3 seems to be disconnected from the rest of the text.\n\nWe believe the reviewer meant Section 4 as opposed to Section 3. Section 4 is intended to introduce the Laplace Noise Reduction mechanism, a strict generalization of the noise reduction algorithm presented in Ligett et al. We present this mechanism in the body of the paper as it is used as a subroutine in ReducedAboveThreshold. We have added “which will be used as a subroutine in the following section” to the first line of Section 4 to further indicate connection to the rest of the paper.\n\n> 4. It is also not fully clear if the results of the paper cannot be achieved using the tools...\n\nTo apply results from fully adaptive composition to this setting, fresh Gaussian noise would need to be added in each round. This would ultimately lead to a blowup in the privacy loss proportional to the number of parameters observed. The Brownian mechanism circumvents this blowup by adding strongly correlated Gaussian noise through a Brownian motion. Due to this high correlated noise, results from fully adaptive composition cannot be applied. Results from martingale concentration needed to be applied instead. The result is that the privacy loss at the end is *independent* of the number of interactions, or the number of parameters observed, as is the main point of noise reduction mechanisms.\n\n> 5. In the experimental section, although the results are convincing, it is unclear...\n\nUnfortunately, the privacy guarantees of Laplace Noise reduction gain virtually no benefit from conversion to approx DP (since the marginals of the process are Laplace). For instance, the conversion $\\epsilon’ = \\log(\\exp(\\epsilon) \\cdot (1-\\delta) - \\delta)$ is known from Murtagh and Vadhan 2016 (https://arxiv.org/pdf/1507.03113.pdf). If one were to set $\\delta = 10^{-6}$ and $\\epsilon = 1$, this yields $\\epsilon’ = 0.99999863212$, a negligible improvement. Moreover, such guarantees are not time-uniform in nature, which is necessary for the application of LNR. Likewise, since the marginals of the Brownian mechanism are Gaussian, pure privacy guarantees cannot be extracted. We thus conduct such comparisons due to mathematical limitations, not limitations in our analysis. The empirical privacy loss is computed using Theorem 3.6 and Corollary 3.7. Suppose we have chosen to use a linear privacy boundary, as is the case in our experimentation. Furthermore, suppose we have fixed some associated parameters with the boundary. Empirical privacy loss is then simply computed by plugging the time at which the mechanism stopped (i.e. 
corresponding variance) into the privacy boundary. Over multiple runs, this quantity is then averaged. \n\nResponse to Questions:\n\n> 1. How does the proposed mechanism relate to the prior work on...\n\nPlease see 4 under weaknesses.\n\n> 2. In the experiments, are comparisons of guaranteed privacy loss and high-probability loss fair...\n\nPlease see 5 under weaknesses.\n\n> 3. Is the continuous-time extension of Laplace-based Markov...\n\nYes, the continuous-time extension of the Laplace-based Markov process is necessary for Algorithm 1. This is because Algorithm 1 allows the data analyst to adaptively specify an increasing sequence of privacy levels based on the outputs of the algorithm. The Laplace-based Markov process leveraged by Ligett et al. requires privacy levels that are fixed in advance of viewing the data (i.e. a sequence of constants) — thus making it incompatible with Algorithm 1. We have added mention of the necessity in Section 4. \n\n> 4. Is there a reason that Algorithm 1 settles when finding any noise level...\n\nAlgorithm 1 attempts to find the greatest (not any) noise level — hence the smallest possible epsilon — such that the target accuracy is satisfied. In other words, it minimizes the privacy loss (maximizes privacy) subject to meeting the utility threshold. If the algorithm were to be further run after the first instance of meeting a target accuracy, additional information about the underlying dataset would be leaked.\n\n\n", " Response to Weaknesses:\n\n> 1. The authors mention in Line 16: \"[BM] empirically outperforms existing work on common statistical tasks, ....\" Relatedly, in Line 302, the authors write \"... \n\nUnfortunately, the privacy guarantees of Laplace Noise Reduction gain virtually no benefit from conversion to approximate privacy (since the marginals of the process are Laplace). For instance, the conversion $\\epsilon' = \\log(\\exp(\\epsilon) \\cdot (1-\\delta) - \\delta)$ is known from Murtagh and Vadhan 2016 (https://arxiv.org/pdf/1507.03113.pdf). If one were to set $\\delta = 10^{-6}$ and $\\epsilon = 1$, this yields $\\epsilon' = 0.99999863212$, a negligible improvement. Moreover, such guarantees are not time-uniform in nature, whereas time-uniformity is necessary for the application of LNR. Likewise, since the marginals of the Brownian mechanism are Gaussian, pure privacy guarantees cannot be extracted. We thus conduct such comparisons due to mathematical limitations, not limitations in our analysis. The utility guarantees of ReducedAboveThreshold do not depend on the underlying mechanism masking the parameter (LNR or BM). As such, plotting these utility guarantees would not illustrate a difference between LNR and BM.\n\n> 2. There are no details about how BM can be incorporated with composition of DP mechanisms...\n\nWe agree that the composition of mechanisms satisfying ex-post privacy is of great importance. In this paper, we consider the problem of noise reduction in a one-shot setting. We will expand upon the composition limitation, noting that one can obtain a “naive” composition guarantee by adding the ex-post bounds and delta parameters. Developing a more refined theory of composition for algorithms satisfying ex-post privacy would likely necessitate new theoretical tools, and we leave this as an area of investigation for future work.\n\nResponse to Questions:\n\n> 1. 
In Line 495, for $X^\\pi_t$ to be well-defined....\n\nI believe you are correct here that $(\\lambda, \\omega) \\mapsto Y_t^{\\lambda}$ needs to be jointly measurable in $\\lambda$ and $\\omega$. However, the processes we consider in this paper are all of the form $Y_t^{\\lambda} := \\exp(\\lambda M_t - \\frac{\\lambda^2}{2}t)$ for a martingale $(M_t)$, so the desired joint measurability assumption is trivially satisfied.\n\n> 2. What is $T_1$ in Definition 3.1...\n\nYou are correct: $T_1 = T_1(x)$ should indeed be a constant function. We have explicitly added this to the definition to ensure no further confusion. There was a typo in line 238, and we have corrected it to the proper constraint that $t \\geq \\eta$. Thanks for bringing this to our attention. The proof for the marginal distributions of $(Z_t)_{t \\geq \\eta}$ requires $\\eta$ to be strictly greater than 0 (but it can be arbitrarily small, thus almost equal to zero for all practical purposes). This constraint is also necessary for the purposes of simulation.\n\n> 3. The term 'density' is loosely mentioned in Definition 2.1...\n\nWe have added a footnote explaining which measure the densities can be assumed to be taken with respect to. In particular, we note that one can take the density to be the Radon-Nikodym derivative of the law of A(x) (or A(x') respectively) with respect to the sum of the laws of A(x) and A(x').\n\n> 4. The paper has superb presentation, but a few typos should be fixed...\n\nThank you for bringing the typos to our attention. We have gone through the paper and corrected said typos. \n\n> 5. a comment by the authors on why the variances shown in Fig. 4 are large might be in order.\n\nWe have added mention that Fig. 4 is located in the appendix. We have corrected our caption to note that the privacy level optimized for in the figure was $\\epsilon = 0.3$, not $\\epsilon = 0.5$. Such large variances are expected. For instance, for $\\delta = 10^{-6}$, to obtain a privacy level of $\\epsilon = 0.3$, a standard conversion yields $\\sigma^2 = 2\\cdot \\frac{\\log(1.25/\\delta)}{\\epsilon^2} = 135$. Since our bounds are time-uniform in nature, they naturally are a bit looser than any point-wise optimal bound.\n\n\n", " The paper introduces the theoretical framework for a new noise reduction algorithm for probabilistic differential privacy (DP), namely, the Brownian mechanism (BM), and it also compares it empirically with the Laplace noise reduction (LNR) mechanism on empirical risk minimization (ERM) tasks. The privacy loss (random variable) for BM is characterized, a bound on it is given, and two 'privacy boundaries' are derived for it using results from martingale theory. Additionally, LNR, introduced in [Ligett et al., 2017], is extended in this paper to the continuous-time setting (and it is briefly indicated in the appendices how a Skellam noise reduction mechanism can be derived from the present work). Strengths:\n\n- Quality: The derived results and the empirical assessment are of high quality and highly non-trivial.\n- Clarity: The paper is extremely well-written.\n\nMore details: The introduction of BM is a natural step after LNR, as Gaussian perturbation is expected to be more private than a Laplace perturbation if one cares about $\\ell_2$-sensitivity. Nevertheless, the authors borrow results from martingale theory that make the introduction of BM a nontrivial analogue of LNR. Further, the results are derived in a highly rigorous manner. 
Also, the extension of LNR to the continuous setting, besides being rigorously derived in a nice way, yields a significant improvement regarding the privacy problem: querying a utility now can only cause a privacy loss comparable to that incurred by the disclosure of the private parameter. Additionally, a union bound in [Ligett et al., 2017] is disposed of, yielding tighter bounds.\n\n=====\n\nWeaknesses: The following are not detrimental weaknesses of the paper, as it is strong as is; rather, they indicate room for improvement.\n\n- The performance comparison between BM and LNR is largely empirical. The main concern here is that it could be that the chosen datasets and tasks fall within a regime where BM outperforms LNR, but that this regime, as far as the present paper is concerned, is unknown. The authors mention in Line 16: \"[BM] empirically outperforms existing work on common statistical tasks, ....\" Relatedly, in Line 302, the authors write \"... we plotted guaranteed (in the case of LNR) or high-probability (in the case of BM) privacy loss on the x-axis against average loss (either logistic or ridge) on the y-axis.\" In addition, the authors write in Appendix C in Line 579, before proving a utility guarantee for ReducedAboveThreshold, \"... instead of plotting the utility guarantee in our experiments in Section 6, we instead plot empirically observed loss/accuracy.\" In view of these statements, I think the work could be stronger if more aligned comparisons between BM and LNR are presented.\n\n- There are no details about how BM can be incorporated with composition of DP mechanisms; this issue is mentioned only as a limitation in the conclusion of the paper. - In Line 495, for $X_t^{\\pi}$ to be well-defined, one should at least require that each $(\\lambda,\\omega) \\mapsto Y_t^\\lambda(\\omega)$ is a Borel function.\n\n- What is $T_1$ in Definition 3.1? It looks like it is a pre-specified constant $T_1$ or function $x\\mapsto T_1(x)$, and it would be good to add this remark. Then, do we require $T_2(f(x)+B_{T_1(x)})\\le T_1$ (or $\\le T_1(x)$), in accordance with the $T_n$ specifying a sequence of time functions? Various instances of such initial parts of a trend have to be carefully specified in the paper, e.g., it is imposed that $\\eta>0$ in Line 227, yet it is mentioned in Line 238 that $(Z_t)_{t\\ge 0}$ (i.e., $\\eta=0$) will be used (perhaps one should define $\\mathrm{Lap}(0)$ as deterministically $0$).\n\n- The term 'density' is loosely mentioned in Definition 2.1 (e.g., density with respect to which measure?). I recommend slightly modifying how the definition is spelled out.\n\n- The paper has superb presentation, but a few typos should be fixed. \n- - English typos include: \"depend [on] the\" in Line 101; delete 'the' in Lines 187 and 222; replace ',' with '.' in Line 235; delete 'instead' in Line 570. \n- - Math typos include: need $\\mathcal{L}_{1:n}^{\\mathrm{Alg}}$ in the last equation after Line 546; $Z(t)$ is used in Line 559, but only defined in Line 593; should probably be $\\mathrm{Lap}(\\tau,\\frac{2\\Delta}{\\epsilon_n})$ in Line 572; the two equations after Line 586 need to be fixed; 'minus' sign missing in penultimate equation after Line 612; also, the wording of Prop. 4.2 might be revised, as ex-post privacy is prefixed with a pair when first defined.\n\nI also recommend that it be mentioned that Fig. 4 is in the appendix (when mentioned in Line 198); also, a comment by the authors on why the variances shown in Fig. 4 are large might be in order. 
The authors mention briefly in the conclusion that existing composition results for DP are inapplicable for BM, and that noise reduction is concerned only with output perturbation. Although not a hindrance to the quality of the paper, it would have been nice to see some indication on how composition results can potentially be derived for BM.\n\nThe authors mention that there are \"no negative societal impacts,\" but it can be argued that privacy mechanisms can be misused.", " The paper introduces a Brownian mechanism for private release, in which an analyst (1) generates a sequence of noise values from a Brownian process up until a certain large time point, (2) starting from this large time point, takes steps back until they find a noise value which incurs acceptable utility, and (3) releases the data with noise at the identified time point. Because of the Markov properties of the process, the privacy loss of such a mechanism only depends on the final noise value output. The authors show two ways to compute the epsilon-guarantee of the resulting mechanism in terms of ex-post privacy. Next, the authors extend the existing Laplace-based Markov process mechanism to continuous time. In the final part of the theoretical sections of the paper, the authors introduce a ReducedAboveThreshold mechanism for deciding when to stop the Brownian (or another noise reduction) mechanism when the utility has to be computed on private data. In this algorithm, the utility with added Laplace noise is compared against a noisy threshold, with threshold noises coming from a Laplace-based Markov process which is synchronized with the main (e.g., Brownian) process. Finally, the authors conduct two experiments on private release of the logistic regression parameter vector, and ridge regression via covariance perturbation, finding that the Brownian mechanism on average results in lower privacy loss at the same level of utility threshold, and the privacy loss is more consistent. The paper deals with a useful setting in modern data analysis, in which an analyst might want to adaptively pick the best privacy level subject to accuracy constraints. The paper proposes two tools (Brownian mechanism and ReducedAboveThreshold) which formalize an iterative interactive procedure in which the analyst tries several noise levels before settling on a satisfactory one. Unlike the previous work on Laplace-based process mechanism, the proposed Brownian mechanism is calibrated to L2 sensitivity, thus suitable for multi-dimensional vectors. I believe this is a useful contribution to the field, given that the results are correct (I have not verified the proofs.)\n\nThe main issue with the paper is clarity. The notation is obtuse with the generic time functions $T$ which have an unclear definition, stopping functions $N$, privacy boundary functions $\\psi$, privacy functions $\\mathcal{E}$. It took a lot of time and effort to parse. The intended meaning of some functions only becomes clear at a much later point after their introduction, such as $T_n$ by line 211. In general, the generality of exposition is a hindrance. Corollary 2.7, if I understand the intention correctly, would be better as two algorithms for two usage scenarios (accuracy-first and privacy-first). Section 3 seems to be disconnected from the rest of the text. It is also not fully clear if the results of the paper cannot be achieved using the tools of fully adaptive composition. 
In the experimental section, although the results are convincing, it is unclear if the comparison of guaranteed privacy and high-probability privacy is fair, and how exactly the empirical privacy loss is computed.\n\nAs ERM is a motivation of the work, it would be interesting to see how the output perturbation with BM compares to objective perturbation at the same accuracy levels.\n\n * How does the proposed mechanism relate to the prior work on fully adaptive composition? Can the results be re-analyzed in the framework of fully adaptive composition?\n* In the experiments, are comparisons of guaranteed privacy loss and high-probability loss fair? How is the privacy loss computed in the experiments (using Theorem 3.6?).\n* Is the continuous-time extension of Laplace-based Markov process mechanism necessary for Algorithm 1? \n* Is there a reason that Algorithm 1 settles when finding any noise level within the utility threshold, if it seems that it could proceed to actually minimize the privacy loss? The limitations are discussed.", " The paper considers the Brownian mechanism, a new noise reduction DP scheme that allows for \"stripping off\" the noise gradually until the desired accuracy is satisfied. By leveraging the Markovian nature of the Brownian noise, the privacy guarantee of releasing a sequence of queries (with different epsilons) is the same as releasing the least private one. This improves upon using independent noise, where the privacy needs to be accounted for via the composition theorem. In addition, compared to the Laplace-based method [LNRWW 2017], the Brownian mechanism gives an $\\ell_2$ guarantee and hence is more suitable for FL tasks. Finally, the authors combine the Brownian mechanism with ReducedAboveThreshold, a generalization of the AboveThreshold method in [Dwork 2014], giving an adaptive (ex-post) privacy guarantee that will be at most twice that of the non-adaptive version. - Strengths\n\nThe paper extends the previous noise reduction technique in [LNRWW 2017] to $\\ell_2$ geometry with $(\\varepsilon, \\delta)$-approximate DP. Having an $\\ell_2$ guarantee is always preferable for FL tasks in both theory and practice. In addition, the improved AboveThreshold method seems to be significant and allows for much tighter privacy accounting.\n\n- Weakness\n\n1. In general, I think the theoretical grounding looks mostly sound. However, I do have a technical question regarding Lemma 3.2 (see the \"Question\" section). As Lemma 3.2 is the basis of almost all results of this paper, I'd appreciate it and will increase the score accordingly if the authors can clearly clarify it.\n\n2. In my opinion, the presentation can be improved by providing more high-level descriptions of how this work compares with [LNRWW 2017]. In particular, in Section 4, the authors extend the LNR scheme in [LNRWW 2017] to the continuous case; however, I don't think I can fully appreciate why extending LNR to the continuous case is important and what the advantage of doing so is.\n\n3. As the privacy of the Brownian Mechanism is ex-post and is determined by the privacy functions $\\mathcal{E}_n$, it seems to be unclear how to set them in practice. It would be good if the authors could briefly discuss it. 1. As in [LNRWW 2017], Lemma 3.2 builds on the Markov property of the Brownian mechanism. 
However, as in general, the time functions $T_n$ may depend on the future of the Brownian noise, i.e., $T_n = T_n(f(x), B_{T_{n-1}},...,B_{T_1})$, I am not sure whether the Markovian property still holds. In its proof, the authors claim that by using the strong Markov property of Brownian motion, one can replace the time indices with random ones. I am not sure whether this is correct or not, as to my knowledge the strong Markov property only ensures $B_{T+t_1},..., B_{T+t_n}$ being Markov for a stopping time $T$. I think the authors need to include more details in the proof, as Lemma 3.2 is the basis of the Brownian mechanism.\n\n2. The notation of the stopping function $N(x)$ is a little confusing, as it gives a false impression that the stopping time can directly depend on the private data set $x$. However, if my understanding is correct, it could only depend on the prefix of the private release $M_{[1:n-1]}(x)$. The notation of $N(x)$ could be misleading and (incorrectly) suggest that ReducedAboveThreshold is unnecessary since one could directly compare $M_n(x)$ with $f(x)$, which would still be a valid stopping time.\n\n==========\nPost rebuttal: the authors' response to Q1 has addressed my main concern, and I have modified my score accordingly. Yes, the social impact is properly discussed.", " This paper generalizes and improves the framework of Ligett et al. for releasing a private query subject to accuracy constraints. In this framework, a query is released with gradually less noise until a desired accuracy constraint is met, and the privacy cost is the cost of the last output.\n\nThey noise the query with a continuous distribution specified by Brownian motion, allowing the fully-adaptive release of the query where even the desired epsilon values are allowed to depend on previous outputs. Their Brownian Motion Noise Reduction offers improved performance over the Laplace Noise Reduction, the previous state of the art.\n\nFinally, they generalize NoisyAboveThreshold to allow adaptive values of epsilon. Combined with the previous results, this enables the accuracy constraints to also depend on the private database.\n + Results allow fully adaptive release of the query (where epsilons are allowed to vary) and apply to functions with bounded l_2 sensitivity.\n\n+ Brownian motion is simpler, more elegant, and offers improved performance over previous work.\n\n+ The generalization of AboveThreshold allows the accuracy constraint to depend on the private database and have the same privacy cost as releasing the final output. This overcomes a major hurdle of previous work in this area, and makes the accuracy-first framework more practical.\n\n- Some minor writing mistakes, which I will describe.\n Definition 3 does not make sense. What does the symbol $\\sigma$ mean here? Furthermore, what precisely is a stopping time with respect to a sequence of mechanism outputs?\n\nIs it efficient to produce samples from the Brownian motion algorithm or the Laplace Noise Reduction technique? It would help the paper to describe implementation details.\n Yes - there are no real limitations to this work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 2, 4, 4 ]
[ "NRqUWCBBDIt", "5aQdOF3GKo8", "fecGNrQTpqx", "dql8zcrcXz6", "9KOQudDLoiv", "1TMOCNZkudH", "QOynWzqWGgy", "4GD4YmSkIZe", "nips_2022_J-IZQLQZdYu", "nips_2022_J-IZQLQZdYu", "nips_2022_J-IZQLQZdYu", "nips_2022_J-IZQLQZdYu" ]
nips_2022_ZL-XYsDqfQz
Make an Omelette with Breaking Eggs: Zero-Shot Learning for Novel Attribute Synthesis
Most of the existing algorithms for zero-shot classification problems typically rely on the attribute-based semantic relations among categories to realize the classification of novel categories without observing any of their instances. However, training the zero-shot classification models still requires attribute labeling for each class (or even instance) in the training dataset, which is also expensive. To this end, in this paper, we bring up a new problem scenario: ''Can we derive zero-shot learning for novel attribute detectors/classifiers and use them to automatically annotate the dataset for labeling efficiency?'' Basically, given only a small set of detectors that are learned to recognize some manually annotated attributes (i.e., the seen attributes), we aim to synthesize the detectors of novel attributes in a zero-shot learning manner. Our proposed method, Zero-Shot Learning for Attributes (ZSLA), which is the first of its kind to the best of our knowledge, tackles this new research problem by applying the set operations to first decompose the seen attributes into their basic attributes and then recombine these basic attributes into the novel ones. Extensive experiments are conducted to verify the capacity of our synthesized detectors for accurately capturing the semantics of the novel attributes and show their superior performance in terms of detection and localization compared to other baseline approaches. Moreover, we demonstrate the application of automatic annotation using our synthesized detectors on Caltech-UCSD Birds-200-2011 dataset. Various generalized zero-shot classification algorithms trained upon the dataset re-annotated by ZSLA show comparable performance with those trained with the manual ground-truth annotations.
Accept
This paper has proposed a method named zero-shot learning for attributes to deal with a research problem about novel attribute classification and attribute labeling. The reviewers had many questions in the initial round. After the rebuttal, the authors clarified most unclear points, and some reviewers raised their scores. In general, all the reviewers agree with the acceptance of this paper.
test
[ "gCI2tL0Y7I9", "vGdaMoFn531", "RarfWwRU9p4", "ylD_aGF0YyC", "xgWBTxpjYZ2", "UbgpGCeN64J", "FXwTwlBtyjt4", "XXcLeuFNUc8", "3n3tur_VPzL", "MYi-zGb27L-", "GCf7zIGA2RV", "Qd5jtXhB9TA", "TpMI-Igh8_", "Q4oUV-13MPm", "T7cIWpY4d-5", "Hh4vaDi34TT", "XiCbNtiISdv" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " First, we thank the reviewer for recognizing our efforts to provide clarification in the rebuttal as well as for his/her consideration to raise the rating on our work. Below we sequentially reply to the remaining concerns in the reviewer's comments.\n\n---\n\n**[Reply to Q1]**\n\nWe believe that the good performance of our synthesized attribute detectors is not simply due to the specificity of CUB as we also testify our proposed method on another dataset, $\\alpha$-CLEVR, which has quite different data distribution and properties in comparison to CUB (while the only common ground that CUB and $\\alpha$-CLEVR share stems from the fact that they both can be modeled in the form of “classes ↔ attributes ↔ base-attributes” hierarchy). Regarding the reviewer’s suggestion of adapting our proposed method to the problem of CZSL, we find it quite insightful and would be happy to explore more possibilities along such direction.\n\n---\n\n**[Reply to Q2]**\n\nWe thank the reviewer again for his/her constructive suggestion and will refine the paper organization as well as notations accordingly in our next version.\n\n---\n\n**[Reply to Q3]**\n\nWe are glad that our rebuttal does help to resolve the misunderstanding. We use the term “zero-shot” to characterize our approach for learning to synthesize novel attribute detectors due to the fact that no annotated visual examples of these novel attributes are explicitly provided to our approach for learning, in which it is analogous to the setting of “zero-shot” classification where no annotated visual examples of novel classes are observed during model training. We will do our best to clarify such analogy as well as the reason why we adopt the term “zero-shot” in our future version.", " **[Reply to Q4]** \n\nAdopting a simplified ViT architecture (without positional encoding and with only a single self-attention block, in which such model is actually lightweight) for the intersection model is a design choice that first comes into our mind for realizing the property of commutativity and it results to perform pretty well. However, we agree that there could be other design choices or network architectures serving the same purpose, hence we are open for any suggestions and will be more than happy to try. Moreover, we use the SAME intersection model for both object parts and adjectives. Even object parts and adjectives are very different features/base-attributes (as commented by the reviewer), our intersection model is trained to be invariant to such difference but concentrates on extracting the common ground (i.e. the base-attribute) between two input attributes. In other words, our intersection model is trained to act as a typical set operation regardless of which type of base-attribute is shared among the input attributes. Furthermore, as our intersection model should function well for any combinations of two input attributes that have a common base-attribute, the learning problem in our proposed method hence is not small (not less than a hundred data samples as commented by the reviewer, since we need to take all the possible combinations of two input attributes into consideration) and that becomes another motivation for us to choose ViT architecture as our base to construct the intersection model due to its powerful learning capacity. 
Regarding the question of the reassembling principle, the main motivations behind our design choice of adopting a simple average to build up the union model are as follows. We would like our union model to act as a typical set operation (i.e. the union model should function well for any type of input base attributes) and its output should be effective attribute detectors (e.g. the union between the extracted base-attributes “red” and “wing” should be effective for detecting “red wing” in images, and the union between “red” and another extracted base-attribute “beak” should also be effective for detecting “red beak” in images); therefore, the inputs to our union model (i.e. the base-attributes extracted by the intersection model) should already be representative on their own and disentangled from each other. To this end, we decided to make our union model as simple as possible, with no learnable parameters at all (where we finally adopt a simple average computation for our union model), such that all the training objectives are devoted to learning disentangled and representative/informative base attributes (otherwise, if the union model had some learnable capacity, the input base attributes could potentially become less informative, as the union model itself could compensate when combining the base attributes to synthesize effective attribute detectors). The experimental results demonstrate that the extracted base attributes under our model design (i.e. having a simple average for the union model) indeed have disentangled and representative characteristics, as shown in Figure 4 of our main manuscript, where the resultant base attributes themselves already have certain retrieval and localization abilities. We will better clarify these motivations and intuitions behind our model design for the union model in our next version (in addition to what we already have now at the end of Sec. 3 in our main manuscript). Finally, the design of our decompose-and-reassemble procedure (i.e. intersection and union models) is also validated on another dataset, $\\alpha$-CLEVR (cf. Appendix C.4).\n\n---\n\n**[Reply to Q5]**\n\nWe thank the reviewer again for bringing up the insightful perspective on the similarity between our work and blind source separation (BSS). We will investigate BSS further and include the corresponding discussion in the next version as suggested.\n\n---\n\n**[Final remark]**\n\n**We are also delighted to learn that the reviewer is leaning toward increasing the review rating score. We are just one post away from answering any follow-up questions you and other reviewers may have!**\n", " \nDear authors,\n\nThank you for providing detailed answers to my questions. Here are a few comments.\n\n[Q1] The demonstration of your approach on a single dataset remains a weakness for me, as noted also by other reviewers. You provided many supplementary experiments to compensate for it, which is nice; however, it is difficult to see if the good performance of the new synthesized attribute detectors is due to the specificity of CUB (localized bird parts), or is generic. You have discussed the difference with CZSL and adapted some of the current approaches: maybe the converse could have been done, adapting your approach to the problems of CZSL using the datasets of this field.\n\n[Q2] It would be nice indeed to put the detailed \"Intersection\" algorithm in the main paper. 
Another notation that bothered me is to write the intersection using abstract attributes $I(a_k,a_l)$ whereas what is actually used as input are the detector parameters $m_k$.\n\n[Q3] Your answer made things clearer. Basically, I had been misled by the use of the \"zero-shot\" expression related to two different things: the synthesis of unseen attribute detectors, and the use of these attributes in a more classical zero-shot classification, actually exploited in the paper as an evaluation tool of the attribute detector synthesis. I am still not convinced that characterizing your approach as \"zero-shot\" is well suited to your work.\n\n[Q4] This is perhaps the answer that satisfies me the least. The learning problem addressed by attribute synthesis is very small (less than a hundred data samples available) and commutativity can be obtained by much simpler architectures. I also could not find whether the same network is used for the intersection of objects and adjectives, which are very different features. I am also quite puzzled by the reassemble principle: a simple average. For me, given the cosine similarity used for attribute detection, it can only work if the image features are already disentangled in some way between objects and adjectives. Hence the necessity to validate the approach on other datasets (cf. Q1).\n\n[Q5] There may be more similarities with BSS than you state, and I do not see the \"logical/semantic\" constraints as so difficult to introduce from a formal point of view. I agree that this field has a very different history and vocabulary, but I am OK if you mention it as a further direction of investigation or comparison.\n\n\nAs a final remark, I think that, thanks to your answers, I have better understood your work; still, it cost me quite a lot of time, meaning that the writing could be improved, reorganized and simplified. I will probably raise my rating after final discussion with the other reviewers.\n", " We thank Reviewer Vc5a for providing constructive and insightful comments. In our response, we believe we have addressed your concerns, which are summarized as follows:\n1. We clarify the reviewer's critical misunderstanding, namely that our zero-shot learning for attributes (ZSLA) is actually different from generalized zero-shot learning (GZSL) or zero-shot learning for classification (ZSC). Assuming that there exists a “classes ↔ attributes ↔ base-attributes” hierarchy, our ZSLA is related to the level of “attributes ↔ base-attributes” while ZSC focuses on the level of “classes ↔ attributes”. Therefore, we do not compare ZSLA with any ZSC algorithms (i.e., CADAVAE [22], TFVAEGAN [19], ALE [1], and ESZSL [21] in Section 4.2 of our main manuscript), but adapt/modify two algorithms from ZSC into the scenario of zero-shot learning for attributes (i.e., A-LAGO and A-ESZSL) as our baselines for comparison.\n2. In “Q1. [The concern on our validation dataset]”, we clarify the concerns on our validation dataset and the usefulness of our method by underlining the experiments carried out on CUB and another dataset, alpha-CLEVR.\n3. In “Q3. [Pure zero-shot learning]”, we clarify that:\n (1) *Our ZSLA actually works **during the construction of a zero-shot classification dataset** (where ZSLA acts as an attribute annotator like what human annotators do). 
Since this process is orthogonal to the class information and ZSLA does not observe any visual examples of unseen attributes, ZSLA well fits the definition of “zero-shot” learning.*\n (2) *The ZSC algorithms are trained **after the zero-shot classification dataset is constructed**, and their performances are used to evaluate the quality of the attribute annotations.*\n4. In “Q4 [The input and architecture design of intersection model]”, we answer the questions related to our intersection model architecture and point out the corresponding paragraphs in our main manuscript and Appendix.\n5. In “Q5 [Relation to Blind Source Separation (BSS)]”, we identify the main difference between BSS and our ZSLA: ZSLA works based on logical and semantic constraints, which are neither blind nor aligned with the common assumptions behind BSS (e.g. the independence among sources).\n\nAs the deadline of the author-review rebuttal is approaching, we would like to know if there is any feedback based on our rebuttal, and whether there is anything we can do to further clarify the reviewer's concerns. Please don't hesitate to let us know!\n\nPaper 4688 authors\n", " We thank Reviewer RG1h for providing constructive and insightful comments. In our response, we believe we have addressed your concerns, which are summarized as follows:\n1. We emphasize the contribution of our work by pointing out the importance of using “attributes” as class semantics in zero-shot classification and explain how our ZSLA alleviates its main problem (i.e. the expensive cost of manual attribute annotations) in “Q1 [Is it important to drive the ZSL research?]” \n2. We conduct further experiments utilizing word2vec as the semantic information (to associate across classes) for training the GZSL algorithms, and the results clearly indicate the superiority of adopting the class semantics stemming from the attribute annotations produced by our ZSLA. Please refer to our rebuttal “Q1 [Is it important to drive the ZSL research?]”\n3. We clarify the concern on our validation dataset and elucidate the unsuitability of the AWA2/SUN datasets in the scenario of zero-shot learning for attributes in “Q2 [The concern on our validation dataset and usefulness]”\n\nAs the deadline of the author-review rebuttal is approaching, we would like to know if there is any feedback based on our rebuttal, and whether there is anything we can do to further clarify the reviewer's concerns. Please don't hesitate to let us know!\n\nPaper 4688 authors\n", " We thank Reviewer 2STe for providing constructive and insightful comments. In our response, we believe we have addressed your concerns, which are summarized as follows:\n1. We clarify the reviewer’s main concern on the generalization ability of our proposed framework in “Q1 [New semantic concept]”.\n2. We explain the key distinction between our proposed method and the multi-task segmentation model in “Q2 [Relation to multi-task image-segmentation models]”.\n3. We summarize the reason why our proposed method is able to outperform the approach of predicting color and part separately and providing the soft predictions (where such a suggested approach is similar to the scenario of Direct Attribute Prediction) in “Q3. [Predict color and part separately and providing the soft predictions]”.\n4. Owing to the novel application scenario of our proposed zero-shot learning for attributes, there exists no prior work with which we can directly compare. However, as mentioned in “Q4. 
[Comparison to SOTA]”, we have actually provided the quantitative comparison with the baselines respectively adapted from zero-shot classification approaches and the closely-related task (i.e. compositional zero-shot learning), where our ZSLA shows its superiority in multiple metrics.\n\nAs the deadline of the author-review rebuttal is approaching, we would like to know if there is any feedback based on our rebuttal, and whether there is anything we can do to further clarify the reviewer's concerns. Please don't hesitate to let us know!\n\nPaper 4688 authors\n", " We thank Reviewer 2LT8 for providing constructive and insightful comments. In our response, we believe we have addressed your concerns, which are summarized as follows:\n1. We fix the problems in the references.\n2. We provide extra new experiments on a recent GZSL algorithm, CE-GZSL, with the same setting as in Section 4.2 of our main manuscript. The experimental results shown in our response share the same conclusion as Table 2 in our main manuscript, which further confirms that the quality of the attribute labels automatically annotated by our ZSLA can generally benefit various GZSL algorithms and is superior to that of the manual ones (evaluated by the performance of GZSL algorithms trained on the (re-)annotated zero-shot classification dataset).\n\nAs the deadline of the author-review rebuttal is approaching, we would like to know if there is any feedback based on our rebuttal, and whether there is anything we can do to further clarify the reviewer's concerns. Please don't hesitate to let us know!\n\nPaper 4688 authors\n", " We thank Reviewer Vc5a for reviewing our paper; however, we believe there are some critical misunderstandings in the reviewer's summary and we would like to offer a clarification here first:\nAs motivated in our introduction section, we assume that there exists a “classes ↔ attributes ↔ base-attributes” hierarchy, in which the typical setting of zero-shot classification (which most existing works of zero-shot learning address) adopts attributes as the auxiliary semantic information to associate classes (e.g., using seen attributes to define the unknown classes), thus being related to the level of “classes ↔ attributes” in the hierarchy, while our proposed zero-shot learning for attributes instead aims to leverage base-attributes as the auxiliary semantic information to link across attributes (e.g., using seen base-attributes to define the unknown attributes), thus being related to the level of “attributes ↔ base-attributes” in the hierarchy. In brief, \n1. Our ZSLA tackles the “zero-shot learning for attributes” problem (which focuses on synthesizing unseen attributes) instead of the “zero-shot learning for categories/classes” problem (also termed zero-shot classification, abbreviated as ZSC by the reviewer). In particular, the attribute detectors produced by our ZSLA can be used to **automatically** annotate the attributes when constructing the training dataset for zero-shot classification (as obtaining the auxiliary semantic information for zero-shot classification requires images to be annotated with attributes), thus alleviating the expensive cost of manual annotations. \n2. As our proposed ZSLA is tackling a different problem (i.e. zero-shot learning for attributes) from zero-shot classification, we do not directly compare our ZSLA with ZSC algorithms. Instead, the baselines (i.e. 
A-ESZSL and A-LAGO as described in Section 4 of our main manuscript) that we make comparisons with are adapted from their original ZSC settings into our scenario of zero-shot learning for attributes. Moreover, as described in Section 4.2 of our main manuscript, since our synthesized attribute detectors are able to provide automatic attribute annotations for constructing the ZSC training dataset, we therefore adopt four ZSC algorithms (i.e. CADAVAE [22], TFVAEGAN [19], ALE [1], and ESZSL [21]) to evaluate the quality of attribute annotations. \n\n### Q1. [The concern on our validation dataset]\n>*\"The approach is validated on a single dataset (CUB), limiting its potential usefulness.\"*\n\nThough our current experiments are mainly conducted on a single dataset (CUB), we view this as a limitation of the mainstream datasets for zero-shot learning, rather than a limitation of our method. Moreover, the CUB dataset is for \"fine-grained\" classification and is thus typically treated as the most challenging dataset in zero-shot learning. Furthermore, we would like to emphasize that we also conducted experiments on another dataset, CLEVR, which may have been overlooked by the reviewer as some of the results are presented in the Appendix (cf. Appendix B.2 and C.4). \n\nThe potential usefulness of our proposed ZSLA is also well demonstrated in our experiments and should not be overlooked: For instance, the re-annotation experiment on CUB (as shown in Table 2 of the main manuscript) indicates that the quality of the attributes annotated by our ZSLA method can be comparable or even superior to that of manual annotations for training various GZSL algorithms. Hence, our proposed ZSLA can be used to perform automatic attribute annotation when constructing the training dataset for zero-shot classification. Moreover, our control experiments on alpha-CLEVR (cf. Figures 6 and 7 of the main manuscript) validate the robust annotation quality of our ZSLA against the noisy/ambiguous (manual) labels of seen attributes in the training data.\n\n### Q2. [Paper organization issue]\n>*\"A lot of material is provided in the main paper and supplementary material (17 pages!) but in a confusing organization. For instance, the main algorithm (“intersection”) is not fully described in the main paper, and we need to search for the information in the supplementary material. Wordy general discussions could be replaced fruitfully by more specific technical details.\"*\n\nWe thank the reviewer for the feedback on the paper organization and will follow the reviewer’s suggestion to move more specific technical details into the main paper for our camera-ready version.", " ### Q3. [Pure zero-shot learning]\n>*“The Zero-shot character of the proposed approach seems to me over-stated, and thus misleading: the new attribute synthesis looks like an unmixing/mixing problem, and the unseen category description with the new attributes in fact requires images, which breaks the “zero-shot” principle, as far as I understand it.”*\n\nAs mentioned, the concept of our proposed zero-shot learning for attributes scenario is related to the level of **“attributes ↔ base-attributes”** in the hierarchy. Our ZSLA achieves the goal by decomposing the base attributes out from the seen attributes (where these base attributes are hidden in the seen attributes, thus being implicitly observed) and then reassembling the base attributes to compose the unseen attributes (i.e. 
achieving zero-shot learning for these composed unseen attributes), and such a scenario is actually analogous to zero-shot classification (where the attributes are implicitly observed in the seen classes and further used to compose or define the unseen classes), thus following the general definition of “zero-shot” learning (i.e. the auxiliary semantic information is implicitly observed during training and is used to define the unseen targets). \n\nIn brief, the comment from the reviewer “the unseen category description with the new attributes in fact requires images” is basically wrong, due to the facts that: 1) our ZSLA is at the level of **“attributes ↔ base-attributes”** and it learns to synthesize novel attributes without any dependency on the category information; 2) the unseen category description can be tackled by any existing zero-shot classification algorithms (e.g. CADAVAE [22], TFVAEGAN [19], ALE [1], and ESZSL [21] used in our experiments in Section 4.2) without needing to see the corresponding images.\n\n>*“...When reading in the supplementary (C.2) the way the unseen categories are described in δ-CUB dataset from a series of images, it seems to me that what is actually evaluated is more a “few-shot” than a “zero-shot” learning scheme since images are needed to compute the description…”*\n\n**Here we would like to offer a clarification first: the main purpose of the experiments based on the $\\delta$-CUB dataset is to evaluate the quality of the automatic attribute annotations produced by our synthesized attribute detectors, where these attribute detectors learned by our ZSLA are independent of the categories/classes.**\n\nMoreover, our ZSLA is shown to contribute to constructing the zero-shot classification dataset (e.g. $\\delta$-CUB) at little cost (as demonstrated in the experiments of Section 4.2 of our main manuscript, based on merely 32 seen attributes, we can synthesize another 207 novel attribute detectors, where the resultant automatic attribute annotations in $\\delta$-CUB can largely benefit the performance of GZSL algorithms). The overall procedure to build up the $\\delta$-CUB dataset by our ZSLA is summarized as follows:\n\n1. The human annotators annotate several types of attribute labels for the CUB dataset.\n2. The annotated attribute labels are utilized for training a set of seen attribute detectors.\n3. The weights of the trained seen attribute detectors are used for training our intersection and union models.\n4. We synthesize the weights of the unseen/novel attribute detectors by the trained intersection and union models.\n5. The synthesized unseen attribute detectors provide annotations for images in the CUB dataset, where the re-annotated images altogether construct the resultant $\\delta$-CUB dataset (see the sketch below for steps 3-5). 
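To make steps 3-5 concrete, here is a small, runnable sketch with toy stand-ins (the dimension, helper names, and threshold are hypothetical; among the components below, only the union-as-average design reflects our actual choice, and the real intersection model is the learned module discussed in our reply to Q4):

```python
# Illustrative sketch of steps 3-5 above with toy stand-ins (hypothetical names/values).
import numpy as np

rng = np.random.default_rng(0)
DIM = 64

# step 2's output: weight vectors of seen attribute detectors, keyed by (adjective, part)
seen = [("red", "wing"), ("red", "belly"), ("blue", "beak"), ("black", "beak")]
m = {s: rng.normal(size=DIM) for s in seen}

def intersect(u, v):
    # toy placeholder for the trained intersection model, which extracts the
    # shared base attribute; the real model is a small self-attention block
    return (u + v) / 2.0

def union(u, v):
    # the union model really is a simple average in our design
    return (u + v) / 2.0

# steps 3-4: synthesize the detector of the unseen attribute "red beak"
base_red = intersect(m[("red", "wing")], m[("red", "belly")])     # shared adjective
base_beak = intersect(m[("blue", "beak")], m[("black", "beak")])  # shared object part
w_red_beak = union(base_red, base_beak)

# step 5: annotate an image feature by thresholding the cosine similarity
feat = rng.normal(size=DIM)  # stand-in for the backbone feature of one CUB image
score = feat @ w_red_beak / (np.linalg.norm(feat) * np.linalg.norm(w_red_beak))
label = score > 0.0          # hypothetical threshold tau = 0
print(f"red-beak score: {score:.3f} -> label: {label}")
```

The point of the sketch is only the data flow: no image of an unseen attribute is ever consulted when synthesizing its detector.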
\n\n>*“...isn’t there a bias when comparing to “pure” GZSL approaches which have to deal with noisy attribute description of unseen categories?”*\n\n**We again clarify that our ZSLA simply acts as an annotator (just like what human annotators do) during constructing the dataset**, and the learning of GZSL algorithms upon the constructed zero-shot classification dataset is totally independent of our ZSLA. Since the relation between CUB and $\\delta$-CUB (as described in Section 4 of our main manuscript) is akin to “the same dataset but annotated by different groups of annotators”, the performance of GZSL algorithms thus reflects the quality of the attribute annotations provided in the training dataset.", " ### Q4. [The input and architecture design of intersection model ]\n>*“I didn’t understand what is the input of the network computing the intersection: is it just an attribute label? If so, how is it encoded? Or is it the detector parameter m?”*\n\nAs described in line 198-199 of our main manuscript, the input of the intersection model is the weight vectors (i.e. embeddings) of two trained seen attribute detectors.\n\n>*“Exploiting a VIT architecture seems to me oversized given the complexity of the dataset. Why such a model?”*\n\nThe VIT-based architecture we adopt for our intersection model is shown in Figure 1 of Appendix, where only a single self-attention block is used (as described in line 42-43 of Appendix A.3, and we also include the corresponding description in the revised main manuscript) thus our intersection model is actually light-weight and not oversized (this implementation detail can be also seen in the source code provided in supplementary file: /code/networks/logic_network.py). The main reason behind our design choice of adopting VIT-based architecture is due to the fact that the transformer framework without position embedding (as described in line 200-206 in our main manuscript) nicely satisfies the commutative property needed for our intersection operation.\n\n### Q5. [Relation to Blind Source Separation (BSS)]\n\n>*“The question of building atomic attributes from their conjunctions reminds me of the classical question of blind source separation in signal processing: is there any similarity to that problem?”*\n\nWe thank the reviewer for bringing up such an insightful question for discussion. Though our ZSLA framework composed of decompose-and-reassemble procedure seems to be similar to blind source separation at the first glance, there actually exists a significant distinction which clearly differentiates our ZLSA from the blind source separation in signal processing: Our intersection operation to perform decomposition on seen attributes is actually non-blind, in which it works by the guidance of logical and semantic constraints (i.e. two input attributes should have a common ground in one of the base attributes but not both), while there is no such constraint in the blind source separation (which instead typically adopts independent assumption or mutual information in its modeling) and it could be non-trivial to include logical/semantic constraints into the framework of blind source separation. We will add this discussion in Appendix A.5 in the revised version.", " We thank Reviewer RG1h for reviewing our paper and providing suggestions on additional experiments to enrich our contributions.\n\n### Q1. 
[Is it important to drive the ZSL research?]\n\nAmong various types of semantic information, adopting attributes in the study of zero-shot learning is still one of the most popular modeling choices, and it has been continuously investigated in many recent research works. Attributes not only act well as the semantics to associate classes/categories but also better fit humans' intuitive way of describing things; however, annotating attributes is usually expensive, which becomes the main burden for applications. To this end, our proposed method of zero-shot learning for attributes directly contributes to alleviating this problem, in which our ZSLA is able to offer high-quality automatic attribute annotations to construct the zero-shot learning dataset at little cost (as shown in the experiments of Section 4.2 of our main manuscript, based on merely 32 seen attributes, we can synthesize another 207 novel attribute detectors, where the resultant automatic attribute annotations in $\\delta$-CUB can largely benefit the performance of GZSL algorithms). \n\nMoreover, we also follow the reviewer’s suggestion to experiment with using word2vec as the class semantics for training the GZSL algorithms (where the word2vec embeddings are provided by [22]). The results are summarized in Appendix D.5 of our revised version (also shown in the table below). From the results, we can observe that the class semantics stemming from the attribute annotations produced by our ZSLA lead to better performance than those based on word2vec embeddings.\n| | | Word2Vec[22] | Our ZSLA |\n|:----------------:|:-----:|:--------:|:--------:|\n| | **S** | **65.5** | 52.8 |\n| **CADAVAE[22]** | **U** | 11.3 | **58.1** |\n| | **H** | 19.3 | **55.3** |\n| | | | |\n| | **S** | 45.2 | **59** |\n| **TFVAEGAN[19]** | **U** | 28.1 | **55.9** |\n| | **H** | 34.7 | **57.4** |\n| | | | |\n| | **S** | **60.1** | 52.4 |\n| **ALE[1]** | **U** | 3.3 | **27.5** |\n| | **H** | 6.3 | **36.1** |\n| | | | |\n| | **S** | 63.5 | **65.1** |\n| **ESZSL[21]** | **U** | 1 | **16.4** |\n| | **H** | 2 | **26.2** |\n\n### Q2. [The concern on our validation dataset and usefulness]\n>*“The experiments were conducted on only one dataset (CUB), which is only one of the five popular datasets in ZSL benchmarks...”*\n>*“As discussed in the limitation of the paper, would the proposed method only work for fine-grained and specific datasets? If it does not work for general, coarse-grained ones like AWA2 and SUN, does it imply this approach has quite limited applications?”*\n\nWe understand the reviewer's concern, but we view this challenge as a limitation of the mainstream datasets for zero-shot learning, rather than a limitation of our method. Also, we would like to emphasize that we also conducted experiments on another dataset, CLEVR, which may have been overlooked by the reviewer as some of the results are presented in the Appendix (cf. Appendix B.2 and C.4). \n\nParticularly, the re-annotation experiment on CUB, as shown in Table 2 of the main manuscript, indicates that the quality of the attributes annotated by our ZSLA method can be comparable to that of manual annotations for training various GZSL algorithms. Moreover, our control experiments on alpha-CLEVR (cf. 
Figures 6 and 7 of the main manuscript) validate the robust annotation quality of our ZSLA against the noisy/ambiguous (manual) labels of seen attributes in the training data.\n\nThe above properties demonstrate one novel application of our work: providing efficient attribute annotations when constructing new datasets once the attribute format can be re-factored (e.g. blue wing) for assembling novel attributes via the decompose-and-reassemble approach, a format that actually better fits human intuition for describing the discriminative attributes of \"fine-grained\" classes.\n\nNevertheless, we would like to clarify that the current modeling constraint of ZSLA actually comes from requiring the re-factorable format of attributes (e.g. blue wing) for assembling novel attributes via the decompose-and-reassemble approach, rather than from being limited to fine-grained datasets. We consider relaxing such dependency on the attribute format as future work for our method.\n\nAs the AWA2 and SUN datasets do not have such a re-factorable attribute format, they are not considered in the experiments for our target scenario of zero-shot learning on attributes (as discussed in the limitation part in Section 4 of our main manuscript and Appendix E.1).", " We thank Reviewer 2STe for the effort and time spent reviewing our paper. We address each of your questions below.\n\n### Q1. [New semantic concept] \n\nWe would like to offer a clarification here first: As motivated in our introduction section, we assume that there exists a **“classes ↔ attributes ↔ base-attributes”** hierarchy, in which the typical setting of zero-shot classification (which most existing works of zero-shot learning address) adopts attributes as the auxiliary semantic information to associate classes (e.g., using seen attributes to define the unknown classes), thus being related to the level of **“classes ↔ attributes”** in the hierarchy, while our proposed zero-shot learning for attributes instead aims to leverage base-attributes as the auxiliary semantic information to link across attributes (e.g., using seen base-attributes to define the unknown attributes), thus being related to the level of **“attributes ↔ base-attributes”** in the hierarchy. \n\nIn detail, our proposed zero-shot learning for attributes focuses on decomposing the base attributes out from the seen attributes (where these base attributes are hidden in the seen attributes, thus being implicitly observed) and then reassembling the base attributes to compose the unseen attributes (i.e., achieving zero-shot learning for these composed unseen attributes); such a scenario is actually analogous to zero-shot classification (where the attributes are implicitly observed in the seen classes and further used to compose or define the unseen classes), thus following the general definition of “zero-shot” learning (i.e. the auxiliary semantic information is implicitly observed during training and is used to define the unseen targets). The question raised by the reviewer is actually referring to the case where even the auxiliary semantic information is unobserved during training (e.g. 
the unseen base attributes \"yellow\" or \"leg\" that the reviewer takes as examples), which is quite different from the typical zero-shot learning problem; to the best of our knowledge, no prior zero-shot learning work is capable of addressing such a challenging case.\n\nNevertheless, the original intent of our paper title is to make an analogy: we can decompose and reassemble broken eggs into omelettes given that they are made of similar ingredients (but of course not into something impossible to reassemble, such as meat). If the reviewer feels the motivation for the current title is unclear, we are willing to modify it.\n\n### Q2. [Relation to multi-task image-segmentation models]\n\nOur proposed method is not directly related to multi-task image-segmentation models: instead of giving a pixel-wise prediction as in segmentation, our attribute detectors learn to focus on the image parts most likely to contain the corresponding attributes and provide an image-level prediction of **whether the attributes exist in an image or not**.\n\n### Q3. [Predicting color and part separately and providing soft predictions]\n\nWe thank the reviewer for bringing up such an insightful question for discussion. The reviewer’s idea of **“predicting color and part separately and providing the soft predictions”** is actually quite similar to the scenario of **“Direct Attribute Prediction (DAP)”** as described in [14] for zero-shot learning on multi-category classification. According to [1], the performance of DAP can suffer since DAP tends to focus on the intermediate tasks (i.e., ones similar to the color and part detections suggested by the reviewer) instead of taking care of the main task (i.e., detecting the color-part combination). \n\nIn fact, the baseline methods we provided in Section 4 of our main manuscript, i.e., A-LAGO (modified from LAGO-Singleton [3], which can be viewed as a relaxed version of DAP; please refer to Appendix C.3) and A-ESZSL (modified from ESZSL [21]), can be implicitly treated as advanced algorithms that use different ways to fuse the results of implicit base-attribute detectors (e.g., the color and part detectors). We compare our ZSLA with them and demonstrate the superior performance of ZSLA across all the experimental settings with respect to those baselines.\n\n### Q4. [Comparison to SOTA]\n\nOwing to the novel application scenario of our proposed zero-shot learning for attributes, there exists no prior work with which we can directly compare. We hence adapt several representative algorithms of ZSL/GZSL as our baselines (i.e., A-LAGO and A-ESZSL as described in Section 4 of our main manuscript). However, in order to further evince the capability of our proposed ZSLA in this task, comparisons with modified state-of-the-art algorithms of compositional zero-shot learning (CZSL, which conceptually has the closest setting to ours) are also provided in Appendix D.3. Our ZSLA outperforms the modified state-of-the-art CZSL algorithms in multiple metrics.",
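To make the decompose-and-reassemble idea concrete, here is a minimal sketch under a linear-mixing assumption (this is not the paper's actual "intersection" algorithm; all names, shapes, and data are hypothetical). It follows the reading given in a review below: each attribute detector is a prototype vector, and recombination averages detector parameters.

```python
import numpy as np

# Toy decompose-and-reassemble: seen conjunction prototypes are modeled as
# means of their base-attribute vectors; base vectors are unmixed by least
# squares, then reassembled into an unseen conjunction.
rng = np.random.default_rng(0)
colors, parts = ["blue", "red", "green"], ["wing", "breast"]
bases = colors + parts
true = {b: rng.normal(size=16) for b in bases}

def proto(c, p):                    # prototype of a color-part conjunction
    return (true[c] + true[p]) / 2

seen = [(c, p) for c in colors for p in parts if (c, p) != ("green", "breast")]
A = np.array([[float(b in pair) for b in bases] for pair in seen]) / 2
P = np.array([proto(c, p) for c, p in seen])

X, *_ = np.linalg.lstsq(A, P, rcond=None)      # decompose (unmix) the bases
est = dict(zip(bases, X))
synth = (est["green"] + est["breast"]) / 2     # reassemble the unseen pair

target = proto("green", "breast")
cos = synth @ target / (np.linalg.norm(synth) * np.linalg.norm(target))
print(round(float(cos), 4))  # ~1.0: the unseen conjunction is recovered
```

Under this toy model the unseen prototype is recovered exactly even though the linear system is underdetermined, because the remaining gauge freedom (shifting all color vectors by +v and all part vectors by -v) cancels in every color-part sum.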
"We thank Reviewer 2LT8 for the positive comments recognizing our strengths in originality, quality, clarity, and significance. We also appreciate the kind reminder about the problems with our references (e.g., title casing and wrong citation formats), which we have fixed in the revised version. Regarding the concern that the four GZSL algorithms used in Sec. 4.2 are a bit archaic, we believe that these four GZSL algorithms are still representative enough to show promising results, as the main purpose here is to demonstrate the quality of the automatic attribute annotations provided by our synthesized attribute detectors. Also, these algorithms are widely adopted, highly cited, and built upon different modeling perspectives (i.e., generative [19, 21] and embedding-based [1, 22] methods).\n\n### [Extra experiment on another recent GZSL algorithm]\n\nMoreover, we have experimented with another recent GZSL algorithm, CE-GZSL [E1], where the experiments were done by utilizing the official code of [E1] (https://github.com/Hanzy1996/CE-GZSL) and following their default hyper-parameter settings. The experimental setting is the same as described in Section 4.2 of our main manuscript: different strategies of attribute annotations are applied to (re-)annotate the CUB dataset and build the zero-shot classification dataset, and GZSL algorithms (e.g. CE-GZSL [E1] in this experiment, and CADAVAE [22], TFVAEGAN [19], ALE [1], and ESZSL [21] in Section 4.2 of the main manuscript) are trained on such a (re-)annotated dataset to test the quality of the attribute annotations. \n\nIn the table below, $N^{s}$ represents the number of attribute types that were annotated manually, while $N^{u}$ represents the number of attribute types that were automatically annotated by the corresponding algorithm. We then train [E1] to solve the GZSL task using the semantic information provided by each annotation strategy (i.e. train GZSL on the dataset as (re-)annotated by each attribute annotation strategy). The terms S, U, and H respectively indicate the accuracy on the seen classes, the accuracy on the unseen classes, and the harmonic mean of S and U; the numbers in bold represent the best performance. As can be observed from the results summarized in the table below, in terms of attribute annotation quality, our ZSLA outperforms the two baselines (i.e., A-LAGO and A-ESZSL) and shows a comparable or even superior result with respect to the fully manually annotated one (i.e., denoted as Manual ($N^{s}=312$)), which reflects a similar tendency to Table 2 in our main manuscript.\n\n|| Manual ($N^{s}=32$) | Manual ($N^{s}=312$) | A-LAGO($N^{s}=32$, $N^{u}=207$) | A-ESZSL($N^{s}=32$, $N^{u}=207$) | Our ZSLA ($N^{s}=32$, $N^{u}=207$)|\n|:-:|:-------------------:|:--------------------:|:------:|:-------:|:-----------------------------------------------:|\n| S | 37.96 | 52.36 | 50.45 | 51.51 | **59.84** |\n| U | 24.05 | 47.31 | 44.47 | 40.88 | **53.28** |\n| H | 29.44 | 49.71 | 47.27 | 45.58 | **56.37** |\n\n[E1] Zongyan Han, Zhenyong Fu, Shuo Chen and Jian Yang. Contrastive Embedding for Generalized Zero-Shot Learning. In *IEEE Conference on Computer Vision and Pattern Recognition (CVPR)*, 2021.",
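For reference, the H column here (and in the earlier word2vec table) is the standard GZSL harmonic mean of S and U; a quick arithmetic check against the reported numbers:

```python
# Harmonic mean H of the seen/unseen accuracies S and U, as defined above.
def harmonic(s, u):
    return 2 * s * u / (s + u)

print(round(harmonic(59.84, 53.28), 2))  # 56.37: the ZSLA column above
print(round(harmonic(65.5, 11.3), 2))    # 19.27 ~ 19.3: CADAVAE/word2vec row
```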
" This paper has proposed a method named zero-shot learning for attributes to deal with a research problem about novel attribute classification and attribute labeling. Specifically, this research problem is tackled by first decomposing the seen attributes into basic attributes, and then recombining these basic attributes into new ones. Experiments are conducted on the CUB and α-CLEVR (a synthetic dataset) datasets for empirical evaluation. - Originality: This paper proposes a novel research problem about novel attribute classification and attribute labeling, which is tackled by zero-shot learning for attributes.\n\n- Quality: In the paper, Sec. 3 details the training of seen attribute detectors with the decomposition-recombination strategy used to synthesize new attribute detectors, providing theoretical support for ZSLA.\n\n- Clarity: The paper is easy to follow and the figures in the paper are straightforward.\n\n- Significance: The zero-shot problem of attributes is presented and solved, while the proposed method can be used for automatic labeling of attributes. 1. In Sec. 4.2, the four representative GZSL algorithms are a bit archaic, with the earliest being from 2013 and the latest from 2020.\n2. There is a problem with the casing of titles in the references; for example, in Ln. 390, \"Clevr\" should be \"CLEVR\".\n3. Several NeurIPS conference papers are incorrectly cited in journal-paper format, e.g., [6], [9] and [10]. In Ln. 257-265, the authors discuss the limitation of the proposed approach that the format of the attribute annotation must be \"adjective + object part\", e.g., blue wing. However, some ZSL datasets are not labeled with attributes in this form, such as AWA2 and SUN.", " The paper proposed to train the model by decomposing overlapping concepts from labeled annotations and then combining them to create new concepts. After training, the model can be used to provide new annotations which can be further used for supervised training. Experimental analyses are provided to explain the results and show improvement over compared methods. The intuition and motivation are well demonstrated and easy to understand. The results on the given comparison settings are reasonable. 1) My biggest concern is regarding the generalizability of the approach. Taking Figure 1 for example, the model is basically requiring the training and testing data to be limited to the scope of linear combinations of the two concept sets \"red, blue, green\" and \"head, wing, breast\". Would the model totally fail when a new concept is introduced, like either \"yellow\" or \"leg\", if they have not been seen during training? That is, is the model \"simply\" decomposing the existing concepts without really generalizing to truly unseen concepts? As for the analogy in the title, if the model can really \"make an omelette with breaking eggs\", that would mean generalizing to a related concept of omelette from egg, not just different combinations of \"brown, white, dark\" and \"shell, yolk, membrane\" within the realm of egg.\n\n2) How is the proposed model related to multi-task image-segmentation models? For example, if we have an encoder-decoder model with different prediction heads for colors and parts, that model should also be able to disentangle two types of attributes and predict accordingly for unseen combinations, and it can even interpolate between attributes by, e.g., providing soft predictions. How is the proposed model better in that sense?\n\n3) I'm not too familiar with the CUB dataset, so it might be helpful to provide SOTA results on CUB for direct comparisons. The authors addressed the limitations in Section 4.", " Motivated by the issue of annotation efforts on attribute labels, which are required for zero-shot learning, this paper develops methods to automatically annotate novel attributes for a dataset. Given seen attributes, the proposed method can detect unseen attributes in a decompose-and-reassemble manner. Results are demonstrated using the CUB dataset alone. As the task of ZSL on attributes is new, most experiments are specified by the authors. 
The problem addressed in this paper is new: automatically annotating unseen attributes for zero-shot learning. While the problem is interesting, is it important for driving ZSL research? Attributes are one type of semantic information that relates seen and unseen classes. Besides attributes, word embeddings of labels (via word2vec or BERT) can also be used; these can be learned via unsupervised learning and do not require manual annotation. The experiments do not compare the proposed semantic representations (which require supervision, though) to those unsupervised approaches.\nAlthough the proposed decompose-and-reassemble approach is fairly sound, it is not fully validated by the experiments. The experiments were conducted on only one dataset (CUB), which is only one of the five popular datasets in ZSL benchmarks. CUB is fine-grained and challenging, but others (e.g., AWA2, SUN) have different characteristics and are also important. As discussed in the limitation of the paper, would the proposed method only work for fine-grained and specific datasets? If it does not work for general, coarse-grained ones like AWA2 and SUN, does it imply this approach has quite limited applications? Yes, the authors have provided a paragraph in the beginning of the experimental section to discuss the limitation of the proposed method.", " The paper proposes an approach for estimating new detectors of visual attribute conjunctions on images by unmixing their parameters and recombining them. The attribute detector parameter is a prototype in deep feature space that localizes the attributes by cosine similarity. The recombination is produced by averaging the detector parameters. The new attribute synthesis can be used to solve Zero-Shot Classification (ZSC) problems when only a few attribute annotations are available in the learning dataset, by augmenting the annotations with newly synthesized attribute detectors. The approach is compared to 4 other ZSC algorithms on the CUB dataset.\n Strengths\n- Addressing knowledge transfer at the level of attribute description for classifier design is an interesting idea.\n- Rather detailed experiments (distributed b/w main paper and supplementary material), although on a single small dataset.\n\nWeaknesses\n- The approach is validated on a single dataset (CUB), limiting its potential usefulness.\n- A lot of material is provided in the main paper and supplementary material (17 pages!) but in a confusing organization. For instance, the main algorithm (“intersection”) is not fully described in the main paper, and we need to search for the information in the supplementary material. Wordy general discussions could fruitfully be replaced by more specific technical details.\n- The zero-shot character of the proposed approach seems to me over-stated, and thus misleading: the new attribute synthesis looks like an unmixing/mixing problem, and the unseen category description with the new attributes in fact requires images, which breaks the “zero-shot” principle, as far as I understand it. \n - When reading in the supplementary (C.2) how the unseen categories are described in the $\delta$-CUB dataset from a series of images, it seems to me that what is actually evaluated is more a “few-shot” than a “zero-shot” learning scheme, since images are needed to compute the description: isn’t there a bias when comparing to “pure” GZSL approaches, which have to deal with noisy attribute descriptions of unseen categories? 
\n- I didn’t understand what the input of the network computing the intersection is: is it just an attribute label? If so, how is it encoded? Or is it the detector parameter $m$? \n- Exploiting a ViT architecture seems to me oversized given the complexity of the dataset. Why such a model? \n- The question of building atomic attributes from their conjunctions reminds me of the classical question of blind source separation in signal processing: is there any similarity to that problem?\n Not applicable" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "RarfWwRU9p4", "RarfWwRU9p4", "ylD_aGF0YyC", "XiCbNtiISdv", "Hh4vaDi34TT", "T7cIWpY4d-5", "Q4oUV-13MPm", "XiCbNtiISdv", "XiCbNtiISdv", "XiCbNtiISdv", "Hh4vaDi34TT", "T7cIWpY4d-5", "Q4oUV-13MPm", "nips_2022_ZL-XYsDqfQz", "nips_2022_ZL-XYsDqfQz", "nips_2022_ZL-XYsDqfQz", "nips_2022_ZL-XYsDqfQz" ]
nips_2022_WSAWRKVjr5K
All Politics is Local: Redistricting via Local Fairness
In this paper, we propose to use the concept of local fairness for auditing and ranking redistricting plans. Given a redistricting plan, a deviating group is a population-balanced contiguous region in which a majority of individuals are of the same interest and in the minority of their respective districts; such a set of individuals have a justified complaint with how the redistricting plan was drawn. A redistricting plan with no deviating groups is called locally fair. We show that the problem of auditing a given plan for local fairness is NP-complete. We present an MCMC approach for auditing as well as ranking redistricting plans. We also present a dynamic programming based algorithm for the auditing problem that we use to demonstrate the efficacy of our MCMC approach. Using these tools, we test local fairness on real-world election data, showing that it is indeed possible to find plans that are almost or exactly locally fair. Further, we show that such plans can be generated while sacrificing very little in terms of compactness and existing fairness measures such as competitiveness of the districts or seat shares of the plans.
Accept
The reviewers universally agreed that this paper is timely, interesting, and well written. It has two limitations, and addressing them would make the paper even stronger. The first is being upfront about the heuristic (rather than rigorous) nature of some of the statements, as highlighted by the reviewers. The second is investigating other datasets to further strengthen the empirical section. I urge the authors to make these changes for the camera-ready version.
train
[ "bFninMk6o6E", "CDivMVRwAW", "IdHTCgqXhV9", "nUF3yU5sSH3", "aKX9zjvUHhd", "vtUwBCGLxG8", "OROvo2Z8FK6", "2aAmgqDFGj", "zwtojYu-1W5", "6rmGlTgOETf", "Gr9rUq8LGVr", "K-0AK9M5pVl" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the additional response. I agree that counting all fractionally “wasted votes” from each VTD (full votes for people whose party lost in their district and fractional if their party won by more than necessary) would likely address the issue of packing. However, there are two emergent issues with this suggestion which would need to be investigated. (1) It’s very possible that this would make 0.5-locally fair districts much less likely and a larger c would need to be motivated. (2) Using “wasted votes” could have unintended side effects as noted in Bernstein, Mira, and Moon Duchin. \"A formula goes to court: Partisan gerrymandering and the efficiency gap.\" Notices of the AMS 64, no. 9 (2017)\n\nAn expanded paper exploring these questions and those raised across all reviews would be a strong submission, especially to venues such as AAMAS and IJCAI which appear to be the most interested in this type of work. Tightening Sections 1 and 2 and moving half of Section 3 to the appendix would probably accommodate the new content.", " We thank the reviewer for pointing out we did not respond to the comment on using average partisanship as a metric for competitiveness. We plan to further test against other competitiveness measures in future versions of our paper.\n\nFor potential packing in redistricting, we believe that this could be partially remedied via the suggestion of extending the unhappiness from binary to fractional, since unhappy voters in \"lopsided\" districts will contribute more unhappiness than those in close districts. Additionally, redistricting relies on practitioners striking a balance among all desired properties. Through this discussion, we see a more general point: the notion of local fairness is best used for filtering/selecting from a pool of \"good plans\" that satisfy most if not all existing fairness notions, so that the final plan can be more locally stable. Practitioners should not, instead, set a tight local fairness threshold (a large c) so that few locally fair plans exist for a state, and use it to justify this small set of plans ignoring other measures. We plan to address this in future versions of our paper.\n\n", " Thanks for your response. I have added some replies to your comments below. In addition, I noticed that your response did not address the issue of using average partisanship as a metric and wanted to give you a chance to respond to that in case you hadn’t seen that comment.\n\nQuestion 1. Thanks for your response and clarification. I think I understand the reasoning now. In this case, either compactness of deviating groups should be part of Definition 2 of c-locally fair plans or added to some new definition in Section 2 that includes this constraint. It should also be explained and motivated in Section 2 as you have in this response.\n\nQuestions 2 & 3. I understand that fair maps without packing exist. However, it appears that (1) a packed map could also be called fair under this measure and (2) this measure could potentially be used to justify a packed map over a map which is “more fair” by other standards. To me, that is both a limitation and potential negative social impact that should be discussed and explored more in a work proposing a new fairness definition. For example, the court-mandated map adopted by Pennsylvania in 2018 took PA from a 13R-5D state to a 9R-9D state more in line with the voter demographics of the state. 
\n\nQuestion 4. Thanks for the clarification. This should be discussed a bit more in the paper and possibly listed as a limitation.\n\nQuestion 5. I think it’s important to note explicitly in the paper that it is theoretically possible, in the worst case, for no fair plans to exist. I don’t think such a statement appears in the current draft. A simple example of this would be a 3x3 grid graph with a checkerboard pattern of party membership. In the real world, my guess is that Massachusetts could become a state with no locally fair map if it were to have even a slight shift toward the Republican party. An experiment exploring this, or alternatively showing that you can find 0.5-locally fair plans for all states, would be very interesting.\n\nQuestions 6 & 7. Thanks for your response.\n\nRe: 2. Theory vs. practice. I think the number of states analyzed is sufficient to show the promising potential of the method, but not sufficient to indicate that local fairness is generally achievable in practice as you have claimed. All 4 states analyzed have close to 50/50 partisanship. Including a few more states with different demographics, like Maryland, would strengthen the claim.", "*Weakness 1 and Question 1.* In Definition 2, the feasible district $W$ is not a district in the given redistricting plan $\Pi$; instead, it is an alternative district containing voters from multiple districts in $\Pi$, many of which are likely blue. In other words, $W$ is a \"red hypothetical district\" that overlaps multiple real districts in $\Pi$, where at least one of them is blue. \n\n*Weakness 2.* In Figure 3(a) and lines 340-343, we deemed a higher (but still less than 50%) seat share for the minority party as better/desirable. This is because all four states we used (NC, TX, PA, and WI) have had close to 50-50 popular vote totals in recent elections, and thus a minority seat share closer to 50% aligns with the proportional share. In general, we do not suggest a higher (or lower) minority seat share to be more desirable. However, we did not make this clear, and we thank the reviewers for pointing this out. Finally, since seat shares are directly tied to the electoral outcome, a lower variance of seat shares in the set of locally fairest plans implies stability in the electoral outcome when the redistricting plan is to be selected from such a set, and thus is desirable.\n\n*Weakness 3, Question 3, and Limitations.* Please refer to paragraph 6 (Societal Impact) in the [general response](https://openreview.net/forum?id=WSAWRKVjr5K&noteId=2aAmgqDFGj).\n\n*Question 2.* The actual plan for WI is deemed locally fair by the ensemble approach, and it is the only state where this is the case. As most of the plans satisfy local fairness in WI (see Table 1), the local fairness notion has less power to separate plans in WI, and thus we do not consider it in the robustness tests in Appendix D. We did measure the actual WI plan with the other metrics (see Figure 3).",
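For reference, the deviating-group condition from the abstract can be checked mechanically; a toy sketch (contiguity and population balance are omitted; all voters, parties, and districts are hypothetical):

```python
from collections import Counter

# A candidate region is a c-deviating group for a plan if at least a
# c-fraction of its voters share a party and are in the minority of
# their assigned district under that plan.
def is_deviating(candidate, plan, party, c=0.5):
    groups = {}
    for voter, district in plan.items():
        groups.setdefault(district, []).append(party[voter])
    majority = {d: Counter(ps).most_common(1)[0][0] for d, ps in groups.items()}
    unhappy = Counter(party[v] for v in candidate
                      if party[v] != majority[plan[v]])
    return bool(unhappy) and max(unhappy.values()) / len(candidate) >= c

plan = {0: "D1", 1: "D1", 2: "D1", 3: "D2", 4: "D2", 5: "D2"}
party = {0: "R", 1: "B", 2: "B", 3: "R", 4: "B", 5: "B"}
print(is_deviating({0, 3, 4}, plan, party))  # True: 2 of 3 are unhappy R
```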
"*Question 1.* In footnote 2, we compare counting the total number of voters in deviating regions versus counting the total number of unhappy voters in deviating regions; neither double-counts voters. However, it is possible for the same set of unhappy voters to form distinct deviating groups by \"pulling\" in different sets of other voters. \n\n*Question 2 and Clarity Comment 3.* We thank the reviewer for pointing out that the ensemble-based approach (Section 3.1) and the DP (Section 3.2.1) should not be viewed as two parallel/competing approaches. We agree with the suggested framing: the DP is best suited for verifying (providing more evidence of) the local fairness of the plans deemed fair by the ensemble approach.\n\nDue to its higher complexity, the DP can only be faster than the ensemble method when the input graph is very small, which is usually not the case in practice. Therefore, in practical usage, it is better to use the ensemble method as the first step (identifying unfair plans) and then use the DP to further investigate the smaller set of remaining \"seemingly fair\" plans. \n\n*Question 3.* 50% is the strictest setting of c, as a deviating group then only requires a simple majority. It may be the case that we would like to require deviating groups to have a super majority, and c can be increased accordingly. \n\n*Questions 4-6.* \nIn Figure 3(a) and lines 340-343, we deemed a higher (but still less than 50%) seat share for the minority party as better/desirable. This is because all four states we used (NC, TX, PA, and WI) have had close to 50-50 popular vote totals in recent elections, and thus a minority seat share closer to 50% aligns with the proportional share. In general, we do not suggest a higher (or lower) minority seat share to be more desirable. However, we did not make this clear, and we thank the reviewers for pointing this out. Finally, since seat shares are directly tied to the electoral outcome, a lower variance of seat shares in the set of locally fairest plans implies stability in the electoral outcome when the redistricting plan is to be selected from such a set, and thus is desirable.\n\nPlease also refer to paragraphs 3 (partisanship) and 4 (competitiveness) in the [general response](https://openreview.net/forum?id=WSAWRKVjr5K&noteId=2aAmgqDFGj).\n\n*Question 7.* Please refer to paragraph 2 (Theory vs. practice) in the general response.\n\n*Limitations.* Please refer to paragraph 6 (Societal Impact) and paragraph 1 (LF in redistricting) in the general response.", "*Weakness.* We thank the reviewer for pointing out that the wording makes it appear as if there are underlying probabilistic claims that could be substantiated in those sentences. We acknowledge that both the ensemble-based and the DP-based auditing approaches are heuristics for which we currently cannot provide mathematical guarantees (beyond general properties, e.g., that they make one-sided errors).\n\n*Question 1.* We agree with the reviewer that ideally the redistricting plan should be compact with no deviating group of any kind (compact or non-compact). However, it is usually too hard to achieve such a strong property. Deviating groups with low compactness have artificial shapes (or even holes; see Figure 5b in the supplementary material) and do not serve well as \"hypothetical districts\", and requiring no deviating groups at all may require compromising further on other desirable properties (e.g., competitiveness, compactness). We believe holding the districts in the plans and the deviating groups to the same standard is a good balance: a deviating group is valid only if it is at least as compact as the least compact district in the context. We followed this standard in our experiments. For example, in our NC ensemble, the least compact district has a Polsby-Popper score of 0.14 (lines 664-669), and thus we deemed all deviating groups with a score of less than 0.14 as spurious and disregarded them.
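For context on the 0.14 cutoff: the Polsby-Popper score is, in its standard form, 4π·Area/Perimeter², scoring 1 for a disc and lower for less compact shapes (the numbers below are illustrative, not NC data):

```python
import math

# Standard Polsby-Popper compactness: 4*pi*area / perimeter**2.
def polsby_popper(area, perimeter):
    return 4 * math.pi * area / perimeter ** 2

print(round(polsby_popper(math.pi, 2 * math.pi), 2))  # unit disc -> 1.0
print(round(polsby_popper(100.0, 94.0), 2))           # stringy shape -> 0.14
```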
\n\n*Question 2.* We are aware that local fairness encourages higher average partisanship, which can be achieved via packing (see paragraphs 3-4 in the [general response](https://openreview.net/forum?id=WSAWRKVjr5K&noteId=2aAmgqDFGj)); however, as empirically shown in the experiments, it remains possible to achieve local fairness without extremely increasing the average partisanship score, which can be used as a criterion when choosing among locally fair plans. We agree with the reviewer that a natural extension of the local fairness definition is to allow different magnitudes of unhappiness, which makes an interesting direction for future work. \n\n*Question 3.* Please refer to paragraph 2 (Theory vs. practice) in the [general response](https://openreview.net/forum?id=WSAWRKVjr5K&noteId=2aAmgqDFGj).\n\n*Question 4.* In Figure 3(a) and lines 340-343, we deemed a higher (but still less than 50%) seat share for the minority party as better/desirable. This is because all four states we used (NC, TX, PA, and WI) have had close to 50-50 popular vote totals in recent elections, and thus a minority seat share closer to 50% aligns with the proportional share. In general, we do not suggest a higher (or lower) minority seat share to be more desirable. However, we did not make this clear, and we thank the reviewers for pointing this out. Finally, since seat shares are directly tied to the electoral outcome, a lower variance of seat shares in the set of locally fairest plans implies stability in the electoral outcome when the redistricting plan is to be selected from such a set, and thus is desirable.\n\n*Question 5.* In our work, we proposed one measure (the unfairness score) to rank plans that are not locally fair. We do acknowledge that, due to the highly non-convex nature of the problem, there may exist different plans that are comparable in the unfairness score (or any other metric defined on local properties) yet lead to different global outcomes or have the local unfairness come from different regions. In such cases, where local fairness is not effective in separating the plans, we believe it is up to the policy makers to further examine the plans under other existing (global) fairness notions. \n\n*Question 6.* To place our work in context with the FAccT '22 paper, our paper is similar in its usage of an ensemble in an auditing process. However, there is a fundamental difference between how we use the ensemble and their approach: they directly compare the outcome of the plan in question with the outcomes of the plans in the ensemble, while our approach uses the ensemble merely as a way to generate candidate districts (deviating groups). Their proposed metric focuses on packing and cracking, and thereby places more emphasis on the gerrymandering strategies (detecting whether the plan is artificially "gerrymandered"), while our metric focuses on individual voters' justified complaints (detecting whether the plan appears fair to voters, regardless of how it is drawn). The same can be said for the other arXiv paper. Like our work, these recent works contribute to the increasing attention on local issues in redistricting. 
We will revise our paper by incorporating the more recent works into our literature review.\n\n*Question 7.* If more than 5% of the plans are locally fair, we take an arbitrary subset of the locally fair plans to serve as the top 5%. Alternatively, we can take the entire subset of locally fair plans and compare it against the ensemble. We did implement this alternative and found no observable difference in the results.", "*Weakness 2*. Please refer to paragraph 5 (compactness) in the [general response](https://openreview.net/forum?id=WSAWRKVjr5K&noteId=2aAmgqDFGj).\n\n*Weakness 3*. Please refer to paragraphs 3 (partisanship) and 4 (competitiveness) in the general response.\n\n*Weakness 4*. Please refer to paragraphs 2 (Theory vs. practice) and 1 (LF in redistricting) in the general response.\n\n*Weakness 5*. To place our work in context with the FAccT '22 paper mentioned, our paper is similar in its usage of an ensemble in an auditing process. However, there is a fundamental difference between how we use the ensemble and their approach: they directly compare the outcome of the plan in question with the outcomes of the plans in the ensemble, while our approach uses the ensemble merely as a way to generate candidate districts (deviating groups). Their proposed metric focuses on packing and cracking, and thereby places more emphasis on the gerrymandering strategies (detecting whether the plan is artificially 'gerrymandered'), while our metric focuses on individual voters' justified complaints (detecting whether the plan appears fair to voters, regardless of how it is drawn). We will revise our paper by incorporating the more recent works into our literature review.", "We thank the reviewers for their helpful comments and questions, and the committee for this opportunity to respond. We first address common questions. \n\n### 1. LF in redistricting\nWe propose to use local fairness as one way to evaluate a redistricting plan. There is no widely agreed-upon notion of fairness in redistricting. We are not suggesting local fairness to be the single criterion that can replace all other notions. Instead, we suggest local fairness be used to complement/augment them. Extant fairness notions focus on global measures (such as the distribution of seat outcomes), while local fairness provides a guarantee to the voters at the individual level. Our experiments suggest that it is compatible with extant fairness notions, as the most locally fair plans in the ensemble are comparable in many other metrics to the entire ensemble, while only incurring a small tradeoff with competitiveness (please refer to the discussion on partisanship/competitiveness below). In practice, there could be other considerations when choosing the "best" plan among many locally fair plans; we leave the question of weighing these considerations to policy makers.\n\n### 2. Theory vs. practice\nThe results of prior work (e.g., the negative example in the 1D case in [6]) imply that there exist inputs for which no locally fair plans exist. However, our experiments on real datasets indicate that local fairness is generally achievable in practice. In Section 4.4 (lines 366-370) we evaluated the unfairness scores of the actual plans in use. The actual plans for NC, TX, and PA were not locally fair, and the plans for TX and PA were more unfair than the average plan in the ensemble. Though the NC plan is less unfair than the average plan in the ensemble, it has a higher average partisanship than every plan in the ensemble (Figure 3b), making it less than ideal. We note that high average partisanship is not equivalent to partisan gerrymandering or a skewed seat share outcome (see the next paragraph on partisanship). Finally, it is relatively easy to satisfy local fairness in WI (see Table 1), and its actual plan is indeed locally fair.
\n\n### 3. Partisanship\nWe realize that the word "partisanship" has caused confusion. We will remedy this in the next version of the paper. Throughout the paper, we measured a redistricting plan by its "average partisanship" over its districts (lines 344-346), where the partisanship of a district is the percentage of its majority voters. Therefore, this measure is not directly tied to the overall seat share outcome of the plan, and plans with high average partisanship values are not necessarily artifacts of partisan gerrymandering. We note that partisan gerrymandered plans (plans that appear as statistical outliers in terms of their seat share outcome) can easily be detected by existing auditing approaches.
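As a concrete reading of the measure just described (vote counts are hypothetical):

```python
# "Average partisanship" of a plan: each district's partisanship is its
# majority party's share of the two-party vote, averaged over districts.
def average_partisanship(districts):
    return sum(max(r, b) / (r + b) for r, b in districts) / len(districts)

print(round(average_partisanship([(70, 30), (45, 55), (48, 52)]), 2))  # 0.59
```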
\n\n### 4. Competitiveness\nDistricts with lower partisanship (closer to 50%) are considered more competitive. Less competitive districts are usually considered undesirable, as they reduce citizens' motivation to vote. Local fairness is indeed easier to achieve with higher partisanship, as it leads to fewer unhappy voters. We thus acknowledge that there is a tradeoff between local fairness and competitiveness. However, in our experiments, we showed (lines 344-354) that local fairness can be achieved with a relatively small increase in the average partisanship (a small sacrifice of competitiveness), thereby striking a balance in this tradeoff. An interesting question for future research is to achieve local fairness without compromising on competitiveness.\n\n### 5. Compactness\nAs there is no universally agreed measure (see lines 182-191), we did not tie our problems to a specific compactness requirement, but we agree that redistricting plans should be reasonably compact. For the plan generation problem, our proposed framework first generates (and selects from) an ensemble of feasible redistricting plans using extant approaches (lines 201-202). As such, given any specific compactness measure, policy makers can utilize a corresponding approach (one that either theoretically or empirically achieves good compactness on that measure) for the ensemble; see lines 214-223. We believe this approach allows more flexibility than explicitly specifying a compactness measure.\n\n### 6. Societal Impact\nRedistricting is itself a controversial topic, and a lot of work has been devoted to discussing its potential negative impacts (e.g., the book containing [8]). We believe that including local fairness among the criteria for selecting a plan will reduce the abuse of redistricting, though it may slightly compromise the competitiveness of the districts. Our view is that redistricting is a reality in the US, and it is thus imperative to develop principled algorithms and metrics for it. We will incorporate more discussion on this and point the readers to relevant references on the potential negative social impacts of redistricting.", " The paper introduces a new measure for fairness in redistricting. Specifically, the paper introduces the notion of local fairness, where a redistricting plan is considered to be locally fair if no feasible district exists that contains at least a 1/2 fraction of voters who are unhappy in the given redistricting plan (where unhappy means being in the minority party of one's district). The problem of auditing and generating locally fair plans is shown to be NP-complete, and heuristics based on ensemble (sampling) approaches and dynamic programming are shown. $\textbf{Strengths}$:\n\n-I think the notion of local fairness is elegant, and as the authors point out, it has the advantage of highlighting the individuals disadvantaged by the drawing of a given plan. \n\n\n\n$\textbf{Weaknesses}$:\n\n1-The paper does not include many mathematical guarantees, but this is understandable given that the problem is difficult and generally does not admit guarantees without significant assumptions. \n\n2-This is not very significant, but it seems that compactness is de-emphasized as a requirement for a redistricting plan, although it seems to be universal in the literature (acknowledging that there isn't an agreed-upon definition). I would add compactness as a third point along with the connectedness and population balance in lines (149-153). \n\n3-Lines (368-373): the result on NC is a bit odd. One would certainly like local fairness to detect gerrymandering in a highly partisan plan. \n\n4-Following point 3, given a state, is it possible that a locally fair redistricting does not even exist? More generally, while the notion is interesting, the paper did not elaborate on where we would expect local fairness to help, etc. Perhaps even toy examples could help here. \n\n5-This is a very recent paper, so it is acceptable that the authors don't point it out. But it introduces a fairness measure that is similar:\nhttps://dl.acm.org/doi/abs/10.1145/3531146.3533174 \n\n$\textbf{Minor Issue}$:\n\nIn line (181), I think it is the case that unf(\Pi)=0 if and only if \Pi is locally fair, which is just more accurate than the current statement. \n\n My main points to the authors are above. Please see the points under Weaknesses in the above section. In particular, I think points 3 and 4 are the most important. The authors have adequately addressed the limitations and potential negative societal impact. ", " This work defines a new notion of “local fairness” for redistricting. The main idea is that a plan is unfair if there are unhappy voters who lose in their current district, but could have been grouped together in an alternative district to elect a candidate from their party.\n\nDrawing locally fair maps and auditing for local fairness are shown to be NP-hard problems. The authors thus offer two heuristics for the auditing problem and evaluate them empirically. Strengths:\n\nThe paper addresses and makes progress on a highly relevant and timely problem in society which is still open despite being well-studied. \n\nThe biggest contribution/strength is the new idea of how to measure fairness in redistricting. In my view, it’s a good idea, even though there are some flaws/weaknesses that I believe could make it impractical in its current form. There is merit in publishing good ideas that may inspire future work as well as informative debate/criticism. So while I have listed many weaknesses and questions below, one could argue that it is worthwhile to publish the paper for its main idea and let the weaknesses be addressed in future work.\n\nI loved the analysis presented in Figure 2 and felt that was an undersold strength of the paper. 
These results offer useful insights into political geography in general and the urban/rural divide specifically. It could be of independent interest to areas such as the comparison between different electoral systems using districts. To me, this motivates the local fairness notion almost as much as the idea of using it to audit proposed plans.\n\nThe paper is written and presented clearly.\n\n\n\nWeaknesses:\n\nThe main weaknesses were potential flaws in the fairness metric, some questionable claims, lack of theoretical analysis of the auditing methods, and slightly limited experiments. Overall, I feel that the paper has some great ideas, but isn’t ready for publication. \n\n\nI have several concerns about the fairness metric which are mainly listed in the questions section of this review. \n\n\nI found the claims and analysis of the two auditing methods to be flawed as noted below. However, I hope the authors can correct me if I’ve misunderstood something. Overall, I would have liked a better theoretical analysis of the two auditing heuristics.\n\nFor the ensemble-based auditing method, there are unsubstantiated claims that the method is “likely” to find a deviating group if one exists or that it “provides high confidence”. However, there is no theoretical argument supporting these claims. On the contrary, it seems possible to produce a plan with a deviating group that the ensemble method is unlikely to find, especially if non-compact deviating groups are allowed.\n\nFor the dynamic-programming-in-trees audit, the analysis is incomplete. There is a description of how to solve the problem optimally in a tree, but no analysis of what solving the problem optimally in many random spanning trees can tell us about a given plan. It is possible that random spanning trees are very likely or very unlikely to contain deviating groups in their subtrees, but this crucial detail appears to be left out. Then, the dynamic programming approach is used to evaluate the ensemble-based approach, but it’s not clear to me that the dynamic programming approach is more accurate.\n\n\nThe experiments were not a major weakness, but adding some subset of the following would strengthen the paper.\n\nMore states. In the four states considered, only 2.8% of TX plans were fair at c = 0.5, and that’s using a fairness heuristic with an unknown false positive rate (positive being fair). This raises questions of whether some states do not admit locally fair plans.\n\nMore elections. Many of the ensemble papers use data from senate and gubernatorial elections in different years in addition to presidential election data.\n\nA better competitiveness metric. Average partisanship is not a good measure of competitiveness because it doesn’t tell us how many districts in a plan are meaningfully competitive. There are a number of competitiveness measures in the literature. A simple one to test here would be to pick a threshold (e.g. 55% or 53%) and count the number of districts in a plan with partisanship below the threshold (a minimal sketch of this count appears after these suggestions).\n\nA study using small grid graphs for which all possible districts can be enumerated. This could be used to evaluate the auditing heuristics and to gauge the frequency of maps that do not admit any locally fair plan.\n\n\nThis is a minor weakness, but many details are left to the appendix. For example, it would be good to include some sketch or intuition for the NP-completeness claims, even just saying which NP-complete problem is used for the reduction. Many aspects of the dynamic programming approach are also in the appendix. 
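The threshold-based competitiveness count suggested above, sketched with hypothetical per-district majority shares:

```python
# Count of meaningfully competitive districts: those whose majority-party
# vote share falls below a chosen threshold (e.g. 55%).
def competitive_count(majority_shares, threshold=0.55):
    return sum(share < threshold for share in majority_shares)

plan = [0.52, 0.61, 0.54, 0.58, 0.51]  # hypothetical majority shares
print(competitive_count(plan))          # 3 districts under the 55% cutoff
```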
While I enjoyed the introduction, I suggest the authors shorten it to make room for more content later in the paper.\n I have listed many questions and will not be offended if the authors choose to address only those they believe are most important or where I have made some mistake.\n\n\n1) Regarding the discussion of the dynamic programming method in Section 3.2.2, why does a deviating group need to be compact? First, compactness of deviating groups is not in the definition provided earlier in the paper. Second, I would argue that a deviating group should not need to be compact. Wouldn’t it be easier to gerrymander if non-compact deviating groups are not considered in defining local fairness? Put another way, suppose there exists a plan which is locally fair and satisfies the standard criteria including compactness. Wouldn’t we want to say that plan is more fair than another plan which has a non-compact deviating group?\n\n\n2) This fairness measure appears vulnerable to the gerrymandering practice of packing and may even encourage packing. Packing a district with mostly voters of the same party is one way to maximize the number of happy voters who cannot be part of a deviating group. This is mentioned briefly in Section 4.3, but can you speak to this more? In particular, I worry that it could be a lopsided fairness metric that favors rural parties packing urban centers. Could this be remedied by letting voters in packed districts be fractionally unhappy (e.g., a voter in a 75% partisan district is 1/3 unhappy)?\n\n\n3) Does this fairness metric risk invalidating a “fair” plan that voters prefer? There are a few plans that have been held up as fair, such as the NC judges map considered in [22] or the 2018 PA map drawn by the PA supreme court. Have you checked whether these or similar maps are locally fair?\n\n\n4) Lines 127-129 state that locally fair plans “have comparable seat share outcomes” to those with many deviating groups. However, shouldn’t the fair plans have different seat share outcomes from those which represent partisan gerrymanders? Further, you also note that locally fair plans in your ensemble have similar compactness to those with deviating groups, but couldn’t this be an artifact of ReCom producing similarly compact maps, as noted in [34]?\n\n\n5) In the worst case, there are maps that do not have any locally fair plan. How do we compare maps in this case, considering that it is intractable to compute how “almost fair” a map is, and two similarly “almost fair” maps might make different groups unhappy?\n\n\n6) I’m aware of at least two recent papers which have some similarities to this one, although they are clearly different enough to co-exist. Can the authors say more to place their work in context with them? I believe the first paper would not have been available to the authors at the time of submission. However, it is especially relevant because the metric is similar and also involves comparing against an ensemble.\n\nLin, Jerry, Carolyn Chen, Marc Chmielewski, Samia Zaman, and Brandon Fain. \"Auditing for Gerrymandering by Identifying Disenfranchised Individuals.\" In 2022 ACM Conference on Fairness, Accountability, and Transparency\n\nCampisi, Marion, Thomas Ratliff, Stephanie Somersille, and Ellen Veomett. 
\"The Geography and Election Outcome (GEO) Metric: An Introduction.\" arXiv preprint \n\n\n7) On lines 338-339, it says, “We compare properties of the top 5% plans in the ranking (which are most locally fair) against the entire ensemble.” How is this 5% chosen for states in which more than 5% of maps have no deviating groups?\n\n\n\nTypos:\nLine 52: accessing -> assessing\nLine 379: “properties properties” I have noted several limitations above which are discussed somewhat to the paper’s credit, but could be discussed more. The only potential negative societal impact is raised in my question about this method being used to discredit “fair” or “good” maps.", " This paper studies local fairness for redistricting plans. The authors give 2 algorithms for auditing plans for local fairness. Both algorithms are approximate and make different types of errors. In experiments in the main text, the authors use the first (simpler) method to audit real and sampled redistricting plans for real data. They show that in many cases, there exist many locally-fair plans for a number of states. They also show that locally-fair plans tend to satisfy global notions of fairness. Other statistics of the locally fair plans–like partisanship, compactness, and minority seat shares–-are also reported. **Originality:** This work combines and extends a handful of pieces from previous work rather than making a completely original contribution.\n- This work borrows a notion of local fairness from recent work [6]. However that work [6] the authors focus on a 1-dimensional problem setting whereas in this work, the authors focus on a 2-dimensional (planar graph) setting.\n- The generate-and-test algorithm proposed in this work relies on the ReCom [15] and is otherwise simple. In the proposed algorithm, first generate a number of redistricting plans using ReCom; then, for each district in each plan generated by ReCom, test whether that district is a c-deviating group in each other plan generated by ReCom.\n- The dynamic program proposed is somewhat more complex, but seems to be neither practical (i.e., it has a very high runtime) nor useful in that it does not lead to improvements over the simpler heuristic proposed.\n- The contributions are clear and as I can tell, relevant work has been cited.\n\n**Clarity:** \n- The paper is very well-written. Most of the ideas are explained clearly and I believe that it would be possible to reimplement the experiments form the description in the text (given access to the ReCom algorithm).\n- I found the definition of a deviating group to be confusing and required many re-reads. I would suggest including a Figure that shows: a) a small graph or portion of some graph, b) the districts according to $\\Pi$ and c) an alternative district–not in $\\Pi$--that shows a deviating group. Instead of (or in addition to) a visualization, it would be helpful to include a definition in words as well, for example: “In other words, a plan is unfair if a new feasible district, $W$, can be constructed such that a c-fraction of the voters in $W$ are unhappy under $\\Pi$ but would be happy in $W$.\n- 3.2.1 is complicated and at the end of the subsection, it seems like the approach is unnecessary. As written in 1.2 (contributions), the DP seems like it will be one of your proposed approaches. 
\n\n**Quality:** this submission is of relatively high quality.\n- In the experiments, the authors’ goal is to determine whether plans that are locally fair are achievable, and whether these plans are compatible with notions of global fairness. By and large, both of these questions are answered affirmatively in a convincing manner.\n- At the same time, in my opinion, a discussion of whether local fairness for redistricting is a good thing (and why) is missing and necessary. While reading, I lean towards thinking that local fairness is good: more people are “happy” (as the paper refers to them) and global fairness is respected. But looking at the results with respect to increased/decreased partisanship and seat shares, I found myself wondering if locally-fair plans are what we want in practice or not. More specific questions related to this thought are in the Questions for the Authors section of this review.\n- The theoretical results in this work are proofs of NP-completeness for the problems studied. No approximation results are proved with respect to the proposed algorithms. In my opinion, such proofs would add to the paper but are unnecessary, since the application considered is very specific (i.e., redistricting in the United States) and the experimentation is performed with the relevant data. It’s unclear whether the proposed algorithms could potentially be used in many other contexts. If so, having approximation guarantees becomes more important (so that practitioners would have some idea of how the algorithms are likely to perform).\n\n**Significance:** the results are important and hold immediate real-world value; however, they are limited to a specific problem area.\n- This work follows a small, yet well-established area of work on fair redistricting and gerrymandering. If accepted, this work would likely be built upon.\n- Unlike previous work that focuses on global notions of fairness in districting plans, this work focuses on local fairness, which is conceptually different and requires distinct technical tools and algorithms. This paper gives a new method for applying ideas of locally fair partitioning to 2-dimensional (planar graph) redistricting problems.\n- Experimental results are new and constitute a new and interesting way of auditing redistricting plans for fairness.\n - Footnote 2 was confusing. Is the issue one of double counting? In the construction where we care about unhappy voters as opposed to population, is there still double counting? \n- Are there times when plan generation is less scalable than the DP? Are there any instances in which you’d recommend usage of the DP?\n- Why were the particular values of c in Table 1 chosen? Is 50% a meaningful choice of c? What other values should be considered?\n- In the seat shares result, why is a higher percentage of seat shares more desirable than a share that reflects the percentage of the minority? Why is lower variance an important criterion here?\n- For the partisanship result: is it a good thing for the partisanship of a plan to decrease with respect to real plans? 
This confuses me, since the text seems to say that the fair plans increase partisanship (over the ensemble), which leads to fewer unhappy voters (a good thing). \n- Related to the question of partisanship: are all locally-fair plans for the same state equivalently good/bad? In more detail, is it possible to have two distinct, locally-fair plans that would yield different numbers of red/blue elected officials? If so, how should one choose between two locally fair plans?\n- Are plans used in practice locally fair?\n - No ethical considerations addressed, but the problem area has high potential societal impact.\n- One question which is not directly addressed is whether their notion of local fairness should be a high-priority consideration in real-world redistricting.\n- The algorithms presented are highly specialized. I’m not sure they are applicable to any other problem areas.\n- There is no discussion about selection among many locally-fair plans. Such selections could have significant ramifications.\n", " The paper introduces a method of creating geographical voting districts composed of smaller precincts which are "locally fair". The focus is on two specific problems: determining whether a given redistricting plan is locally fair, and generating a redistricting plan that is locally fair. Both of these problems are shown to be NP-complete, and a polynomial-time dynamic programming algorithm is described that determines whether a plan is fair. They use this to solve the problem of generating a fair plan by generating many potential plans and ranking them all according to their fairness. Experiments on data from 4 US states show that this method can identify plans with local fairness. The authors also show empirically that they are able to find locally fair plans which meet other fairness and compactness criteria. Generally, the paper is interesting and well written. The problem is well studied; this work provides a strong follow-up to a very related model (ref 6) and addresses some new dimensions of the local fairness issue. The paper is relevant to an ongoing process within the United States which does benefit from new research in this area. I can imagine the included empirical analysis making the work more approachable to non-technical audiences that might bring the results into practice.\n\nStrengths\n- the paper is logically structured and the results are discussed in appropriate detail at various locations (e.g. the intro does a good job of highlighting the main points, and there are reasonable choices about what is in the appendix vs the main paper)\n- the problem of redistricting remains a real issue in America and a great number of people may benefit from new developments in the field\n- the concept of local fairness seems a reasonable one to include alongside other fairness concepts\n- I found the general idea relatively understandable, which is important for practitioners\n\nWeaknesses/ways to improve\n- I think your definition of local fairness may have a small but significant typo; or I misunderstand. (see Questions, below)\n- the "empirical evidence" on lines 199/200 could be more explicitly connected to something in Section 4\nFigure 3(a) - unclear what is good here. 
Is the claim that the minority group should get 0% of seat share rather than an amount proportional to their size?\n- 3(c) - the large difference between NC/WI and TX/PA doesn't seem to be explained (is it to do with the urban/rural divide?)\n- more discussion on how this could be misused is needed\n\nLine 162 says a happy voter is one who agrees with the colour of their district. Definition 2 refers to a red district containing unhappy red voters, which would seem to be impossible. Should line 162 say \"precinct\" rather than \"district\"? If not, could you attempt to explain this concept more clearly?\n\nWhy is WI not discussed in 4.4?\n\nThe checklist at the end of your paper says \"N/A\" in reference to whether you discussed potential negative social impacts of your work. I have trouble understanding this response. Does this mean you cannot imagine any potential negative impacts from a districting process that generates fair and unfair districts? The authors have not done this. While quite a few readers are likely to be familiar with common ideas about the potential negative impacts of this work, some discussion about how this specific process could be misused, either intentionally or not, is still rather important." ]
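A minimal sketch of the "happy voter" notion debated in the reviews above, under the reading that a voter is happy when their color matches the majority color of their district; the function name and the per-precinct data layout are illustrative assumptions, not taken from the paper:

```python
def count_happy_voters(districts):
    """Count voters whose color matches the majority color of their district.

    `districts` maps a district id to a list of (red_voters, blue_voters)
    tuples, one per precinct -- an assumed data layout for illustration.
    """
    happy = 0
    for precincts in districts.values():
        red = sum(r for r, _ in precincts)
        blue = sum(b for _, b in precincts)
        happy += red if red > blue else blue  # ties counted for blue, arbitrarily
    return happy

# One district where red wins 60-40 across two precincts: 60 happy voters.
print(count_happy_voters({"d1": [(35, 20), (25, 20)]}))  # -> 60
```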
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "CDivMVRwAW", "IdHTCgqXhV9", "vtUwBCGLxG8", "K-0AK9M5pVl", "Gr9rUq8LGVr", "6rmGlTgOETf", "zwtojYu-1W5", "nips_2022_WSAWRKVjr5K", "nips_2022_WSAWRKVjr5K", "nips_2022_WSAWRKVjr5K", "nips_2022_WSAWRKVjr5K", "nips_2022_WSAWRKVjr5K" ]
nips_2022_I4XNmBm2h-E
Adaptive Oracle-Efficient Online Learning
The classical algorithms for online learning and decision-making have the benefit of achieving the optimal performance guarantees, but suffer from computational complexity limitations when implemented at scale. More recent sophisticated techniques, which we refer to as $\textit{oracle-efficient}$ methods, address this problem by dispatching to an $\textit{offline optimization oracle}$ that can search through an exponentially-large (or even infinite) space of decisions and select that which performed the best on any dataset. But despite the benefits of computational feasibility, most oracle-efficient algorithms exhibit one major limitation: while performing well in worst-case settings, they do not adapt well to friendly environments. In this paper we consider two such friendly scenarios, (a) "small-loss" problems and (b) IID data. We provide a new framework for designing follow-the-perturbed-leader algorithms that are oracle-efficient and adapt well to the small-loss environment, under a particular condition which we call $\textit{approximability}$ (which is spiritually related to sufficient conditions provided in (Dudík et al., 2020)). We identify a series of real-world settings, including online auctions and transductive online classification, for which approximability holds. We also extend the algorithm to an IID data setting and establish a "best-of-both-worlds" bound in the oracle-efficient setting.
Accept
The paper received reviews from experts in online learning, who all support acceptance following some clarifications provided by the authors. From my own look into the paper, I also firmly support acceptance: the paper makes a clear, solid and elegant contribution to a long line of research in online learning, and it is also very well written. I do however strongly encourage the authors to pay close attention to the suggestions in the reviews as to how to improve their presentation for the final version.
train
[ "PfHPfdl1SiU", "AeqJ4jvZmLJ", "5Z6-325N-Y", "lAdMNUGoXa8", "fFOZDSJYYg", "hYuiX4T8OJl", "-Qh9F33C3C", "pc9G09fczg1", "gVeNN1vmbZx", "5Cv5TjoSWhx" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your clarification.\nI now have a better understanding of the contributions of this study and have corrected some of my misunderstandings.\nI am generally satisfied with the responses and am now leaning towards increasing my score, but will update it after the reviewer discussion period.\n", " Dear reviewers,\n\nWe would like to thank you again for your helpful and constructive comments. Please let us know if you have any other questions, and we are more than happy to clarify. Thank you!\n\nBest,\\\nAuthors", " Thank you very much for the positive assessment of our work as well as all of the detailed and constructive comments regarding the presentation! We appreciate your catching the mentioned typos and will fix these immediately. We address a couple of your specific questions/comments below:\n \n>“Line 266. \"Noticing the fact that N= \\Omega (ln K)\". Where are we supposed to notice this from?” \n\nNote that when the PTM Gamma is binary and 1-admissible, every two rows of Gamma differ by at least one element. This means that Gamma must, at the very least, include \\Omega (ln K) columns to encode each row. We will explain this in more detail in the revision.\n \n>“Your notion of the mixability gap is different from that of Rooij et al 2014.”\n\nThank you for the careful comparison. Just to clarify, we use the specific expression for the mixability gap in (the second paragraph, Page 1286 of Rooij et al, 2014 [1]). We will be more specific about the citation in Lemma 6 of the paper to avoid this confusion.\n \n[1]https://homepages.cwi.nl/~pdg/ftp/flipflop.pdf", " Thank you for the review and comments! We appreciate the opportunity to clarify some possible misunderstandings below.\n\n> “Algorithms 1 and 2 are minor modification of algorithms by (Dud{'\\i}k et al., 2020). Algorithm 3 appears to be a direct application of (De Rooij et al., 2014).” \n\nWe address Algorithm 3 later in this response, and address Algorithms 1 and 2 here. It is worth noting that Algorithms 1 and 2 are not minor modifications of G-FTPL (Dudik et al, 2020), and our analysis also deviates significantly from the purely worst-case analysis presented in (Dudik et al, 2020). First, G-FTPL uses a bounded-support uniform distribution on the perturbation noise. It turns out not to be possible to prove an adaptive small-loss bound on the original G-FTPL algorithm (or, indeed, even the original FTPL algorithm) with bounded-support noise; this is for related reasons to the fundamental limitations of the admissibility condition that we show in Lemma 1 of our paper. We instead use DP-style distributions (e.g. Laplace noise) for the perturbation noise, which play well with the new approximability condition that we introduce. We also imbue the algorithm with a time-varying adaptive step size, which has not been done before in the oracle-efficient online learning paradigm.\n\nOur analysis is also very different from (Dudik et al, 2020). Lemma 1 shows fundamental limitations of the admissibility condition introduced by (Dudik et al, 2020) in showing adaptive small-loss bounds. Therefore, naively modifying the analysis of G-FTPL (Dud{'\\i}k et al., 2020) does not work. We successfully address this problem by introducing a new condition for PTM, i.e., approximability. Compared with admissibility, our new condition ensures stronger stability, which makes it works well with DP-style distributions and leads to the stronger small-loss bound. 
Apart from figuring out the right condition for the small-loss bound, another challenging question is the design of approximable PTMs in real-world applications. In Section 4, we utilize several novel techniques for constructing approximable PTMs in different problems and for proving their approximability. \n\n>“The assumption of admissibility and implementability also follows that by (Dudík et al., 2020).”\n\nThis may be a misunderstanding; the admissibility assumption is not used in the present work, and we only present it for comparison. Our counterexamples in Lemma 1 show that admissibility is insufficient for proving small-loss bounds. In lieu of admissibility we introduce the new condition of approximability that is sufficient for small-loss bounds; this can also allow for PTMs that are, in fact, not admissible. As a result, our implementable PTMs need to be approximable rather than admissible, leading to significantly different PTM constructions for many real-world applications. \n\n>“Is my understanding above correct? For example, if the application of Flipflop is non-trivial and in need of special consideration, I would appreciate it if you could highlight it.” \n\nAlgorithm 3 does resemble Flipflop at a high level, i.e., by switching between the algorithms based on a direct performance comparison. However, there are notable differences in the details of the algorithm as well as the analysis, listed below:\n1) The Flipflop algorithm of (De Rooij et al., 2014) is specifically designed for combining AdaHedge and Hedge, in which the meta-algorithm relies on comparing the “mixability gap” of the two algorithms. It has two major limitations: a) the computation of the mixability gap involves all experts and is thus not oracle-efficient; b) it cannot be directly used to combine other methods, because the mixability gap only makes sense for AdaHedge and Hedge.\n2) Our meta-algorithm instead directly compares the regret upper bounds of the base algorithms, and it can combine any two algorithms with any-time regret bounds. As a result, the algorithm is oracle-efficient whenever a) the base algorithms are oracle-efficient, and b) the regret bounds of the base algorithms can be computed efficiently. We apply our meta-algorithm combining G-FTPL and FTL as base algorithms.\n3) To our knowledge, we are the first to show that the idea of Flipflop can be extended to combine any two algorithms with any-time regret bounds. We believe our analysis, while relatively simple, is of independent interest and can be applied to other problems in the realm of adaptive online learning.\n \n\n>“Is this the first study dealing with the FTPL approach with time-dependent adaptive parameters $\eta_t$?” \n\nYes, our work is the first to imbue G-FTPL with a time-varying step size (while preserving its oracle-efficiency). ", " Thank you for the constructive review and supportive comments about our work!", " Thank you very much for the review and positive comments about our theoretical results! We address your specific questions and concerns below.\n\n>“The scope of the work appears to be limited since, in many real-world problems, coming up with an efficient oracle might not be possible. The paper addresses a few applications, but for relatively simple/finite/structured settings.”\n\nEfficient optimization sub-routines exist across a broad range of real-world applications, including a) empirical risk minimization for supervised machine learning, b) data-driven market design (e.g. 
Nisan & Ronen, 2007) and c) dynamic programming (Bertsekas, 2019). These sub-routines are directly applicable to stochastic data; the paradigm of oracle-efficient online learning assumes access to these sub-routines as optimization oracles and broadens their applicability to non-stochastic data. The focus of this paper is to achieve adaptive guarantees for oracle-efficient online learning, and we present examples of applications across all of these domains. We also note that the oracle-efficient online learning paradigm could be applied to approximation algorithms (see, e.g. Niazadeh et al, 2021), further broadening its applicability.\n\n---\n\n>“The results for the best-of-both setting are not at all clear. The entirety of Sec. 5 is not elaborated well: what are the two different problem environments (adversarial and stochastic?), what is the regret benchmarked against in both cases, and how does Thm 3 claim that Alg 3 simultaneously achieves the best of both setups?”\n\n Thank you for the feedback on Section 5; we will be sure to improve the writing in this section as per your suggestions. To address your specific questions: The regret is measured with respect to the best fixed expert in hindsight in both cases. The FTL algorithm achieves much better regret for stochastic data; in particular, when the loss of each expert is iid (with different means for each expert), the expected regret of FTL becomes a constant. On the other hand, the G-FTPL algorithm achieves better regret under adversarial data. Theorem 3 shows that the regret of Algorithm 3 is the minimum of the regret bounds of G-FTPL and FTL. Thus, it automatically achieves the much better constant regret bound of FTL under iid data without knowing the presence of stochasticity in the data beforehand. We will add to the discussion under Theorem 3 to make these points clearer.\n", " \nSummary: One of the primary contributions of this work is a new framework for designing follow-the-perturbed-leader algorithms that are oracle-efficient and adapt well to the small-loss environment, under \"approximability\". The authors also extend their results to an IID data setting and establish a “best-of-both-worlds” bound in the oracle-efficient setting.\n \n- Strong theoretical results: oracle-efficient algorithms with regret guarantees\n- Applications\n\nWeakness: The scope of the work appears to be limited since, in many real-world problems, coming up with an efficient oracle might not be possible. The paper addresses a few applications, but for relatively simple/finite/structured settings.\nAlso, the best-of-both results (Sec 5) are not well defined/explained. The results for the best-of-both setting are not at all clear. The entirety of Sec. 5 is not elaborated well: what are the two different problem environments (adversarial and stochastic?), what is the regret benchmarked against in both cases, and how does Thm 3 claim that Alg 3 simultaneously achieves the best of both setups?\n See Qs above", " The paper studies the online learning problem with general (possibly non-convex) losses but a finite number K of possible actions (i.e., the experts setting). Prior works obtain regret bounds that have the optimal dependence on T. A different line of work, such as that for convex losses, has shown that one can obtain adaptive, problem-dependent regret bounds that are much better than the worst-case bounds when the losses are small or iid. 
The current paper asks whether such adaptive bounds can be obtained in the experts setting using algorithms that are oracle-efficient in the sense that the running time has sublinear dependency on the number K of possible actions, with a logarithmic dependency or better being particularly suited for exponentially-sized action spaces. The paper adapts the algorithm and analysis of Dudík et al., and obtains the first adaptive oracle-efficient algorithms. Originality: In order to obtain an oracle-efficient algorithm that adapts to the small losses, the paper adapts the approach of Dudík et al. A key bottleneck is designing a suitable perturbation matrix. The paper shows that the types of perturbation matrices considered by Dudík et al. are not suitable for the adaptive setting. To overcome this, the paper introduces an alternative type of perturbation matrices that are related to but incomparable with those considered by Dudík et al. The paper also shows that several practically relevant applications admit suitable perturbation matrices and thus the framework can be applied to all of these settings. By building on the work of Rooij et al., the paper also gives an algorithm for iid losses. Although the paper builds extensively on prior works, the new components needed to extend those works to the adaptive setting seem to be sufficiently novel.\n\nQuality: The paper seems to be theoretically sound. Both the theoretical and empirical claims seem to be well supported.\n\nClarity: The main body of the paper is sufficiently clear and well written.\n\nSignificance: The paper gives the first adaptive algorithms that are oracle-efficient in the experts setting. The resulting framework has several important applications. Both the theoretical results and the practical applications seem to be good contributions to the online learning literature. None No", " This paper considers online learning problems given offline optimization oracles and provides an algorithm that requires only a small number of oracle calls.\nThe main proposed algorithm has a cumulative-loss-dependent regret bound, which implies improved performance in small-loss environments.\nThe algorithm is extended to establish best-of-both-worlds bounds, which means that it works at least as well as the follow-the-leader algorithm. Strengths:\n\n- The paper provides the first oracle-efficient algorithm with a small-loss regret bound for the online learning problem.\n- Assumptions and results are clearly stated\n\nWeaknesses:\n\n- Novelty in algorithms and analysis techniques is somewhat limited. Algorithms 1 and 2 are minor modifications of algorithms by (Dudík et al., 2020). 
Algorithm 3 appears to be a direct application of (De Rooij et al., 2014).\n\nComments:\n\nThis study provides the first oracle-efficient algorithm with a small-loss regret bound for the online learning problem.\nThe proposed algorithm is based on that by (Dudík et al., 2020).\nThe assumption of admissibility and implementability also follows that by (Dudík et al., 2020).\nThe novelty lies in the adaptive updating of the parameter $\eta_t$ corresponding to the learning rate and in the introduction of the concept of approximability for the analysis.\nA best-of-both-worlds bound is also achieved by a direct application of the Flipflop approach by (De Rooij et al., 2014).\n\nWhile there has been steady progress as a result, the impression is that it is merely a combination of existing results.\nUnless this impression is dispelled in future discussions, I cannot strongly support the acceptance.\n\n- Is my understanding above correct? For example, if the application of Flipflop is non-trivial and in need of special consideration, I would appreciate it if you could highlight it.\n\n- Is this the first study dealing with the FTPL approach with time-dependent adaptive parameters $\eta_t$? If so, that would be strong support for the novelty of this paper.\n \nThe limitations are adequately addressed.", " The paper's main contribution is an FTPL-style algorithm that achieves a small-loss bound in the full-information setting, leading to the first oracle-efficient algorithm to achieve such a guarantee. The key insight leading to this result is the observation that ensuring that $P[x_t\neq x_{t+1}]$ is small (something that can be achieved for the outputs $x_1, x_2, \dots$ of existing FTPL algorithms) is not enough to achieve a small-loss bound. The authors identify a new condition on the perturbation process called approximability that ensures that the ratio $P[x_t \neq x^i]/P[x_{t+1}\neq x^i]$ is bounded, which they show is sufficient for achieving a small-loss bound. The authors identify many applications where this condition on the noise process is satisfied. The new approximability condition on the noise process of FTPL that the paper presents seems natural and leads to new, non-trivial results in oracle-efficient online learning. This is a strong contribution in my opinion. Weaknesses, if any, would be in the writing; in particular, the order in which some concepts are presented in the paper can be improved in my opinion. \n- First of all, it would be helpful to spell out explicitly early on that the point of using a PTM is to reduce computation as much as possible by only requiring a noise vector of dimension much smaller than the number of experts (you do explain this, but my point is just that it would be better if this point is crystal clear early on---the explanation on line 66 is not clear enough for someone who hasn't read the rest of the paper). \n- On line 200, you say that a PTM is not compatible with the Oracle, which is rather vague. It is not until you see the definition of an implementable PTM that things make sense. I would suggest mentioning something about implementability earlier in the paper. \n- On line 177, you mention \"However, motivated by adaptive FTPL in the inefficient setting\". Here by inefficient, you mean requiring a noise vector of dimension equal to the number of experts. Spell this out.\n Some questions are included in the limitations section. See suggestions in the strengths and weaknesses section. Here are some typos:\n- Line 94. 
condition -> conditions\n- Line 139. The use of \"On the other hand\" does not make sense here. \n- Line 146. You cite Van Erven et al. 2014 when you mention FTPL. I do not think they were the ones to invent it. Perhaps more appropriate to cite Kalai and Vempala 2005 or something earlier.\n- Line 204. It is easy to that for -> It is easy to see that for.\n- Line 225. time-variant -> time-varying\n- Line 244. has a a -> has a\n- In the display within Cor 1, there is an $\epsilon$ that is not scoped. \n- Line 266. \"Noticing the fact that $N = \Omega(\ln K)$\". Where are we supposed to notice this from?\n- Line 316. Your notion of mixability gap is different from that of Rooij et al 2014." ]
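A minimal sketch of the bound-comparison switching idea behind the best-of-both-worlds discussion in the responses above (Algorithm 3 / Theorem 3); the bound functions stand in for efficiently computable any-time regret bounds of FTL and G-FTPL, and all names here are illustrative assumptions rather than the paper's exact procedure:

```python
def best_of_both_action(t, bound_ftl, bound_gftpl, action_ftl, action_gftpl):
    """Play whichever base algorithm currently has the smaller any-time
    regret upper bound, so the combined regret tracks the minimum of the two."""
    if bound_ftl(t) <= bound_gftpl(t):
        return action_ftl(t)
    return action_gftpl(t)
```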
[ -1, -1, -1, -1, -1, -1, 7, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "lAdMNUGoXa8", "nips_2022_I4XNmBm2h-E", "5Cv5TjoSWhx", "gVeNN1vmbZx", "pc9G09fczg1", "-Qh9F33C3C", "nips_2022_I4XNmBm2h-E", "nips_2022_I4XNmBm2h-E", "nips_2022_I4XNmBm2h-E", "nips_2022_I4XNmBm2h-E" ]
nips_2022_Zvh6lF5b26N
Neural Collapse with Normalized Features: A Geometric Analysis over the Riemannian Manifold
When training overparameterized deep networks for classification tasks, it has been widely observed that the learned features exhibit a so-called "neural collapse" phenomenon. More specifically, for the output features of the penultimate layer, for each class the within-class features converge to their means, and the means of different classes exhibit a certain tight frame structure, which is also aligned with the last layer's classifier. As feature normalization in the last layer becomes a common practice in modern representation learning, in this work we theoretically justify the neural collapse phenomenon under normalized features. Based on an unconstrained feature model, we simplify the empirical loss function in a multi-class classification task into a nonconvex optimization problem over the Riemannian manifold by constraining all features and classifiers over the sphere. In this context, we analyze the nonconvex landscape of the Riemannian optimization problem over the product of spheres, showing a benign global landscape in the sense that the only global minimizers are the neural collapse solutions while all other critical points are strict saddle points with negative curvature. Experimental results on practical deep networks corroborate our theory and demonstrate that better representations can be learned faster via feature normalization. Code for our experiments can be found at https://github.com/cjyaras/normalized-neural-collapse.
Accept
The paper studies a matrix decomposition problem and shows that the problem is of strict-saddle type. All the reviewers tend to accept the paper. I recommend acceptance.
train
[ "ZJGLE6_kNAD", "zwi1hJV59DS", "j62jr1Jvh79", "J4oh3rXl6LC", "DEzdu32vqYc", "5-zdAW4xFcV", "zZb2rQ_Kueb" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their helpful comments. Please find our response below. \n\n${\\bf Q1.}$ The main result of the paper is neither introduced nor motivated and no intuitions ... This part should be \"softened\". (see Weakness 1 for details)\n\n${\\bf A1.}$ We thank the reviewer for valuable feedback and suggestions. In the revision, we have provided more motivations and intuitions about our main results by connecting feature normalization to discriminative representations (see the sentences in blue on Page 2 and the discussions in Section A of the appendix). We have also added more details about the calculus on the oblique manifold in Section 2.2. More specifically, we have added Figure 2 to illustrate the tangent space and Riemannian gradient of a reduced oblique manifold, with some sentences in blue for providing more intuitions behind the derivation. Additionally, we provided more technical details for deriving Eq. (6) and (7) in Appendix B.2. \n\n\n${\\bf Q2.}$ The related work section in its current shape is somewhat short and without specific comments. It should be ... As recent neural collapse literature used the term simplex ETF, what is the difference between the simplex regular polytopes and the simplex ETF? Are they equivalent? (see Weakness 2 for details)\n\n${\\bf A2.}$ We have modified our manuscript according to your valuable comments. We have discussed our cited work [7,22,23,24] ([7,30,31,32] in the revised manuscript) in Section A of the appendix as suggested. Moreover, we also cited and discussed [A,B,C,D,E,F] ([15,16,17,18,21,22] in the revised manuscript) in Section A of the appendix. \n\nThe simplex regular polytopes and the simplex ETF are not equivalent. Indeed, we suppose that your mentioned simplex regular polytopes refers to the regular simplex in [E,F], which is a simplex and a regular polytope. In our Theorem 1, the simplex ETF refers to a collection of vectors $\\{h_k\\}_{k=1}^K \\subseteq \\mathbb{R}^d$ satisfying $\\|h_k\\|=1$ and $h_k^Th_\\ell=-\\frac{1}{K-1}$ for all $k\\neq \\ell$. Then, we cannot say the simplex ETF is a regular simplex. However, we can say the $\\textbf{convex hull}$ formed by the simplex ETFs and the regular simplex are equivalent if $K=d+1$.\n\n\n${\\bf Q3.}$ The introduction should be better focused on the related work. (see Weakness 3 for details)\n\n${\\bf A3.}$ We have added the discussion of the cited work in Line 37 in Appendix A in the revised manuscript. \n\n\n${\\bf Q4.}$ What data is used in Fig. 1 and Tab. 1? It seems that it is not real data. How is the data generated? (see Weakness 4 for details)\n\n${\\bf A4.}$ In Figure 1 and Table 1, we do not use any real data. Instead, we use the unconstrained feature model (UFM), meaning the resulting features are independent of any input data, we simply pair each $h_{k,i}$ with the corresponding one-hot label $y_k$. For the normalized version, we optimize Problem (3) with $K=100$ and $n=5$ to find the features and classifiers, whereas for the non-normalized version, we optimize Problem (3) without the manifold constraint, i.e., the features and classifiers are not normalized. To make it clear that the features are not derived from data, we have changed the word \"learned\" to \"found\" in the caption of Figure 1.\n\n\n${\\bf Q5.}$ The most important part related to the proof should be ... is to increase the level of self-containment of the paper. 
(see Weakness 5 for details)\n\n${\bf A5.}$ As suggested, we have improved our proof part in Section 2.2 (see the sentences in blue and Appendix B.2) and provided more motivations and intuitions about the oblique manifold (see Figure 2 and the sentences in blue).", " We thank the reviewer for their helpful comments. Please find our response below.\n\n${\bf Q1.}$ The proof of Theorem 1 on neural collapse properties is not different from the existing techniques cited by this paper.\n\n${\bf A1.}$ We agree with the reviewer that our major contribution lies in Theorem 2 instead of Theorem 1. Nonetheless, we want to mention that our proof of Theorem 1 is different from that in [4,5]. Indeed, we first reduce Problem (3) with $nK$ variables $h_{k,i}$ for all $k \in [K], i \in [n]$ into Problem (12) with $K$ variables $h_k$ for $k\in [K]$ in Lemma 1 of Appendix B. Then, we just need to analyze the optimal solution of Problem (12). This technique greatly simplifies our analysis and has not been used in [4,5]. We also refer the reviewer to our response A1 to Reviewer V69c for extra comments.\n\n[4] Zhihui Zhu, Tianyu Ding, Jinxin Zhou, Xiao Li, Chong You, Jeremias Sulam, and Qing Qu. A geometric analysis of neural collapse with unconstrained features. Advances in Neural Information Processing Systems, 34, 2021.\n\n[5] Jinxin Zhou, Xiao Li, Tianyu Ding, Chong You, Qing Qu, and Zhihui Zhu. On the optimization landscape of neural collapse under mse loss: Global optimality with unconstrained features. arXiv preprint arXiv:2203.01238, 2022.\n\n${\bf Q2.}$ Issue of Figure 2\n\n${\bf A2.}$ First, it is worth mentioning that we have improved our bound from $d>N$ to $d>K$ in Theorem 2 in the revised manuscript by a more delicate analysis. Thus, for $d=100$ and $n=5$ in Figure 2 (Figure 3 in the revised version), we always have $K < d$ for $K \in [5,50]$ so that the condition for Theorem 2 holds. Additionally, we have $K<d<N$ (note that $N=nK$ in our paper) for $K > 20$, and we have plotted results for $K$ up to $K=50$. \n\n", " We thank the reviewer for their insightful comments. Please find our response below.\n\n${\bf Q1.}$ Limited contribution: (1) minor extension of Theorem 1 in [1]; (2) sphere constraint vs. ball constraint in [2,3]\n\n${\bf A1.}$ First of all, we want to emphasize that our main theoretical contribution lies in Theorem 2, showing that the manifold formulation (3) has a benign global landscape, with proof in Appendix D. We made this clear at the bottom of page 6 during the revision. Additionally, we make the following clarifications for Theorem 1. \n* Compared to Theorem 1 in [1], our Theorem 1 is closer to the practical setting and more general due to the fact that [1] assumes that each class only has one sample $(n=1)$. In view of this, Theorem 1 in [1] cannot demonstrate (NC1) variability collapse. \n* Although the feasible region of the sphere constraint is smaller than that of the ball constraint, the sphere constraint is more widely used in practice because feature normalization is a common technique in deep learning. Thus, it is more reasonable and meaningful to study the sphere constraint. In addition, our Theorem 1 and those in [2,3] can imply each other. It is easy to see the theorems in [2,3] imply our Theorem 1. 
On the other hand, ours implies theirs due to the inequality \n$(1+c_1)(K-1)(\bar{f}(W,Q)-c_2) \ge -\frac{\tau}{2}(c_3\sum_{k=1}^K ||w_k||^2_2 + \frac{1}{c_3}\sum_{k=1}^K ||q_k||_2^2)$ in the proof of Proposition 1 in Appendix C.\n\n${\bf Q2.}$ Explain the advantage of normalization over regularization theoretically and experimentally.\n\n${\bf A2.}$ We agree with the reviewer that both feature normalization and regularization have a benign landscape and the same global NC solution under the UFM. Like the results in [4,5], as we only characterize the critical points, our current results are limited in that they do not directly lead to polynomial convergence for iterative algorithms like GD or Riemannian GD. Indeed, even with a benign landscape, in the worst case GD may take exponential time to converge [A]. However, we conjecture the actual nonconvex landscape is much more benign than the worst case cooked up in [A], and that the local landscape around global NC solutions satisfies a certain regularity condition. Based upon our experiments in Figures 5 & 6, we conjecture that feature normalization leads to a better local regularity condition than the regularization methods. We leave the thorough analysis as future work. We think one approach is to quantitatively characterize and compare the regularity condition between the two approaches, and we can also provide more evidence through visualization like what has been done in [5, Figure 2].\n\n${\bf Q3.}$ Compare validation/test accuracy in Figures 4 and 5\n\n${\bf A3.}$ We thank the reviewer for the valuable question. In Appendix E.2, we have conducted new experiments and reported the generalization performance of feature normalization vs. regularization in Table 2, showing that feature normalization gives better test accuracy and feature collapse for ResNet models on CIFAR100. \n\n[A] Du, Simon S., et al. \"Gradient descent can take exponential time to escape saddle points.\" Advances in neural information processing systems 30 (2017).", " We express our gratitude to all reviewers for their insightful comments. Before addressing their comments, we would like to highlight some improvements that we made during the revision, where all major changes to the manuscript are highlighted in blue in both the main body and the supplementary materials.\n\n(1) We improved the dimension bound in Theorem 2 from $d>N=nK$, which is quite loose, to $d>K$. The improved bound is based upon a tighter analysis in Appendix D.3. \n\n(2) We improved the presentation of the paper by providing more intuition on Riemannian calculus on page 5, and a more comprehensive discussion of related work in Appendix A.\n\n(3) We provided additional experimental results regarding the better generalization properties of feature normalization in Appendix E.2, and we also conduct exploratory experiments for empirically demonstrating benign global landscapes of other commonly used losses in Appendix E.4.", " Feature normalization has been a common practice. In order to justify neural collapse for normalized features, this paper formulates an unconstrained feature model (UFM) with spherical constraints. Theoretical results on the neural collapse global optimality and the benign landscape are established. Experiments are conducted to show that the UFM with feature normalization has better training and collapse performance than feature regularization. Strengths:\n\n-\tThe motivation to interpret feature normalization from the neural collapse perspective is interesting. 
\n\n-\tThe paper is well organized and presented. \n\n-\tTheoretical work is sound. \n\nWeaknesses:\n\n-\tLimited contribution. The first theoretical result, the neural collapse global optimality of Eq. (3), is a minor extension of Theorem 1 in [1], which also has a spherical constraint. Besides, as stated by the author, the problem with the spherical constraint in Eq. (3) has the same global solution as the one with the ball constraint in [2,3]. But the feasible region of the ball constraint in [2,3] is much larger. So, the result in this paper is less significant. \n\n-\tIn order to verify the theoretical results (Theorems 1 and 2) in this paper, the authors show that the UFM with feature normalization has faster training and better collapse than feature regularization. But, the same theoretical results, including the same global optimality of neural collapse conditions and the same benign landscape properties, are also applicable to the UFM with regularization [4], and the UFM without either constraint or regularization [5]. In this case, how can one explain the advantage of normalization over regularization in the experiments?\n\n[1] Lu et al., Neural collapse with cross-entropy loss,\n\n[2] Graf et al., Dissecting supervised contrastive learning, ICML 2021,\n\n[3] Fang et al., Exploring deep neural networks via layer-peeled model: Minority collapse in imbalanced training, PNAS,\n\n[4] Zhu et al., A geometric analysis of neural collapse with unconstrained features, NeurIPS 2021,\n\n[5] Ji et al., An unconstrained layer-peeled perspective on neural collapse, \n For practical applications, one is more interested in generalization ability. Regularization has been known as an effective scheme to improve generalization. It is not clear about this for feature normalization. So, I think the authors should compare validation/test accuracy in Figures 4 and 5. There are discussions about the limitations of this work. \n\nNo concern about negative social impact. \n", " The paper considers the landscape of the optimization problem when training neural networks with normalized features under the cross-entropy loss. Under the assumption of the unconstrained feature model, the authors reformulated the problem as a Riemannian optimization problem, and showed that (1) global solutions of the problem satisfy the neural collapse properties and (2) the optimization problem has a benign global landscape (i.e. local minimizers are global minimizers, and all other critical points are strict saddle points). __Quality and Clarity__: The paper is written clearly, and I enjoyed reading the motivations that build up nicely to the main theorems. The experiments also support most of the author's claims well.\n\n__Originality and Significance__: I personally do not work in this field, so I will only provide some brief observations on the significance of the results, while leaving more discussions to other expert reviewers: The proof of Theorem 1 on neural collapse properties is not too different from existing techniques cited by this paper. Theorem 2, on the other hand, is nontrivial, and understanding the global landscape is indeed useful for developing guarantees for algorithms like Riemannian SGD. This is probably a minor question regarding the presentation of Figure 2.\n\nAt the end of Section 3, in the \"Limitations on the assumptions of the feature dimension d\" part, the authors stated that \"We believe the bound can be improved (from d>N) to d>K+1, .... corroborated by our experimental results (See Figure 2)\". 
\n - I am expecting Figure 2 to show an experiment with K+1<d<N, which would empirically support the author's claims that we do not necessarily need d>N.\n - But instead, it looks like the authors chose d=100, n=5 for that Figure, which still guarantees d>n, so it does not really corroborate the claim above. Am I missing something, or should the graph be modified? N/A", " The paper provides theoretical justification of the neural collapse phenomenon for normalized features. The proof is based on the unconstrained feature model (UFM) and the resulting optimization problem is analyzed over the Riemannian manifold by constraining features and classifier prototypes over the sphere. The paper basically shows that critical points for the optimization (i.e. potential minima) are saddles (i.e. negative curvature critical points) while all others are global minimizers for which the neural collapse properties hold. The paper concludes with an empirical evaluation confirming the proposed theory.\n STRENGTHS:\n\n1) Interesting formulation of neural collapse in the case in which features are normalized. Feature normalization is a quite classic strategy/practice that has been used in several deep learning contexts (i.e., face recognition) in which maximal separation between learned features is required, and therefore it naturally matches the concept of maximal separation which is inherent to the neural collapse phenomenon. The provided theoretical analysis is very useful. The fact that a standard practice has a convergent geometric discriminative structure is a clear strength of this paper. \n\n2) The oblique manifold formulation is interesting and it appears to be well suited to the problem.\n\nWEAKNESSES:\n\n1) The paper is well written, but many parts seem to require further work. The main result (the proof according to the oblique manifold formulation) of the paper is neither introduced nor motivated and no intuitions are given. Although very interesting, it is hard for the reviewer and, possibly for the reader, to fully grasp the real contribution of the paper. The paper refers to a book (specifically to some exercises) from which the reader should possibly get some intuition and motivations. This part should be “softened”.\n\n2) The related work section in its current shape is somewhat short and without specific comments. It should be improved by following a thread with respect to what is proposed. The reviewer agrees that feature normalization can improve separability; however, the four cited works (i.e. [22,23,24,7]) are not discussed in this regard. In which way do those four works achieve separability? What is the quality of the representation? Does it depend on the task addressed? Is there any trade-off in the achievement of separability? Important related works are also missing. Many papers, especially from the feature learning and visual search literature, are completely missing [A,B,C,D,E,F]. In particular it seems that [A] is one of the first works which studied classifier prototypes and their relationship with the learned features. The works [B,C] and [D] first introduced feature normalization and their feature separability (i.e., discriminative features) behavior. Finally, [E] and [F] seem to be the first papers to apply maximal separability in a simplex-shaped structure (and other regular polytopes). As recent neural collapse literature used the term simplex ETF, what is the difference between the simplex regular polytopes and a simplex ETF? 
Are they equivalent?\n\n3) The introduction should be better focused on the related work. On line 37 a large number of works are cited in a single shot as related to NC. However, no comments are explicitly given. Given the diversity of the cited papers, a discussion should be provided.\n\n4) What data is used in Fig.1 and Tab.1? It seems that it is not real data. How is the data generated?\n\n5) The most important part related to the proof should be improved. Some motivations and intuitions about the oblique manifold should be given. The cited book is addressing a dictionary learning problem. Although the affinity is clear, the paper in its current shape does not provide an intuitive access to the oblique manifold formulation. A potential reader of this paper would be forced to read too many references. The reviewer's suggestion is to increase the level of self-containment of the paper.\n\nReferences\n\n[A] Liu, W., Wen, Y., Yu, Z., & Yang, M. (2016). Large-margin softmax loss for convolutional neural networks. arXiv preprint arXiv:1612.02295.\n\n[B] Ranjan, R., Castillo, C. D., & Chellappa, R. (2017). L2-constrained softmax loss for discriminative face verification. arXiv preprint arXiv:1703.09507.\n\n[C] Wang, F., Xiang, X., Cheng, J., & Yuille, A. L. (2017, October). Normface: L2 hypersphere embedding for face verification. In Proceedings of the 25th ACM international conference on Multimedia (pp. 1041-1049).\n \n[D] Liu, W., Wen, Y., Yu, Z., Li, M., Raj, B., & Song, L. (2017). Sphereface: Deep hypersphere embedding for face recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 212-220).\n\n[E] Pernici, F., Bruni, M., Baecchi, C., & Del Bimbo, A. (2019, January). Maximally Compact and Separated Features with Regular Polytope Networks. In CVPR Workshops (pp. 46-53).\n\n[F] Pernici, F., Bruni, M., Baecchi, C., & Del Bimbo, A. (2021). Regular polytope networks. IEEE Transactions on Neural Networks and Learning Systems.\n This form is intentionally left blank as questions are included in the previous form.\n No negative societal impacts. Limitations are included in the Strengths and Weaknesses form." ]
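A small numerical check of the simplex ETF structure discussed in the responses above, using only the stated definition (unit norms and pairwise inner products $-1/(K-1)$); the explicit construction below is one standard realization, included purely for illustration:

```python
import numpy as np

K = 5
# Rows of M realize a simplex ETF of K vectors (in a (K-1)-dim subspace of R^K).
M = np.sqrt(K / (K - 1)) * (np.eye(K) - np.ones((K, K)) / K)

G = M @ M.T  # Gram matrix of the K vectors
print(np.allclose(np.diag(G), 1.0))                            # unit norms: True
print(np.allclose(G[~np.eye(K, dtype=bool)], -1.0 / (K - 1)))  # off-diagonals: True
```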
[ -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, 3, 2, 4 ]
[ "zZb2rQ_Kueb", "5-zdAW4xFcV", "DEzdu32vqYc", "nips_2022_Zvh6lF5b26N", "nips_2022_Zvh6lF5b26N", "nips_2022_Zvh6lF5b26N", "nips_2022_Zvh6lF5b26N" ]
nips_2022_H_xAgRM7I5N
Zero-Shot 3D Drug Design by Sketching and Generating
Drug design is a crucial step in the drug discovery cycle. Recently, various deep learning-based methods design drugs by generating novel molecules from scratch, avoiding traversing large-scale drug libraries. However, they depend on scarce experimental data or time-consuming docking simulation, leading to overfitting issues with limited training data and slow generation speed. In this study, we propose the zero-shot drug design method DESERT (Drug dEsign by SkEtching and geneRaTing). Specifically, DESERT splits the design process into two stages: sketching and generating, and bridges them with the molecular shape. The two-stage fashion enables our method to utilize the large-scale molecular database to reduce the need for experimental data and docking simulation. Experiments show that DESERT achieves a new state-of-the-art at a fast speed.
Accept
The paper makes a novel contribution to methods for generating novel molecules from scratch. The core idea is to generate a shape that fits the molecular pocket without looking at the protein structure. Two out of three reviewers recommended acceptance. Reviewers emphasize that the method is innovative and interesting, and that the empirical performance is appealing (especially given that only the shape information is provided to the model). Strong performance is enabled by good design choices made across the paper, such as including the pretraining stage. The reviewer that recommended rejection raised issues related to the novelty and clarity of the paper. However, I believe the paper is sufficiently clear and novel to meet the bar for acceptance. Overall, it is my pleasure to recommend acceptance of the paper.
train
[ "G7pOqldthb3", "JhutK5xovvR", "LRscidv3AkN", "kycIxhJ47qE", "dV7ceMmRin9", "7hWwH2Kfzlf", "HY6QzzliAjz", "lAFBtsRJMkG", "XEk09eqgZEKi", "CMpzxqHMz5", "OVQHKzYu6iR", "myQ3TBGZyxg", "DY-yzObqlD", "bX0JP-inNtZ", "ZjoInd_rUL", "4tiaB0foaye", "Q4wZyHWAkZ", "naSFP3JZBsF", "3wbdXCKR0h", "VVC9g0n2Ylt", "10s0g2G9yiA", "MM1zPWNNZs" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your comments. Did we fix your concern of this paper properly? If not, we are happy to take further questions!", " Thanks for the response. I understand such style of model and training approach is widely used in domains like machine translation, but I'm still a bit surprised that the spatial constraint could be implicitly handled in the neural network by simply predicting the discretized rotation quaternion and translation vector without mapping them back in the 3D space during the sampling phase. I think it would be an interesting future direction to include some important inductive bias like roto-translational equivariance in the modeling. \n\nIn summary, my concerns are mostly addressed and I remain in favor of accepting this paper.", " [1] David Weininger, SMILES, A Chemical Language and Information System. 1. Introduction to Methodology and Encoding Rules, Journal of Chemical Information and Computer Sciences 1988\n\n[2] Marco Podda et al., A Deep Generative Model for Fragment-based Molecule Generation, AISTATS 2020\n\n[3] Matt J Kusner et al., Grammar Variational Autoencode, ICML 2017\n\n[4] Hanjun Dai et al., Syntax-directed Variational Autoencoder for Structured Data, ICLR 2018\n\n[5] Seokho Kang et al., Conditional Molecular Design with Deep Generative Model, Journal of Chemical Information and Modeling 2018", " Thanks a lot for your further comments on our response. Hope our following answers can fix your concerns. If not, any further questions are welcome!\n\n**Q: Does the decoded output at each step correspond to a specific 3D patch (the input of the encoder)?**\n\n**A:**\n\n- No, although the decoded output at each step is also a 3D object (3D molecular fragment), which could not explicitly correspond to a specific 3D patch of the encoder input. \n- Note that our proposed DESERT works totally in an end-to-end way, which does not include any obligatory and explicit correspondence between the encoding input (3D patch) and the decoding output (3D molecular fragment). \n- We think such end2end learning is very intuitive and widely exists, which learns the implicit correspondence via neural networks and large scaled data. For example, in the standard end2end (sequence-to-sequence) learning task machine translation (e.g., English to German), the output German word at each step does not have explicit correspondence (called alignment in translation) to the input English words, but it gives good empirical translation results. The correspondence could be predicted according to the intermediate attention parameters. \n- The proposed DESERT gives good correspondence results empirically. As we reported in section 3.3, the Shape Tanimoto between input shapes and generated molecules is 0.875 (maximum value is 1.0), which indicates that DESERT can well learn the correspondence between the generated molecules and input shapes.\n\n\n**Q: If the decoding is unordered, how does it align with your tree linearization algorithm (is there any guarantee that the decoded fragment sequence is a valid tree)?**\n\n**A:**\n\n- Do you mean our decoder works in a non-autoregressive way? No, The decoding is ordered. Similar to [1] [2] [3] [4] [5], we generate the fragment sequence in left-to-right order. \n- Although there is no theoretical guarantee for generating a valid tree, we find the proposed DESERT model rarely generates invalid outputs empirically.\nFor example, 95.0% of generated sequences can be converted to valid molecules in our experiments on SBDD's test data. 
(95% is the percentage of generated molecules that can pass the validity check of RDKit) \n- Practically, we just drop the invalid outputs for convenience. Most of the invalid cases are caused by the valence error, i.e., the number of chemical bonds attached to an atom is larger than the atom can have. The error can be moderated by imposing constraints on the number of new branches at a splitting node.\n\n**Q: Is it the same in supplementary experiments compared to 3D SBDD? LiGAN and 3D SBDD only randomly sampled 100 molecules, so such a comparison would be unfair. Also, since Vina score is one of the evaluation metrics, I don't think it should be used as the ranking criterion. (It may be fine when compared to GEKO, since it uses Vina score as the training signal.)**\n\n**A:**\n\nThanks for your kind notes. Very good question, which has also been mentioned by Reviewer Qdyd. We conduct experiments on GEKO's benchmark and follow the same postprocessing (using Vina for reranking) as GEKO for comparison. We totally agree that it is not appropriate to compare the proposed DESERT and 3D SBDD in such a setting in the supplementary experiments. \n\nTo fix the concern, we did a quick run on SBDD's benchmark and find that **DESERT outperforms 3D SBDD without the reranking process**.\n\nWe conduct experiments under two settings to make comparisons between 3D SBDD and DESERT:\n\n1. We **remove the post-processing step of DESERT**, and compare it with SBDD.\n2. We **add the same post-processing step to SBDD** by drawing the same number of molecules (200k) as DESERT. Similar to DESERT, we use the released code of SBDD and set `num_samples=200000`, then use Vina to select the top-100 molecules for comparison.\n\nResults show that:\n\n| Metric| 3D SBDD (w/o post-processing)| 3D SBDD (w post-processing) | DESERT-POCKET (w/o post-processing) | DESERT-POCKET (w post-processing) |\n| ------- | ----------|---------- |--------- |--------- |\n| Vina (kcal/mol) | -6.069 | -7.584 | -6.148 | -9.410 |\n| QED | 0.522 | 0.501 | 0.614 | 0.549 |\n| SA | 0.672 | 0.623 | 0.612 | 0.616 |\n| Diversity | 0.873 | 0.826 | 0.926 | 0.908 |\n\n\n**DESERT outperforms 3D SBDD in both the with- and without-post-processing settings on 3 of 4 metrics: Vina, QED and Diversity.** Note that DESERT works in a zero-shot way instead of using protein-ligand labeled data for training (the case of SBDD).\n\nDESERT gives a lower SA score than 3D SBDD. As explained in the previous response to all reviewers, we assume that it is because the generated molecules of DESERT tend to be structurally complicated, which leads to a slightly worse synthesis score.\n\nThanks again for pointing out the concern in the experimental comparison. We will fix it throughout the whole paper to make it clear.", " Thank you for the detailed responses and revisions. I still have some questions about the model and your supplementary experiments compared to 3D SBDD.\n* Does the decoded output at each step correspond to a specific 3D patch (the input of the encoder)? If the decoding is unordered, how does it align with your tree linearization algorithm (is there any guarantee that the decoded fragment sequence is a valid tree)?\n* If I understand it correctly, during the sampling phase, you sketch 200 shapes and generate 1000 molecules for each shape (which means 200k sampled molecules), then rank them using Vina local energy minimization and select the top 100 molecules. Is it the same in supplementary experiments compared to 3D SBDD? 
LiGAN and 3D SBDD only randomly sampled 100 molecules, so such a comparison would be unfair. Also, since Vina score is one of the evaluation metrics, I don't think it should be used as the ranking criterion. (It may be fine when compared to GEKO, since it uses Vina score as the training signal.)", " Dear Reviewer, we appreciate your valuable advice, which helps enhance our manuscript. \n\nWe are happy to discuss if you have any further suggestions or concerns.\n\nThanks for your time!", " Dear Reviewer, we appreciate your valuable advice, which helps improve our manuscript a lot. \n\nDid our response and the updated manuscript address your questions? We are happy to discuss any further concerns. \n\nThanks for your time!", " Thanks for the detailed discussion. It may be worth trying to adopt AlphaFold's way to handle quaternions as well in future work, which could avoid discretization. Overall, I am happy with your rebuttal. Thanks for the effort to improve the manuscript a lot. I am revising my score accordingly.", " **Q: Further discussion about the difference with AlphaFold.**\n\n**A:**\n\nThis is a very good question! We would like to first point out our main claim: **the main difference between our work and AlphaFold is whether to discretize the rotation quaternion**. Our key idea is to **avoid the discontinuity/ambiguity of quaternions when optimizing them** [5] [6]. \n\nWe would like to give a brief discussion here:\n\n - **Overview**: There are several approaches to parameterize the rotation operator: the quaternion [2], Euler angles [3], and the SO(3) group (i.e., the rotation matrix) [4]. Quaternions and Euler angles are sometimes ambiguous and discontinuous [5] [6] (see the examples below). \n - **Example of the quaternion's ambiguity**: the rotation operator is periodic: rotating $180 \degree$ is equal to rotating $-180 \degree$, and rotating $179.9 \degree$ is very close to rotating $-179.9 \degree$. Now consider a case in which some object rotates $179.9 \degree$ and a neural network outputs $-179.9 \degree$; should it be penalized or not? Without discretization, the mean-square error can be $[179.9 - (-179.9)]^2=359.8^2$. However, with discretization, we convert a regression problem to a classification one, where $179.9 \degree$ and $-179.9\degree$ can fall into the same bin, i.e., $180\degree$.\n - **How AlphaFold avoids such ambiguity**: In AlphaFold, the quaternion is an intermediate variable. AlphaFold **does not optimize the quaternion directly** (AlphaFold's Appendix 1.9.3), thus it avoids such an issue. Directly optimizing the quaternion in a regression way may hurt the performance [5] [6], similar to our observation (Figure 3, Appendix). \n - **Future work**: In the area of structural biology, some researchers prefer to optimize two rows of a rotation matrix, instead of the quaternion [7]. We will leave this for future work. \n\n[1] Jumper, John, et al. \"Highly accurate protein structure prediction with AlphaFold.\" Nature 596.7873 (2021): 583-589.\n\n[2] *https://en.wikipedia.org/wiki/Quaternion*\n\n[3] *https://en.wikipedia.org/wiki/Euler_angles*\n\n[4] *https://en.wikipedia.org/wiki/SO3*\n\n[5] Zhou Y, Barnes C, Lu J, et al. On the continuity of rotation representations in neural networks[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019: 5745-5753.\n\n[6] Falorsi L, De Haan P, Davidson T R, et al. Explorations in homeomorphic variational auto-encoding[J]. arXiv preprint arXiv:1807.04689, 2018.\n\n[7] Zhong E D, Bepler T, Berger B, et al. 
CryoDRGN: reconstruction of heterogeneous cryo-EM structures using neural networks[J]. Nature methods, 2021, 18(2): 176-185.", " Thanks a lot for your attention and the quick reply. We respond to the further two questions as follows:\n\n**Q: Does postprocessing happen before or after evaluation? It seems the postprocessing you explained may affect the evaluation result.**\n\n**A:** \n\n1. Yes, the postprocessing happens before evaluation, which does affect the evaluation result. We include the postprocessing following GEKO (the previous SOTA). We conduct experiments on GEKO's benchmark and employ the same postprocessing as GEKO for comparison.\n2. We did a quick run on SBDD's benchmark without post-processing (mentioned in your previous question) and find that without postprocessing (i.e., not removing duplicate molecules and instead randomly selecting 100 molecules from DESERT's outputs for evaluation), the proposed DESERT still outperforms SBDD on 3 of 4 metrics. Note that DESERT works in a zero-shot way instead of using protein-ligand labeled data for training (the case of SBDD). Following are the detailed comparisons:\n\n- DESERT (w/o post-processing) achieves Vina scores comparable to (slightly better than) 3D SBDD, even though SBDD employs pocket-ligand labeled data for training.\n- DESERT outperforms 3D SBDD on QED/Diversity.\n- DESERT gives a lower SA score than 3D SBDD. As explained in the previous response to all reviewers, we assume that it is because the generated molecules tend to be more structurally complicated.\n\nIn a word:\n - In 3D SBDD's setting, DESERT generates slightly better results, **without any supervised data**.\n - In GEKO's setting, DESERT generates SOTA results, **without any guidance during generation, but 20 times faster**.\n\n\n\n| Metric | 3D SBDD | DESERT-POCKET (w/o post-processing) |\n| --------------- | -------------------- | -------------------------- |\n| Vina (kcal/mol) | -6.069 | -6.148 |\n| QED | 0.522 | 0.614 |\n| SA | 0.672 | 0.612 |\n| Diversity | 0.873 | 0.926 |\n\n\n**Q: The discretization of rotation quaternion and translation vector does not seem very intuitive to me. Could you elaborate more?**\n\n**A:**\n\nYes, we would like to elaborate on the discretization more clearly with some intuitive examples.\n\n- In terms of the **translation** vector, we show a simplified example in 1-dimensional space. Suppose the translation vector ranges from 0 to 10; we divide it into 5 bins: $[0, 2), [2, 4), [4, 6), [6, 8)$ and $[8, 10]$. Given a translation vector 4.5, \"discretization\" means we put it into the 3rd bin -- $[4, 6)$.\n- The **rotation** quaternion can be expressed as a rotation of an angle $\theta^\circ$ around an axis $(x, y, z)$. Therefore, we discretize the quaternion in two steps: a) Enumerating rotation axes. For example, we can enumerate 8 rotation axes from the origin, i.e., $(0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0)$, etc.; b) Enumerating rotation angles for each axis. For example, we can enumerate the angle every $15^\circ$. Combining the two steps, we can divide the range of quaternions into bins, like $(0, 0, 1, 0^\circ), (0, 0, 1, 15^\circ), \cdots, (0, 1, 1, 0^\circ), (0, 1, 1, 15^\circ)$, and so on. 
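A minimal sketch of this two-step binning with illustrative granularity; the specific bin sizes and axis list below are assumptions for exposition, not the configuration used in our model:

```python
import numpy as np

def discretize_translation(x, lo=0.0, hi=10.0, n_bins=5):
    """Map a 1-D translation coordinate to a 0-based bin index."""
    return int(np.clip((x - lo) / (hi - lo) * n_bins, 0, n_bins - 1))

def discretize_rotation(axis, angle_deg, axes, angle_step=15.0):
    """Snap an axis-angle rotation to the nearest enumerated axis and angle."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    axes_unit = axes / np.linalg.norm(axes, axis=1, keepdims=True)
    best_axis = int(np.argmax(axes_unit @ axis))  # nearest axis by cosine similarity
    snapped_angle = round(angle_deg / angle_step) * angle_step
    return best_axis, snapped_angle

axes = np.array([[0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0]], dtype=float)
print(discretize_translation(4.5))                       # -> 2, i.e., the 3rd bin [4, 6)
print(discretize_rotation([0.1, 0.2, 0.9], 16.0, axes))  # -> (0, 15.0): axis (0, 0, 1), angle 15 degrees
```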
Given a quaternion $(0.1, 0.2, 0.9, 16^\\circ)$, \"discretization\" means we map it to the 2nd bin -- $(0, 0, 1, 15^\\circ)$, i.e., the nearest enumerated axis-angle pair. (A small code sketch of this nearest-bin mapping is given below.)\n
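To make the nearest-bin mapping above concrete, here is a minimal NumPy sketch (our illustration, not the authors' implementation): bins are enumerated as axis-angle pairs converted to unit quaternions, and a continuous rotation or translation is assigned to its nearest bin by L2 distance, as in the $\arg\min_i \|q^{\mathrm{bin}}_i - q\|_2$ rule quoted later in this thread. The toy bin lists, the q/-q sign check, and all names are illustrative assumptions.

```python
import numpy as np

def axis_angle_to_quat(axis, deg):
    """Unit quaternion (w, x, y, z) for a rotation of `deg` degrees about `axis`."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    half = np.deg2rad(deg) / 2.0
    return np.concatenate([[np.cos(half)], np.sin(half) * axis])

# Toy bin enumeration: a few axes x angles in 15-degree steps
# (the paper uses 363 axes x 24 angles = 8,712 rotation bins).
axes = [(0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0)]
angles = range(0, 360, 15)
rot_bins = np.stack([axis_angle_to_quat(a, t) for a in axes for t in angles])

def discretize_rotation(q):
    """Index of the nearest rotation bin. q and -q encode the same rotation,
    so we compare against both signs (our addition, for robustness)."""
    q = np.asarray(q, dtype=float)
    q = q / np.linalg.norm(q)
    dist = np.minimum(np.linalg.norm(rot_bins - q, axis=1),
                      np.linalg.norm(rot_bins + q, axis=1))
    return int(np.argmin(dist))

def discretize_translation(t, centers):
    """Index of the nearest translation-bin center (L2 nearest neighbour)."""
    diffs = np.asarray(centers, dtype=float) - np.asarray(t, dtype=float)
    return int(np.argmin(np.linalg.norm(diffs, axis=1)))
```

With such a mapping, the regression targets become class indices, which is exactly the regression-to-classification conversion motivated in the AlphaFold discussion above.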
The results of liGAN and 3D SBDD are taken from [1].\n \n- In Appendix section 3.1, we discuss the connection between our work and previous shape-based methods.\n\n- In Appendix section 3.2, we discuss the connection between our work and previous fragment-based methods.\n\n- In Appendix section 1.3, we add a more detailed description of our model's hyperparameters.\n\n[1] Luo et al., A 3D Generative Model for Structure-Based Drug Design, NeurIPS 2021.", " **Experiment**\n\n**Q: Comparison of efficiency between the proposed method and GEKO.**\n\n- As shown in Figure 7, DESERT is 20 times faster than GEKO in testing. Specifically, the figure demonstrates that GEKO has the slowest generation speed, while DESERT-POCKET achieves similar performance at a much faster speed. The reason is that GEKO employs a time-consuming docking process to train its MCMC-based model in a trial-and-error way. Instead, DESERT makes a more clever choice by pruning the space with its biological knowledge regarding the shape. Although DESERT needs two weeks for pretraining, the whole pretraining process is one-pass: for a novel pocket, we simply reuse the previously trained models. Meanwhile, any time we apply GEKO to a novel pocket, we have to train the model from scratch, which usually takes days.\n\n**Q: Apply the proposed method to the test data from 3D SBDD.**\n\n- We have reported the results in the general response, which demonstrate that our method can produce drug-like molecules with higher binding affinity. However, as we obtain the seed shape by overlapping multiple molecular shapes, the generated molecules tend to be structurally complicated, which leads to a moderately lower synthesis score. 
\n\n**Q: More ablation studies for analyzing the contribution of the fragment-based style and the pretraining framework.**\n\n- For the contribution of pretraining, as shown in Figure 10, when we increase the amount of pretraining data, our model achieves better performance on converting molecular shapes to 3D molecules, which implies that the pretraining component really helps the performance of our model. We also observe that model performance stops increasing when the size of the dataset exceeds 4.5M, which makes sense because performance is bounded by the model capacity and the problem complexity.\n\n- For the contribution of the fragment-based generation style, we are training an atom-based DESERT for comparison. However, since the pretraining process is quite time-consuming, we promise to report it and add discussions after obtaining the results.\n\n**Related Work**\n\n**Q: How tokenization and linearization are related to the literature of graph generation, fragment/scaffold-based molecule generation, etc.**\n\n- Thanks. We have added related discussions to the appendix.\n - We include discussions here about how the tokenization and linearization procedures relate to fragment-based drug design [3] [4].\n - ***Fragment-based Drug Design*** Briefly, there are two approaches in FBDD: growing the fragment synthetically to a proximal binding site or linking two fragments together [4]. Our method can be classified as the former type since the linearization generates a molecule in a one-by-one fashion.\n - ***Tokenization*** The procedure is carefully designed for deep generative models to avoid loops and preserve functionalities, which is similar in spirit to the principle of [5]. Some FBDD work, such as [6], works in a discriminative way, so there are not many constraints when cutting the molecules. Podda et al. [7] use SMILES fragments rather than real molecule fragments and thus cannot utilize rich structured features.\n - ***Linearization*** The procedure aims at traversing (or generating) a structured object in a left-to-right fashion [8], which is tractable and scalable. It is borrowed from the area of computational linguistics, more specifically, syntactic parsing and structured generation [9] [10]. For micro-molecules, linearization (such as SMILES [11]) has been adopted for several decades, and there is a line of research on SMILES-based generation [12] [13] [14]. Similarly, in the area of macro-molecules, Huang et al. [15] designed a linear structure to estimate the likelihood of RNA structures.\n - ***Comparison*** Compared with traditional linearized sequences such as SMILES [11], our method utilizes structural information to segment the molecules so as to preserve their functionality. Compared with topological generation based on a graph [5] [16] [17], our method is more scalable to big data since generating a variable graph topology is not friendly to large-batch training in neural networks.\n\n**Other**\n\n**Q: Codes are not uploaded, which hinders the reproducibility of this work.** \n\n- We have uploaded the core code of our method. We share the pre-trained checkpoint through an anonymous account for double-blindness: https://drive.google.com/file/d/1YCRORU5aMJEMO8hDT_o9uKCXmXTL5_5N/view?usp=sharing", " We thank reviewer Qdyd for giving constructive comments on our work. In the following paragraphs, we answer the questions regarding the details of our model, the experiments, and the related work. We hope the replies make our paper clearer. 
Further comments are welcome!\n\n**Method**\n\n**Q: How is sampling achieved for generating diverse molecules for a specific pocket? Is sampling only involved after generating molecules from the shape?**\n\n- The sampling is achieved in two steps: a) Sampling molecular shapes based on the given pocket. When sampling molecular shapes, we use different seed shapes and set the initial position of the seed shape randomly; both contribute to the diversity of the generated molecules. b) For each molecular shape, we further sample diverse molecules that fit it. Specifically, we employ the Nucleus decoding method to selectively combine different fragments in different decoding steps to achieve diversity. Sampling therefore happens throughout the whole generation process. As reported in Table 1, our method obtains high diversity, as expected.\n\n**Q: How is the post-processing done?**\n\n- As mentioned in line 164, following our main competitor and previous state-of-the-art GEKO, the post-processing contains two steps: a) We remove the duplicate molecules. Specifically, if two generated molecules have the same SMILES, we randomly drop one of them; b) We further re-rank the generated molecules and eliminate the molecules that do not pass the affinity threshold. \n\n**Q: How many bins are cut for the rotation and translation operations? Where is the origin? How is the transformation done?**\n\n- Sorry for missing these details. We have added them to Appendix section 1.2. Thanks for pointing this out.\n\n- For rotation, the total number of bins is 8,712. To be precise, we enumerate 363 rotation axes in 3D space and, for each axis, 24 rotation angles. For translation, the total number of bins is 21,952. In Appendix section 2.2, we have conducted several analytical experiments to study the discretization of these two operations. The results show that a) without discretization, the model cannot generate molecules that fit the input shape, because of the non-linear relationship between quaternions and rotation angles; b) with discretization, different bin sizes (7.5/15/30) do not make a significant difference.\n\n- Due to the trade-off between the granularity of the bins and the accuracy of the model, the number of bins does not significantly affect the results.\n\n- For a fragment, we set its centroid as the origin. We do this because, when handling a fragment, we need to build an internal coordinate frame that is not influenced by external transformations in order to align the same fragment across different 3D poses. Since we can determine the centroid of a fragment no matter what 3D pose it is in, we treat it as the origin of the internal coordinate frame.\n\n- For the rotation and translation operations, the transformation is done as follows:\n - We represent the $i$-th rotation bin as a quaternion $q^{\\mathrm{bin}}_i\\in\\mathbb{R}^{4}$. The discretization of any continuous rotation operator $q \\in\\mathbb{R}^{4}$ is computed as $\\underset{i}{\\arg \\min}\\|q^{\\mathrm{bin}}_i - q\\|_2$.\n - We represent the $i$-th translation bin by the coordinate of its centre $t^{\\mathrm{bin}}_i \\in \\mathbb{R}^{3}$. The discretization of any continuous translation operator $t \\in\\mathbb{R}^{3}$ is computed as $\\underset{i}{\\arg \\min}\\|t^{\\mathrm{bin}}_i - t\\|_2$.\n\n**Q: In training & decoding section 2.4, only ligand data is mentioned while no protein data is mentioned.**\n\n- In section 2.4, we do not mention the protein data. 
This is because, after sketching molecular shapes from the given protein, the sketched shapes fully provide the geometric information. In other words, we do not need other protein information for decoding molecules once the shape is given.\n\n**Q: What do shape, rotation, translation and category mean in Figure 10?**\n\n- Sorry for the confusion. We have fixed this in the new version of our draft.\n\n- *Shape* stands for the Shape Tanimoto [2], which measures the shape similarity between the input shape and the generated molecules. *Rotation* stands for the accuracy of the model in predicting the correct rotation bin. *Translation* stands for the accuracy of the model in predicting the correct translation bin. *Category* stands for the accuracy of the model in selecting the correct fragment. All of them can be treated as metrics reflecting how well the model fits the data, which shows that our model builds up a strong mapping from shapes to molecules.", " **Q: Not all molecules in the ZINC database are used for pharmaceutical purposes.**\n\n- Thanks for pointing this out. We only use the drug-like subset of ZINC to train our model. We have made this clear in our manuscript.\n\n**Q: The tree-like structure is less expressive for molecules with complex structures.**\n\n- The tree-like structure is expressive enough. We did a quick run and found that tree-like structures can describe over 96% of molecules in the ZINC database. Specifically, we sampled 10M molecules from the ZINC drug-like subset and analyzed their structure after fragment cutting. We find that 62% of these molecules have a native sequence structure, and 34% of molecules have only one branch. Based on these results, we think that the tree-like structure is expressive enough.\n\n**Q: The model is limited by ignoring information about the different interaction forces.**\n\n- As mentioned in Appendix section 2.2, our method supports incorporating information about the interaction forces in the decoding phase, and we have even conducted some preliminary experiments. However, these preliminary experiments did not give positive results. The reason might be that the interaction forces used in our preliminary experiments do not fit our shape-based pretrained model, and perhaps more chemical information should be taken into consideration, e.g., the bond length (which can be our future work). This shows that our model is not inherently limited by ignoring interaction force information. Nevertheless, using only geometric information already makes our method outperform previous work, and we leave utilizing interaction force information as future work.\n\n**Experiment**\n\n**Q: Measurements of drug-like properties (QED), ease of synthesis (SA) and lipid solubility indicators (LogP) and comparisons with other models should be included in the experimental section.**\n\n- As mentioned in section 3.1 and Appendix section 2.1, following [6] [7] [8], we combine QED and SA to build the metric Succ. Specifically, Succ tells us the percentage of generated molecules that satisfy a widely used rule of thumb [9], i.e., QED >= 0.25, SA >= 0.59, and Vina score <= -8.18. Moreover, we also include these three metrics individually in our new experiment, whose results are shown in our general response.\n\n**Other**\n\n**Q: The source code and pre-trained model should be made public for easy reproduction.**\n\n- We have uploaded the core code of our method. 
We share the pre-trained checkpoint through an anonymous account to preserve double-blindness: https://drive.google.com/file/d/1YCRORU5aMJEMO8hDT_o9uKCXmXTL5_5N/view?usp=sharing\n\n[1] Chloe Hsu et al., Learning Inverse Folding from Millions of Predicted Structures, ICML 2022\n\n[2] Roshan Rao et al., MSA Transformer, ICML 2021\n\n[3] Alan R Katritzky et al., Computational Chemistry Approaches for Understanding How Structure Determines Properties, Zeitschrift für Naturforschung B, 2009\n\n[4] Geza Gruenwald, Plastics: How Structure Determines Properties, 1992\n\n[5] Alan R. Katritzky et al., How Chemical Structure Determines Physical, Chemical, and Technological Properties: An Overview Illustrating the Potential of Quantitative Structure−Property Relationships for Fuels Science, Energy Fuels, 2005\n\n[6] Yuwei Yang et al., Knowledge Guided Geometric Editing for Unsupervised Drug Design. 2022\n\n[7] Yutong Xie et al., MARS: Markov Molecular Sampling for Multi-objective Drug Discovery, ICLR 2021\n\n[8] Wengong Jin et al., Multi-Objective Molecule Generation using Interpretable Substructures, ICML 2020\n\n[9] Oleg Ursu et al., DrugCentral 2018: an update, Nucleic acids research, 2019", " Thanks for your valuable comments! We will answer your questions regarding the proposed method and the experimental setting respectively in the following paragraphs. Further comments are welcome!\n\n**Method**\n\n**Q: The generation model is still similar to the transformer-based sequence generation model used in machine translation tasks, and the specific model architecture, as well as the parameters, should be given.**\n\n- Yes, we use the widely adopted transformer architecture for sequence generation [1] [2]. However, the architecture is not the main contribution of our paper. The core contribution of our work is using massive unbound molecules to pretrain the drug design model in a zero-shot fashion, which is model-agnostic. In practice, the architecture is free to change.\n\n- From line 154 to 158, we have listed many details about our model, including the number of network layers, model dimension, batch size, learning rate, etc. We also add other parameter details, including the number of attention heads and the patch size, in Appendix section 1.3 of the new version. Thanks for pointing this out.\n\n**Q: Is there a theoretical basis for using the intersection of a seed shape and a pocket shape to obtain a molecule shape?**\n\n- As we mentioned in section 1, DESERT is not baseless. We design the intersection strategy based on two principles: a) Structure determines properties: [3] [4] [5] show that a drug candidate would have satisfactory bio-activity towards a target pocket if their shapes are complementary. b) A ligand often attaches tightly to a pocket: as we mentioned in line 69 and Figure 1, we have conducted several preliminary studies, which show that the average distance between ligands and pockets is $1.52A$, even less than the length of a C-C bond ($1.54A$) within a molecule itself. Based on these principles, our desired molecular shapes should be complementary to the pocket to achieve good bioactivity, and the intersection method makes the sketched molecular shape meet this requirement.\n\n- The intersection method meets the requirement due to two premises: a) Two shapes complement each other if part of their boundaries matches. b) The intersection method ensures that the generated shape shares some boundary with the pocket's shape. (A toy code sketch of this intersection loop is given below.) 
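To illustrate the sketching-by-intersection idea, here is a toy NumPy version of the loop (our illustration, not the authors' Algorithm 1): a voxelized seed shape is slid toward the pocket cavity step by step, and the intersection is returned once its volume passes a threshold. The threshold of 2400 voxels corresponds to roughly $300A^3$ at the paper's $0.5A$ resolution ($0.125A^3$ per voxel); the use of `np.roll` and all names are simplifying assumptions.

```python
import numpy as np

def sketch_molecular_shape(pocket_cavity, seed, direction,
                           vol_threshold=2400, max_steps=200):
    """Slide a boolean 3D seed mask into a boolean 3D pocket-cavity mask along
    `direction` (an integer voxel step) and return the first intersection whose
    volume reaches `vol_threshold` voxels."""
    offset = np.zeros(3, dtype=int)
    for _ in range(max_steps):
        # NB: np.roll wraps around at the grid boundary; a real implementation
        # would pad the grid or clip the shifted mask instead.
        shifted = np.roll(seed, shift=tuple(offset), axis=(0, 1, 2))
        overlap = shifted & pocket_cavity
        if overlap.sum() >= vol_threshold:
            return overlap                  # the sketched molecular shape
        offset = offset + np.asarray(direction, dtype=int)
    return None                             # the seed never reached the cavity
```

Randomizing the seed shape and the initial offset, as described in the surrounding answers, is what yields diverse sketched shapes for a single pocket.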
\n\n**Q: How are the shape, size, and initial position of the seed shape chosen in the algorithm?**\n- Thanks for pointing out these missing details. We have added them to Appendix section 1.1. \n\n- We get the seed shape by heuristically overlapping the shapes of several drug-like molecules sampled from ZINC. Our desired molecular shapes should satisfy two properties: a) Complement the pocket to achieve good bioactivity, which means part of their boundaries are close to each other; b) Be a drug-like shape (e.g., not a rectangular solid) and not overly dependent on one specific molecule, for diversity. Property a) is satisfied since the boundary of the intersected area matches some part of the pocket's boundary. Property b) is satisfied by overlapping molecules' shapes, which avoids generating odd shapes, such as rectangles or triangles, that never occur in molecules. The results show that the overlapping method is relatively effective.\n\n- Because we obtain the seed shape by overlapping drug-like molecules, the size of the seed shape is determined by the sampled molecules.\n\n- For the initial position, we randomly sample one as long as the seed shape is outside the pocket shape. With such strategies, we can explore different regions of a given pocket, making our method produce diverse molecules.\n\n**Q: How do different initial parameters of the seed shape affect the generated results?**\n\n- In Appendix section 2.2, we discuss the influence of different types of seed shapes on the model performance. Compared with using the entire pocket directly, using a seed shape achieves a better binding affinity. The results indicate that the seed shape captures the protein's structural information in a more moderate way.\n\n- In section 3.5, we also discuss how the number of molecular shapes sampled with the seed shape affects the method's performance. In Figure 11, we find that increasing this number gives us a performance rise, which implies that comprehensive exploration of pockets benefits model performance.\n\n**Q: Does the model then still generate the same or at least similar molecules when given a molecule shape that has been rotated and translated?**\n\n- Following liGAN, when training the model, we randomly rotate and translate the molecules to give our model rotation and translation invariance. We compare the similarity of generated molecules based on different or identical molecular shapes (both randomly rotated and translated). The similarity rises from 0.092 to 0.508, which shows that with the same molecular shape as input, the model produces similar molecules as expected. Sorry for missing the details. We have added them to our manuscript.\n", " **Related Work**\n\n**Q: Related work section discussing the connection between the proposed Shape2Mol and existing shape-based molecular methods.**\n\n- Here we discuss the relationship between our method and existing shape-based drug design [3] approaches. Some previous work designed new drugs based on the shape of a known ligand. Traditional approaches work in a retrieval way, i.e., finding molecules whose shape is most similar to a known one [4] [5]. Modern deep learning models can decode a molecule from its shape [6] [1]. Such ligand-based generation cannot generalize to unseen pockets. [7] directly generates molecules from the pocket shape, which is the closest to our work. However, our model works in a fragment-based fashion, while theirs works in an atomic way. 
What makes our model especially different is that we utilize the power of the pretrained model to make pocket-based drug design more promising.\n\n[1] Miha Skalic et al., Shape-Based Generative Modeling for de Novo Drug Design, JCIM 2019\n\n[2] Koes et al., Shape-based Virtual Screening with Volumetric Aligned Molecular Shapes, Journal of computational chemistry, 2014\n\n[3] Montfort et al., Structure-based Drug Design: Aiming for a Perfect Fit, Essays in biochemistry, 2017\n\n[4] Kumar et al., Advances in the development of shape similarity, Frontiers in chemistry, 2018\n\n[5] Santos et al., Drug Screening using Shape-based Virtual Screening and in vitro Experimental Models of Cutaneous Leishmaniasis, Parasitology, 2021\n\n[6] Masuda et al., Generating 3D Molecular Structures Conditional on a Receptor Binding Site with Deep Generative Models, NeurIPS 2020\n\n[7] Shitong Luo et al., A 3D Generative Model for Structure-Based Drug Design, NeurIPS 2021", " We thank reviewer M9mc for the helpful suggestions. Following the suggestions, we clarify the details of our method and address these issues below. Further comments are welcome!\n\n**Method**\n\n**Q: How do you get the seed shape? How are the volume threshold t and step size alpha determined?**\n\n- We are sorry for missing these details. We have added them to Appendix section 1.1. Thanks for pointing this out.\n\n- We get the seed shape by heuristically overlapping the shapes of several drug-like molecules sampled from ZINC. Our desired molecular shapes should satisfy two properties: a) Complement the pocket to achieve good bioactivity, which means part of their boundaries are close to each other; b) Be a drug-like shape (e.g., not a rectangular solid) and not overly dependent on one specific molecule, for diversity. Property a) is satisfied since the boundary of the intersected area matches some part of the pocket's boundary. Property b) is satisfied by overlapping molecules' shapes, which avoids generating odd shapes, such as rectangles or triangles, that never occur in molecules. The results show that the overlapping method is relatively effective.\n\n- For the volume threshold, we compute the average volume of some molecules, i.e., $300A^3$. This number can be viewed as an estimate of the expected size of a molecule; through it, we avoid generating shapes that are too large or too small.\n\n- We set the step size to $0.5A$ because it matches the resolution of the voxelized shapes, which is also $0.5A$.\n\n**Q: What is the output of the shape encoder and the input of the shape decoder?**\n\n- Thanks for pointing this out. We have added both to Appendix 1.3.\n\n- The output of the shape encoder is the continuous representation of each 3D patch, which contains the geometric information of the inputted molecular shape. It serves as the context of the decoder to constrain the shape of the generated molecules.\n\n- The input of the shape decoder at decoding step *t* is the fragment category, rotation quaternion, and translation vector from the decoder output at step *t-1*. These tell the model how exactly a fragment is placed in 3D space so that it can generate the next fragment connected to it. The output of the shape encoder is also inputted as the geometric context of decoding.\n\n**Q: How is the spatial correspondency established in the proposed network architecture? 
Is there any guarantee that the generated molecule will satisfy the shape constraint?**\n\n- We establish the correspondence with powerful neural networks trained on large-scale data. Note that there is no theoretical guarantee. However, as we mentioned in section 3.3, the good Shape Tanimoto [2] results suggest that generated molecules satisfy the shape constraint empirically.\n\n**Q: Which resolution is the shape voxelized at, and will it cause the scalability issue when the pocket size increases?**\n\n- The resolution of the voxelized shape is 0.5A. As the length of the most common chemical bond, i.e., the C-C bond, is $1.54A$, this resolution is fine enough to describe the molecular shape.\n\n- We avoid the scalability issue by using two techniques: a) Limit the maximum number of voxels with a spanned cube. Following liGAN, we only apply the voxelization in the cube around the given pocket, which decouples the number of voxels from the pocket size; b) As we mentioned in 2.3.1, we further use the 3D patch to compress the number of voxels. With this technique, a group of voxels is compressed and processed together, which lets us handle a large number of voxels. In particular, we compress 21,952 voxels to 343 patches in our paper.\n\n**Experiment**\n\n**Q: How do you reduce the number of molecules in the experiments?**\n\n- As mentioned in section 2.4, we reduce the number of molecules in two steps: a) Re-rank the molecules. Following our main competitor GEKO, we use Vina local energy minimization to re-rank the generated molecules; b) Drop the unwanted molecules. After the re-ranking, we only keep the top 100 molecules in our experiments.\n\n**Q: \"20 times faster than GEKO\" refers to a per-pocket or per-sample inference time?**\n\n- It refers to the per-pocket case. To calculate the speed, we measure the time from receiving the given pocket to obtaining the final 100 molecules. We have made this clear in our manuscript.\n\n**Q: How could the generated molecules have such good Vina scores without any protein pocket information leveraged in the generation process?**\n\n- Actually, as shown in Figure 3, the pocket information is used in the generation process. When we design molecules based on a given pocket, we sample the molecular shape from the pocket, which contains the geometric information of the pocket. As we reported in section 3.2, the shape helps DESERT produce high-quality molecules.", " This paper presents a new framework and pre-training scheme for designing pocket-conditioned ligands. Specifically, it leverages the shape (voxel) of ligands and protein pockets in pre-training a sketching network that specifies the shape of the protein pocket, and learning a shape2Mol network that predicts specific ligands that fit into the given voxel shapes. The model was trained over billions of molecules in the ZINC database and achieved state-of-the-art results. Strengths\n- The main idea of drug design by sketching and generating is novel and well-motivated\n- The proposed method shows better performance over existing methods in the experiments\n- The analysis in experiments is comprehensive\n\n\nWeaknesses:\n- There is no related work section discussing the connection between the proposed Shape2Mol and existing shape-based molecular generation methods (Ref. [49] - [53])\n- The proposed method section is not very clear to me. See details in Questions - I'm a bit confused about the sketching part (Sec. 2.2). How do you get the seed shape? 
In algorithm 1 in the appendix, how are the volume threshold t and step size alpha determined?\n\n- In Sec. 2.3, what is the output of the shape encoder and the input of the shape decoder? How is the spatial correspondency established in the proposed network architecture? Is there any guarantee that the generated molecule will satisfy the shape constraint?\n- Which resolution is the shape voxelized at (Sec. 2.3.1)? Since the number of voxels increases cubically with the pocket size, will it have the scalability issue?\n- In line 167, the authors stated \"For each protein pocket, we sketch 200 shapes. For each shape, we generate 1000 molecules\". In line 191, it seems all other baseline methods \"generate 100 molecules for comparison\". I'm wondering how the samples generated by the proposed method (200 x 1000) are utilized to compute scores in Table 1 for a fair comparison with other existing methods? Also, \"20 times faster than GEKO\" refers to a per-pocket or per-sample inference time?\n- Although the proposed method is based on the assumption that \"structure determines properties\", I'm still curious about how the generated molecules could have such good Vina scores without any protein pocket information leveraged in the generation process. The authors discuss some limitations in the experiment section. There is no negative societal impact.", " This paper proposes a zero-shot drug design method based on sketching. Starting from the property that structure determines properties, the paper uses a sketch model of protein pocket shapes and a pre-trained generative model based on molecular shape sketches to achieve zero-shot drug design that does not rely on docking experimental data and docking simulations. Strengths:\nThe paper presents a molecular generation method based only on shape sketching, which is a very novel approach. Also, the generation method without using protein binding data and simulations is enlightening for future work.\n\nWeakness:\n\tAlthough the sketch-based generation idea is novel, the generation model is still similar to the transformer-based sequence generation model used in machine translation tasks. The paper is also not clear enough in the method description: only the design of the encoding and decoding method and a citation for the model used are given; the specific model architecture as well as the parameters should be given.\n 1.\tIs there a theoretical basis for using the intersection of a seed shape and a pocket shape to obtain a molecule shape? How are the shape, size, and initial position of the seed shape chosen in the algorithm? Different initial parameter settings will result in very different molecule shapes and thus completely different generated molecules. How do different initial parameters affect the generated results, and is this approach reasonable?\n2.\tThe model is encoded using discretized spatial information, while the generation directly generates the 3D coordinates of the molecular functional groups. Does the model then still generate the same or at least similar molecules when given a molecule shape that has been rotated and translated? This property is necessary for the generation of physical entities.\n3.\tAlthough the pre-trained model uses a larger molecular dataset, not all molecules in the ZINC database are used for pharmaceutical purposes as far as I know. The molecules generated by the pre-training model may not be suitable for use as drugs. 
Measurements of drug-like properties (QED), ease of synthesis (SA) and lipid solubility indicators (LogP) and comparisons with other models should be included in the experimental section.\n4.\tIt is mentioned in the paper that the pre-trained model uses a large amount of computational resources; therefore, the source code and pre-trained model should be made public for easy reproduction.\n Using only information about the shape of the protein pocket avoids some of the limitations of the data set and binding simulations. However, the model is limited by ignoring information about the different interaction forces (hydrogen bonding, π-stacking, etc.) that occur, depending on the atom types, when molecules bind to proteins, since it completely abandons such information.\n\nDuring the generation phase, the tree-like structure is less expressive for molecules with complex structures. Moreover, the connecting step based entirely on greedy algorithms may lead to unreasonable results.\n", " This paper presents a new framework and pre-training scheme for designing pocket-conditioned ligands. Specifically, it leverages the shape (voxel) of ligands and protein pockets in pre-training a sketching network that specifies the shape of the protein pocket, and learning a shape2Mol network that predicts specific ligands that fit into the given voxel shapes. The model was trained over billions of molecules in the ZINC database and achieved state-of-the-art results. Strengths\n* This paper proposes an interesting idea to recognize the voxel/shape of the protein pocket and leverage the information for pre-training. It is quite different from the other line of work which leverages the atoms of both proteins and ligands and aims to learn the interaction for generative design. It is interesting to see how this idea plays out, and I would suggest adding more discussions about how it is considered in traditional structure-based drug design, shape/voxel vs. atomic interaction.\n* The proposed idea is appropriate in real-world scenarios, where the paired ligand and pocket data are limited; thus I am convinced pre-training is needed. \n\nWeaknesses\n* How is sampling achieved for generating diverse molecules for a specific pocket? In the paper, the only part mentioning it refers to Nucleus [68]; does it imply that sampling is only involved after generating molecules from the shape? \n* Efficiency is neither compared nor discussed. The authors make a fair argument in the introduction about the drawback of GEKO, which performs equally powerfully as the proposed method. Given the proposed method also takes tons of training time, it would be good to see the comparison and discussion.\n* Many discussions or relations to the literature are not discussed, e.g., how tokenization and linearization are related to the literature of graph generation, fragment/scaffold-based molecule generation, etc. \n* Experiments are only done over 12 protein targets; I would suggest adding more experiments, e.g., the 100 targets in the 3D SBDD paper.\n* The ablation study is not complete: the comparison methods, especially the deep learning-based SBDD, 3D SBDD and liGAN, are based on atomic reconstructions, and it would be interesting to see whether the proposed model benefits from the fragment-based method or the new training framework.\n* Many details are missing or only briefly mentioned. For example, how the post-processing is done. How many bins are cut for the rotation and translation operations, where the origin is, how the transformation is done, etc. 
In training & decoding section 2.4, only ligand data is mentioned, while no protein data is mentioned. Figure 10 is not clear; I am not sure what shape, rotation, translation and category mean. \n* Codes are not uploaded, which hinders the reproducibility of this work. It would be beneficial to the community if the authors consider publishing the codes if the paper is accepted.\n\nOverall, I think this paper makes a fair point and it is a good attempt in the direction of pocket-conditioned ligand design. I am willing to raise my score if the questions get answered. See above, I put questions related to each point in the strengths and weaknesses part. The limitations are not discussed in the paper but may come from two sides: (1) whether this method can generate diverse ligands that bind to protein targets, (2) the efficiency of the method. The first one concerns whether the formulation makes sense in the biological context, or, if not, is still a good way to leverage unlabeled data. The second one needs a bit more experiments to support it.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "10s0g2G9yiA", "kycIxhJ47qE", "dV7ceMmRin9", "dV7ceMmRin9", "naSFP3JZBsF", "VVC9g0n2Ylt", "10s0g2G9yiA", "XEk09eqgZEKi", "OVQHKzYu6iR", "OVQHKzYu6iR", "DY-yzObqlD", "nips_2022_H_xAgRM7I5N", "MM1zPWNNZs", "MM1zPWNNZs", "MM1zPWNNZs", "10s0g2G9yiA", "10s0g2G9yiA", "VVC9g0n2Ylt", "VVC9g0n2Ylt", "nips_2022_H_xAgRM7I5N", "nips_2022_H_xAgRM7I5N", "nips_2022_H_xAgRM7I5N" ]
nips_2022_-ZPeUAJlkEu
Why neural networks find simple solutions: The many regularizers of geometric complexity
In many contexts, simpler models are preferable to more complex models and the control of this model complexity is the goal for many methods in machine learning such as regularization, hyperparameter tuning and architecture design. In deep learning, it has been difficult to understand the underlying mechanisms of complexity control, since many traditional measures are not naturally suitable for deep neural networks. Here we develop the notion of geometric complexity, which is a measure of the variability of the model function, computed using a discrete Dirichlet energy. Using a combination of theoretical arguments and empirical results, we show that many common training heuristics such as parameter norm regularization, spectral norm regularization, flatness regularization, implicit gradient regularization, noise regularization and the choice of parameter initialization all act to control geometric complexity, providing a unifying framework in which to characterize the behavior of deep learning models.
Accept
All reviewers and the AC find that this paper makes valuable contributions to the deep learning theory community. Thus, the AC recommends acceptance.
train
[ "bUhhZgWVgfC", "J3iVepPBAlk", "18OKVqOyXkj", "MYktCrQqXe", "C9W-IU3oUwg", "tMUYrRGx7Ob", "mnzKK-3b4os", "foRwX14gkdU", "GzJVdA7k3f-", "lkLMmU6YDWt", "6y6fpiPRv6u", "B25vBrdkiKy", "V7wRWuHw9xH" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We'd like to thank you for your thorough response. We are very glad that we were able to clear the misunderstanding and that you find the current work interesting. The question of fully understanding how GC affects generalization is certainly an interesting one as well and something to focus on for future work. This discussion has certainly been very helpful and we appreciate your thoughtful reviews and comments. Thank you for offering to raise your score.", " Thanks for the nice response from the authors! Given the fact the authors have addressed my questions, I will maintain my score. ", " I sincerely thank the authors for this response. Likely because of a personal research bias, my interpretation of the results and of the message of the paper was indeed off. I was trying to force them into something different than the actual content of the paper.\n\nAlthough I still find the proposed (and discussed) direction interesting I now understand that the connection between GC complexity and generalization performance is not meant to be completely understood and explained in this work, and the potential open questions and inconsistencies will need to be addressed in future research.\nThe evidence for the GC minimization being a common underlying trait of many well known learning heuristics is indeed convincing, and I see that this is a sufficiently interesting point in itself.\n\nStill, I hope that at least part of this discussion was not completely pointless, and that mentioning some possible directions of development of this work will increase its impact. \n\nI plan to increase my score to a positive evaluation in view of these clarifications, which solved my misunderstanding. Thanks again to the authors for their efforts in explaining and clarifying their work.", " In light of your new comments, we believe there is a significant misunderstanding:\n\n> \"This means that GC would not help select the best model among them, and this goes against the main message of this work”\n\nWith respect, that is not the main message of this work. The goal of our paper is to introduce a new complexity measure, GC, that is computationally tractable (Sec 2), captures the double-descent phenomena (Sec 6) and to show that this measure is, in fact, implicitly regularized through standard training procedures such as initialization (Sec 3), regularization (Sec 4), and common hyper-parameter tuning (Sec 5). We verify these claims through experiments and motivate them with theoretical arguments, uncovering a novel unifying simplicity bias in DL.\n\nAlthough our experiments suggest this simplicity bias improves test accuracy, and that GC may be used to select best models, we are not making the causal claim, nor it is the message of our work. The scope of such a claim would require its own investigation to understand when/how/if GC can be used to select models and would follow a different approach theoretically. There is no reference to “model selection” in our original submission. The term was only added to the future work section as a conjectural future direction, in response to your initial comments.\n\n> In light of these works [on Jacobian regularization], can the GC really be presented as a novel metric?\n\nJacobian regularization is about a loss regularizer rather than a complexity measure. It is not our aim to devise regularizers. In fact, explicit GC regularization was not part of the original submission, but only added at your request. 
\n\nThe novelty of our work resides in two main aspects: First, GC captures the double-descent phenomenon; second, GC is implicitly regularized by a wide range of common training and tuning heuristics, which reveals a novel simplicity bias in DL.\n\n> this work can become relevant for ML practice only if GC turns out to be informative and discriminative also close to state-of-the-art performance, and if it can be used to improve scores or to devise more effective algorithms. \n\nOur paper is a theoretical paper aimed at establishing a unifying mechanism at play in existing and common training heuristics in DL. Although we believe that our findings will be useful in future works to improve SOTA, training procedures, and model selection, that is not our goal here. Our goal is to understand the mechanism.\n\n> the authors should focus on the region with 80+% accuracy.\n\nTo scientifically observe the effect of a given training heuristic on GC, we study it in isolation to remove possible sources of confounding effects as much as possible.\n\nTo achieve 80+% accuracy on CIFAR10, several heuristics need to be combined, making it harder to disentangle which heuristic is responsible for the effect. \n\nThis is particularly visible in the learning curves in C.6, Fig. 23, where we achieve 80+% as requested when using multiple heuristics; we note that the observed correlation between low GC and high test accuracy persists.\n\n> The experiments with direct GC regularization do not reach very good performance. Moreover, both the GCs and the test scores are lower when compared to the models trained with all the known heuristics\n\nAgain, our goal is not to devise novel or performant regularizers.\n\nThat said, we disagree with the statement. \n\nOur MNIST experiment in Fig.18 with explicit GC regularization achieved a higher test accuracy (0.9857) and a lower GC (0.025679) than any of the other MNIST experiments (Sec C.2.). Additionally, Fig.19 shows that GC as an explicit regularizer is beneficial also with CIFAR10. However, it is not reflective of the highest test accuracy that could be achieved: each model had to be stopped before peak test accuracy because of time limitations.\n\n> Some of the plots in the supplementary seem incomplete.\n\nCan the reviewer kindly clarify which plots and how exactly they are incomplete so we can properly address them?\n\n\n> The discussion of the limitations of the work should definitely include that the authors do not believe that their claims are generally true for every data distribution.\n\nWe disagree with this statement as there must be some misunderstanding.\n\nNone of the claims we make in this paper are data-distribution dependent. The only assumption we make is a very general assumption on the form of the loss that covers both least-square and cross-entropy losses, which we already state as a limitation.\n\nThe reviewer seems to confuse the conjectural response to the reviewer's conjectural question (“Can GC be used for model selection?”) with our actual paper claims. Again, answering this question is not the goal of the current paper. This is future work, and we believe that a full answer to that future question is likely to involve consideration of the data distribution.\n", " I would like to thank the authors for their answers and for their effort in trying to improve the manuscript.\n\nI still have some concerns that I'd like to present to the authors:\n1- I think the strong connection with previous works on Jacobian regularization should be pointed out earlier in the paper. 
In light of these works, can the GC really be presented as a novel metric?\n2- Too many of the results showing the positive correlation between GC and test score compare models that perform very poorly (compared to what can be achieved with a Resnet 18). This applies to almost all plots, except those that actually implement the known heuristics. In my opinion, this work can become relevant for ML practice only if GC turns out to be informative and discriminative also close to state-of-the-art performance, and if it can be used to improve scores or to devise more effective algorithms. Otherwise, showing that this correlation is strong when very bad models are compared can be a little misleading. In my opinion, the authors should focus on the region with 80+% accuracy.\n3- The experiments with direct GC regularization do not reach very good performance. Moreover, if I am not mistaken, both the GCs and the test scores are lower when compared to the models trained with all the known heuristics. This means that GC would not help select the best model among them, and this goes against the main message of this work. This seems like a crucial point to discuss.\n4- It would be interesting to have a plot that compares some of the optimal models found in each of the presented studies and see if in this comparison the GC is still telling with respect to the best generalization performance.\n5- Some of the plots in the supplementary seem incomplete.\n6- The discussion of the limitations of the work should definitely include that the authors do not believe that their claims are generally true for every data distribution. And they should expand on what they mean by data generated by some form of “harmonic” process. \n\nThank you for your time!\n
As now more clearly mentioned in the updated version (lines 215-219) explicit GC regularization is also known as Jacobian regularization in previous works that have shown the benefit of this type of regularization for increased test accuracy (See Tables III, IV, and V in Sokolic et al. 2017). \n\n> *\"Figure 3: The curve related to L2 norm does not seem to show the expected behavior, can the authors comment on it, or provide a plot where more values are shown in the range of regularization intensity where the GC drops?”*\n\nFor the sake of brevity, we put all the explicit regularization on the same plot. The trade-off is we had to choose regularization rates for different regularization types in the same range. Now, this range does not fully coincide for all three types of regularization, which is particularly visible for L2. We have now added separate plots for L2 with a better choice of the regularization rates in SM Section C.5 (Fig. 20). \n\n> *“The accuracies shown in figure 4 seem to be way below the reference performance of a Resnet 18 on CIFAR10. The effect shown in the plot seems very clean, with lower GC and better generalization, but does this apply also close to state-of-the-art performance?”*\n\nTo avoid masking effects, we studied training heuristics in isolation. That said, to address your concerns we conducted experiments with more of the bells of whistles found in competitive settings. We recover all the effects on GC discussed with SGD with momentum (SM Section C.7); almost all the effects with Adam, except for the batch size impact (SM Section C.8) - we believe that changing the batch size affects the variability in the local loss surface seen by the optimizer at each step; since for Adam the local geometry of this surface determines the rescaling of the learning rate in every dimension, it affects also the pressure on GC in ways that are harder to predict. In SM Section C.6, we analyze the effect of the learning rate and batch size on GC in the presence of a learning rate schedule (which affects the pressure on GC during training) and data augmentation. In all instances, we still observe the general trend that hyper-parameter sweeps with higher test accuracy tend to come with lower GC. \n\n> *“Is there any other way of using the Dirichlet energy and harmonic theory to guide the learning process?”*\n\nWe have observed that well-tuned neural networks naturally converge toward solutions that minimize the Dirichlet energy. These minimizers are known to be harmonic functions. The study of harmonic maps and harmonic map flow is already a well developed area of research and there are many fundamental properties and consequences that could be used to further shed light on the learning process when interpreted in this setting. \n\n> *“Since the GC is data-dependent, how can it be used in the context of transfer learning and in OOD generalization settings? ”*\n\nLooking at Figure 1, the neural network learns the function with minimal volume inside the data region. Outside of it, it is still very close to the parabola that generated the data very far away from the training set. This example suggests an interpolating function with minimal GC within the dataset that may guess a “probable” pattern outside of the data region that’s minimal in some sense.\n\n> *“Can GC be used for model selection?”*\n\nAll of our experiments seem to point in this direction, i.e., that at the same level of train error the solutions with lower GC generalize better. 
Justifying this claim rigorously would require further investigation and we expect that it will depend heavily on the structure of the data distribution as well. From our preliminary investigation, we don’t believe it to be generally true for every data distribution, but mostly the ones that are generated by some form of “harmonic” process.\n", " Thanks for your review. We are very happy that you “expect this work to be impactful” and you found “the range of methods tested broad and well-structured” and that “the theoretical results and claims look sound” to you. We are glad that “the connection of GC with Dirichlet energy is really appealing” to you (it is to us too!!!). You’ll find below our best effort to answer your questions. Please consider raising your score if you are satisfied with them.\n\n> *“While I agree with the authors on removing extra sources of noise for the main experiments (e.g. not using momentum or lr schedulers), there should be a section/experiment where all the DL machinery is put together”*\n\nTo address your requests we repeated our experiments on more competitive settings and our conclusions hold: 1) we replicated our experiments using optimizers that are more commonly used in SOTA settings (SM Section C.7, and Section C.8), and 2) we conducted additional experiments using more of the bells and whistles commonly used in SOTA settings, such as data augmentation and learning rate schedules (SM Section C.6), which gave us a 10% boost to our previous test accuracy (to reach performance SOTA levels we’d need more extensive hyper-parameter sweeps than time allows).\n\nFor the first direction, when considering SGD with momentum (SM Section C.7), all our conclusions hold as before. We see similar outcomes when using Adam (SM Section C.8) though the influence of batch size on GC is more complex. (We believe that changing the batch size affects the variability in the local loss surface seen by the optimizer at each step. Since for Adam the local geometry of this surface determines the rescaling of the learning rate in every dimension, it affects also the pressure on GC in ways that are harder to predict.) For both sets of experiments, we observe that trend where higher test accuracy tends to come with lower GC. \n\nFor the second direction, we still observe the trend that hyper-parameter sweeps achieving higher test accuracy also tend to achieve lower GC (SM Section C.6); however, the learning curves are harder to interpret than those in Fig. 4. because of the many mechanisms interacting in complex ways with each other. Similarly the effect of batch size and learning rate on lowering GC is visible. \n\n> *“I feel the paper glosses over connections with the proposed metric. For example, line 186 says that Jacobian regularization basically minimizes the proposed metric (GC) explicitly on the loss, but the topic is covered with 4 lines. [...] Similarly, section 7 says that GC is similar to Kolmogorov complexity and MDL, but these similarities are not explained anywhere.”*\n\nConcerning the relation between GC and Jacobian regularization, we rewrote and expanded this section in the updated version (lines 211-223). 
We also conducted additional experiments where the GC is directly regularized, recovering the results from previous works on Jacobian regularization (SM Section C.4).\n\nConcerning the relation between GC and Kolmogorov complexity or minimal description length, we rewrote the corresponding paragraph (lines 297-311) describing our intuition about the relationship between these complexity measures. We clarified that this relationship is conjectural and a topic for future work.\n\n> *“To me, the metric seems novel for this purpose, but strong connections were vaguely mentioned, which I find concerning.”*\n\nWe clarified the connections with the other complexity measures which we believe are related (lines 297-311). Upon acceptance, we will add a section to the supplementary material with the definitions of the standard and more recent complexity measures and how they relate to and differ from GC more explicitly.\n\n> *“What is the training error in section 6? I assume that it is zero, but otherwise I think it is important to show the training error as well. Given that the U-shaped curve explains overfitting, I don't find it useful to recover the curve with GC if that does not explain overfitting as in traditional ML.”*\n\nAll the models were fully trained. To address your comments, we've added the training loss curves to the plots in Figure 5. We indeed recover the traditional U-curve test/train loss overfitting pattern if we replace the network width with the geometric complexity on the x-axis (Figure 5, right).\n\n> *“The quantity h/B in line 246 has been studied before, maybe you want to add a few lines commenting on it. See, e.g., Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour, Don't Decay the Learning Rate, Increase the Batch Size, or An Empirical Model of Large-Batch Training.”*\n\nThanks! We added the references (lines 263-264).\n\n> *\"The font of the figures should be increased.\"*\n\nThanks! We'll do so upon acceptance, for the camera-ready, when more space is available.\n\n> *\"You should point to specific parts of the SM in the main text, not just the SM in general.\"*\n\nThanks! We updated the references to the SM with precise mention of the sections.\n\n", " Thanks for your review. We are very pleased that you “enjoyed a lot reading this paper, and found it helpful to understand the connections among all the mentioned regularizers” and found it “well organized and well written.” You’ll find below our best effort to answer your questions. Please consider raising your score if you are satisfied with them.\n\n> *“What is the advantage of GC over Rademacher complexity? Is it because GC is more computationally convenient? Is it possible to compare these two types of complexity in some sense empirically or theoretically?”*\n\nComplexity measures can be broadly classified into two classes according to whether they measure the complexity of the whole hypothesis space or the complexity of individual functions. The Rademacher complexity measures the complexity of the whole hypothesis space, while GC measures the complexity of individual functions in that space. 
Measuring the complexity of the whole space is challenging in deep learning for two reasons: \n1) theoretically, because the hypothesis space for a large neural network is extraordinarily complex (measures like the Rademacher complexity measure the space complexity by looking at some notion of the “most complex” hypotheses in the space, which are in general not the ones “selected” by the training procedure in deep learning), \n2) practically, complexity measures defined on the whole space are hard to compute when the space is big, especially when they involve taking a supremum over all the functions in the space, as for the Rademacher complexity. An advantage of complexity measures like GC is that we can compute them at every step of the learning process and understand, by looking at the learning curves, whether a given training procedure favors simple solutions more so than complex ones. Complexity measures focusing on the whole space, like the Rademacher complexity, have a harder time assessing complexity biases of training procedures in terms of the simplicity of the outcomes.\n\nWe briefly touched upon the difference between GC and Rademacher complexity in the introductory paragraph in Section 2, but space considerations didn’t allow us to develop this further and make the point very clear. We added a sentence clarifying the difference between GC and the Rademacher complexity in lines 77-79 in the updated version. We will also add to the appendix a new section with the precise definitions of the main complexity measures and how they compare to GC upon acceptance. \n", " Thank you for your kind and balanced reviews. We are grateful that you “enjoyed a lot reading this paper”, found it “helpful to understand the connections among all the mentioned regularizers”, and that you “expect this work to be impactful.” It is nice to hear that you found “the connection of GC with Dirichlet energy really appealing” and that “the theoretical connection with harmonic theory allows for a very simple intuition of the concept of smooth interpolators”. It’s also our belief that these connections “might be useful for designing better (more explicit) training algorithms.”\n\nWe want to underline that GC is a unique complexity measure in that: it can be easily computed for neural networks (see Section 2); it has been connected with generalization extensively in the submission (see Sections 4, 5, and 6) and in the additional experiments we provide in the rebuttal (SM Sections C.4 to C.8); and it has been connected with many aspects of the NN ecosystem: initialization (Section 3), optimization (Section 5, and SM Sections C.7 and C.8), and other regularization methods (Section 4).\n\nWe tried our very best below to satisfy your requests and answer your questions within the imparted time frame. We think that as a result the uploaded revised paper is now clearer and stronger thanks to your comments. (We highlighted the new parts in red for your convenience.)\n\nWe conducted additional experiments and put their results in the Supplementary Material (Sections C.4, C.5, C.6, C.7, and C.8) to support our responses. 
We show that:\n* Our results remain valid for optimizers like SGD with momentum and Adam (SM Sections C.7 and C.8), learning rate scheduling, and data augmentation (SM Section C.6), all techniques used in competitive settings.\n* Direct explicit regularization with GC is by itself beneficial for increasing test accuracy (SM Section C.4).\n* With parameter sweeps targeted for individual explicit regularization, including L2 (SM Section C.5), the curves are clearer.\n\nPlease consider raising your scores accordingly if your requests have been met and your questions answered.", " Summary \n\nThe paper proposes a new notion of complexity measure called geometric complexity (GC)
. The paper discusses how GC relates to familiar concepts in the deep learning literature and makes extensive links to existing regularizer methods. I enjoyed a lot reading this paper, and found it helpful to understand the connections among all the mentioned regularizers. \n\n\n Major strengths \n1. The paper is well organized and well written. \n2. The authors made an extensive analysis to link GC with typical deep learning architectures, demonstrating the connections between existing training regularizers and GC. These links explicitly demonstrate that GC, to some extent, unifies the regularizers in the sense that penalizing these regularizers would simultaneously penalize GC.\n3. In comparison to other complexity measures, GC is more computationally tractable. \n4. The paper empirically evaluates the connection between the generalization property and GC: after the critical region, increasing model size would necessarily decrease GC, showing the usefulness of GC in estimating the generalization error and suggesting potential benefits of using GC to improve generalization. \n\nQuestions. \nI am not an expert on this complexity measurement topic, but I have some basic understanding of complexity measures. If you don’t mind me asking, what is the advantage of GC over Rademacher complexity? Is it because GC is more computationally convenient? Is it possible to compare these two types of complexity in some sense empirically or theoretically? \n Please see the above comments for questions. The authors have addressed the limitations. ", " Regularization is key when it comes to model selection; if we seek models that generalize well, choosing the less complex one tends to be a good heuristic in classical machine learning. In deep learning, however, it is not clear what is a good metric to measure the \"complexity\" of a given model, as the classic ones tend to be unreliable (e.g. parameter counting and the double-descent effect), and/or intractable (e.g. VC dimension). \n\nThis work presents a new complexity metric, the Geometric Complexity (GC). Throughout the paper, the authors present their metric in simple scenarios (ReLU and linear networks), relate it with other metrics (Lipschitz smoothness and Dirichlet energy), and empirically demonstrate that many common regularization techniques in DL minimize the GC under the hood. Finally, the authors reproduce the original double descent experiments, and show that GC recovers the traditional U-shaped curve. **Strengths (S)**\n1. The main contribution of the paper can be fairly impactful. GC is computable, and it seems to be well correlated with the test error. More importantly, previous work does not need to be thrown away. The authors related GC with other metrics, and showed that effective regularization techniques greatly affect the GC. I expect this work to be impactful.\n2. The paper has a strong empirical flavour, which I find necessary for a paper on this topic. Moreover, the range of methods tested is broad and well-structured in different sections (initialization, implicit regularization, and explicit regularization).\n3. The theoretical results and claims look sound to me. Especially, the connection of GC with Dirichlet energy is really appealing. However, I have only fully read the main paper and the proofs until A3.\n4. The paper is well written and easy to follow.\n\n**Weaknesses (W)**\n1. While I agree with the authors on removing extra sources of noise for the main experiments (e.g. 
not using momentum or lr schedulers), there should be a section/experiment where all the DL machinery is put together, comparing the GC of those new models with the ones already in the paper. Otherwise, the performance gap is too big compared to other works. For example, test accuracy on CIFAR10 is around 75% in the paper (Figure 4), while using the same model **other works achieve an accuracy of 95.55%** (according to [papers with code](https://paperswithcode.com/sota/image-classification-on-cifar-10?tag_filter=3)).\n2. I feel the paper glosses over connections with the proposed metric. For example, line 186 says that Jacobian regularization basically _minimizes the proposed metric (GC) explicitly_ on the loss, but the topic is covered with 4 lines. This is _really_ important: Jacobian regularization is not just another regularization method to mention. Similarly, section 7 says that GC is similar to Kolmogorov complexity and MDL, but these similarities are not explained anywhere.\n3. I miss some motivation and intuition for the proposed metric; Def. 2.1 feels arbitrary until you read the end of section 2. I think introducing the metric as a modification of Dirichlet energy minimization would feel more natural.\n\n_Note:_ While I can understand and follow the whole paper, I am not that well-versed in the topic, and therefore I cannot fully assess the novelty of the proposed metric as I could be missing well-known literature. To me, the metric seems novel for this purpose, but strong connections were vaguely mentioned, which I find concerning. **Questions (Q)**\n1. What is the training error in section 6? I assume that it is zero, but otherwise I think it is important to show the training error as well. Given that the U-shaped curve explains overfitting, I don't find it useful to recover the curve with GC if that does not explain overfitting as in traditional ML. \n\n**Nitpicks (N)**\n1. The quantity $h/B$ in line 246 has been studied before; maybe you want to add a few lines commenting on it. See, e.g., [Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour](https://arxiv.org/pdf/1706.02677.pdf), [Don't Decay the Learning Rate, Increase the Batch Size](https://arxiv.org/pdf/1711.00489.pdf), or [An Empirical Model of Large-Batch Training](https://arxiv.org/pdf/1812.06162.pdf).\n2. The font of the figures should be increased.\n3. You should point to specific parts of the SM in the main text, not just the SM in general.\n I think the authors correctly addressed the limitations of the method.", " The authors introduce the Geometric Complexity (GC), a novel measure/predictor of neural network performance, based on the evaluation of a discrete Dirichlet energy on the logits in the output of the network. Given that the Dirichlet energy is a measure of how variable a function is, the Geometric Complexity is closely linked to many more common ML notions, e.g. regularization and smoothness.\nThe authors argue that keeping a low GC while training the NN model allows for a test performance improvement. This conclusion is drawn from the observation that a variety of well-known Deep Learning heuristics, including initialization, regularization and fine-tuning of the parameters of the learning algorithm, seem to produce models with low GC that generalize well.\nFinally, the authors show that GC can be used to trace the double-descent behavior, in the transition to the over-parametrized regime. 
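\n\nFor reference, the discrete Dirichlet energy in question can be written, in my notation (up to the exact conventions of Def. 2.1), with $f_\theta$ denoting the map from inputs to logits and $D$ the training set, as:\n\n$$\mathrm{GC}(f_\theta, D) = \frac{1}{|D|} \sum_{x \in D} \lVert \nabla_x f_\theta(x) \rVert_F^2$$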
\n\nAFTER REBUTTAL\nThe authors have addressed most of the concerns raised in the review process and clarified and expanded upon their work. Computability is an essential property for any performance metric that is proposed to predict NN performance. The authors argue that the Dirichlet energy, while being conceptually close to other smoothness assessment metrics, is much simpler to estimate, as it is simply defined as an expectation over the training set. Moreover, the theoretical connection with harmonic theory allows for a very simple intuition of the concept of smooth interpolators and might be useful for designing better (more explicit) training algorithms.\nWhat seems to be the main weakness of this work is that the assumption of the authors, of the fundamental importance of GC minimization in training DNNs, is only supported through a limited set of positive observations. The authors go through a range of well-known and functioning deep learning heuristics and show the relationship/correlation with a GC decrease. However, it remains unclear whether this decrease is a sufficient/necessary condition for achieving better performance. What would complete this study is a set of experiments where the GC is explicitly controlled and competitive performance is reached, either with an exact GC minimization (probably too computationally expensive), or with some novel regularization technique that aims at GC minimization.\nAnother weak point of this work is in the presented experiments, involving relatively small models and semi-obsolete datasets (MNIST, CIFAR).\n- Figure 3: The curve related to the L2 norm does not seem to show the expected behavior; can the authors comment on it, or provide a plot where more values are shown in the range of regularization intensity where the GC drops?\n- The accuracies shown in figure 4 seem to be way below the reference performance of a Resnet 18 on CIFAR10. The effect shown in the plot seems very clean, with lower GC and better generalization, but does this apply also close to state-of-the-art performance?\n- Since GC can be explicitly evaluated and the authors argue that many heuristics implicitly minimize it, I think the authors should try a set of experiments where this minimization is done explicitly (even though the required computations may be very expensive). Otherwise, the correlation vs causation question remains open. Is there a reason why such a test was not performed, at least in a small network setting?\n- Is there any other way of using the Dirichlet energy and harmonic theory to guide the learning process?\n- Since the GC is data-dependent, how can it be used in the context of transfer learning and in OOD generalization settings? Could it still provide some useful insight, and would the smoothness vs generalization relationship still hold, even though the network was not trained on that data?\n- Given two different DNN architectures, can their GC still be informative on which will perform best? Can GC be used for model selection?\n I think that the main unaddressed limitation of this work is that the authors do not spend enough time trying to disprove their claim. This could only be achieved by explicitly controlling the GC and finding out whether its minimization is a sufficient drive for effective training." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "18OKVqOyXkj", "GzJVdA7k3f-", "MYktCrQqXe", "C9W-IU3oUwg", "mnzKK-3b4os", "foRwX14gkdU", "V7wRWuHw9xH", "B25vBrdkiKy", "6y6fpiPRv6u", "nips_2022_-ZPeUAJlkEu", "nips_2022_-ZPeUAJlkEu", "nips_2022_-ZPeUAJlkEu", "nips_2022_-ZPeUAJlkEu" ]
nips_2022_-N-OYK2cY7
Multiclass Learnability Beyond the PAC Framework: Universal Rates and Partial Concept Classes
In this paper we study the problem of multiclass classification with a bounded number of different labels $k$, in the realizable setting. We extend the traditional PAC model to a) distribution-dependent learning rates, and b) learning rates under data-dependent assumptions. First, we consider the universal learning setting (Bousquet, Hanneke, Moran, van Handel and Yehudayoff, STOC'21), for which we provide a complete characterization of the achievable learning rates that holds for every fixed distribution. In particular, we show the following trichotomy: for any concept class, the optimal learning rate is either exponential, linear or arbitrarily slow. Additionally, we provide complexity measures of the underlying hypothesis class that characterize when these rates occur. Second, we consider the problem of multiclass classification with structured data (such as data lying on a low dimensional manifold or satisfying margin conditions), a setting which is captured by partial concept classes (Alon, Hanneke, Holzman and Moran, FOCS'21). Partial concepts are functions that can be undefined in certain parts of the input space. We extend the traditional PAC learnability of total concept classes to partial concept classes in the multiclass setting and investigate differences between partial and total concepts.
Accept
This work is an extension of the theories of partial concept classes and the universal learning framework to multi-class classification tasks. The reviewers have found the work well-rounded and correct, and of substantial interest to the learning theory sub-community at NeurIPS. A drawback is that this submission might appear as a natural follow-up to previously established results.
train
[ "Zn0Kzf-7aJ", "pRZzW7XzVU", "eQYRKk8IX3W", "xssRoNtHchJ", "xIMe95M4VJd", "RynUY4pyKWV", "-HuGJfrOnA", "G91m2mrzi7z", "zYacsOVctZQ", "ztB1YW3s2XE", "MIgl4PBXcNQ", "agsJuVFU_cz", "U9ek9M0M6S-" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thanks the authors for their detailed response. I suggest that the authors include the aforementioned technical challenges in the final version of their paper.", " I am satisfied with the response of the authors. Regardless of the decision on this submission, I think the community will benefit from an expanded version of the work with no page limit, a smoother presentation, and a discussion of the connection between the two theories. Thank you for the work you have done.", " > *In the theory of partial classes, (yet) there is no such simple and clear principle as ERM (This is more about the original work of Alon et al. (2021), but nonetheless). I understand that (for example) in applied deep learning, SGD is not an arbitrary ERM. But after all, SGD is ubiquitous and extremely successful approximate ERM in deep learning. If this theory claims to be an extension of the classical PAC theory explaining modern learning phenomena, I would like to have a simple and understandable learning principle (rather than resorting to improper learning with a transductive algorithm).*\n\nThis is an exciting research question. We totally agree that it is required to develop algorithmic tools and principles that work with partial concepts and, in general, bridge theory with learning phenomena observed in practice. We underline that our results are very general and hold for any class, without assuming any particular structure on it. As a result, the algorithms we propose are abstract. We believe that an interesting starting point for obtaining simple algorithms for partial classes would be to understand the effect of disambiguation of such classes to total concept classes. This may shed light towards the design of algorithmic principles for partial concept classes. This in turn can lead to more natural algorithms for the universal setting, given the important connection between these two settings that we described in our answer above (part 1 of our response).\n \n> *Have you tried to find a less obfuscated partial class to prove Theorem 16? Your class relies on the class from Alon et al.(2021), which in turn relies on the superpolynomial separation between the chromatic number and the biclique partition number, which in turn is based on a series of 4 reductions in graph theory and complexity theory …*\n\nThis is a very good question. Indeed we have tried various (geometric) binary hypothesis classes and they do not seem to obtain such a disambiguation result. Our approach was to consider a total hypothesis class with infinite VC dimension and then tried to construct a partial restriction of this class with small VC dimension so that any extension of the partial concepts will make the VC dimension unbounded. However, we have not yet managed to make it work. This is a task for future work. To this end, in order to obtain our disambiguation result for the multiclass setting we had to use a tensorization trick and build on the highly non-trivial construction of Alon et al. (2021). In order to make this work, we have to observe that the growth function version of the SSP lemma holds even for partial concept classes (the combinatorial SSP lemma fails, as indicated by Alon et al. (2021)).", " We thank the reviewer for finding our work solid and interesting, and for providing positive and constructive feedback.\n\n> *I am most concerned about the presentation of the material: the density of the narrative is highly heterogeneous throughout the text of the paper. 
The introduction is well-written, and the motivation of the two proposed theories is immediately clear to the reader. The part about the universal theory of learning is already denser, and by the end of this part it is already difficult to understand what is at stake without looking into the Appendix and googling. The part about partial classes is the most dense, and I personally had to familiarize myself with the binary case first (Alon et al., FOCS 2021) before I could understand anything in this part. My impression is that this submission is an attempt to compress a journal article to the size of a conference paper, and the compression was done very unevenly.*\n\nThank you for your constructive feedback. We would like to underline that we preferred to compress the presentation of the partial concepts section due to space constraints. In the first revision of our work, we will make the presentation more balanced and detailed as the reviewer proposes. Moreover, if the paper gets accepted, we will dedicate the extra space of the camera-ready version to providing more intuition about partial concept classes and moving some of the material that appears in the appendix to the main body.\n\n> *Although both theories try to fill the gap between the theory and practice of machine learning, I honestly did not understand the connection of one theory with another. More precisely, I can imagine a setup where both theories are combined, but this will be a separate framework.*\n\nThere is an interesting and intrinsic connection between these two settings that we did not highlight in the paper: in the universal learning setting, the first step of the approach is to use the data to simulate some Gale-Stewart games and show that, with high probability, most of them will have “converged”, i.e., the function that corresponds to the learning strategy of the learner will be correct. In turn, this defines some **data-dependent** constraints. For instance, assume that $g$ is a successful NL pattern avoidance function, i.e., a function which takes as input any $\ell$ points $x_1,\ldots,x_\ell$ and any two (everywhere different) mappings $s^{(0)}, s^{(1)}$ from points to labels and returns an *invalid* pattern, i.e., a binary pattern $y$ of length $\ell$ that is not compatible with the definition of the Natarajan dimension (i.e., there is no function $h \in H$ s.t. if $y_i = 1$ then $h(x_i) = s^{(1)}(x_i)$ and if $y_i = 0$ then $h(x_i) = s^{(0)}(x_i)$, for all $i \in [\ell]$). Then, we can define a **partial** concept class $H’$, the set of all functions from $X$ to $\{1,\ldots,k,\star\}$ that satisfy the constraint of this pattern avoidance function, and it has two important properties: its Natarajan dimension is bounded by $\ell$ and a learning algorithm for $H’$ also learns $H$. Hence, understanding the learnability of partial concept classes is an essential step in coming up with more natural learning strategies in the universal learning setting. We will underline this connection in the next version of our manuscript.\n\nMoreover, a unifying message of both of these settings is that going beyond the traditional PAC learning framework is essential to understanding the behavior of learning algorithms in practice. Importantly, in both of these settings ERM is not an optimal learning algorithm and the one-inclusion graph predictor is an essential part in deriving results in both theories. 
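\n\nIn display form, the induced partial class reads (a compact restatement of the construction above; the quantifier structure is spelled out for clarity):\n\n$$H' = \bigl\{ h : X \to \{1,\ldots,k,\star\} \;:\; \forall x_1,\ldots,x_\ell,\; \forall s^{(0)}, s^{(1)},\; \exists i \in [\ell] \text{ such that } h(x_i) \neq s^{(y_i)}(x_i), \text{ where } y = g(x_1,\ldots,x_\ell, s^{(0)}, s^{(1)}) \bigr\}$$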
\n\n\n", " > *Since some characterizations of hypotheses may be very intractable when we extend the binary setting to the multiclass setting. For example, we can exactly calculate the VC dimension of linear binary classifiers, yet we can only bound the order of the Natarajan dimension of linear multiclass classifiers. As a result, I am wondering about the practicality of the proposed definitions. Specifically, is it tractable to calculate the Natarajan/multiclass Littlestone tree of some common multiclass hypothesis classes, e.g. the linear multiclass classifier class?*\n\nThis is an interesting point. Let us define the class of linear classifiers with $k$ labels in $\mathbb{R}^d$ as $L = \{ h_W(x) = \arg\max_i (Wx)_i \mid W \in \mathbb{R}^{k \times d} \}$. \nWe first remark that the Natarajan dimension of $L$ is $\widetilde{\Theta}(kd)$ as noted for instance in [DSBDSS’11] and so for fixed $k,d$ this is a Natarajan class.\nLet us discuss the complexity of learning this class in the universal setting. It is important to note that our characterization of universal multiclass learning depends only on whether the two proposed trees are infinite or not. In particular, we note that class $L$ has an infinite multiclass Littlestone tree and a finite Natarajan tree (since it can only shatter a finite number of points as a Natarajan class). \nWe also remark that if we consider the class of linear multiclass classifiers over $\mathbb{N}^d$ (a discrete geometric space), then this class does not even have an infinite multiclass Littlestone tree and hence is learnable at an exponentially fast rate. To see this, one can use the one-versus-one reduction and observe that any classifier $h_W$ corresponds to a collection of $\binom{k}{2}$ binary classifiers which are halfspaces in $\mathbb{N}^d$, where $h_W^{i,j}(x) = \text{sgn}((W_i - W_j) x)$ for any $i < j$. Then, due to realizability, one can use an argument from infinite Ramsey theory (see Example 2.10 from [BHM+21]) and prove that after a finite number of mistakes, the learner can detect the correct halfspace for any pair $i<j$. Aggregating these predictors, we get a multiclass linear classifier that enjoys an exponentially fast convergence rate. We will add this example to the next revision of our manuscript.\nIn general, we agree with the reviewer that it requires some work for general hypothesis classes in order to understand their exact universal convergence rate. Obtaining tools for easily understanding whether a class has a finite “tree” is an interesting and important research direction.\n", " (continuing our previous response)\n\n* It is not clear to us how to prove the arbitrarily slow rates lower bound if one does not use the specific structure of the Natarajan-Littlestone tree that we have defined. Another natural candidate would be the Graph-Littlestone tree. We kindly refer to line 1451 of our paper for the exact definition. The issue with this combinatorial measure is that, if one follows the approach from [BHM+21] to derive the lower bound, it is not clear how to ensure the realizability of the distribution. To be a bit more precise, after we have picked the target path and we consider a node at depth $d$, then we know that there is some $f$ so that if the target path assigns 1 to $(s, x_i)$ then $f(x_i)=s(x_i)$, and if it assigns 0 then $f(x_i) \neq s(x_i)$. Similarly, at depth $d+1$ there is some $f’$ that satisfies the same properties. 
The issue is that if some $x_j$ is assigned the value $0$, we merely know that $f(x_j) \neq s(x_j)$ and $f’(x_j) \neq s(x_j)$, so there is no way to guarantee that $f(x_j) = f’(x_j)$. Thus, it is not clear that using the classical tricks from the uniform setting to reduce multiclass classification to binary classification works in the universal setting. We are aware of the results which show that all these dimensions are essentially the same up to $\log k$ factors. However, it is not clear to us how one can use these techniques to show that the Graph-Littlestone tree is finite if and only if the Natarajan-Littlestone tree is finite. The main issue is that we only know that these trees are finite and we have no control over their depth. Thus, one needs to extend these proofs to handle ordinals that are less than $\Omega$. We believe that this is an important question and we hope that our work will lead to future research on whether these tree-based structures are also equivalent. We kindly refer to Open Question 1 at Line 1489 of our paper. \nTo summarize, we have spent quite some time trying to establish our bounds for other dimensions. We have an extensive discussion in the Appendix where we introduce the Graph-Littlestone tree and we show that if a class has only finite such trees, then it is learnable at a linear rate. The situation is similar with the pseudo-dimension.\n\nIn the setting of partial concept classes, our main contribution is a proof that an alternative version of the SSP lemma, which bounds the growth function, holds in this setting. In contrast, [AHHM21] showed that the traditional version of the lemma (the “combinatorial” one) does not hold for partial concept classes. We remark that these two versions are equivalent for total concept classes. This lemma allows us to “tensorize” the disambiguation lower bound that was proved in [AHHM21] for binary classes and establish it for the multiclass setting. The proof of the learning algorithm for partial concept classes in the multiclass setting follows similarly to the one in [AHHM21] and we have stated it in the paper for completeness of our characterization. \n", " We thank the reviewer for appreciating our contribution and for providing positive feedback and questions.\n\n> *As a theoretical work, some of the extensions in the paper just follow the standard idea of generalizing definitions from the binary case to the multiclass case (For example, the extension from the Littlestone dimension to the multiclass Littlestone dimension is similar to the standard extension from the VC dimension to the Natarajan dimension. It is similar for the definition of the NL tree) and the main results are very similar to those in [1] and [2], hence I have some concerns regarding the technical novelty of this paper. I would appreciate it if the authors can list out some techniques that do not appear in [1] or [2], except for the old-fashioned generalization trick mentioned above. I may adjust the rating according to the quality of the list.*\n\nWe wish to start with the technical challenges concerning the universal multiclass classification setting:\n* The first natural idea that we tried was to reduce the problem to the binary setting, i.e., use the algorithms from [BHM+21] as a black box to derive algorithms for the multiclass case. However, this approach introduces various technical challenges when dealing with the induced binary hypothesis classes. We discuss these challenges in detail for the exponential rates case. 
First, it was not clear to us how to prove that if the original class $H$ does not have an infinite multiclass Littlestone tree, then each of the induced binary classes satisfies this property as well. We kindly refer to Lines 290-298 of our work.\n\n* As a result, we developed new algorithms from scratch. There were several technical challenges that we had to overcome. The straightforward extension of the Gale-Stewart games that appear in [BHM+21], i.e., where the adversary presents a point $x$ and the learner picks a label in $[k]$ (imitating the online learning game), does not seem to work. Therefore, we propose a more involved Gale-Stewart game where the adversary proposes both a point $x$ and two potential labels for it, and the learner has to commit to one of the two. This game allows us to obtain tight upper and lower bounds that depend on the finiteness of the multiclass Littlestone tree. In the next step of the approach, i.e., in the transition from the online setting to the statistical setting, one needs to show how to simulate this Gale-Stewart game using the data that the learner has access to. Since our Gale-Stewart game is more convoluted than the one in [BHM+21], simulating it using data becomes more challenging. \n\n* In the case of linear rates these challenges are even more technically involved, and this is related to the fact that the Natarajan-Littlestone tree has a more complicated structure than the VCL tree (for example, we need to check all the possible mappings from some given points to labels). The Gale-Stewart game that handles the case of linear rates goes as follows: the adversary presents to the learner a tuple of points $x$, and two (different) colorings for these points. As before, we could not use a simpler game to obtain the result. Subsequently, as in the exponential rates case, simulating this game using data becomes more complicated than in [BHM+21]. We refer to Section 2 of our manuscript for a concrete presentation of our approach.", " > *The authors should discuss why they do not consider other dimensions such as the Graph dimension, the Pseudo-dimension, etc.*\n\nWe kindly refer to the above part of our response.\n\n> *What are the obstacles for obtaining the high probability versions of the bounds for the universal learning problems?*\n\nWe highlight that we focus on the behavior of the expected misclassification error as the number of samples increases (learning curve), following [BHM+21], in order to make the result cleaner and easier for the reader to compare to the uniform setting. In the case of exponential rates, high-probability bounds can be obtained directly from our approach. Essentially, we prove that a specific “good” event happens with high probability, and under this event we get exponentially small error. For the linear rate case, one would have to use high-probability bounds for the error of the one-inclusion hypergraph predictor and follow our approach to derive a high-probability bound in this setting.\n\n> *Is the dependence of your results on the number of classes optimal? In light of recent results for infinite classes in the characterization of multiclass learnability by Brukhim, Carmon, Dinur, Moran, and Yehudayoff, how can one extend the results to infinite classes?*\n\nThis is a very interesting question. In our work we focus on the setting of a bounded number of classes. We present our results with a focus on the dependence on the number of samples. 
The case of exponential rates depends only on the finiteness of the $k$-multiclass Littlestone tree and the rate does not have an explicit dependence on $k$. The number of different classes implicitly affects the finiteness of this tree. As a result, our bounds hold even when one deals with an infinite number of labels, provided that the induced Littlestone tree is of finite depth. For the linear rates case, we make use of the one-inclusion hypergraph algorithm, hence the rate we obtain is of order $\log(k)/n$, so it explicitly depends on the number of labels. We underline that this result holds only if the number of labels is finite. It is a very interesting direction for future work to obtain the linear-rate characterization in the case where the number of labels is infinite. We believe that one needs to define a tree-like version of the DS dimension and adapt the technique of list learning that appeared in [BCD+22] to the universal setting. \n\n> *For partial concept classes, in prop 2 the authors provide a sufficient condition for success of ERM. How about the necessary conditions?*\n\nThis is a very interesting question as well. We underline that Proposition 2 provides a necessary and sufficient condition for ERM learnability with a (total) class over $k+1$ different labels. This is because there is an equivalence between finite Natarajan dimension classes with a bounded number of labels and ERM learnability. To characterize ERM learnability in the partial concept setting, one needs to provide necessary and sufficient conditions for disambiguation of a partial concept class with finite Natarajan dimension to a total class with finite Natarajan dimension. This question is very challenging even in the binary setting. \n\n> *In the partial concept classes paper, the authors also present results for the agnostic case. What are the obstacles for proving such a result in the multi-class setting?*\n\nIn fact, there are no obstacles to extending our partial concept results to the agnostic case. We chose not to deal with this case for coherence of presentation (in the whole paper we deal with realizable distributions). We will comment on this point in a first revision of our work. \n", " > *The authors should discuss why dimensions other than the Natarajan dimension, such as the graph dimension, do not work for these problems.*\n\nThis is an important question. We indeed tried to establish our bounds for other dimensions too. We have an extensive discussion in the Appendix where we introduce the Graph-Littlestone tree and we show that if a class has only finite such trees, then it is learnable at a linear rate. However, it is not clear to us how we can obtain the lower bound when there is an infinite such tree. The main technical difficulty is that if one follows the construction of the lower bound we currently use, it is not clear how to establish that the distribution that is defined is realizable. The situation is similar with the pseudo-dimension. We are aware of the results which show that, in the uniform setting, all these dimensions are the same up to $\log k$ factors. However, the proof techniques that were used do not seem to help us establish a result of the form “the Natarajan-Littlestone tree is finite if and only if the Graph-Littlestone tree is finite”. The main issue is that we only know that these trees are finite and we have no control over their depth. Thus, one needs to extend these proofs to handle ordinals that are less than $\Omega$. 
We believe that this is an important question and we hope that our work will lead to future research on whether these tree-based structures are also equivalent. We kindly refer to Open Question 1 at Line 1489 of our paper.\n", " We would like to thank the reviewer for finding our work interesting and for pointing out various comments and questions. \n\n> *I think the authors should discuss in more detail the new ideas and techniques beyond the existing ones that they use in the paper. I have checked the proof of the learning algorithm for the partial concept classes, and it follows very closely from the existing results. Therefore, I think 1) the authors should highlight the technical challenges for extending the existing techniques.*\n\nWe wish to start with the technical challenges concerning the universal multiclass classification setting:\nThe first natural idea that we tried was to reduce the problem to the binary setting, i.e., use the algorithms from [BHM+21] as a black box to derive algorithms for the multiclass case. However, this approach introduces various technical challenges when dealing with the induced binary hypothesis classes. We discuss these challenges in detail for the exponential rates case. \n\nFirst, it was not clear to us how to prove that if the original class $H$ does not have an infinite multiclass Littlestone tree, then each of the induced binary classes satisfies this property as well. We kindly refer to Lines 290-298 of our work. As a result, we developed new algorithms from scratch. There were several technical challenges that we had to overcome. The straightforward extension of the Gale-Stewart games that appear in [BHM+21], i.e., where the adversary presents a point $x$ and the learner picks a label in $[k]$ (imitating the online learning game), does not seem to work. Therefore, we propose a more involved Gale-Stewart game where the adversary proposes both a point $x$ and two potential labels for it, and the learner has to commit to one of the two. This game allows us to obtain tight upper and lower bounds that depend on the finiteness of the multiclass Littlestone tree. In the next step of the approach, i.e., in the transition from the online setting to the statistical setting, one needs to show how to simulate this Gale-Stewart game using the data that the learner has access to. Since our Gale-Stewart game is more complex than the one in [BHM+21], simulating it using data becomes more challenging.\n\nIn the case of linear rates these challenges are even more technically involved, and this is related to the fact that the Natarajan-Littlestone tree has a more complicated structure than the VCL tree (for example, we need to check all the possible mappings from some given points to labels). The Gale-Stewart game that handles the case of linear rates goes as follows: the adversary presents to the learner a tuple of points $x$, and two (different) colorings for these points. As before, we could not use a simpler game to obtain the result. Subsequently, as in the exponential rates case, simulating this game using data becomes more complicated than in [BHM+21]. We refer to Section 2 of our manuscript for a concrete presentation of our approach.\nProving the arbitrarily slow rates lower bound required some extra care compared to the VCL tree in order to guarantee that the designed distribution is realizable. 
We kindly refer to the response to your question regarding other dimensions for further details (part 2 of our response).\n\n\nIn the setting of partial concept classes, our main contribution is a proof that an alternative version of the SSP lemma, which bounds the growth function, holds in this setting. In contrast, [AHHM21] showed that the traditional version of the lemma (the \"combinatorial\" one) does not hold for partial concept classes. We remark that these two versions are equivalent for total concept classes. This lemma allows us to \"tensorize\" the disambiguation lower bound that was proved in [AHHM21] for binary classes and establish it for the multiclass setting. Indeed, as you correctly point out, the proof of the learning algorithm for partial concept classes in the multiclass setting follows similarly to the one in [AHHM21] and we have stated it in the paper for completeness of our characterization. We will explicitly emphasize that this proof is included for completeness of presentation, in a first revision.\n", " This paper considers an extension of the work by Bousquet, Hanneke, Moran, Van Handel, and Yehudayoff on universal learning, and the work by Alon, Hanneke, Holzman, and Moran on partial concept classes, to the multiclass classification problem. \nIn the first part of the paper, the authors consider the same setting as in the universal learning paper, and aim at finding the distribution-dependent rates for the decay of the risk. Similar to the universal learning paper, they show that a trichotomy occurs. More precisely, they show that by replacing the VC dimension with the Natarajan dimension in the universal learning paper, one can prove similar results.\nIn the second part of the paper, the authors consider the problem of partial concept classes. In this section, the authors, using recent results on the performance of the one-inclusion graph algorithm for multi-class problems, propose a learning algorithm for partial concept classes. More precisely, they show that learnability in this setting can be characterized by a natural extension of the Natarajan dimension to partial concept classes.\n In general, I find this paper very interesting. To me, it seems that the authors follow very closely the results in “A Theory of Universal Learning” and “A Theory of PAC Learnability of Partial Concept Classes” to obtain the results. I think this does not affect the quality of the paper. Having said that, I think the authors should discuss in more detail the new ideas and techniques beyond the existing ones that they use in the paper. I have checked the proof of the learning algorithm for the partial concept classes, and it follows very closely from the existing results. Therefore, I think 1) the authors should highlight the technical challenges for extending the existing techniques, 2) the authors should discuss why dimensions other than the Natarajan dimension, such as the graph dimension, do not work for these problems.\n 1- The authors should discuss why they do not consider other dimensions such as the Graph dimension, the Pseudo-dimension, etc. \n\n2- What are the obstacles for obtaining the high probability versions of the bounds for the universal learning problems?\n\n3- Is the dependence of your results on the number of classes optimal? 
In light of recent results for infinite classes in the characterization of multiclass learnability by Brukhim, Carmon, Dinur, Moran, and Yehudayoff, how can one extend the results to infinite classes?\n\n4- For partial concept classes, in prop 2 the authors provide a sufficient condition for success of ERM. How about the necessary condition? \n\n5- In the partial concept classes paper, the authors also present results for the agnostic case. What are the obstacles for proving such a result in the multi-class setting?\n \n The authors provide a list of open problems and future directions.", " This paper studies the problem of multiclass classification with a bounded number of different labels. It mainly extends the main results of [1] and [2] to the multiclass case.\n\nFirstly, the paper provides a complete characterization of the achievable learning rates that holds for every fixed distribution in the universal learning setting, and shows that for any concept class, the optimal learning rate is either exponential, linear or arbitrarily slow.\n\nSecondly, the paper considers structured data, which is captured by the partial concept classes proposed by [2].\n\n[1] Olivier Bousquet, Steve Hanneke, Shay Moran, Ramon Van Handel, and Amir Yehudayoff. A theory of universal learning. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 532–541, 2021.\n[2] Noga Alon, Steve Hanneke, Ron Holzman, and Shay Moran. A theory of PAC learnability of partial concept classes. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 658–671. IEEE, 2022. Strengths:\n1. This paper extends the main works of [1] and [2] to the multi-class classification case. The overall work makes a good contribution by introducing new frameworks beyond the PAC framework in the multiclass setting.\n2. I do like the definition of the Natarajan-Littlestone tree, which is used to generalize the main results of the universal learning framework to the multiclass setting.\n3. This paper provides a technique overview and proof sketches to help the reader understand the main tools and techniques used in this paper.\n\nWeaknesses:\n1. Some proofs of the main results, e.g. the results in Section 1.3, seem to be an adaptation of those in [2], except for the difference between the binary setting and the multi-class setting. Hence I have some concerns regarding the novelty of this paper.\n\n\n[1] Olivier Bousquet, Steve Hanneke, Shay Moran, Ramon Van Handel, and Amir Yehudayoff. A theory of universal learning. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 532–541, 2021.\n\n[2] Noga Alon, Steve Hanneke, Ron Holzman, and Shay Moran. A theory of PAC learnability of partial concept classes. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 658–671. IEEE, 2022. 1. As a theoretical work, some of the extensions in the paper just follow the standard idea of generalizing definitions from the binary case to the multiclass case (For example, the extension from the Littlestone dimension to the multiclass Littlestone dimension is similar to the standard extension from the VC dimension to the Natarajan dimension. It is similar for the definition of the NL tree) and the main results are very similar to those in [1] and [2], hence I have some concerns regarding the technical novelty of this paper. I would appreciate it if the authors can list out some techniques that do not appear in [1] or [2], except for the old-fashioned generalization trick mentioned above. 
I may adjust the rating according to the quality of the list.\n2. Since some characterizations of hypotheses may be very intractable when we extend the binary setting to the multiclass setting. For example, we can exactly calculate the VC dimension of linear binary classifiers, yet we can only bound the order of the Natarajan dimension of linear multiclass classifiers. As a result, I am wondering about the practicality of the proposed definitions. Specifically, is it tractable to calculate the Natarajan/multiclass Littlestone tree of some common multiclass hypothesis classes, e.g. the linear multiclass classifier class? \n\n\n[1] Olivier Bousquet, Steve Hanneke, Shay Moran, Ramon Van Handel, and Amir Yehudayoff. A theory of universal learning. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 532–541, 2021.\n[2] Noga Alon, Steve Hanneke, Ron Holzman, and Shay Moran. A theory of PAC learnability of partial concept classes. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 658–671. IEEE, 2022. N/A", " The authors relax multiclass PAC learning in two directions:\n1) Allow the sample complexity to depend on the distribution (whereas in classical PAC learning the sample complexity shall hold for _all_ distributions over the domain).\n2) Allow hypotheses to be partial functions, i.e. not be defined everywhere in the domain (in contrast to classical PAC, where hypotheses must always be total functions).\n\nIn direction 1), which the authors call the theory of _universal_ learning, the work of [Bousquet et al. (STOC 2021)](https://dl.acm.org/doi/abs/10.1145/3406325.3451087) is generalized from binary to multiclass classification, namely:\n- The learning rate trichotomy is shown: learning occurs either at an exponential or linear or arbitrarily slow rate (in sample size).\n- A characterization of learnability in terms of Littlestone and Natarajan-Littlestone trees is given.\n- The optimal algorithms for each of the learning rates are presented; notably, they are not ERMs.\n\nIn direction 2), which the authors refer to as the PAC theory of learnability of _partial_ concept classes, the authors generalize the work of [Alon et al. (FOCS 2021)](https://ieeexplore.ieee.org/abstract/document/9719837), again from the binary case to the multiclass one:\n- The definition of the Natarajan dimension in the case of partial classes is introduced, and through it a characterization of the learnability of partial classes and (lower & upper) bounds on sample complexity are given.\n- It is shown that the learnability of a partial class $\mathbb{H}$ does not imply the learnability of a total class $\bar{\mathbb{H}}$ that _disambiguates_ $\mathbb{H}$.\n- The connection between $k$-class partial learnability and $(k + 1)$-class total learnability (when the value \"undefined\" is treated as an additional label) is established. *Strengths*\n\n- A solid work that generalizes the two recent theories from the binary case to the multiclass one. Notably, these two recent theories were built in an attempt to bridge the gap between learning theory and applied machine/deep learning practices.\n\n*Weaknesses*\n\n- I am most concerned about the presentation of the material: the density of the narrative is highly heterogeneous throughout the text of the paper. 
The part about the universal theory of learning is already denser, and by the end of this part it is already difficult to understand what is at stake without looking into the Appendix and googling. The part about partial classes is the most dense, and I personally had to familiarize myself with the binary case first (Alon et al., FOCS 2021) before I could understand anything in this part. My impression is that this submission is an attempt to compress a journal article to the size of a conference paper, and the compression was done very unevenly.\n- Although both theories try to fill the gap between the theory and practice of machine learning, I honestly did not understand the connection of one theory with another. More precisely, I can imagine a setup where both theories are combined, but this will be a separate framework.\n- In the theory of partial classes, there is (yet) no such simple and clear principle as ERM (This is more about the original work of Alon et al. (2021), but nonetheless). I understand that (for example) in applied deep learning, SGD is not an arbitrary ERM. But after all, SGD is a ubiquitous and extremely successful _approximate_ ERM in deep learning. If this theory claims to be an extension of the classical PAC theory explaining modern learning phenomena, I would like to have a simple and understandable learning principle (rather than resorting to improper learning with a transductive algorithm). Have you tried to find a less obfuscated partial class to prove Theorem 16? Your class relies on the class from Alon et al. (2021), which in turn relies on the superpolynomial separation between the chromatic number and the biclique partition number, which in turn is based on a series of 4 reductions in graph theory and complexity theory ... Understanding the paper requires familiarity with the two previous works on which it is based (and beyond)." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "G91m2mrzi7z", "eQYRKk8IX3W", "xssRoNtHchJ", "U9ek9M0M6S-", "RynUY4pyKWV", "-HuGJfrOnA", "agsJuVFU_cz", "zYacsOVctZQ", "ztB1YW3s2XE", "MIgl4PBXcNQ", "nips_2022_-N-OYK2cY7", "nips_2022_-N-OYK2cY7", "nips_2022_-N-OYK2cY7" ]
nips_2022_RjS0j6tsSrf
Diagonal State Spaces are as Effective as Structured State Spaces
Modeling long range dependencies in sequential data is a fundamental step towards attaining human-level performance in many modalities such as text, vision, audio and video. While attention-based models are a popular and effective choice in modeling short-range interactions, their performance on tasks requiring long range reasoning has been largely inadequate. In an exciting result, Gu et al. (ICLR 2022) proposed the $\textit{Structured State Space}$ (S4) architecture delivering large gains over state-of-the-art models on several long-range tasks across various modalities. The core proposition of S4 is the parameterization of state matrices via a diagonal plus low rank structure, allowing efficient computation. In this work, we show that one can match the performance of S4 even without the low rank correction and thus assuming the state matrices to be diagonal. Our $\textit{Diagonal State Space}$ (DSS) model matches the performance of S4 on Long Range Arena tasks and on speech classification on the Speech Commands dataset, while being conceptually simpler and straightforward to implement.
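\n\nFor illustration, a generic sketch of the convolution kernel of a diagonal state space model (schematic only; the exact DSS parameterization in the paper differs, e.g. in how the kernel is normalized):\n\n```python\nimport torch\n\ndef diagonal_ssm_kernel(Lambda, W, L, step=1.0):\n    # Kernel of a linear SSM with state matrix diag(Lambda): Lambda is a\n    # complex tensor of shape (N,), W holds complex output weights of shape\n    # (H, N), and both are assumed to share the same complex dtype.\n    pos = torch.arange(L, dtype=torch.float32)      # kernel positions 0..L-1\n    vand = torch.exp(step * Lambda[:, None] * pos)  # (N, L) Vandermonde-like matrix\n    return (W @ vand).real                          # mix N states into H channels\n\n# The input sequence u is then typically processed by FFT-based convolution,\n# e.g. y = torch.fft.irfft(torch.fft.rfft(kernel, n=2*L) * torch.fft.rfft(u, n=2*L), n=2*L)[..., :L].\n```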
Accept
This paper proposes a simpler alternative to S4 that achieves comparable performance. The method makes sense and the experiments are thorough. All reviewers agreed this is a good paper. I recommend acceptance.
train
[ "TIwx0cIIzm", "FRvkviIAAg", "NmFaLWbejf", "L_GiskA0Iq", "KF0tXNcEKz", "slkqtyzk2MI", "QPXWD0WKCDT", "dyBxYbIq243", "LluHel8dF1b", "Gwo-7YUrZWz" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for all the helpful suggestions and for increasing the score!!\n\nAs per your suggestion we have added the benchmaking results in Section A.4 in the attached updated version.", " Thanks for benchmarking the speed-up. I believe it would be a useful addition to the paper from an empirical perspective. I am happy to increase my original score.", " **Benchmarking speedups** : For S4, we used the official S4 implementation provided by its authors. Both S4 and DSS-EXP implementations below are based on the [pykeops library](https://www.kernel-operations.io/keops/index.html) for efficiency and benchmarking was performed on a single Nvidia 3090 GPU. \n1. On the Path-X task with input length L=16384, the following are the times on the config provided in the paper (Table 4). For S4 and DSS-EXP all model hyper-parameters are identical.\n - Batch size 1 : DSS-EXP: 72sec/1000steps S4: 133sec/1000steps\n - Batch size 16 : DSS-EXP: 40sec/100steps S4: 47sec/100steps \nFor large batch sizes the time taken to perform fft-based-convolution and feedforward part dominates and hence the speedups of DSS are less pronounced. Whereas for smaller batches the time taken to compute the kernel becomes significant and **DSS-EXP provides ~1.8x speedup over S4**. \n\n2. We also isolated and benchmarked the time taken to solely compute the S4 kernel vs DSS-EXP kernel (+ perform a backward pass on the sum of all kernel entries). We used H=256, N=64 and varied L. **For large L, DSS-EXP kernel is >2.3x faster than S4 kernel**.\n| time (msec) | L = 4096 | L = 16384 | L = 65536 | \n| ----------- | ----------- | ----------- | ----------- |\n| S4 | 4.0 | 10.7 | 39.8 |\n| DSS-EXP | 4.0 | 6.7 | 17.6 | \n \n \nIn line 87, we now clarify that this idea is similar to S4 as per your suggestion.", " Thanks for responding to each of my comments in detail. Here is my follow-up:\n\n1. I agree that the simplifications in the paper make the theory much more accessible to ML practitioners. However, from an empirical perspective, I still recommend adding results pertaining to any training speed-ups that may be obtained using DSS over the original S4 model.\n\n2. That makes a lot of sense, and I believe it follows from the Lebesgue measure of matrices that are not diagonalizable. A citation would be helpful to convince readers of the validity of this assumption.\n\n3. No further comments.\n\n4. Again, everything the authors have mentioned is true, but it says more about the requirements of the task than the capacity of the model. A transformer, for example, would also perform better with longer context than truncated context, and show similar trends for the different tasks. If the objective here is to show that DSS is better at capturing long range context, perhaps a better comparison would be to show, for increasing values of context, how DSS compares with a Transformer model. If the difference in performance would grow larger as the context size increases, it would conclusively suggest that DSS is better at capturing long range contexts. If computational resources are a concern, perhaps this experiment can be done on a subset of tasks.\n\n5. No further comments.\n\n6. Perhaps it would be useful to make this clarification (through a footnote, for example), to avoid misplaced credit attribution.\n\n7. No further comments.\n\n\n", " We thank the reviewer for the encouraging review and several helpful comments! Our response to the reviewer’s comments are as follows:\n\n1. \"DLPR\" : fixed (updated submission attached). 
\n2. With PyTorch broadcasting, some squeeze/unsqueeze ops can be omitted. \n3. Algorithm 1: You’re right; as shown in the code in the Appendix, there is an outer product between $\Delta$ and $\Lambda$, but Algorithm 1 is shown for H=1 (i.e. $\Delta \in \mathbb{R}$) and hence an element-wise product suffices.\n4. C is *not* the same as W_out and they serve different purposes. C is for projecting the N-dimensional states of an implicit state space to a 1-D output y. On the other hand, W_out is used for (position-wise) mixing the H outputs of H independent 1-D inputs. It is possible to have H=1 and N=64. W_out is HxH; there’s a C of size N for each of the H coordinates, so HxN overall. \n5. “/” represents element-wise division. \n6. HiPPO initialization: Yes, indeed, our proposed “Skew-Hippo” initialization certainly derives from the HiPPO theory, but we agree we do not make it clear in the paper why it works. After our work, this question has been answered in a follow-up work that details, with mathematical rigor, the connection of the Skew-Hippo initialization with the original HiPPO operators. We have noted this in the **anonymized footnote 5** in the attached updated version.\n7. Line 192: fixed.\n8. Citation style: NeurIPS policy *allows* the “alpha” bibliography style. E.g., the following paper from the NeurIPS 2021 proceedings uses this style: https://papers.nips.cc/paper/2021/file/003dd617c12d444ff9c80f717c3fa982-Paper.pdf (we or our work is in no way related to this paper).\n9. \"cf.\": fixed. \n10. \"Provably\": the usage is intended to say that “it can be proved” that DSS are as expressive as general SS.", " We thank the reviewer for the positive review and helpful comments! Our responses to the reviewer’s comments are as follows:\n\n1. We apologize for the lack of a proof sketch - we have **included a quick sketch of the proof idea** in the main text as per your suggestion (lines 92-95, footnote 2 in the attached updated submission). \n\n2. Discretization: added a citation.\n\n3. Indeed, many readers weren’t familiar with FFT-based convolution, so we believe this should be part of the main paper.\n\n4. Proposition 1: We apologize for the confusion and agree that this subtle point requires clarification. We have changed the statement of Proposition 1 to say that it applies to all kernels over $\mathbb{C}$. As in this work we require the inputs u, outputs y and the kernel K to all be over $\mathbb{R}$, we cast the complex-valued kernel to reals in Step 4 of Algorithm 1, page 4, by simply taking its real part. We have **added a footnote 4 clarifying why this casting does not affect the mathematical soundness of our method and claims**. \n\n5. Initialization of A: This is a great question; after our work, it has been answered in a follow-up work that details, with mathematical rigor, the connection of the Skew-Hippo initialization with the original HiPPO operators. We have noted this in the **anonymized footnote 5** in the attached updated version.\n\n6. Figure 5: We apologize for the confusion. In our implementation, we decided to use a separate Delta parameter for the real and imaginary parts of Lambda respectively. I.e., while forming $\Lambda$\*$\Delta$, we formed it as Re($\Lambda$)\*$\Delta$_re + i\*Im($\Lambda$)\*$\Delta$_im instead of Re($\Lambda$)\*$\Delta$ + i\*Im($\Lambda$)\*$\Delta$. We have **added a clarification** in the Figure 5 caption and also in Section A.3 (line 496).\n\n7. 
“larger values modeling long-range dependencies”: this was a typo, fixed.", " We thank the reviewer for the positive review and insightful comments! Our responses to the reviewer’s comments are as follows: \n\n1. We believe that simplifying complex approaches and illuminating the main source of the performance improvement of a complex model is essential to research. E.g., DSS is far more accessible to an average ML practitioner and has a much simpler implementation, requiring only a handful of lines of code. In particular, compared to S4, the reader no longer needs to know the theory and concepts involving (1) Padé approximations of matrix exponentials (Euler, bilinear, etc.), (2) Woodbury-identity reductions to compute a matrix inverse after a low-rank perturbation, and (3) Fourier analysis for computing the SSM kernel efficiently via truncated generating functions. \n*EDIT: We **have benchmarked** the running times of DSS-EXP vs. S4 and the results are provided in the follow-up comment to Reviewer NmPw below.*\n\n2. We say that diagonalizability of state matrices is a mild technical assumption as the set of diagonalizable matrices forms a dense subset of the space of NxN matrices over C. I.e., a random NxN matrix, with entries sampled from any large enough subset of C, would have distinct eigenvalues with probability ~1 and hence would be diagonalizable. As a toy example, consider the Fibonacci series F = (0, 1, 1, 2, 3, 5, ...). This can be computed sequentially via a linear RNN as [F_k, F_{k-1}] = [(1,1), (1,0)].[F_{k-1}, F_{k-2}]. The 2x2 matrix [(1,1), (1,0)] can be diagonalized; its eigenvalues are the golden ratio and its conjugate, and hence all F_k’s can be computed in parallel as F_k = c(a^k - b^k), where a, b are the eigenvalues (a short numerical sketch of this is included after point 4 below).\n\n3. We decided to use the recommended configs provided in the official S4 repo, as the purpose of our work was not to outperform S4 but to show that the impressive performance of S4 can also be achieved via a far simpler model. On the TEXT task, the performance of DSS using the same config as S4 was 76.6 (compared to 75.4 for S4). After our work, researchers have extensively tuned both S4 & DSS-EXP on LRA to improve average test accuracy to ~85 and again found their performance to be within 0.5 points of each other. Some of the changes behind this better performance are the use of GLU non-linearities instead of GELU, a cosine learning-rate schedule with warm-up, no use of Dropout, and the use of weight decay also on the “W” params of DSS. More details can be found in that work, which we have cited in anonymized footnote 4 in the attached updated version.\n\n4. Section 4.1: The purpose of this experiment is to gain insight into the source of the superior performance of S4/DSS compared to Transformer variants. To demonstrate that the better performance of S4/DSS is due to their ability to better capture long-range interactions, we repeated the experiments after restricting the kernel size as described in the paper and indeed observed a significant reduction in performance, suggesting that S4/DSS indeed capture long-range dependencies. This also highlights that the LRA benchmark itself has tasks that require long-range reasoning, as *if a local model already delivers a good performance on a task then it is not a good task* for testing the long-range abilities of a model in the first place. Therefore, we believe it is an informative ablation to include in the paper.
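To make the toy example in point 2 concrete, here is a minimal, self-contained NumPy sketch (our illustration, not code from the submission) of computing all F_k in parallel via the diagonalization described above:

```python
import numpy as np

# Linear-RNN form of the Fibonacci recurrence: s_k = A s_{k-1},
# with state s_k = [F_{k+1}, F_k] and initial state s_0 = [F_1, F_0] = [1, 0].
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Diagonalize A = V diag(lam) V^{-1}; the eigenvalues lam are the golden
# ratio (1 + sqrt(5)) / 2 and its conjugate (1 - sqrt(5)) / 2.
lam, V = np.linalg.eig(A)
c = np.linalg.inv(V) @ np.array([1.0, 0.0])

# Since A^k = V diag(lam**k) V^{-1}, all states are computable in parallel:
# s_k = V (lam**k * c), and F_k is the second coordinate of s_k.
k = np.arange(10)
S = (lam[None, :] ** k[:, None] * c[None, :]) @ V.T
print(np.rint(S[:, 1]).astype(int))  # [ 0  1  1  2  3  5  8 13 21 34]
```

The same trick of absorbing V and V^{-1} into the input/output projections is what lets a diagonal parameterization stand in for any diagonalizable state matrix.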
\n\n5. We agree with the reviewer that the main paper should be self-contained. We have included some figures in the main section but unfortunately had to move some of the figures to the appendix due to limited space. We **will bring** the remaining figures **back** to the main section in the camera-ready version, which allows 10 pages of content instead of 9.\n\n6. Yes, this is also the idea in S4; the difference is that our parameterization differs from S4’s. We wanted to keep the transition from general SS to DSS self-contained and hence refrained from referring to it here.\n7. This is indeed an intriguing question, but unfortunately we do not have anything concrete to say for now, except that the Skew-Hippo initialization is crucial for empirical performance and it is hard to rule out the possibility of better initializations in the future. ", " The paper builds upon the recently proposed S4 architecture, which was shown to be effective at modeling long-range dependencies. The authors propose simplifications to the S4 model by modifying the kernel such that the low-rank correction of the originally proposed diagonal-plus-low-rank kernel can be removed. The theoretical contribution includes proving a proposition which says that, under some mild assumptions, this approximation is equivalent to the original kernel formulation. Empirically, the authors show that it is competitive with the original S4 model on the Long Range Arena (LRA) benchmark. Strengths:\n1. The S4 model itself is quite new and interesting, and as such, further studies to analyze and simplify it are useful for the community at large.\n2. The 2 methods (DSS-EXP and DSS-SOFTMAX) are theoretically motivated and demonstrate good empirical performance, where they achieve results comparable to the original S4 on most tasks.\n3. The paper is clear and well-written, and the background described in Section 2 is sufficient for new readers (not familiar with S4) to understand the paper. \n\nWeaknesses:\n1. My primary concern is that “simplicity” is subjective. The authors motivate DSS by saying that it simplifies computation of the SSM kernel. It would be a stronger result if they could show that it also provides quantitative gains, perhaps in terms of training time/flops.\n2. Proposition 1 is based on the assumption that the state matrix $A$ is diagonalizable, and the authors say that this is “a mild technical assumption.” Can they justify this assumption, for instance, by showing that it holds for some toy example where explicit computation of the state matrices is feasible?\n3. In Table 1, the large gap between S4 and DSS on the “TEXT” task is attributed to learning rate tuning. Could the authors also try to tune the baseline learning rate for S4, since these numbers can significantly change the “AVG” result?\n4. In Section 4.1, the authors try to analyze the effect of restricting DSS to local interactions. As is clear from Table 3, this would be very task-dependent for any model (not just S4), and it is not clear what insight is gained from this experiment.\n5. In Section 4.2, the discussion in the last 3 paragraphs refers to figures in the Appendix. I think the main paper should be self-contained. Perhaps the authors could move the figures to the main paper by reducing some of the “Related work” content in Section 5.\n\n**Update (Aug 8):** After discussions and new time benchmarking results from the authors, I am increasing my rating below. \n 1. In the last paragraph of Section 2, the authors write that “our idea” is to use an alternate parameterization of state spaces. 
Isn’t this the original idea in S4?\n2. In Section 4.2 (paragraph 4), the authors note that during training, the parameters $\text{Re}(\lambda)$ and $\text{Im}(\lambda)$ move away from the skew-Hippo initialization. Does this mean that this is perhaps not the best initialization for these parameters?\n The authors have discussed the limitations of their work, which is sufficient.", " The paper extends prior work on efficient state space models by simplifying the state transition matrix to be diagonal. The authors make the point that diagonal state space models (DSSMs) are almost as expressive as normal state spaces when considering complex diagonal matrices. This is due to the fact that almost all matrices can be eigendecomposed (diagonalized) in the complex plane. The paper introduces 2 parameterizations of DSSMs that are based on this insight and show strong empirical performance on the Long Range Arena as well as raw audio classification, tasks which require processing very long sequences. \n\nOverall, I think the contributions of this paper are very important for the evolution of the DSSM framework wrt Deep Learning. I have trouble with the presentation of the work, though, which should be addressed by the authors. It is harder to follow than it should be, and the community would benefit from improvements on that end. I am happy to change my score accordingly after that. **Strengths**\n\n* Mostly well written\n* very practical insight and simplification\n* strong empirical results\n* insightful analysis of parameters after training\n\n**Weaknesses**\n\nI only found one weakness but it has to be addressed (IMO):\n*Clarity*: I find the main contribution in Proposition 1 very hard to follow. This is due to the fact that it is posited without any derivation or intuition, leaving the \"elementary\" proof for the appendix. I think this is bad practice and puts the burden on the reader to gather all the necessary background to understand how the proposition comes together. Given that this is so central to the paper, I do not understand the choice of the authors to put the whole derivation of it in the appendix.\n\nStating the following, at least at a high level, before the proposition would make it much easier to understand Proposition 1:\n1) Most matrices diagonalize over the complex plane (via eigendecomposition) —> assume A is one of those —> almost no loss of expressibility.\n2) Matrix exponentials are trivial to compute for diagonalized matrices because (VDV^-1)^n = V D^n V^-1. Analogously, this is true for the power series e^A.\n3) V and V^-1 can be subsumed into B and C; show the resulting equation, and show that B, C, V, V^-1 can be squashed into a single vector (w). \n\nThis would go a long way toward making the main paper much more understandable and helping the reader see where Proposition 1 comes from.\n * Background discretization: There is no citation on where this comes from.\n* Background: Is the paragraph about computing y and u necessary? It breaks the reading flow and adds mental overhead. Maybe push it to the appendix.\n* Proposition 1: A is not in C, is it?\n* Proposition 1: I fail to see how the resulting K is ensured to be real.\n* Initialization of delta_log: Why is delta_log initialised as e^r with r in U(log(.001), log(.1))? An explanation in the paper would help.\n* Initialization of A: Why the eigenvalues of only the normal part of the HiPPO matrix? 
What's the reasoning behind this decision?\n* Figure 5: I don’t understand what the delta_log values are and how they relate to the real and imaginary parts of the lambdas. Please explain and also extend the caption.\n* Figure 5: It is stated that the values of delta_log are larger on short-range tasks. How is this “in line with” larger values modeling long-range dependencies?\n* It seems like these models need to be tweaked quite a bit, and for a practitioner it would be super helpful to see how brittle these models still are. That is, I would love to see a table/graph of results with non-optimal hparams.\n\n Limitations have been addressed.", " The paper builds on structured state space (S4) models and shows that one can achieve similar or slightly better performance with just diagonal state matrices (avoiding the low-rank structure). This is interesting because then the model becomes conceptually simpler and easy to implement. In addition, the paper provides ablations covering initialization and kernel truncation, and an analysis of the learned parameters. I like the paper and recommend acceptance. I find it moderately original, of very good quality, and significant. Clarity can be improved before the final submission.\n\nStrengths:\n* The idea is simple and attractive. Simplifications of existing processes are always welcome, especially if they match the accuracy of the previous, more complicated approach.\n* I think complexity is reduced. This is perhaps not emphasized enough in the abstract and introduction.\n* Good set of evaluations.\n* Insightful ablations and analysis.\n\nWeaknesses:\n* Presentation. At several points I found that the authors could have made a larger effort in polishing notation or structuring the paper (see some comments below).\n* Speed/complexity is only partially assessed. I'd suggest that the authors compare directly with the complexities of the original S4 in the small subsection of Section 3.2. Alternatively (or in addition), some speed differences can be measured on the same machine.\n Questions/suggestions:\n- I think the abstract could be largely improved. Some things to consider are: mentioning complexities, mentioning ablation experiments, mentioning the analysis, etc.\n- Line 17: \"denoising objective\" --> This is a bit misleading, even more so now that denoising diffusion models are all the hype. Please find better terminology.\n- Line 39: \"DLPR\" --> DPLR.\n- End of Sec 2: I think it would be better to present the two views (convolutional and recurrent) here instead of just one and then later mentioning the recurrent one.\n- Sec. 3: In general, I found it hard to follow with the notation used (and it is not wrong, it is just that it is not presented appropriately). $N$, $H$, and $L$ could be defined beforehand or in Sec 2 (and also give an idea of their relation, like $H<N<L$ if that were the case). I also think that there are some missing \"squeeze\" and \"unsqueeze\" operations in the algorithm in the Appendix.\n- Algorithm 1: Should there not be an outer product between $\Delta$ and $\Lambda$? Which type of division does $/$ represent in line 4? Mention $H=1$?\n- Line 137: Isn't $W_{\text{out}}$ the same as $C$ in Eq. 2? (I'm mostly asking for a clarification in the paper.)\n- Line 152: Any intuition on the initialization of $r$? (same)\n- Lines 153-156: So in the end the HiPPO initialization is crucial (according to the presented results). Therefore, the model could be considered to still rely on HiPPO theory? 
More explanation on that would be welcome (in the paper).\n- I find there are many differences between the training setups of S4 and DSS (and this can make the reader suspicious). Perhaps it would be instructive to report a dry-run of DSS without those differences (that is, using the S4 setup unaltered) in addition to the reported results.\n- Line 192: \"images, audio\" --> images, and audio.\n- Sec 4.1: I think that the section would be much easier to parse if the two ablations were presented one after the other. So first Random Initialization: Question, how, result, and then Truncated Kernels: Question, how, result.\n- Line 256: \"absolute values\" --> Better write \"magnitudes\"?\n- Line 280: Remove \"Related work\".\n\nStylistic issues:\n- Do not put references in the abstract.\n- I am not sure that the way to cite references is correct (not sure about NeurIPS policy here; typically one writes \"Surname et al.\" or just numbers).\n- Authors seem to be using \"cf.\" to mean \"see\". Note that \"cf.\" does not mean \"see\": https://en.wikipedia.org/wiki/Cf. (that is, cf. should be equivalent to \"compare\", but the way the authors write it I believe it should be \"see\").\n- A similar thing happens with \"provably\", where I think they mean \"probably\". It is not the same and the former implies that it can be proven mathematically: https://wikidiff.com/provably/probably\n I am happy with the section on limitations." ]
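[Aside for readers of this thread: the diagonal kernel and the FFT-based convolution discussed in the reviews and responses above can be sketched in a few lines of Python. This is our simplified illustration, not the submission's code; the exponential parameterization below is a stand-in for DSS-EXP with normalization details omitted.]

```python
import numpy as np

# Kernel of a diagonal state space: K[l] = Re( sum_n w[n] * exp(lam[n] * dt * l) ).
def diagonal_ssm_kernel(w, lam, dt, L):
    return (w @ np.exp(np.outer(lam * dt, np.arange(L)))).real

# Causal length-L convolution y = K * u in O(L log L) time via zero-padded FFTs.
def fft_conv(K, u):
    L = len(u)
    return np.fft.irfft(np.fft.rfft(K, 2 * L) * np.fft.rfft(u, 2 * L))[:L]

rng = np.random.default_rng(0)
N, L = 64, 1024
lam = -rng.random(N) + 1j * rng.standard_normal(N)  # Re(lam) < 0 gives a decaying, stable kernel
w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = fft_conv(diagonal_ssm_kernel(w, lam, dt=0.01, L=L), rng.standard_normal(L))
```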
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "FRvkviIAAg", "NmFaLWbejf", "L_GiskA0Iq", "QPXWD0WKCDT", "Gwo-7YUrZWz", "LluHel8dF1b", "dyBxYbIq243", "nips_2022_RjS0j6tsSrf", "nips_2022_RjS0j6tsSrf", "nips_2022_RjS0j6tsSrf" ]
nips_2022_AQd4ugzALQ1
MetaTeacher: Coordinating Multi-Model Domain Adaptation for Medical Image Classification
In medical image analysis, we often need to build an image recognition system for a target scenario with access to small labeled data and abundant unlabeled data, as well as multiple related models pretrained on different source scenarios. This presents the combined challenges of multi-source-free domain adaptation and semi-supervised learning simultaneously. However, both problems are typically studied independently in the literature, and how to effectively combine existing methods is non-trivial in design. In this work, we introduce a novel MetaTeacher framework with three key components: (1) A learnable coordinating scheme for adaptive domain adaptation of individual source models, (2) A mutual feedback mechanism between the target model and source models for more coherent learning, and (3) A semi-supervised bilevel optimization algorithm for consistently organizing the adaptation of source models and the learning of the target model. It aims to leverage the knowledge of source models adaptively whilst maximizing their complementary benefits collectively to counter the challenge of limited supervision. Extensive experiments on five chest x-ray image datasets show that our method clearly outperforms all the state-of-the-art alternatives. The code is available at https://github.com/wongzbb/metateacher.
Accept
The paper proposes a model for a multiple-teacher, single-student setting for medical image classification. The reviewers were split, with two reviewers leaning towards accept and one leaning towards reject. The main criticism of the negative reviewer is that the proposed model only slightly outperforms the state of the art. Given the extensive experimental evaluation and the fact that the proposed method consistently outperforms the state of the art, the improvement should be statistically significant. The negative reviewer has acknowledged the improvement in the discussion phase. A second criticism was the lack of significance of the proposed learning setting. As the reviewers find this setting novel, and due to its relevance in the medical domain, I vote to accept the paper.
test
[ "kPfZIknNYrq", "x9NAKpwdCOV", "mLb2CtAs_La", "LEXqXWKJcdi", "8FuxC7w2DD", "IPoK4F3J6j2", "Um71NsXGgjb", "LImTE0aCnT", "MXKag0Lf69V", "J37mpqVRwIs", "ZixDxZLLhfT" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your kind answer. We are glad that our responses could clarify all your concerns. In case there are no further questions, we would appreciate if you could improve your score.", " Really appreciate the reviewer for further feedback and interaction.\n\nFirst, we would like to stress that our proposed semi-supervised multi-source-free domain adaptation (SMDA) problem is novel yet significantly more challenging, fully respecting well the real-world situations (data sharing across hospitals is strictly restricted, along with sparse labelled data available in a target domain), as extensively discussed in our paper. \n\nAlso, we consider that the performance gain by our method over the state-of-the-art alternatives (e.g., DECISION, DECISION++) is already non-trivial. Specifically, whilst the most related art model DECISION is not designed for the proposed SMDA problem, we have already adapted it accordingly to DECISION++ (First fine-tuning all source domain teacher models using the labeled target-domain training data, and further leveraging these tuned models to train a stronger variant). Indeed, the small gap between DECISION and DECISION++ in the range of [0.14%, 0.24%] exactly indicates the genuine challenge degree of this new problem, despite this has been the best possible existing solution further equipped with the strong fine-tuning baseline approach. In our community, fine-tuning is still a gold-standard baseline often yielding strong performance. In this context, our model’s gain of [0.69%, 0.84%] over DECISION and further gain of [0.46%, 0.70%] over DECISION++ are meaningful and non-trivial to achieve. We appreciate that the reviewer can take into account all the facts comprehensively in the final review evaluation. \n", " Personally, I'm satisfied with authors' responses, which are mainly about the improvements brought by the proposed method. However, I still cannot get the significance of the proposed new setting, i.e., semi-supervised multi-source-free domain adaptation (SMDA). In my opinion, the proposed approach ought to perform significantly better than previous baselines in SMDA, as those baselines were never specifically optimized for SMDA. However, the relative improvements are fewer than 1% in most cases, making me doubt the practical value of SMDA.", " I appreciate the authors' detailed answers which have addressed my concerns about computational complexity and shed some light on the behaviour of coordination mechanism. ", " We thank all the reviewers for insightful comments on our work. We appreciate the following positive feedback:\n\n1. The multi-source SFDA problem setting is interesting (Reviewer 5gnh, 6gX3). The paper studies a new problem setting. (Reviewer 2Jhq).\n2. The proposed method is novel and interesting (Reviewer 5gnh, 6gX3).\n3. The coordinating weight learning is novel (Reviewer 5gnh). A mutual feedback mechanism is designed (Reviewer 2Jhq).\n4. The experimental evaluation is comprehensive (reviewer 5gnh).\n\nGiven the wide diversity in data collection across hospitals, we propose a practical yet under-studied new problem for multi-label medical image classification, namely **Semi-Supervised Multi-Source Domain-Free Adaptation (SMDA).** We further introduce a novel approach named MetaTeacher in a multi-teacher and one-student framework. 
For model training, we leverage a bilevel optimization strategy for alternately updating the student and teacher models, characterized by a coordinating weight scheme for adaptively calibrating the learning directions of both the student and teacher models.\n\nWe address all the comments of each reviewer in detail below. We thank all reviewers for the first round of comments; please feel free to let us know if further clarifications or experiments are needed. We would really appreciate it if the reviewers could consider raising their scores in reflection of our responses and updates.\n", " **Q1: I think the proposed solution cannot reflect well on the value of SMDA**\n\n**A1:** Thanks. Please note that our first important contribution is a novel, practically critical problem setting, namely Semi-Supervised Multi-Source-Free Domain Adaptation (SMDA), for multi-label medical image classification. Further, our MetaTeacher method is superior in comparison to SOTA solutions:\n\n**(1) Significant performance gain in medical image classification:** As shown on the transfer scenarios in paper Tables 1-4, our method outperforms the best competitor DECISION by 1.17%, 0.84%, 0.69%, and 0.72%, which we consider significant. In comparison, we witness a gain of **0.8%** in supervised medical image classification over the three years from 2019 to 2021, as shown in the table below. Hence, the gain our method achieves should be considered significant.\n\nTable 1: AUROC comparison between supervised SOTA approaches trained with 100% of labelled data on NIH-CXR14.\n|Method|Year|AUROC|\n|-|-|-|\n|Ma et al. [1]|2019|81.7|\n|Hermoza et al. [2]|2020|82.1|\n|S2MTS2 [3]|2021|82.5|\n\n**(2) More significant gain in the multi-source transfer situation.** From the two-teacher (Table 4) to the three-teacher (Table 1) scenario, DECISION increases from 91.26% to 91.67% (a gain of 0.41%), CAiDA from 90.80% to 90.99% (0.19%), vs. our MetaTeacher from 91.98% to 92.84% (0.86%). This clearly shows that our gain is more significant than those of prior SOTA methods. For further validation, we have added a two-teacher transfer experiment: **CheXpert, MIMIC-CXR to Google-Health-CXR**. From this two-teacher case to the three-teacher transfer scenario (NIH-CXR14, CheXpert, MIMIC-CXR to Google-Health-CXR), the performance gain of DECISION is 0.69%, vs. 1.35% for our MetaTeacher. This suggests that our method is superior at leveraging the diversity and complementary effects of multiple teacher models.\n\nTable 2: Two-teacher and three-teacher transfer scenarios for the target domain Google-Health-CXR\n\n|Method|DECISION|OURS|\n|-|-|-|\n|CheXpert, MIMIC-CXR to Google-Health-CXR|81.16|81.34|\n|NIH-CXR14, CheXpert, MIMIC-CXR to Google-Health-CXR|81.85|82.69|\n\n**(3) Stronger at locating symptoms, with practical significance.** From the analysis visualized in the **Appendix**, it is evident that the visualization background of our MetaTeacher is much clearer than DECISION's, along with higher confidence in the predictions for samples clearly with and without disease. In practice, such high-confidence results are preferred by physicians.\n\n**(4) Higher fault tolerance.** Let's consider an extreme case where, for a single sample of a specific class, only one teacher model makes the correct prediction while all the other models fail (this is possible in the case of wrong source-domain labels). In this case, current multi-source-free domain adaptation methods would suffer, with a tendency to make false predictions. 
In contrast, our MetaTeacher is likely to excel. As illustrated in Figure 3 (**Appendix**), our model can correctly predict the disease Atelectasis even though none of the source-domain teacher models succeeds.\n\n**(5) Comparing with a strong combination of semi-supervised learning and multi-source domain adaptation (MSDA).** We have now additionally compared with an improved DECISION model (denoted DECISION++): first fine-tuning all source-domain teacher models using the target-domain labeled training data, and then using them to train a stronger DECISION model. Experimental results in the table below show that our MetaTeacher can still surpass both variants of DECISION by a clear margin. This suggests that our method goes beyond combining existing semi-supervised learning and multi-source domain adaptation.\n\nTable 3: Comparison between MetaTeacher and DECISION under the same known conditions.\n|Method|DECISION|DECISION++|OURS|\n|-|-|-|-|\n|NIH-CXR14, CheXpert, MIMIC-CXR to Google-Health-CXR|81.85|81.99|82.69|\n|CheXpert, MIMIC-CXR to NIH-CXR14|77.05|77.28|77.74|\n|NIH-CXR14, CheXpert to Open-i|91.26|91.50|91.98|\n***\n**References**\n\n[1] Ma, Congbo, Hu Wang, and Steven CH Hoi. \"Multi-label thoracic disease image classification with cross-attention networks.\" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2019.\n\n[2] Hermoza, Renato, et al. \"Region proposals for saliency map refinement for weakly-supervised disease localisation and classification.\" International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, Cham, 2020.\n\n[3] Liu, Fengbei, et al. \"Self-supervised mean teacher for semi-supervised chest x-ray classification.\" International Workshop on Machine Learning in Medical Imaging. Springer, Cham, 2021.\n***\n**Final remark:** We hope that our responses have resolved all the questions and concerns. In the revised version, we will add the above analysis to consolidate our work. Please feel free to let us know if you have any further questions, comments, or concerns.", " We thank the reviewer for the positive comments and appreciation of our work.\n***\n**Q1: Please provide some comparison in terms of computational complexity.**\n\n**A1:** Thanks for the great comment. Following the suggestion, we have now compared the **training** runtime of MetaTeacher, DECISION and MME on a single NVIDIA 3090Ti GPU over the transfer scenarios **NIH-CXR14, CheXpert, MIMIC-CXR to Open-i** and **NIH-CXR14, CheXpert, MIMIC-CXR to Google-Health-CXR**. The results are shown in Table 1 below.\n\n**Table 1:** Training running-time comparison on two transfer scenarios.\n\n|Methods|NIH-CXR14, CheXpert, MIMIC-CXR to Open-i|NIH-CXR14, CheXpert, MIMIC-CXR to Google-Health-CXR|\n|---|-|-|\n|DECISION|32min|33min|\n|MME|41min|43min|\n|MetaTeacher (ours)|36min|38min|\n\n**(1) Slightly slower than multi-source-free domain adaptation (e.g., DECISION).** Although multi-source-free domain adaptation methods do not need to update a student model, they involve other complex designs (e.g., DECISION needs to do k-means clustering, and CAiDA has a search process for the Semantic-Nearest Confident Anchor). Instead, our method only involves a simple matrix calculation. **(2) Slightly faster than semi-supervised domain adaptation (MME).** This is because semi-supervised domain adaptation needs to train a model for each source domain, whilst other complex computations are involved in its optimization process. 
Overall, our method has a running speed similar to existing alternative methods.\n***\n\n**Q2: Give an example or explain the behavior of the method for predicting W.**\n\n**A2:** Thanks. The weight \$W\$ is critical in the MetaTeacher framework. **Firstly**, for the upper-level optimization objective, it combines the predictions of multiple teachers to provide the updating direction for the student model (Lines 195-198). **Secondly**, for the lower-level optimization objective, we split \$W\$ into multiple vectors to provide different updating directions for each teacher (Lines 170-173). In our experiments, during training on the transfer scenario **NIH-CXR14, CheXpert, MIMIC-CXR to Open-i**, we recorded \$W\$ for **a sample labeled with the Atelectasis and Effusion disease classes** (Table 2 below). Initially, each teacher, as well as their joint prediction ($\bar{y}_{u}^{t}$ in Eq. (12), Appendix), failed to predict the Atelectasis disease. During training, each teacher was updated, with teacher 1 gaining the ability to predict the Atelectasis disease; accordingly, the joint prediction rose from 0.371739 to 0.756016. Meanwhile, \$W\$ was also updated accordingly and assigned more weight to the Atelectasis class for teacher 1 (0.214039 -\> 0.950953). This process is summarized in the table below. We will add this analysis to the revised version.\n\n**Table 2:** For a sample labeled with the Atelectasis and Effusion classes, the weight changes before and after training.\n||Atelectasis|Cardiomegaly|Effusion|Consolidation|Edema|Pneumonia|\n|-|-|-|-|-|-|-|\n|**Predictions for each teacher (pre-train)**|0.476732|0.267061|0.765137|0.190515|0.301033|0.168340|\n||0.243154|0.079328|0.481982|0.034837|0.400948|0.087144|\n||0.346073|0.345510|0.626205|0.021017|0.061996|0.042028|\n|**W (pre-train)**|**0.214039**|0.149377|0.371503|0.733469|0.404418|0.445663|\n||0.022341|0.458379|0.377687|0.042648|0.404455|0.017554|\n||0.763619|0.392244|0.250810|0.223883|0.191127|0.536783|\n|**Joint predictions (pre-train)**|0.371739|0.211779|0.62335|0.145928|0.295758|0.099113|\n|**Predictions for each teacher (after-train)**|0.770673|0.327727|0.776080|0.161243|0.437272|0.377679|\n||0.430125|0.154535|0.124078|0.069540|0.047292|0.043537|\n||0.554779|0.255637|0.631193|0.052395|0.122772|0.199748|\n|**W (after-train)**|**0.950953**|0.280765|0.984358|0.019614|0.000677|0.025918|\n||0.032633|0.704228|0.011930|0.196210|0.995239|0.943713|\n||0.016413|0.015007|0.003711|0.784177|0.004084|0.030369|\n|**Joint prediction (after-train)**|**0.756016**|0.204678|0.767763|0.057894|0.047864|0.056941|
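[Illustrative aside, based on our simplified reading of the A2 explanation and Table 2 above, not the authors' exact Eq. (12): with per-class normalization, the per-teacher, per-class weights reproduce the joint predictions in the table, e.g. ~0.3717 for Atelectasis pre-training. A minimal Python sketch:]

```python
import numpy as np

# preds, W: arrays of shape (num_teachers, num_classes). W holds per-teacher,
# per-class coordinating weights; normalizing per class makes each class's
# pseudo-label a convex combination of the teachers' sigmoid outputs.
def combine_teachers(preds, W):
    return ((W / W.sum(axis=0, keepdims=True)) * preds).sum(axis=0)

# Pre-training values for (Atelectasis, Cardiomegaly) taken from Table 2 above.
preds = np.array([[0.476732, 0.267061],
                  [0.243154, 0.079328],
                  [0.346073, 0.345510]])
W = np.array([[0.214039, 0.149377],
              [0.022341, 0.458379],
              [0.763619, 0.392244]])
print(combine_teachers(preds, W))  # ~[0.3717, 0.2118], matching the joint predictions
```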
\n***\n\n**Other comments:**\n\n**Q1: Figure 2 indicates that varying beta and gamma does not improve the performance much and that the corresponding loss terms might not be so useful.**\n\n**A1:** Due to space limitations, we did not give the specific values in Figure 2. On the transfer scenario **NIH-CXR14, CheXpert, MIMIC-CXR to Open-i**, without adding β and γ, the results were 92.49% and 92.29%, which are 0.35% and 0.55% lower than the final result of 92.84% (Table 1 in the paper). This suggests that both terms are effective. Please also see our response to Q1 of Reviewer 6gX3.\n\n**Q2: Paper structure suggestions and typos.**\n\n**A2:** Many thanks. We will refine and correct them in the revised version.\n***\n**Final remark:** We hope that our responses have resolved all the questions and concerns. In the revised version, we will add the above analysis to consolidate our work. Please feel free to let us know if you have any further questions, comments, or concerns.", " We thank the reviewer for the positive comments and helpful feedback. We address the specific suggestions below.\n\n* * *\n\n**Q1: (1) Why do the authors design a multi-teacher one-student scheme? (2) Can a one-teacher, one-student framework be used instead? If yes, what is the performance when using one teacher?**\n\n**A1:** This multi-teacher setup is underpinned by the nature of our problem, where data privacy protection is fundamentally critical (i.e., data sharing across hospitals is typically banned). Consequently, a specific teacher model is trained using the training data of each individual hospital. Having multiple teacher models is thus a consequence of the practical setting rather than a manual design choice. We will clarify this further.\n\nAssuming no such data privacy issue, as suggested, we have experimented with a one-teacher, one-student design (note that this does not respect our problem setting as introduced in this work), where the component adaptively training the teachers naturally goes away. We obtained results of **89.97%/79.94%/75.38%/90.13%**, inferior to **92.84%/82.69%/77.74%/91.98%** by ours (corresponding to Tables 1-4). This is because each dataset presents unique characteristics in category imbalance and labeling error (e.g., false negatives), resulting in different per-category qualities. Aggregating such datasets into one would introduce negative interference. Besides, using multiple teachers reduces the learning difficulty of the entire classification problem in the spirit of the divide-and-conquer principle, in addition to offering the opportunity to model per-teacher confidence.\n\n* * *\n\n**Q2: In Table 1, it can be seen that the proposed model still obtains promising performance without using coordinating weight learning. Please analyze the reasons for this case.**\n\n**A2:** Great comments, thanks. As stated in Lines 208-213 of the paper, the coordinating weights \$W\$ play two important roles: (1) providing the directions for updating the student model, and (2) providing the directions for updating each teacher. The variant **Ours (w/o mapping)** treats each teacher with equal weight, which can still bring performance gains over the source-only baseline for the following reasons:\n\n(1) For student updating, averaging predictions from multiple teachers is beneficial for student performance, consistent with the finding of [1].\n\n(2) The fixed $W$ is also applied for teacher updating; in this case, the $W_{u}^{j}$ of Eq. (12) (Appendix) is a constant. It is still used throughout our derivation (Eqs. (13)-(24), Appendix) and benefits the optimization of the teacher models.\n\nWe will clarify this further.\n\n* * *\n\n**Q3: In Table 5, this experiment is only conducted on the proposed model; how does this compare to other semi-supervised methods?**\n\n**A3:** Good suggestion, thanks. **Compared to existing semi-supervised methods, our method requires less labeled data.** For example, for the transfer scenario **CheXpert, MIMIC-CXR to NIH-CXR14**, to achieve a 77.74% classification rate, existing semi-supervised methods [2,3,4,5] require about **20,000** labeled samples (20% of the total), whereas our method needs only **500** labeled samples. Hence, our method is more data-efficient and preferable in practice.\n\n* * *\n\n**References**\n\n[1] You, Shan, et al. \"Learning from multiple teacher networks.\" ACM SIGKDD 2017.\n\n[2] Liu, Fengbei, et al. 
\"Self-supervised mean teacher for semi-supervised chest x-ray classification.\" International Workshop on Machine Learning in Medical Imaging. 2021.\n\n[3] Liu, Fengbei, et al. \"ACPL: Anti-Curriculum Pseudo-Labelling for Semi-Supervised Medical Image Classification.\" CVPR2022.\n\n[4] Liu, Quande, et al. \"Semi-supervised medical image classification with relation-driven self-ensembling model.\" IEEE TMI (2020).\n\n[5] Aviles-Rivero, Angelica I., et al. \"Graphxnet Chest X-Ray Classification Under Extreme Minimal Supervision.\" MICCAI(2019).\n* * *\n\n**Final remark:** We hope that our responses have resolved all the questions and concerns. In the \nrevised version, we will add the above analysis for consolidating our work. Please feel free to let \nus know for more questions/comments/concerns if any.\n", " The paper presents a meta-learning approach for semi-supervised multi-domain source-free domain adaptation (SFDA). As first contribution, the authors extend the multi-domain SFDA problem to a semi-supervised learning scenario where a few labeled examples of the target domain are provided in addition to the unlabeled ones. As second contribution, they propose a meta-learning framework where a student learns from multiple teachers, each one trained on different source data. The student is trained to predict the correct class for labeled examples and, for unlabeled examples, to be consistent with a weighted average of the teachers' predictions where weights are learned. A bilevel optimization strategy is employed to update the parameters of the student and the teachers. The proposed method is evaluated on a multi-label chest X-ray classification task using five datasets. Results show that the method yield some improvements compared to recent approaches for SFDA. Strengths:\n\n* Novelty: Although it borrows from previous works like Meta Pseudo Labels, the overall method proposed in the paper is novel. In particular, the coordinating weight learning is most original. The extension of multi-source SFDA problem to a semi-supervised setting is also interesting, even if it relates to other machine learning problems like domain generalization and continual learning. \n\n* Experiments and results: The experimental evaluation of the method is comprehensive. It uses five large datasets and compares against several baselines, ablation variants and recent approaches. While the method's performance is only slightly better than competing approaches, it seems to provide better predictions when looking at the visualization examples.\n\nWeaknesses:\n\n* A potential weakness of the method is the high computational complexity brought by the bilevel optimization. It would be useful to compare the runtime of tested approaches, in addition to their accuracy.\n\n* Figure 2 indicates that varying beta and gamma does not improve the performance much and that the corresponding loss terms might not be so useful.\n\n* The coordinating weight learning is interesting, however experiments do not really study this component. If possible, it would be interesting to show what are weights for some examples of different sources.\n\n* I feel that the related works in the Supplemental materials are more relevant than those in the main paper, as they focus on SFDA and compared methods of the experiments. 
I recommend that the authors add them to the main paper (or swap the content if space is lacking).\n\nMinor comments:\n\n* Introduction: define multi-label classification.\n\n* Section 3.2: pesudo-label --> pseudo-label\n\n* Eq (5): min_{theta_S} --> argmin_{theta_S} ?\n\n* Algorithm 1, line 5 (Sup.mat.): missing Theta_s on the right side ?\n\n* Please provide some comparison in terms of computational complexity.\n\n* Give an example or explain the behavior of the method for predicting W.\n\n The limitations of the method are not mentioned in the paper.", " 1. The authors proposed a new problem setting, i.e., semi-supervised multi-source-free domain adaptation (SMDA) for multi-label medical image classification.\n\n2. A novel framework, MetaTeacher, based on a multi-teacher and one-student scheme is introduced to solve the proposed SMDA problem.\n\n3. A coordinating weight learning method is derived for dynamically revealing the performance differences of different source models over different classes. It is integrated with the semi-supervised bilevel optimization algorithm for consistently updating the teacher and student models. Strength:\n\nThe proposed Semi-supervised Multi-source-free Domain Adaptation (SMDA) setting sounds interesting in the context of medical image classification.\n\nWeakness:\n\nMy major concern lies in the effectiveness of the proposed solution for SMDA.\n\nIn Tables 2-4, it is obvious that the proposed approach (ours (all)) only performs slightly better than DECISION [1] on the transfer to Google-Health-CXR, NIH-CXR14, and Open-i. Considering DECISION is the state-of-the-art multi-source-free domain adaptation method, I think the proposed solution cannot reflect well on the value of SMDA, because the multi-source-free domain adaptation approach can already tackle the SMDA setting well.\n\n Please refer to the weakness part. The authors have adequately addressed the limitations and potential negative societal impact of their work.", " This paper proposes MetaTeacher for semi-supervised multi-source-free domain adaptation in medical image classification. The transfer learning process is modeled as a multi-teacher and one-student scheme. This model not only optimizes the student, but also optimizes the teachers through the student’s feedback in the target domain. The optimization is based on meta-learning and consists of two main parts: coordinating weight learning and bilevel optimization. Finally, experiments on multi-label chest x-ray datasets empirically demonstrate the superiority of the proposed model over other SOTA methods. Strengths:\n(1) This paper studies a new problem setting, i.e., semi-supervised multi-source-free domain adaptation for multi-label medical image classification. \n(2) A mutual feedback mechanism is designed based on meta-learning between the target model and the source models for more coherent learning and adaptation.\n(3) The coordinating weight learning method is derived for dynamically revealing the performance differences of different source models over different classes.\n\nWeaknesses:\n(1) It is not clear why the authors design a multi-teacher, one-student scheme. Can a one-teacher, one-student framework be used instead? 
If yes, what is the performance when using one teacher?\n(2) In Table 1, it can be seen that the proposed model still obtains promising performance without using coordinating weight learning.\n(3) Table 5 shows the effects of the size of labeled target data on the transfer from NIH-CXR14, CheXpert, MIMIC-CXR to Open-i. However, this experiment is only conducted on the proposed model; how does this compare to other semi-supervised methods? \n\n\n (1) Why do the authors design a multi-teacher, one-student scheme? Can a one-teacher, one-student framework be used instead? If yes, what is the performance when using one teacher?\n(2) In Table 1, it can be seen that the proposed model still obtains promising performance without using coordinating weight learning. Please analyze the reasons for this case.\n(3) In Table 5, this experiment is only conducted on the proposed model; how does this compare to other semi-supervised methods? \n\n See questions.\n\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "LEXqXWKJcdi", "mLb2CtAs_La", "IPoK4F3J6j2", "Um71NsXGgjb", "nips_2022_AQd4ugzALQ1", "J37mpqVRwIs", "MXKag0Lf69V", "ZixDxZLLhfT", "nips_2022_AQd4ugzALQ1", "nips_2022_AQd4ugzALQ1", "nips_2022_AQd4ugzALQ1" ]
nips_2022_pDUYkwrx__w
Privacy of Noisy Stochastic Gradient Descent: More Iterations without More Privacy Loss
A central issue in machine learning is how to train models on sensitive user data. Industry has widely adopted a simple algorithm: Stochastic Gradient Descent with noise (a.k.a. Stochastic Gradient Langevin Dynamics). However, foundational theoretical questions about this algorithm's privacy loss remain open---even in the seemingly simple setting of smooth convex losses over a bounded domain. Our main result resolves these questions: for a large range of parameters, we characterize the differential privacy up to a constant. This result reveals that all previous analyses for this setting have the wrong qualitative behavior. Specifically, while previous privacy analyses increase ad infinitum in the number of iterations, we show that after a small burn-in period, running SGD longer leaks no further privacy. Our analysis departs from previous approaches based on fast mixing, instead using techniques based on optimal transport (namely, Privacy Amplification by Iteration) and the Sampled Gaussian Mechanism (namely, Privacy Amplification by Sampling). Our techniques readily extend to other settings.
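[Aside: for concreteness, a minimal Python sketch of the algorithm the abstract refers to, i.e., projected noisy gradient descent on a convex loss over a bounded domain, with only the last iterate released. This is our generic illustration (full-batch gradients for brevity, whereas the paper analyzes the stochastic variant), not the authors' code.]

```python
import numpy as np

# One step: x <- Proj_K( x - eta * (grad(x) + N(0, sigma^2 I)) ).
def noisy_gd(grad, project, x0, eta, sigma, T, rng):
    x = x0
    for _ in range(T):
        x = project(x - eta * (grad(x) + sigma * rng.standard_normal(x.shape)))
    return x  # only the last iterate is released

# Example: least squares over the unit Euclidean ball (a bounded-diameter domain).
rng = np.random.default_rng(0)
A, b = rng.standard_normal((100, 5)), rng.standard_normal(100)
grad = lambda x: A.T @ (A @ x - b) / len(b)
project = lambda x: x / max(1.0, np.linalg.norm(x))
x_T = noisy_gd(grad, project, np.zeros(5), eta=0.1, sigma=0.5, T=1000, rng=rng)
```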
Accept
The paper solves a longstanding open problem by showing bounded privacy loss for releasing the last iterate of noisy SGD for convex problems. This improves upon previous work by going from GD to SGD and from strongly convex to convex. All reviewers agree this is a strong paper and should be accepted.
train
[ "wsPVY9dslk", "sRMdptcMyeJ", "FzrcHvOurZ", "Ok-why5_Qv", "-RT0GxhvVyw", "hdUjnppFgtk", "dlaSSxgZNoS", "hGBYIi9ctD" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are very grateful to the reviewer for the kind words that our result is very interesting and promising for private optimization. \n\n----> Response to concerns:\n\n1. The reviewer is correct that theoretically, minimax utility bounds for DP ERM and DP SCO are obtained already with existing analyses, i.e. where one does not need to run for too long. For example, for (smooth convex) DP SCO (and for non-private SCO for that matter), one epoch of SGD is enough to achieve the minimax rate (https://arxiv.org/abs/2005.04763). However, this is a big difference between theory and practice because in practice Noisy-SGD benefits from running longer to get more accurate training. In fact, this divergence is even true and well-documented for non-private SGD as well, where one epoch is minimax-optimal in theory, but in practice more epochs help. Said simply, this is because typical problems are not worst-case problems (i.e., minimax-optimality theoretical bounds are typically not representative of practice). For these practical settings, in order to run Noisy-SGD longer, it is essential to have privacy bounds which do not increase. Our paper resolves this.\n\n2. Publishing the whole path of SGD over multiple epochs is provably not private. One may hope that an average iterate may have better privacy properties, but as shown in (https://arxiv.org/abs/2005.04763), there are fundamental difficulties with trying to prove privacy even of the average iterate: indeed, the average iterate is provably not as private for Cyclic Noisy-SGD. \n\n---> Response to questions:\n\n1. As discussed in (1) above, the utility-privacy tradeoff is fully understood in the minimax sense. Our new and optimal privacy analysis may help get better utility-privacy tradeoffs in practical settings that are not captured by worst-case analyses. Specifically, privacy-utility tradeoffs are obtained from combining two bounds which are proved separately: (i) privacy of the algorithm as a function of the # of iterations, (ii) utility of the algorithm as a function of the # of iterations. The purpose of this paper is to completely resolve (i); this result can then be combined with any bound on (ii).\n\n2. Noisy SGD with clipping is used in practice for non-convex as well as convex problems. In the common case of GLMs, the clipped gradients can be viewed as gradients of a different convex loss (https://arxiv.org/abs/2006.06783), in which case our results can be directly applied. In general, clipped gradients do not correspond to gradients of a convex loss, in which case our results (as well as all other works in the literature that aim at proving convergent privacy bounds) do not apply.\n\nWe will add discussion of this to the camera-ready.", " We are very grateful to the reviewer for the kind words about our result, the presentation of the paper, the discussion of assumptions, and the technical sketches. \n\n----> Response to the two main concerns:\n\n1. Regarding non-convexity: The purpose of this paper is to resolve the privacy leakage for convex losses since the problem was open even in this foundational setting. We agree that extensions to non-convex losses would be very interesting. Unfortunately, convexity is necessary for the privacy to converge as T -> infinity, and in fact we know of simple non-convex counterexamples. 
Any such extension would therefore have to make additional structural assumptions on the non-convexity of the function (and possibly also change the Noisy-SGD algorithm), although it is unclear how exactly this would even look. Moreover, this appears to require significant new machinery as our techniques are the only known way to solve the convex problem, and they break down in the non-convex setting (see the detailed discussion about this in L136-145). In the camera-ready, we will add these comments to the discussion of convexity in L136-145.\n\n2. Regarding numerics: Our hope is that because our main result lets us run Noisy-SGD forever without leaking more privacy, this may enable training higher accuracy models with the same privacy budget in many settings. We agree that it would be very interesting to have a thorough empirical investigation of when this does and does not help, and what properties of the optimization problems and datasets influence this. A detailed investigation of these practical considerations is unfortunately outside the scope of this theoretical work. We remark in passing that in Appendix B, we provide tight numerical versions of our main result to make it very easy for others to use and implement our improved privacy bounds. \n\n----> Response to questions: \n\n1. Interesting question. Our techniques are currently the only way of proving convergent privacy bounds for convex losses, and this breaks down without the contractivity of a gradient descent step which is implied by convexity. See (1) above for details.\n\n2. Also a good question. Our techniques readily generalize to any iterative algorithm which interleaves contractive steps and noise convolutions. Such algorithms are common in differentially private optimization, and it would be interesting to apply them to variants of Noisy-SGD.\n\n3. In many optimization problems, the solution set is naturally constrained either from the problem formulation or application. We also mention in passing that one can solve an unconstrained problem by solving constrained problems with norm bounds on the optimum, paying only a small logarithmic overhead on the number of solves. The privacy overhead of doing so can be bounded by a constant using known techniques (https://arxiv.org/abs/1811.07971). Therefore this is a mild assumption.\n\nWe will add discussion of this to the camera-ready version.", " We are very grateful to the reviewer for the kind words that our paper is a significant finding, has done a significant amount of original work, that the proofs are quite easy to follow, and that the supplement seemed like a (almost) perfectly good journal version.\n\n\n\n---> The reviewer brings up two concerns, one about assumptions and one about presentation. \n\n1. Regarding assumptions: We kindly point the reviewer to the detailed discussion on page 4 and elaborate here.\n\n1a. Main assumptions: Briefly, our three assumptions (boundedness, smoothness, and convexity) are in fact necessary and sufficient for the privacy of Noisy-SGD to converge (our main result). Indeed, on one hand, for any strict subset of these three assumptions, there are counterexamples where the privacy breaks down completely as T -> infty. And on the other hand, our main result establishes that these three assumptions are sufficient. Therefore our paper leads to a full understanding of the problem and its reliance on assumptions. With the extra page afforded by the camera-ready, we will add more discussion to clarify this.\n\n1b. 
Diameter: We agree with the reviewer that "it is not too strong of an assumption, and … there are several real world problems that do have a priori known limited optimization domain." Moreover, we would like to point out two things. First, every optimization/utility guarantee for (non-strongly) convex losses also has a similar dependence on some diameter bound; in fact this is inevitable simply due to the difference between initialization and optimum. Second, one can solve an unconstrained problem by solving constrained problems with norm bounds on the optimum, paying only a small logarithmic overhead on the number of solves and a small constant overhead in privacy using known techniques (https://arxiv.org/abs/1811.07971). Therefore, this diameter bound is a mild assumption in practice, and it is also shared by all existing theoretical bounds for utility.\n\n2. Regarding presentation: We are very grateful to the reviewer for saying that the supplement "seemed like an (almost) perfectly good journal version, and was really pleasant to read". The camera-ready version of the paper will give us an extra page, which will let us add more discussion and make the presentation less crowded. We will also polish the paper as the reviewer suggests in the camera-ready, by moving some more content to the Supplementary. This will significantly improve the exposition.\n\n---> Response to minor questions:\n\n1. Correct, the Gaussian noise is always there. By "bias is realized", we mean that this noise is possibly non-centered with this probability. We will clarify this wording in the camera-ready. \n\n2. The term is safely dropped since it is lower order. Indeed, 1/n^2 <= R/n^2 since R is at least 1. We will clarify this wording in the camera-ready.\n\n3. Fantastic question. You are exactly right that the main technical obstacle there is how to control the privacy leakage from how past iterates affect the adaptivity in later iterates. This appears to preclude using our analysis techniques, at least in their current form. It would be very exciting for both theory and practice if one could find some way of extending our results to adaptive methods, and we will highlight this as an interesting open question for future work in the camera-ready version.\n", " Thank you very much for the overwhelmingly positive review. We are glad that you think this is a good paper, that the result is cool, and that you found the paper well-written and easy to read. ", " This paper studies the privacy loss of noisy projected stochastic gradient descent. More specifically, the authors show that after some iterations, noisy projected stochastic gradient descent will not cause further privacy leakage if we only release the last iterate of the algorithm. This result looks very interesting and promising for private convex optimization. Strengths:\n1. The authors provide a new privacy analysis for noisy projected stochastic gradient descent when optimizing a convex, Lipschitz, and smooth objective with a bounded parameter space.\n2. The new privacy loss upper bound shows that noisy projected stochastic gradient descent will not cause further privacy loss after a certain number of iterations.\n\nWeaknesses:\n1. It is unclear how this result can be used to study the utility guarantee of noisy SGD. According to Bassily et al., 2019, the original privacy analysis can achieve the optimal utility guarantee for noisy SGD, and thus it is unclear how we can make use of the new privacy analysis.\n2. 
One drawback of the current analysis is that we can only release the last iterate. I think the result of the current paper is very interesting, and my main concerns are as follows:\n1. How to make use of the new analysis to better understand the utility of noisy SGD.\n2. The new analysis seems to be limited to noisy projected SGD. However, in practice, people often clip the stochastic gradient and then add noise, so how can this analysis help us to understand the privacy loss of the widely used clipping-based method? In other words, can practitioners benefit from your analysis in practice when applying noisy projected SGD? Yes", " This work proposes an asymptotic privacy bound analysis when $T$ is large in Noisy SGD (SGLD) for convex functions. The results show that when the number of iterations $T$ increases at the start, the privacy loss grows at a linear rate. After a burn-in period, the privacy loss is bounded by a constant, and running SGD longer leaks no further privacy. Some techniques based on PABI are also proposed in the analysis. The work also shows tightness with a lower bound construction based on the trace analysis of a random walk, which makes the proposed magnitude more convincing. Pros:\n1. The paper discusses the core problem in the Noisy-SGD analysis and provides a new asymptotic viewpoint for the privacy bound. Some new techniques are also proposed for the problem.\n2. The paper is well-organized, and some important points and proof sketches are shown in the main text, with the detailed proofs in the Supplement.\n3. Good and detailed sketches for the existing Noisy-SGD analysis.\n\nCons:\n1. The paper only discusses convex functions, which simplifies the difficulties of the analysis. Future work might focus on the analysis in non-convex cases (like deep networks).\n2. The paper lacks numerical studies to give more insights and verify the performance of the proposed framework. Some questions:\n1. Can the proposed framework be extended to bounded non-convex functions with a regularizer, as in [Li et al., 2019] for SGLD (also mentioned in [Chourasia et al., 2021])?\n2. How can the proposed bounds be used to help cost minimization and privacy mechanism design?\n3. How is $D$ determined in real cases? Is $D$ related to the dimension and the convergence of the convex problem? Will the convergence influence the privacy level?\n\nSome suggestions:\n1. Some simple examples/numerical studies can be provided to show the effectiveness of the proposed framework more intuitively.\n The assumptions have been discussed in detail, showing the limitations of the work.", " In this paper, the authors show that running DP-SGD for a convex function on a bounded domain converges to a bounded privacy cost, as opposed to the classical composition results that grow to infinity as the number of iterations grows. Similar to Chourasia et al. 2021, the authors assume that the internal state (i.e. the intermediate parameter values) is kept private and only the final iterate is released. The authors build a novel analysis, where they split the parameter trace in two halves and analyse the privacy cost of these halves separately. This involves finding a new improved privacy amplification by iteration (PABI) result. The authors also present a lower bound result for the privacy guarantee. The main result of the paper (the upper bound) is a significant finding. Although Chourasia et al. 
2021 already showed this type of behaviour of the privacy cost for an iterative algorithm, this paper presents the following significant updates to the result of Chourasia et al.: extending the result to subsampled GD (preferable for computational reasons), and replacing the strong convexity assumption with just convexity. \n\nThe authors have done a significant amount of original work to solve the question at hand. The proof of the main theorem (Thm. 3.1) comprises many novel techniques and results, such as the new PABI result. I think the main limitation of the result is the assumption of a limited search domain. However, it is not too strong of an assumption, and I think there are several real world problems that do have a priori known limited optimization domain.\n\nTo me, the most apparent weakness of the work is the presentation. The supplement included in the submission seemed like an (almost) perfectly good journal version (~19 pages) of the article, and was really pleasant to read. However, the cut-down version is way too crowded currently. I do appreciate the authors' efforts to have the most crucial bits in the main paper, but I think the paper needs even more polishing and moving less important stuff to the appendix to make it fit the NeurIPS format. And I'm afraid that it is going to be really difficult given the amount of material this paper contains. This lack of space also gives the paper a really abrupt ending with no discussion or conclusions. I don't have much to ask, since I found the proofs quite easy to follow. A couple of very minor things that caught my eye:\n- Lines 303-304: \"Notice also that this bias term is only realized with probability 1 − b/n because the probability that i∗ is in a random size-b subset of [n] is b/n.\", maybe I am missing something, but I thought the noise is realized w.p. b/n\n- The bound at the beginning of page 14 of the supplement: I do understand that if you set $R=\\tilde{T}$, the summands inside the $\\min$ become equivalent, and with larger $R > \\tilde{T}$ the latter term gets smaller and the former grows. However, I'm not quite sure how you form the last inequality. It almost seems like you only have the second term of the summand in the final expression, and the first has vanished somewhere.\n- I'm curious, could one use some adaptive step-size schemes like Adam with this final iterate release analysis? Or would Adam cause some sort of side-channel privacy leak through the last gradients it uses in adapting the step-size? There is not much discussion of the limitations, probably due to lack of space. I guess the authors could further discuss the limitations that the assumptions make. For example the assumption of bounded domain. ", " The authors show that Noisy SGD achieves a privacy loss that does not grow indefinitely in the setting of convex optimization with smooth Lipschitz functions over a bounded set. The proof technique relies on privacy amplification by sampling and on amplification by iteration. Additionally, the authors show a lower bound demonstrating that the upper bound is tight up to a constant. Strengths: Paper is well written, easy to read, provides enough background on the problem and a good literature review. 
The result is also pretty cool since it shows that previous upper bounds for the privacy loss, where privacy scales indefinitely, exhibit the wrong qualitative behavior.\nIt looks like the only similar results (a bounded privacy loss) hold in the smooth and strongly convex setting, so being able to relax strong convexity is significant. \nWeaknesses: I don't have much to say here; I think it is a good paper. No questions NA" ]
[ -1, -1, -1, -1, 6, 7, 6, 7 ]
[ -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "-RT0GxhvVyw", "hdUjnppFgtk", "dlaSSxgZNoS", "hGBYIi9ctD", "nips_2022_pDUYkwrx__w", "nips_2022_pDUYkwrx__w", "nips_2022_pDUYkwrx__w", "nips_2022_pDUYkwrx__w" ]
nips_2022_uloenYmLCAo
Block-Recurrent Transformers
We introduce the Block-Recurrent Transformer, which applies a transformer layer in a recurrent fashion along a sequence, and has linear complexity with respect to sequence length. Our recurrent cell operates on blocks of tokens rather than single tokens during training, and leverages parallel computation within a block in order to make efficient use of accelerator hardware. The cell itself is strikingly simple. It is merely a transformer layer: it uses self-attention and cross-attention to efficiently compute a recurrent function over a large set of state vectors and tokens. Our design was inspired in part by LSTM cells, and it uses LSTM-style gates, but it scales the typical LSTM cell up by several orders of magnitude. Our implementation of recurrence has the same cost in both computation time and parameter count as a conventional transformer layer, but offers dramatically improved perplexity in language modeling tasks over very long sequences. Our model out-performs a long-range Transformer XL baseline by a wide margin, while running twice as fast. We demonstrate its effectiveness on PG19 (books), arXiv papers, and GitHub source code. Our code has been released as open source.
Accept
This paper describes a modification to the transformer architecture that uses block-recurrence to more accurately model very long sequences, borrowing some ideas from the LSTM. The idea is fairly simple to implement, as it doesn't require much code over a traditional transformer, and the results seem good, if not completely overwhelming. All reviewers voted to accept this paper and I agree. It's a fairly simple idea with fairly good results and adds to the body of knowledge regarding how to model very long sequences.
train
[ "7tM5c5YXm6", "i4hp1rRM9zD", "tImudqeIK5Q", "Y_6Cr77HBd", "P66WrRjyrS", "GvxS8tIeFk", "YNs2iRB3zQp", "nIBd7Q50bRe", "kQfGdZCFXNW", "tpQVp9-KSpl", "36mOpe5uDzf", "S3JysCuQnds", "7WOw4weawgr" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nMinor update to paper (64k Memorizing Transformer numbers, wording changes). \nNew sub-sections added to supplementary material, Appendix A, to address reviewer concerns.\n", " \nThanks for taking a second look! We're glad our clarifications helped.\n\n> Some of your answers did not result in changes in the paper, while I think it would be helpful for other \n> readers. E.g.:\n> * Memorizing Transformer comparison.\n> * Discussion on other datasets (enwik8 etc). Note that I still think that Wiki-103 would have been good for comparison.\n> * Difference of Slide model vs Longformer (very briefly).\n\nWe're happy to add the discussion of datasets, and the longformer comparison to the appendix; that's not a problem.\n\nHowever, we're not sure that there's much more to say about the Memorizing Transformer at this point that isn't speculation; a scientific comparison will require follow-up experiments on downstream tasks. We can put the speculation in the appendix if you want. :-) \n\nBTW, wrt. to the Memorizing Transformer, we finished re-running the experiments with 64k memory, as you requested. The numbers are very close. With the additional memory, the Memorizing Transformer does slightly better than Block-Recurrence on arXiv, but two are exactly the same on PG19. We have updated the paper accordingly. \n\nUnfortunately, there's not enough time left during the rebuttal period to try to run experiments with Wiki-103, since we don't have a data pipeline for Wiki-103 set up.\n\n> Appendix: \"The sequence of N tokens is subdivided into blocks of size W\"\n> \"Sequence\" should be \"segment\", right?\n\nYou're right. Fixed. [Done]\n\n> I think a small sentence saying that the theoretical receptive field (TRF) is infinite for the \n> Block-recurrent Transformer would help.\n\nAbsolutely. We already added that to the new Appendix A, as per your earlier suggestion. [Done]\n\n>> Also the terms \"window\" and \"block\" seem to be mixed up. \"blocks of length W\", \"window size W\"\n>>... window/block mean very similar things, and are used somewhat interchangeably.\n\n> But they don't need to be the same length, right? Except that this probably simplifies the implementation.\n> Given that they don't need to be the same, I wonder if they really should be the same, or how it would perform if the\n> sliding window is smaller or larger than the block size.\n\nMathematically, the window size and the block length can be defined separately, but in terms of implementation, it is very convenient to keep them the same. The window size can easily be made smaller than the recurrent block length by changing the causal mask, but there's no benefit to doing so. Making the window size larger would significantly complicate matters, because then the window-blocks would have to be further subdivided into recurrent-blocks, and self-attention would have to be done outside of the recurrent cell. Our current code doesn't support that option.\n\n> > Thus, there is a separate gate for each state vector, but all state vectors are updated in parallel.\n\n> This is still not perfectly clear. It would probably help to specify the dimensions e.g. of the h tensor. h_t is of shape\n> (batch-dim excluded) [S,D], and thus gate i_t and f_t are of the same shape [S,D]? But for the fixed gate case, g \n> just has shape [D]?\n\nYes, g has shape [D], broadcast over [S, D]; it acts exactly like the bias terms in the gate equations, which also have dimension [D], broadcast over [S,D]. We will update the paper to clarify that. 
(Come to think of it, using separate per-state values for g might be an interesting ablation.)\n\n>>> I wonder about more extreme cases, like: S=1024, block size W=1, window size = 1, like NTM\n\n>> In our current code, this would also set the length of the attention window to 1, so it would no longer be a \n>> transformer. :-) It would also be very expensive to run.\n\n> Yes, that's the point. It would be a transition between a Transformer and a more traditional \n> LSTM; however, due to having a larger S>1, it would have a memory component similar to an \n> NTM. The model formulation and these hyperparameters allow for such a transition, so it \n> would be interesting to directly compare it.\n\nThat's an interesting idea, but at W=1, the block-recurrent model would run a whole transformer layer for each token, which would be very expensive; it would essentially be a 512-layer transformer (or worse, a 4096-layer transformer!). I think our current implementation would very quickly run out of device memory without some clever gradient checkpointing hacks.\n\n> Having S=1 and W=1 is maybe also interesting. This should be just as fast as a normal recurrent net.\n\nYou would still be computing the (now useless) keys, values, and queries, plus a projection, plus a large MLP, so it would be more expensive than an LSTM, although the 2-layer MLP might provide some benefit. \n\n> I think it's a bit unfortunate that many things are left to the appendix.\n\nIndeed! However, due to page limits, we can't fit everything in the main text. It's always difficult to figure out what to cut...\n", " Thank you for the very detailed response, and for the updates! Also thanks for releasing the code!\n\nSome of your answers did not result in changes in the paper, while I think it would be helpful for other readers. E.g.:\n\n- Memorizing Transformer comparison.\n\n- Discussion on other datasets (enwik8 etc). Note that I still think that Wiki-103 would have been good for comparison.\n\n- Difference of Slide model vs Longformer (very briefly).\n\n---\n\n> Appendix: \"The sequence of N tokens is subdivided into blocks of size W\"\n\n\"Sequence\" should be \"segment\", right?\n\n---\n\n>> Also the terms \"window\" and \"block\" seem to be mixed up. \"blocks of length W\", \"window size W\".\n>\n> ... window/block mean very similar things, and are used somewhat interchangeably.\n\nBut they don't need to be the same length, right? Except that this probably simplifies the implementation.\n\nGiven that they don't need to be the same, I wonder if they really should be the same, or how it would perform if the sliding window is smaller or larger than the block size.\n\n---\n\nI think a small sentence saying that the theoretical receptive field (TRF) is infinite for the Block-recurrent Transformer would help.\n\n---\n\n> Thus, there is a separate gate for each state vector, but all state vectors are updated in parallel.\n\nThis is still not perfectly clear. It would probably help to specify the dimensions, e.g. of the h tensor. h_t is of shape (batch-dim excluded) [S, D], and thus gates i_t and f_t are of the same shape [S, D]? But for the fixed gate case, g just has shape [D]?\n\n---\n\n>> I wonder about more extreme cases, like: S=1024, block size W=1, window size = 1, like NTM\n>\n> In our current code, this would also set the length of the attention window to 1, so it would no longer be a transformer. :-) It would also be very expensive to run. \n\nYes, that's the point. 
It would be a transition between a Transformer and a more traditional LSTM; however, due to having a larger S>1, it would have a memory component similar to an NTM.\n\nThe model formulation and these hyperparameters allow for such a transition, so it would be interesting to directly compare it.\n\nHaving S=1 and W=1 is maybe also interesting. This should be just as fast as a normal recurrent net.\n\n---\n\nI think it's a bit unfortunate that many things are left to the appendix.\n", " > [Presentation in table 1] Also, step time is unclear, is this for training or inference?\n\nAs mentioned in section 4.1 (relative cost), it is for training. We've updated the caption to clarify this point. [Done]\n\nInference time is also virtually identical between the block-recurrent architecture and Transformer-XL. Recurrence introduces an extra attention operation over states, which must be done for each token, but only in one layer, and the 13-layer Slide baseline adds an extra layer to compensate. We've updated the paper. [Done]\n\n> The Memorizing Transformer, I assume they have their own implementation because the\n> numbers they report are not from the original paper. In the original paper, the best\n> configuration used a segment/window length of 2048, and a memory size of 65k. \n> Here they choose to use seg/win len 512 and mem size 32k. \n\nWe are using the Memorizing Transformer code from the original paper, but we report numbers in bits per token. If you convert, the original paper reports 3.54/1.21 bpt for pg19/arxiv, while we report 3.50/1.24, so our numbers are quite close to theirs. The difference on arxiv is most likely due to the fact that we use a lower learning rate (Appendix C, training details). The Memorizing Transformer does not use sliding window attention, so the segment and window length are the same (just like Transformer-XL). We set the window length to 512 in both models in order to make a completely apples-to-apples comparison between memory and recurrence.\n\nThe choice of 32k memory size is admittedly a bit arbitrary; we can easily rerun those numbers with a 64k memory instead if you would prefer. There's not much difference between 32k/64k (notice that we actually got a better result on PG19 with 32k than they did with 64k).\n\n> After these results, I cannot tell whether the Memorizing Transformer is better or the\n> Block-Recurrent Transformer. \n\nThe Memorizing Transformer tends to be a bit better in terms of perplexity. Based on qualitative studies, both memory and recurrence seem to be used mainly for long-range name lookups, and it's probably easier for the model to do name lookups using top-k memory than to store names in recurrent states. However, the two are surprisingly close, and recurrence is both faster to train and faster at inference.\n\nThere's also a qualitative difference between the two. We hypothesize that memory is better for retrieving fine-grained details, while the recurrent states compress and summarize the history. Recurrence may be better at capturing high-level summaries, or more nebulous characteristics like writing style, while memory is better at looking up facts. More research is needed to characterize which one is \"better\" on downstream tasks.\n\n>> \"We find that recurrence performs roughly on a par with the memorizing transformer, but \n>> trains considerably faster. 
We do not report the exact speedup ...\"\n> It would still be useful to get some impressions.\n\nThe block-recurrent transformer is almost twice as fast to train as the Memorizing Transformer. We've updated the paper. [Done]\n\n> It should be clarified how exactly the scaling is done. The dimension is increased?\n\nYes, the various dimensions are increased by factors of 2, including the number of layers for the largest models. Configurations for various sizes are in the open source release. We've added more details in Appendix F. [Done]\n\n> I wonder about more extreme cases, like: S=1024, block size W=1, window size = 1, like NTM\n\nIn our current code, this would also set the length of the attention window to 1, so it would no longer be a transformer. :-) It would also be very expensive to run. \n\n> Analysis on usefulness of self-attention in recurrent state update (horizontal direction). Is it \n> actually used?\n\nThat's a really good question! I'm not sure we have time to do that analysis in the upcoming week, but it would be a good thing to discuss for the camera-ready copy. \n\n> I wonder, why do the Slide models perform better than the XL models on PG19 but worse on \n> Arxiv and Github? Because they have less context and context is more important on Arxiv or \n> Github? This should be discussed.\n\nThat's another really good question. Slide:12L always outperforms XL:512, which has the same window length, because it's more differentiable. The surprising thing is that on PG19, Slide:12L (W=512) actually matches the XL:2048 model, which has a much longer window length, but it doesn't on arxiv or github. Our guess is that arxiv and github have a lot of complex syntax and name lookups, and thus get a lot of benefit from being able to do direct attention (i.e. single-hop lookups) to matching tokens. PG19 is natural language, and thus has fewer direct dependencies in general, but may benefit from more subtle dependencies that use multi-hop attention lookups. Multi-hop lookups are easier to train in the Slide model, because it is fully differentiable over the 4k sequence length.\n", " \n> What is the effective context length in each case for each model? For the proposed recurrent \n> case, it should be infinite, right? For the other variants, it is probably still quite high, sth like \n> num layers * segment length but I'm not exactly sure. But this is an important aspect and \n> should be discussed. For the Slide model variants, it should be less than the XL models, \n> right?\n\nWe use the term \"theoretical receptive field\" (TRF) rather than \"effective context length\", because there is no guarantee that the model will actually be able to use the whole TRF in practice. For example, the TRF for an LSTM is theoretically infinite, but in practice LSTMs tend to forget after a few hundred tokens. Please take a look at the newly written Appendix A, which now covers this subject in more detail.\n\nThe Slide model and Transformer-XL have the same TRF, but Slide makes better use of it. \nSection 4.1: \"XL:512 ... uses a transformer-XL style cache, but no sliding window, so the segment length is the same as the window size, i.e., it is trained on segments of 512 tokens.\"\nSection 4.1: \"Slide:12L is a 12-layer transformer with a window size of 512, but uses a sliding window over a segment of 4096 tokens. 
This model is almost identical to XL:512; the only difference is that the sliding window is differentiable over multiple blocks, while the Transformer-XL cache is not.\"\n\nIn other words, both XL:512 and Slide:12L have a window/block size of 512, and a cache of size 512. The XL model has a segment length of 512, but the Slide model has a segment length of 4096, divided into blocks of size 512. \nThe TRF for both of these models is the same: W * L, but the Slide model can propagate gradients along the entire segment length, while the XL:512 model cannot. The XL:512 model can attend into the cache, but cannot propagate gradients through it, because the cache is not differentiable. \n\nIn both cases, making use of the full TRF of W * L would require L separate \"hops\" of attention, each of which would have to attend the maximum distance, which is unlikely in practice. If one defines \"effective context length\" as \"the amount of context that the model actually learns to use\", then we would expect the \"effective context length\" to be slightly higher in the Slide model, because it is more differentiable. Our experiments back this up: Slide:12L does have slightly better perplexity than XL:512.\n\nThe TRF for the recurrent model is infinite, and in our qualitative studies, we show that the \"effective context length\" seems to be very long in practice; we show that it can look up names over distances of more than 60k tokens. This is a huge increase; the Slide model has a TRF of only 6k tokens, and an effective context length shorter than that.\n\n>> \"current cell state at time t\"\n> What is time t? \"Time\" is highly confusing. This either means the state index (s in {1...S}) or \n> the block index. I assume the block index but this is not clear here. So it means, the same \n> gate is applied for the whole block? This should be clarified. And then also discussed. I'm not\n> sure that this is the natural solution. \n\nWe agree that \"time\" is not a great word, because there is no physical concept of time. However, \"time\" has traditionally been used to describe successive iterations of a recurrent architecture, as in \"backpropagation through time.\" In this case it means block index, because recurrence is at the block level. We have updated the paper to clarify this point. [Done]\n\n> Why not apply the gating per state index s?\n\nAs mentioned in the introduction, all state vectors are processed in parallel. That's more or less the whole point of the architecture! Using the state index for gating would make no sense; if we were to process states sequentially, then \"block-recurrence\" would be a weird operation with no advantages over traditional per-token recurrence.\n\nAs mentioned in Section 3 (Method), gates replace the residual connections in a standard transformer layer. Thus, there is a separate gate for each state vector, but all state vectors are updated in parallel. We have added a sentence to clarify this point. [Done]\n\n> Presentation in table 1 is a bit confusing. It's not explained well what the model notation means.\n\nThe 5 baselines are described by name in section 4.1, immediately below the table. The gate type and configuration are described in previous sections. We have added a sentence to the caption which explains that \"rec:gate:config\" is the recurrent architecture. Does that help? [Done]\n\n>> \"The baseline model is a 12-layer transformer\"\n> Where is this model in the table? Or is the baseline actually the XL model?\n\nThat's the first baseline: XL:512. 
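To make the notation concrete, here is a rough summary of the variants in code form (our shorthand for this thread only, using the numbers quoted above; the open-source release contains the authoritative configuration files, and the layer count we list for XL:2048 is our assumption):\n\n```python\n# Shorthand for the baselines discussed in this thread (not the released configs).\nvariants = {\n    'XL:512':    dict(layers=12, window=512,  segment=512,  sliding=False),\n    'XL:2048':   dict(layers=12, window=2048, segment=2048, sliding=False),  # layer count assumed\n    'Slide:12L': dict(layers=12, window=512,  segment=4096, sliding=True),\n    'Slide:13L': dict(layers=13, window=512,  segment=4096, sliding=True),\n}\n\ndef theoretical_receptive_field(cfg, recurrent=False):\n    # TRF = W * L for the sliding/XL variants; unbounded once block recurrence\n    # is added (the recurrent models use S = W = 512 in a single layer).\n    return float('inf') if recurrent else cfg['window'] * cfg['layers']\n\nassert theoretical_receptive_field(variants['Slide:12L']) == 6144  # the ~6k tokens above\n```\n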
All of the other baselines and recurrent models are simply variations on that architecture. We have clarified the paper. [Done]\n", " \n>> Appendix: \"Given a segment of N tokens (N = 4096 in our experiments), the sliding window\n>> applies a causal mask\"\n> How is this related to the blocks mentioned before? Is this about self-attention? The\n> self-attention is anyway only inside a block and not over the whole sequence?\n\nYes, this is about self-attention. Mathematically, self-attention is defined over the entire sequence (not just the segment), no matter how long. However, if there is a causal mask such that each token can only refer to the W previous tokens (e.g. a sliding-window mask), then most of that massive attention matrix will be filled with zeros, as illustrated in Figure 2. (Previously Figure 3.) Thus, as an optimization, the sequence can be divided into segments, the segments are divided into blocks (of size W), and attention is computed locally within pairs of neighboring blocks.\n\n>> Appendix: \"The sequence of N tokens is subdivided into blocks of size W\"\n> This is not the block concept from the block-recurrency, right? This is just an implementation\n> detail here to make it more efficient...\n\nIt is the same block concept. The blocks are an implementation detail in non-recurrent layers, but they are more than just a detail in the recurrent layer. We use the same blocks to implement both sliding-window attention and recurrence. The block-recurrent cell of Figure 1 handles both self-attention and recurrence. Please take a look at the newly updated Section 3, which should hopefully make this clearer.\n\n> It is not exactly clear: The self-attention in vertical direction, does it always get the last W\n> tokens, i.e. potentially including the last block?\n\nYes. As described in the (original) Appendix A, every token can attend to W previous tokens, which means that it can attend to tokens within the current block, and within the previous block. For the first block in a segment, the \"previous block\" is the (cached) last block from the previous segment. We've clarified this point. [Done]\n\n>> \"As with Transformer-XL, the keys and values from the last block are stored in a non-differentiable cache.\"\n> This is the last block of the segment, which is for the next segment? So this is actually \n> different to Transformer-XL, where the whole segment is stored? \n\nYou are correct. Transformer-XL does not use a sliding window, so the segment length and the window/block length are the same (N=W). In TXL, caching the last \"block\" thus caches the entire segment. The sliding window architecture allows the segment length to be longer than the block length (N >> W), but attention still can't look past a single block, so only the last block needs to be cached. We've clarified the paper. [Done]\n\n> Is sliding-window attention and also the cache of the previous sequence also used for all the\n> other Transformer layers?\n\nGood question! Yes. Sliding-window attention is used in all layers. We have clarified that point. [Done]\n\n>> \"Appendix: The sliding window architecture has a context length of W for every token.\"\n> This statement is wrong when there are multiple layers. The context length is multiplied by the\n> number of layers.Or if this is not the case, then something is very unclear here, and should be\n> explained.\n\nYou're right, but we mentioned the number of layers in the very next paragraph! 
From the (original) Appendix A: \"Thus, the theoretical receptive field (the maximum distance that information can propagate through the model) [of sliding attention] is W * L, where L is the number of layers.\" \n\nThe purpose of that particular section is to distinguish between the \"context length\", which we defined as the distance over which tokens can attend within a single layer, and the \"theoretical receptive field,\" which applies to the model as a whole. (Note that the term \"receptive field\" is also used in the longformer paper; we did not invent this terminology.)\n", " \nIn the interest of making this discussion more visible, we decided to post answers here, using multiple responses, rather than putting them in the supplementary material. \n\n> The Slide model (table 1), is this actually exactly a Longformer model? It would be better if this becomes clear. Or if the Slide model is not a Longformer, how exactly is it different?\n\nThe Slide model is not a Longformer; it only implements the sliding-window attention pattern. Longformer is a much more complicated model, and sliding-window attention is merely one of several different attention patterns that it uses. In addition to the sliding window, Longformer also uses dilated attention and sparse global attention, both of which are implemented with custom CUDA kernels. Moreover, LongFormer uses different window sizes in each layer, and it uses a multi-phase training regimen of pre-training and fine-tuning, following a curriculum that gradually increases window size and sequence length in order to achieve their final results. We do none of these things. (As a side note, Longformer is an excellent example of what we mean by \"special tricks\".)\n\n>> \"Instead of processing the sequence one token at a time, our recurrent cell operates on\n>> blocks of tokens; see Figure 1. Within a block, all tokens are processed in parallel.\" \n> Processing refers to training or inference/generation? This should be made very clear. I\n> assume this is about training because otherwise you cannot process tokens in parallel, right?\n\nYou are absolutely correct, and that is an excellent point. Since this is an autoregressive model, inference must still be done sequentially, token by token. The benefit of handling blocks of tokens in parallel only happens during training. We have clarified this point in the paper. [Done]\n\n> How is training implemented? A loop over the blocks? How is inference implemented (or would be)?\n\nTraining is a loop over the blocks; all tokens within a block are processed in parallel. Inference is a nested loop: an outer loop over the blocks (just like training) and an inner loop over the tokens within each block (for autoregressive decoding). The inner loop is the same as a vanilla transformer. We have updated the paper. [Done]\n\n> The terms \"sequence\" and \"segment\" seem to be mixed up. This is confusing. The\n> terminology should be consistent and it should be made clear what is meant.\n\nWe have tried very hard not to mix up these terms. A \"sequence\" refers to all of the tokens within a document; in the case of PG19, it refers to all of the tokens within a book. You are right that this term was not properly defined, and we have updated the paper to clarify this point. [Done]\n\nThe word \"segment\" is defined in Section 3 (Method): \"we divide the document [i.e. the full book-length sequence] up into segments of length N, which are processed sequentially over a number of training steps. 
Each training step processes one segment. Each segment is further divided into blocks of length W.\"\n\nThe confusion between sequence/segment may be caused by the fact that most research on transformers does not distinguish between the two. The typical way to train a transformer is to split a document up into segments, and then shuffle them, so the model never sees the original full-length document. In that case, \"sequence\" and \"segment\" are the same thing. Thus, in section 2 (related work), we simply use the term \"sequence\".\n\n> Also the terms \"window\" and \"block\" seem to be mixed up. \"blocks of length W\", \"window size\n> W\". I'm not exactly sure whether they refer to the same thing or not. Is there any reason it\n> uses the same length variable W?\n\nUnlike sequence/segment, window/block mean very similar things, and are used somewhat interchangeably. We agree that this is confusing, and have clarified this in the paper; thanks for pointing that out. Part of the problem here is that due to the 9-page limit, we moved the section describing sliding window attention into Appendix A, where readers do not see it. We have used the increase in page limit to move this information back to its original position, in Section 3 (Method), which will hopefully be clearer. [Done]\n\nThe term \"window\" comes from \"sliding window attention\", which is a term of art that was introduced previously in the literature, and refers to the local distance over which each token can attend within a single layer. We divide each segment into blocks, and both the sliding window and recurrence are implemented over blocks, so the block size and the window size are the same.\n\nFrom the (original) Appendix A: \"The sliding window applies a causal mask in which each token can only attend to the W previous tokens, where W is the window size... [sliding window attention is implemented as follows] ... The segment of N tokens is subdivided into blocks of size W, and each block attends locally to itself and to the previous block, so the size of each local attention matrix is W × 2W\".\n\n\n\n", " \nThank you for your valuable review! We appreciate the time you spent giving us such detailed feedback. We have updated the paper to clarify the points that you identified as confusing. Most of our changes are to Section 3 (method). With the increased page limit, we can now incorporate information that was previously in Appendix A into Section 3, and we have written a new version of Appendix A. Changes to the paper are highlighted in blue. Please take a look! \n\nDue to limits on response length, we cannot answer all of your questions here; we putting detailed answers to your questions in the supplementary material. As requested, we have also released source code in the supplementary material. \n\nSince most of your feedback was on improving the presentation of our paper, we hope that you will consider increasing your review score in response to these changes.\n\nResponse to weaknesses:\n\n> Code not released yet. It's also not clear whether it really will be done.\n\nThe good news is that we released the code as open source on github more than a month ago! We cannot share the link here without violating the double-blind review, but we will include a link in the camera-ready copy. In the meantime, we are attaching a copy of the open source archive, which has been scrubbed of identifying information. 
We totally agree that it would have been better to include the code with our original submission, since many of your questions might have been answered by looking at the code. [Done]\n\n> The experimental comparison to the Memorizing Transformer looks a bit unfair. \n\nThe comparison uses a completely identical configuration between the Memorizing Transformer and the Block-Recurrent Transformer, so it is 100% fair. The only difference is our use of recurrence in one layer, vs their use of top-k memory in one layer. The purpose of the comparison is to see whether recurrence provides performance comparable to top-k memory. Both the block-recurrent transformer and the memory transformer can be scaled up, both in terms of the number of parameters and the window size, but that's not the point.\n\nWe did use a somewhat smaller memory size than the original paper, but we're happy to re-run that comparison with a larger memory if you like. It won't make much difference. Even with the smaller size, our numbers closely match the numbers reported in the Memorizing Transformer paper. We report numbers in bits-per-token, so you have to convert; they get 3.54/1.21 bpt for pg19/arxiv, while we report 3.50/1.24.\n\n> Some more standard language modeling benchmarks like enwik8 or Wiki-103 are not used.\n> This would have been nice as there are more results from the literature to compare to.\n\nWe agree that having standard benchmarks is useful for comparing architectures, and enwik8 has historically been used in many papers for long-range modeling. However, in our opinion, it's not a particularly good benchmark, at least for our purposes.\n\nThe purpose of our experiments is to see whether block recurrence can transmit information over very long lengths: we show retrieval over 60k+ tokens. We chose PG19 specifically because we believe it to be a good dataset for these sorts of experiments. It consists only of long, book-length works, it is much larger than enwik8, it is publicly available, and it has been cited in other published work. Arxiv and github are (sadly) not public, but they similarly have long documents in the 50k+ token range.\n\nEnwik8 is not a corpus of long articles. In fact, it doesn't even split the text into separate articles at all; it's just a single text dump that concatenates a bunch of short unrelated articles together. If you do attempt to split it, you will discover that the majority of \"articles\" are merely stubs, with HTML boilerplate and no actual text. Enwik8 is a fine benchmark for data compression, which was the purpose for which it was originally intended, but it is less than ideal for long-range language modeling. \n\nWiki-103 is better because it does break the text into articles, and it eliminates the boilerplate, but the average length is still only 3.6k tokens per article, which is less than the segment length used in our experiments, and a far cry from the 50k+ tokens per book of PG19.\n\nNote that unlike many other papers, our architecture does not incorporate any \"special tricks\" to improve short-range language modeling, so we would expect block-recurrence to provide at best a modest improvement over the Transformer-XL baseline on these data sets.\n\n*** Answers to your other questions are in the supplementary material. ***\n", " \nThank you for your feedback! We have responded to your comments below, and also updated the paper. 
Please let us know if your concerns are sufficiently addressed, or if you have any other questions.\n\n> Can this network be applied to other NLP tasks?\n\nYes, but the block-recurrent transformer will be primarily useful in situations that require long-range context, such as writing book reports, summarizing long news articles, code completion, question/answering over book-length works, or chat bots that require a long chat context. These applications remain as future work. We have updated the paper, and added this information to the conclusion. [Done]\n\n> Some failure cases can be given in the paper.\n\nThere are two failure cases that we discuss in the paper. First, there is a failure mode where the transformer learns to ignore the recurrent state; we provide details on gate initialization to avoid this failure mode. Second, as we discuss in Section 5 (Discussion), the block-recurrent transformer does not seem to be making full use of the capabilities of the LSTM gate. Thus, we believe that the recurrent architecture that we present here has not yet achieved its full potential, and there are opportunities for future research and further improvements in this area.\n", " \nThank you for your encouraging review!\n\n> Can we extend the method to a bidirectional recurrent layer?\n\nYes, but there are caveats. A bidirectional layer obviously can't be used for autoregressive language modeling, because it violates causality, so you would have to use some other training objective, like masked language modeling. Our research has also been on long, book-length works, and each book is broken into multiple segments during training. Only one segment will fit into device memory at a time. Running a bidirectional layer over an entire book would thus require two passes, one for each direction, and would be quite expensive.\nA bidirectional layer where the reverse direction only operates within a single segment (thus providing look-ahead only within that segment) would be easy to implement.\n\n> Why is adding only one recurrent layer enough?\n\nGood question! Based on our qualitative experiments, the recurrent layer seems to be most useful for long-range name lookups, like proper names and places in a novel. A single layer seems to provide sufficient long-range memory to keep track of the proper names within a book, so adding more recurrent layers doesn't necessarily add more capability. Related work on Memory Transformers has found a similar effect. \nIn contrast, the parameters in the non-recurrent transformer layers are used to hold common-sense information about word meanings and general world knowledge, so more layers (more parameters) are always better.\n\nWe've updated the paper to clarify this point. [Done]\n\n> Can you formalize the complexity by a function of N, W, and S?\n\nFor the recurrent layer:\n(N/W) * (W^2 + S^2 + 2SW), where N is the segment length, W is the block/window size, and S is the number of states. N/W is the number of blocks, and each block does self-attention (W^2), state self-attention (S^2), and attention between tokens/states and states/tokens (2SW). Note that in our experiments, we set S=W for simplicity, so this reduces to O(N * W).\nFor the non-recurrent layers:\n(N/W) * W^2 = N*W\n\nWe have added this equation to the paper. [Done -- Appendix A.]\n\n> (Limitations): The paper does not explain how we can use the Block-Recurrent Transformer for a wide range of applications. 
\nThe block-recurrent transformer will be primarily useful in situations that require long-range context, such as writing book reports, summarizing long news articles, code completion, question/answering over book-length works, or chat bots that require a long chat context. These applications remain as future work. We have updated the paper, and added this information to the conclusion. [Done]\n", " This paper presents a new architecture called the Block-Recurrent Transformer. Input sequences are chunked into blocks, and each block is processed by a transformer layer. Each block is connected with a recurrent layer. A block-ID encoding, similar to position encoding, is introduced. The authors include ablation studies on different variants of the gate mechanism in the recurrent layer. The paper is well-written, and the motivation is clear. The Block-Recurrent Transformer improves the efficiency-accuracy trade-off compared to the Transformer-XL, which is a strong language model baseline. Here, the efficiency is the number of parameters and runtime, and the accuracy is language modeling perplexity. It would be much better to show that the Block-Recurrent Transformer could also be effective in many applications where Transformers are successful. However, the new architecture is only tested on language modeling. Can we extend the method to a bidirectional recurrent layer?\n\nWhy is adding only one recurrent layer enough?\n\nCan you formalize the complexity by a function of N, W, and S?\n The paper does not explain how we can use the Block-Recurrent Transformer for a wide range of applications.", " This paper is interesting. The authors propose a new RNN structure for transformers by introducing new transition states between transformers, and achieve new SOTAs. Strengths\n1. New SOTA results by cascading many transformers with trivial modifications.\n2. Tackled the bottleneck of the transformer by using an RNN structure for language modelling.\nWeaknesses\n1. This network may be more effective for very long sequences.\n 1. Can this network be applied to other NLP tasks? 1. Some failure cases can be given in the paper. ", " The paper proposes a new model which adds block-wise outer recurrency to a Transformer.\nThe block-wise recurrency operates on a block of tokens, i.e. the hidden state is updated not after every single token but after a block of tokens.\nThe hidden state is not just a feature vector but a matrix, or a vector of features.\nThe Transformer accesses the hidden state by cross attention on the hidden state, i.e. those vectors of features.\nThe Transformer additionally uses sliding-window attention like the Longformer. At the beginning of the sequence, it has access to the previous sequence and also the previous hidden state via a cache, like the Transformer-XL and Longformer. The gradient flow obviously stops at the sequence boundary.\nIn training, one sequence of tokens consisting of multiple blocks is given to the model.\nGoing over the blocks and updating the recurrent state is orthogonal to going over the tokens. For any token, the hidden state it would access is the last hidden state it has access to.\nThe recurrency update is similar to the layerwise structure of a Transformer. Considering one Transformer layer including the cross attention to the hidden state, mostly the same structure is used for updating the hidden state. Specifically, it uses self-attention on the hidden state itself, and cross attention to the tokenwise features of the current block. 
However, instead of residual connections, they use a gating mechanism here. Thus, because there is the self-attention and cross-attention (already together) and then a following MLP, there are two gates per recurrent state update. They also have some variations that remove some of the Transformer components, like the MLP, such that the recurrent state update only has one gate. In their experiments, this simpler variant actually performs better.\nThis describes actually only one single block-recurrent Transformer layer. Their overall model is actually mostly like Longformer, i.e. Transformer-XL with sliding-window attention, and they simply replace one single layer by such a block-recurrent Transformer layer.\nThey also test another variant (referred to as \"Feedback\" in the table) where they also modify the other standard Longformer layers by additional cross attention to the last recurrent state, where the recurrent state still only comes from a single layer.\n\nThe sliding-window attention is also implemented in a blockwise fashion for better efficiency in training. I think it actually uses the same blocks both for the recurrent state and the sliding-window attention, although this is a model aspect for the recurrency, and an implementation detail for the sliding-window attention.\n\nThey perform experiments on the PG19 dataset, arXiv dataset and GitHub dataset. They reach or surpass state-of-the-art performance in all. Strengths:\n\nThe idea to combine recurrency with self-attention in some way is not novel and has often been tried before in various ways, but often only with little success. From the results, it looks like the proposed model really improves considerably due to the recurrency. The results look good.\n\nOverall, the authors do a good job in summarizing related work.\n\nPlan to release the code.\n\nWeaknesses:\n\nIn the next part under questions (suggestions), I will list more things individually. Here this is just a summary of the weaknesses.\n\nUnfortunately the model definition is unclear in many parts. My summary is what I assumed after re-reading many parts again and again and inferring what would have made sense. But this is not good. The model definition should be completely unambiguous and very clear.\n\nThe analysis on aspects of the model is a bit short and leaves many open questions.\n\nThe experimental comparison to the Memorizing Transformer looks a bit unfair. It's not clear whether the Memorizing Transformer can yield better overall perplexity.\n\nSome more standard language modeling benchmarks like enwik8 or Wiki-103 are not used. This would have been nice as there are more results from the literature to compare to.\n\nCode not released yet. It's also not clear whether it really will be done. I have read statements like \"we plan to\" so often where it was never released in the end...\nAlso, in this case, as there were so many things unclear, the code could have helped a lot in clarifying everything exactly up to the latest detail. Memory Networks and Neural Turing Machines are related models considering the type of hidden state.\n\nPerceiver is also somewhat similar?\n\n\n> Instead of processing the sequence one token at a time, our recurrent cell operates on blocks\n> of tokens; see Figure 1. Within a block, all tokens are processed in parallel. \n\nProcessing refers to training or inference/generation? This should be made very clear. I\nassume this is about training because otherwise you cannot process tokens in parallel, right? 
At inference, even within a block, tokens are handled sequentially, I assume. This should be made clear. When it is not specifically said that it is about training, I assume inference, and then this statement is wrong.\n\nHow is training implemented? A loop over the blocks?\nHow is inference implemented (or would be)?\n\nI think, to make it really clear, the full definition of the model as it is used in inference (not training) should be given. Just a figure is not enough.\nThe recurrent cell is (mostly) clear, but the overall model is not clear. How is this recurrent cell integrated? What does the overall model look like?\n\n> standard transformer layers ...\n> We also normalize queries and keys\n\nIt would be good to clarify exactly what the overall model looks like. It seems like \"standard Transformer layers\" is not totally correct and there are differences, as mentioned later. Also, using sliding-window attention is also some big difference to the standard Transformer.\n\nIs sliding-window attention and also the cache of the previous sequence also used for all the other Transformer layers?\n\nIt is not exactly clear: The self-attention in vertical direction, does it always get the last W tokens, i.e. potentially including the last block? Or is it always only within a block, so the first token would not get any context (except the recurrent state)? I assume always the last W tokens, but it's not exactly clear.\n\nWhat is the effective context length in each case for each model? For the proposed recurrent case, it should be infinite, right? For the other variants, it is probably still quite high, sth like num layers * segment length but I'm not exactly sure. But this is an important aspect and should be discussed. For the Slide model variants, it should be less than the XL models, right?\n\n> Appendix: Given a segment of N tokens (N = 4096 in our experiments), the sliding window applies a causal mask\n\nHow is this related to the blocks mentioned before?\nIs this about self-attention? The self-attention is anyway only inside a block and not over the whole sequence?\n\nIn general (paper + appendix):\n\nThe terms \"sequence\" and \"segment\" also seem to be mixed up. This is confusing. The terminology should be consistent and it should be made clear what is meant.\n\nAlso the terms \"window\" and \"block\" seem to be mixed up. \"blocks of length W\", \"window size W\". I'm not exactly sure whether they refer to the same thing or not. Is there any reason it uses the same length variable W?\n\n> Appendix: The sequence of N tokens is subdivided into blocks of size W\n\nThis is not the block concept from the block-recurrency, right? This is just an implementation detail here to make it more efficient, I assume. It's unfortunate that this causes some confusion here. This distinction should be made more clear. I think before implementation details for better efficiency are discussed, the model itself should be clearly defined.\n\n> Appendix: The sliding window architecture has a context length of W for every token.\n\nThis statement is wrong when there are multiple layers. The context length is multiplied by the number of layers. Or if this is not the case, then something is very unclear here, and should be explained.\n\n> current cell state at time t\n\nWhat is time t? \"Time\" is highly confusing. This either means the state index (s in {1...S}) or the block index. I assume the block index, but this is not clear here. So it means, the same gate is applied for the whole block? 
This should be clarified. And then also discussed. I'm not sure that this is the natural solution. Why not apply the gating per state index s?\n\n\nAnalysis:\n\nThe presentation in table 1 is a bit confusing. It's not explained well what the model notation means. From a first glance, after not having carefully read the paper, it's not clear at all which of those models actually correspond to the proposed block-recurrent transformer, etc. It becomes more clear after having carefully read everything, but still I think this is bad. Also, step time is unclear: is this for training or inference? PG19 perplexity on token-level is not so helpful - better would be on word-level, like it is also in table 2.\n\nThe Slide model (table 1), is this actually exactly a Longformer model? It would be better if this becomes clear. Or if the Slide model is not a Longformer, how exactly is it different? This is not clear then, and should be explained.\n\n\n> Scaling up / table 2\n\nIt should be clarified how exactly the scaling is done. The dimension is increased?\n\nRecurrent block size W, what is the effect? (Window size seems to be studied in the appendix, but not (recurrent) block size.)\n\nI wonder about more extreme cases, like:\n\n- S=1024, block size W=1, window size = 1, like NTM (Even though the trend in Appendix D might suggest this is bad, I think this is still interesting, whether it works at all, and how good this still is. Also, there might be a point where it improves again because it is forced to make more use of the recurrency also for short context, so it might learn that better.)\n- S=1024, block size W=1, window size = 1024\n\n(More recurrent layers? Discussed but not really systematically.)\n\nAnalysis on the usefulness of the recurrent state, depending on where you are in a block? Later on inside a block, the recurrent state might be less important?\n\nAnalysis on the usefulness of self-attention in the recurrent state update (horizontal direction). Is it actually used?\n\nThe analysis is quite short.\n\n> We find that recurrence performs roughly on a par with the memorizing transformer, but trains considerably faster. We do not report the exact speedup ...\n\nIt would still be useful to get some impressions.\n\nThe Memorizing Transformer, I assume they have their own implementation because the numbers they report are not from the original paper. In the original paper, the best configuration used a segment/window length of 2048, and a memory size of 65k. Here they choose to use seg/win len 512 and mem size 32k. Why? This looks like they deliberately choose a suboptimal model such that their numbers look better in relation. Or if there is a reason, this should be explained, and then it would still be interesting to see the best Memorizing Transformer setting for the sake of completeness.\n\nWhen scaling the model up (Table 2), the Memorizing Transformer would also be interesting for comparison there.\n\nAfter these results, I cannot tell whether the Memorizing Transformer is better or the Block-Recurrent Transformer.\n \n> As with Transformer-XL, the keys and values from the last block are stored in a non-differentiable cache\n\nThis is the last block of the segment, which is used for the next segment?\nSo this is actually different from Transformer-XL, where the whole segment is stored?\nOr maybe I misunderstand sth here. This should be clarified.\n\n> The baseline model is a 12-layer transformer\n\nWhere is this model in the table? 
Or is the baseline actually the XL model?\n\nI wonder: why do the Slide models perform better than the XL models on PG19 but worse on Arxiv and Github? Because they have less context and context is more important on Arxiv or Github? This should be discussed.\n \nIt is mentioned that the full potential is likely not achieved yet and the current recurrent structure is maybe suboptimal.\n\nNegative social impacts are also briefly but adequately addressed.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "Y_6Cr77HBd", "tImudqeIK5Q", "Y_6Cr77HBd", "P66WrRjyrS", "GvxS8tIeFk", "YNs2iRB3zQp", "nIBd7Q50bRe", "7WOw4weawgr", "S3JysCuQnds", "36mOpe5uDzf", "nips_2022_uloenYmLCAo", "nips_2022_uloenYmLCAo", "nips_2022_uloenYmLCAo" ]
nips_2022_AOSIbSmQJr
Markovian Interference in Experiments
We consider experiments in dynamical systems where interventions on some experimental units impact other units through a limiting constraint (such as a limited supply of products). Despite outsize practical importance, the best estimators for this `Markovian' interference problem are largely heuristic in nature, and their bias is not well understood. We formalize the problem of inference in such experiments as one of policy evaluation. Off-policy estimators, while unbiased, apparently incur a large penalty in variance relative to state-of-the-art heuristics. We introduce an on-policy estimator: the Differences-In-Q's (DQ) estimator. We show that the DQ estimator can in general have exponentially smaller variance than off-policy evaluation. At the same time, its bias is second order in the impact of the intervention. This yields a striking bias-variance tradeoff so that the DQ estimator effectively dominates state-of-the-art alternatives. From a theoretical perspective, we introduce three separate novel techniques that are of independent interest in the theory of Reinforcement Learning (RL). Our empirical evaluation includes a set of experiments on a city-scale ride-hailing simulator.
Accept
Please add proof outlines to the main body in the final version. Also add a discussion on Assumption 1 and more insights for the proposed method.
val
[ "gPaWOoUMPs", "dwnsmw30H5", "-4CdLmUCz3p", "Msz4ma75qx", "BA85mM_tu7", "7B6xaKFZqqL", "V65q1I1zUeB", "5SiCl3HYKd", "AS5aAErBlZX", "8v0JaIpi6s", "KY3vPI1zY1" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your clarification. Adding this discussion about the assumption and more intuition for the proposed method in the main paper can be beneficial. I have no other questions, and I have updated the score accordingly.", " Adding insights like that given in your proof outline will further strengthen the paper. ", " Thank you very much for your reply! My questions are addressed!", " Thank you for your review! Please find a response to your two questions below; we can certainly include some of this discussion in the paper. \n\n### **Potential Outcomes**\nThere are two approaches we could take here to cast the setup within Potential Outcomes. The first is similar to the approach taken in Bajari et. al. 2020, where the domain of the potential outcome function is high dimensional (in that case, depending on the treatment of a subset of other units). Specifically, in our setting, the domain of the potential outcome function would be the sequence of actions taken. However, this is not an ideal approach (as observed already in Bajari et. al.) since it does not yield an approach to approximating ATE with experimental outcomes.\n\nAn alternate approach is to posit potential outcomes that are *state-dependent*. This is akin to a more traditional view of the potential outcomes framework with the exception that the distribution over states as opposed to being exogenous (a situation which would trivially lend itself to potential outcomes), is now endogenous. Our work can be viewed as characterizing the precise bias introduced by this endogeneity. \n\n### **Daily estimates**\n\nOne approach to addressing temporal interference in general is indeed a switchback design, where, say, treatment and control are applied on alternate days, and then the difference in the \"daily estimates\" constitutes a treatment effect estimate. There are two challenges to leveraging this design. The first, is that this design requires that for some period of time, the entire system be subject to the intervention, which in some applications is not viewed as an acceptable risk. The second, more significant challenge, as you rightly recognize, is that the system will often not mix fast enough (i.e. `get to equilibnrium’ as you put it). Our work can be viewed as a solution to this challenge, when it is possible to identify a notion of ‘state’ with respect to which the underlying system is Markovian. \n", " Thank you for the helpful review! In response to your questions:\n\n### **Coherent Policy Within an Episode**\nTo clarify the formulation, and the issue of interference: In a typical problem setting, a user arrives at each time step. Then, with probability $p$, the user receives the treatment (i.e. the system takes the action prescribed by the intervention policy), and with probability $1-p$, the user receives the control (i.e. the system takes the action prescribed by the incumbent policy). Following this, the state of the system evolves (based potentially on the users response), thereby impacting subsequent arrivals to the system. The system is expected to mix, but there is no concept of a ‘reset’ or ‘episode’. This is the setting described in the introduction. \n\nIf the actual identity of a user is relevant to payoffs, then the state of the chain can be expanded to capture said identities (or relevant user features). 
Persisting treatments for the same user arriving at multiple timesteps can also be handled in this way, by incorporating identity into the state description.\n\n### **Assumption 1**\nAssumption 1 requires the inequality hold for all $k$ and some $\\lambda < 1$ -- sorry for not being more precise. In fact, this is benign: a general, sufficient condition for Assumption 1 to hold is that the Markov chain is ergodic (i.e., irreducible and aperiodic). We can clarify this in the manuscript. This is a **very** general class of Markov chains: in practical terms, this models essentially any Markovian system as long as no action leads to an “irrecoverable” state (i.e., all states are always reachable in finite time). Note that most of the average-reward MDP literature makes this assumption (see e.g. this recent lit review https://arxiv.org/pdf/2010.08920.pdf). Finally, Theorem 1 holds for any $\\delta$ and requires Assumption 1 hold; nothing is required of the mixing times of steady-state distributions of the treatment or control policies. \n\n### **Intuition for the DQ estimator**\nThis is an excellent question! In short, our design of the DQ estimator is based on a novel `Taylor’-like expansion for $\\rm{ATE}$ as a function of $\\delta$. Specifically, writing ${\\rm ATE} := f(\\delta)$, we observe that a Taylor expansion of $f(\\delta)$ can be obtained by iterating a particular perturbation identity for stationary distributions of Markov chains. From here, we discover that:\n\n1. Using the \"zeroth order\" form of this expansion to estimate the values of the intervention yields the Naive estimator (and $O(\\delta)$ bias which corresponds to a bound on the first order remainder) \n2. Using the \"first order\" form of this expansion yields the DQ estimator (and $O(\\delta^2)$ bias which corresponds to a bound on the second order remainder). Now to your question – It turns out that this first order expansion maps precisely to a difference of $Q$-functions. \n\nWe have provided a detailed sketch of this approach in our response to Reviewer P5xj, and additional details can be found in Appendix C of the supplementary materials.\n\n### **Mixing Time and Practical Performance**\nThis is a great question. In the ride-sharing example, the quantity $1/(1-\\lambda)$ translates into about $10^5$ time steps (i.e. $\\lambda$ is very close to 1). Despite this, we see that the bias of the DQ estimator remains low, and in particular much lower than the Naive estimator. We can discuss this further in the update.\n", " ### **Providing more intuition about using $Q$ value**: \nThis is an excellent question! In short, our design of the DQ estimator is based on a novel \"Taylor\"-like expansion for $\\rm{ATE}$ as a function of $\\delta$. Specifically, writing ${\\rm ATE} := f(\\delta)$, we observe that a Taylor expansion of $f(\\delta)$ can be obtained by iterating a particular perturbation identity for stationary distributions of Markov chains. From here, we discover that:\n1. Using the \"zeroth order\" form of this expansion to estimate the values of the intervention yields the Naive estimator (and $O(\\delta)$ bias which corresponds to a bound on the first order remainder) \n2. Using the \"first order\" form of this expansion yields the DQ estimator (and $O(\\delta^2)$ bias which corresponds to a bound on the second order remainder). Now to your question – It turns out that this first order expansion maps precisely to a difference of $Q$-functions. 
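
To make this concrete, here is a minimal numerical sketch (our own illustration, not code from the paper; the toy chain and all names in it are our choices). It builds a small chain, computes $V_{1/2}$ via the group inverse, and compares the Naive and DQ estimates against the true ATE. Both are population quantities here, so the gap shown is purely the bias of each expansion order:

```python
import numpy as np

rng = np.random.default_rng(0)
S, delta = 5, 0.05
P0 = rng.dirichlet(np.ones(S), size=S)           # transitions under control (action 0)
D = rng.dirichlet(np.ones(S), size=S)
P1 = (1 - delta) * P0 + delta * D                # transitions under treatment (action 1)
Ph = 0.5 * (P0 + P1)                             # chain under 50/50 randomization
r0, r1 = rng.random(S), rng.random(S)            # per-state rewards of the two arms

def stationary(P):
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmax(np.real(w))])
    return v / v.sum()

# V_{1/2} = (I - P_{1/2})^# r_{1/2}, using (I - P)^# = (I - P + Pi)^{-1} - Pi
rho_h = stationary(Ph)
Pi = np.outer(np.ones(S), rho_h)
V = (np.linalg.inv(np.eye(S) - Ph + Pi) - Pi) @ (0.5 * (r0 + r1))

ate = stationary(P1) @ r1 - stationary(P0) @ r0  # ground-truth ATE
naive = rho_h @ (r1 - r0)                        # zeroth-order estimate
dq = rho_h @ ((r1 + P1 @ V) - (r0 + P0 @ V))     # rho_{1/2}^T (Q_1 - Q_0)
print(f"ATE={ate:+.5f}  naive_err={abs(naive - ate):.1e}  dq_err={abs(dq - ate):.1e}")
```
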
\n\nWe have provided a detailed sketch of this approach in our response to Reviewer P5xj, and additional details can be found in Appendix C of the supplementary materials.\n\n### **If treatment probability is not $\\frac{1}{2}$:**\nSuppose the treatment probability for policy 0 is $q$. The estimator presented carries over essentially unchanged, with the following transformation: we compute the Q-function with ``propensity-score-adjusted’’ rewards $r'$: where $r'(s, 0) = r(s, 0)\\cdot (1-q)/q, r'(s, 1) = r(s, 1)\\cdot q/(1-q)$. Note that since $q$ is an algorithmic choice, the 'propensity-score' here is constant and known so that the transformation is trivial. \nOur results on bias are unchanged, but as one might expect, variance increases by roughly a factor of $O(1/q)$.\n\n### **Finite-sample results:** \nA natural approach to doing this would be to consider using Berry-Esseen type results to refine the CLT analysis at the heart of our variance characterization. The non-triviality here is (a) characterizing the higher order coefficients in the Berry-Esseen approximation, and (b) justifying that our chain is amenable to the approximation (i.e. these would be conditions on the MDP itself). An alternative to this approach is attempting to extend the exciting recent progress on finite-time analyses of TD (Zhang, Zhang, Maguluri ICML2021, Qiu, Yang, Ye, Wang JSAIT2021). It is unclear that this approach would yield a useful characterization on the constants in the rates obtained however, and these constants are actually essential since trivial characterizations that scale with the size of the state space are of limited value. \n\n### **When $\\delta$ is large**: \nAs long as $\\delta < 1- \\lambda$, where $\\lambda$ is the mixing time, the Naive estimator will always have worst case bias that is inferior to the DQ estimator – this can be seen by examining the ‘remainder’ terms in the Taylor-expansion we alluded to earlier. \nIn our experiments, we find that the bias of the DQ estimator remains substantially lower than the Naive estimator even as $\\delta$ grows large. Specifically, in response to your question, we modified the experiments in Section 5.1 to have $\\delta = 0.99$, and the bias of DQ was still 1% that of the Naive estimator.\n", " Thank you for the extremely positive and helpful review! We plan to add proof outlines to the main body. \n\n### **Intuition for the DQ estimator**\nThis is an excellent question! In short, our design of the DQ estimator is based on a novel `Taylor’-like expansion for $\\rm{ATE}$ as a function of $\\delta$. Specifically, writing ${\\rm ATE} := f(\\delta)$, we observe that a Taylor expansion of $f(\\delta)$ can be obtained by iterating a particular perturbation identity for stationary distributions of Markov chains. From here, we discover that:\n1. Using the \"zeroth order\" form of this expansion to estimate the values of the intervention yields the Naive estimator (and $O(\\delta)$ bias which corresponds to a bound on the first order remainder) \n2. Using the \"first order\" form of this expansion yields the DQ estimator (and $O(\\delta^2)$ bias which corresponds to a bound on the second order remainder). Now to your question – It turns out that this first order expansion maps precisely to a difference of $Q$-functions. \n\n\n### **Proof Outline of Theorem 1**\n(Preliminary notation: let $A^{\\\\#}$ denote the *group inverse* of a matrix $A$, which is one type of inverse that is useful in the MDP context.) 
To begin, let us state two facts:\n\n**Fact 1.** For any policy $\\pi$ and associated transition matrix $P_{\\pi}$ and state-wise rewards $r_{\\pi}$, the bias function $V_{\\pi}$ of $\\pi$ has the following **explicit** formula: \n$$\nV_{\\pi} = (I-P_{\\pi})^{\\\\#} r_{\\pi}.\n$$\n\n**Fact 2.** For any policies $\\pi, \\pi'$ with corresponding stationary distributions $\\rho_{\\pi}, \\rho_{\\pi'}$ and transition matrices $P_{\\pi}$ and $P_{\\pi'}$, we have the following perturbation **identity** for stationary distributions (Meyer 1980): \n$$\n\\rho_{\\pi}^{\\top} = \\rho_{\\pi'}^{\\top} + \\rho_{\\pi}^{\\top}(P_{\\pi}-P_{\\pi'})(I-P_{\\pi'})^{\\\\#}.\n$$ \n\nWe use these two facts to obtain a Taylor expansion for $\\rm{ATE}$. Note that $\\rm{ATE} = \\rho_{1}^{\\top}r_1 - \\rho_0^{\\top}r_0$, where $\\rho_{1} \\in R^{|S|}, \\rho_{0} \\in R^{|S|}$ are the stationary distributions of $P_1$ and $P_0$ respectively; and $r_{1}, r_{0}$ are the reward vectors associated with the actions $1$ and $0$ respectively. We further set $P_{1} - P_{0} = \\delta A$ where $\\delta \\in R$ is chosen such that the row absolute sum of $A$ is bounded by 1. \n\nWe analyze $\\rho_{1}^{\\top}r_1$ first; $\\rho_0^{\\top}r_{0}$ follows by symmetry. Applying Fact 2 to $\\rho_{1}$ based on $\\rho_{1/2}$, we have\n\\begin{align*}\n\\rho_{1}^{\\top}r_1 &= \\rho_{1/2}^{\\top}r_1 + \\rho_{1}^{\\top}(P_{1}-P_{1/2})(I-P_{1/2})^{\\\\#}r_{1} \\\\\\\\\n&= \\rho_{1/2}^{\\top}r_1 + \\frac{\\delta}{2}\\rho_{1}^{\\top}A(I-P_{1/2})^{\\\\#}r_{1}.\n\\end{align*}\nApplying Fact 2 to expand $\\rho_{1}$ again using $\\rho_{1/2}$ in the RHS of the above equation: \n\\begin{align*}\n\\rho_{1}^{\\top}r_1 \n&= \\rho_{1/2}^{\\top}r_1 + \\frac{\\delta}{2}\\rho_{1}^{\\top}A(I-P_{1/2})^{\\\\#}r_{1}\\\\\\\\\n&= \\rho_{1/2}^{\\top}r_1 + \\frac{\\delta}{2}\\rho_{1/2}^{\\top}A(I-P_{1/2})^{\\\\#}r_{1} + \\frac{\\delta^2}{4}\\rho_{1}^{\\top}\\left(A(I-P_{1/2})^{\\\\#}\\right)^2 r_{1}\n\\end{align*}\nThis can be iterated so that the $k$-th order expansion, in terms of O($\\delta^{k}$), can be obtained. But the second order is sufficient for the analysis here. A similar analysis can be applied for $\\rho_{0}^{\\top}r_0$ and after cleaning this up, we obtain\n\\begin{align*}\n\\rm{ATE} \n&= \\rho_{1}^{\\top}r_1 - \\rho_{0}^{\\top}r_0\\\\\\\\\n&= \\rho_{1/2}^{\\top}(r_1-r_0) + \\delta \\cdot \\rho_{1/2}^{\\top} A (I-P_{1/2})^{\\\\#} \\frac{r_1+r_0}{2} + O(\\delta^2). \n\\end{align*}\n\n**Naive Estimator: zeroth-order expansion**: From the above, it is clear that the zeroth order term above $\\rho_{1/2}^{\\top}(r_1-r_0)$ exactly corresponds to the Naive estimator, which naturally has $\\Omega(\\delta)$ bias in general. \n\n**DQ estimator: first-order expansion**: To see why the first order approximation corresponds to our DQ estimator, we can use Fact 1, i.e., $(I-P_{1/2})^{\\\\#} r_{1/2} = V_{1/2}$, and recall that $\\delta A = P_1 - P_0$, then\n\\begin{align}\n&\\rho_{1/2}^{\\top}(r_1-r_0) + \\delta \\cdot \\rho_{1/2}^{\\top} A (I-P_{1/2})^{\\\\#} \\frac{r_1+r_0}{2} \\\\\\\\\n&= \\rho_{1/2}^{\\top}(r_1-r_0) + \\rho_{1/2}^{\\top} (P_1-P_0) V_{1/2} \\\\\\\\\n&= \\rho_{1/2}^{\\top}(r_1 - r_0 + (P_1-P_0)V_{1/2})\\\\\\\\\n&=\\rho_{1/2}^{\\top} (Q_1 - Q_0), \n\\end{align}\nwhere the last equality uses the definition of the $Q$-function. The formula is somewhat surprisingly elegant. As an aside, this also suggests how to construct arbitrary kth-order expansions for further bias reduction, although potentially at the cost of increased variance. 
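
As a sanity check on the two facts and on the advertised remainder orders, here is a small self-contained numerical sketch (again our own illustration, not the paper's code; the random chain below is an assumption made purely for the demonstration). It verifies Meyer's identity to machine precision, and shows the zeroth-order (Naive) remainder shrinking like $\delta$ while the first-order (DQ) remainder shrinks like $\delta^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
S = 4
P0 = rng.dirichlet(np.ones(S), size=S)           # baseline transition matrix
D = rng.dirichlet(np.ones(S), size=S)            # direction used to perturb P0
r0, r1 = rng.random(S), rng.random(S)

def stationary(P):
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmax(np.real(w))])
    return v / v.sum()

def group_inverse(P):                            # (I - P)^# = (I - P + Pi)^{-1} - Pi
    Pi = np.outer(np.ones(len(P)), stationary(P))
    return np.linalg.inv(np.eye(len(P)) - P + Pi) - Pi

# Fact 2: rho_1^T = rho_0^T + rho_1^T (P_1 - P_0)(I - P_0)^#, exactly
P1 = 0.9 * P0 + 0.1 * D
rho1, rho0 = stationary(P1), stationary(P0)
residual = rho1 - (rho0 + rho1 @ (P1 - P0) @ group_inverse(P0))
print("Fact 2 residual:", np.max(np.abs(residual)))   # ~1e-16

# Remainder orders: halving delta roughly halves the Naive error
# and quarters the DQ error
for delta in [0.2, 0.1, 0.05]:
    P1 = (1 - delta) * P0 + delta * D            # P_1 - P_0 = delta * (D - P0)
    Ph = 0.5 * (P0 + P1)
    rho_h = stationary(Ph)
    V = group_inverse(Ph) @ (0.5 * (r0 + r1))    # V_{1/2}
    ate = stationary(P1) @ r1 - stationary(P0) @ r0
    naive = rho_h @ (r1 - r0)
    dq = rho_h @ (r1 - r0 + (P1 - P0) @ V)       # = rho_{1/2}^T (Q_1 - Q_0)
    print(f"delta={delta:.2f}  naive_err={abs(naive - ate):.2e}  dq_err={abs(dq - ate):.2e}")
```
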
See Appendix C in the supplementary materials for details.\n", " The paper introduces a treatment estimator for experiments with Markovian interference using an on-policy RL approach. The paper shows that their estimator achieves a good bias-variance tradeoff vs. other baselines (two-sided randomization, off-policy estimators), and that unbiased estimators necessarily have large variance. Finally, the technical results and estimation strategies are illustrated in two experiments: one that re-uses the framework from Johari et al., and another that uses the authors' own simulation code to simulate a ride-sharing experiment. The paper seemed to me to be strong. It's well written and was enjoyable to read. I have little criticism to make, though I have to admit I'm not familiar enough with the RL literature to be entirely confident in my assessment. I particularly appreciated that the experiment section was clear, and re-used available open source code, or made the code available, with sufficient baselines.\n\n One thing that would improve the paper is to relate the notation to the potential outcomes notation, which is standard in this setting and which many readers in this space are familiar with. More specifically, it would be nice to characterize in this notation which settings can be considered Markovian interference where this framework applies.\n\nnit: typos l. 38, l. 68.\n The averaging of estimates over long periods of time into single point estimates is not always regular practice, and I'm afraid that in this case, the framework wouldn't apply so easily. Often, just daily estimates are compiled and reported. Is this because of non-Markovian interference, or seasonal effects, or time to reach equilibrium? Some discussion there would clarify things for the reader and map this to a setting they are familiar with. N/A", " This paper examines estimating the difference in reward performance of two policies for a Markov Decision Process (MDP) by running a mixture policy and performing on-policy evaluation. The paper shows theoretically the benefits over current methods of policy evaluation of using a difference-in-$Q$-values estimation process. Specifically, this approach obtains a small estimator variance and an estimation bias that is second order in the effect of the \"intervention\" policy (the policy being tested against an existing policy). (S) The paper is very well written. It is clear and precise. I found very few typos.\n\n(S) The paper tackles a very important problem and gives a compelling argument and demonstration for how it can be solved in a much better fashion compared to current approaches.\n\n(S) Theoretical results are explained well (NOTE: although no proofs are included in the body of the paper). \n\n(S) The experimental results use a large scale realistic problem and show strong evidence for substantial and meaningful improvement.\n\n(S) The experimental results are very well presented and interpreted.\n\n(S) Excellent reference list (66 references).\n\n(W) It would have been nice to see proof outlines (at least) in the paper. I assume proofs are given in supplemental material (I did not check).\n\n Line 38: \"designs are not be an ideal lever\"; perhaps \"be\" should be deleted.\n\nLine 38 and 39: \"and often infeasible\"; perhaps this should be \"and are often infeasible\".\n\n It would have been nice to see some proof outlines in the paper. There was no supplementary appendix at the end of the paper. 
I assume it required a separate download.", " The paper considers the problem of estimating the average treatment effect, where experimenting on some treatment group might also affect the control group. The problem is motivated by several practical scenarios, such as inventory management, ride sharing, etc., where both the treatment and the control group share a common pool of resources. The authors cast this as an off-policy evaluation problem and propose a new estimator that adequately balances the bias-variance trade-off in this setting. Experiments on a toy domain and a ride-hailing simulator are used to demonstrate the effectiveness of the proposed method.\n\n Strengths\n\n- The problem is applicable to a large number of practical applications and thus the work can be of high relevance to a broad audience.\n- The proposed estimator is straightforward to implement, and at the same time provides an adequate bias-variance trade-off.\n- The proposed method provides strong performance in the experiments conducted.\n\nWeakness\n\n- It is not clear how reasonable Assumption 1 is.\n A\. Section 2.1: My understanding is that during implementation the intervention policy is at least intended to be coherent within an episode (here an episode might correspond to one user or one session). In the discussion/example it seems like the policy is being switched within a single episode. I can see one argument that from the point of view of the system, it may not be able to distinguish different users, but in that case it is a POMDP setting and not an MDP one?\n\nI see the high-level problem but I am quite confused by the formulation of it. (For disclosure, I am not well aware of the interference literature.)\n\nB\. Assumption 1: For which values of $k$? I am not sure if I have a complete understanding of what this assumption implies and how practical it is. Particularly, along with the assumption in Theorem 1, what does it say about the steady state distribution and the mixing time for the treatment and the control policies? \n\nC\. Line 176 and Eqn 3: In both places the proposed estimator is dropped in without much discussion about why one would expect this to work. I see the proof for Theorem 1, but I am failing to understand intuitively why the proposed procedure helps. Both the Q functions and the distribution for computing the expectations are for the data collecting policy. It feels like this estimator is only making a one-step correction according to the two policies being compared, which is a lot like what the naive estimator would do as well. What is the correct way to interpret this estimator?\n\n\nD\. How does the mixing time assumption influence the empirical results? Showing this on the toy example at least would be helpful. \n While the proposed method is straightforward to express, I do not think I have a good enough understanding of it to provide any constructive criticism (I had not bid to review this work). As such, I have lowered my confidence for this review and will update my score based on discussions with other reviewers after the rebuttal.", " This paper develops a novel biased estimator for the average treatment effect with an MDP model. Here, the average treatment effect is the difference between the reward obtained with and without treatment. This estimator is the difference between the empirical average of Q-values corresponding to the cases with and without applying treatment, which is very similar to a naive estimator that computes the difference of rewards instead of Q-values. 
However, the authors show that in some cases, this new estimator has a negligible bias, which is significantly better than the naive estimator. Moreover, under some mild conditions, they show that this new estimator is asymptotically unbiased. The limiting random variable of this new estimator has a smaller variance than that of all unbiased estimators. Finally, the authors illustrate their results with numerical experiments. Strengths:\n\n1. The idea of this new estimator is simple and clear, and, at the same time, the authors provide strong results.\n \n2. This new estimator is computationally affordable.\n\nWeaknesses:\n\n1. It would be better to have some finite sample results corresponding to Theorem 2. \n \n2. The motivation or intuition for using the Q-value instead of the reward seems not strong enough. 1. I was wondering whether you could provide more intuition for using the Q-value.\n \n2. If the probability of applying the treatment is not 1/2, which might be a small value, how will this change affect those results?\n\n3. I wonder if there are some finite sample results corresponding to Theorem 2?\n \n4. For the comparison between the naive estimator and the DQ estimator, if the difference between the transition probability distributions with and without treatment is large, will the naive estimator work better or not? I think there is no potential negative social impact of the work." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 8, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 3 ]
[ "BA85mM_tu7", "V65q1I1zUeB", "7B6xaKFZqqL", "5SiCl3HYKd", "8v0JaIpi6s", "KY3vPI1zY1", "AS5aAErBlZX", "nips_2022_AOSIbSmQJr", "nips_2022_AOSIbSmQJr", "nips_2022_AOSIbSmQJr", "nips_2022_AOSIbSmQJr" ]
nips_2022_rZalM6vZ2J
DP-PCA: Statistically Optimal and Differentially Private PCA
We study the canonical statistical task of computing the principal component from i.i.d.~data under differential privacy. Although extensively studied in literature, existing solutions fall short on two key aspects: ($i$) even for Gaussian data, existing private algorithms require the number of samples $n$ to scale super-linearly with $d$, i.e., $n=\Omega(d^{3/2})$, to obtain non-trivial results while non-private PCA requires only $n=O(d)$, and ($ii$) existing techniques suffer from a large error even when the variance in each data point is small. We propose DP-PCA method that uses a single-pass minibatch gradient descent style algorithm to overcome the above limitations. For sub-Gaussian data, we provide nearly optimal statistical error rates even for $n=O(d \log d)$.
Accept
The paper studies PCA under the constraint of differential privacy, and presents an improved algorithm which is based on a minibatch SGD where each step contains a private top eigenvalue estimation and a private mean estimation. The reviewers agree that the new algorithm is interesting and that the new results are important.
train
[ "EYmvoyJqi4w", "wA1BdJ6ksp_", "VpUxRZemalG", "qoZL8xfxMLM", "eaUjqpJlSsH", "RldcRcKcgyS", "VEqz6QKIykl", "HWBCtvlj7vo", "_r08z6kju59" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The current revise solves my questions. I will change my rating to 'accept'.", " - Re: Major weakness:\n - We thank the reviewer for their meticulous checking of the proof. We acknowledge that there were a few typos in the proof which we have now fixed. A corrected proof of ***the exactly the same statement of Theorem 5.3*** is provided in the revised pdf (with the changes emphasized in blue in Section C.1 in the new supplementary material). In particular, \n - In Theorem C.2, the very last term in the equation is now fixed (and this has no effect on the final statement we want to prove). The reason is that the error rate scales as $\\tau$ (which is not changed after this fixing of the typo), and the last term $|V|/e^{10n\\phi \\varepsilon}$, which is now revised, only needs to be larger than some universal constant. This is ensured by the condition that $d\\geq C’ n \\phi \\varepsilon \\simeq n\\varepsilon\\alpha \\sqrt{(\\lambda_1-\\lambda_2)^2/(2\\lambda_1 \\lambda_2)}$, which is ensured by our choice of $\\alpha$. \n - In Lemma C.3, the final bound does not depend on $\\alpha$ and we put a complete proof of the lemma in Section C.1.1. Intuitively, we are constructing the set by placing points on the intersection of the (d-1)-dim unit-sphere (which insures the unit norm constraint) and a “certain” ball with radius scaling as $\\alpha$ (which allows us to control the ultimate loss). At the same time, since our goal of $(1/\\sqrt{2})\\alpha \\leq || v-v’|| \\leq \\sqrt{2}\\alpha$ scales linearly in $\\alpha$, the construction is scale-invariant; for smaller $\\alpha$ we will be packing the same number of points, but just closer to each other. We added a complete proof of this lemma, for completeness. \n - In Eq.(20), the constants had typos and we fixed them. \n - Now that we know Lemma C.3 does not depend on $\\alpha$, we can apply this as we have written, even though $\\alpha$ depends on the problem dimension. \n- Re: Minor weaknesses:\n - All typos are fixed in the revised version. \n - There are two ways to extend our framework to general rank-$r$ PCA. We will add a discussion in the final version that states that other methods provide error bounds for the general case. \n - Hotelling’s deflation method: we can iteratively find the PCA component one by one, by alternating our DP-PCA and deflation. For example, in one step of the iteration, we only update the current iterate vector in the directions orthogonal to all the previously found PCA components. Repeating this $r$ steps gives the estimates of the top $r$ principal components. \n - Oja’s algorithm: we can keep track of a $r$-dimensional subspace in the Oja’s update rule for PCA, and perform QR decomposition to keep the iterates on the Grassmannian manifold. It might be possible to extend the analysis of “Streaming k-PCA: Efficient guarantees for Oja’s algorithm, beyond rank-one updates” by De Huang, Jonathan Niles-Weed, Rachel Ward to analyze the private version, but we believe this is beyond the scope of this paper.\n- Re: Question: \n - That is a typo and Theorem 5.1 is now fixed and includes the $d\\log(1/\\delta)/\\varepsilon$. \n", " - Re: choice of learning rate: In the submitted manuscript, we discuss how to search for the appropriate learning rate. The main idea is that the same guarantees can be achieved with an adaptive search for the right learning rate done in differentially private manner. 
For convenience, we copy the text from the submission, which reads: \n> Currently, DP-PCA requires choices of the learning rates, $\\eta_t$, that depend on possibly unknown quantities. Since we can privately evaluate the quality of our solution, one can instead run multiple instances of DP-PCA with varying $\\eta_t = c_1/(c_2 + t)$ and find the best choice of $c_1>0$ and $c_2>0$. Let $w_T(c_1,c_2)$ denote the resulting solution for one instance of $\\{\\eta_t = c_1/(c_2 + t)\\}_{t=1}^T$. We first set a target error $\\zeta$. For each round $i=1,\\ldots$, we will run algorithm for $(c_1, c_2) = [2^{i-1},2^{-i+1}]\\times [2^{-i+1}, 2^{-i+2}\\ldots, 2^{i-1}]$ and $(c_1, c_2) = [2^{-i+1}, 2^{-i+2}\\ldots, 2^{i-1}]\\times [2^{i-1},2^{-i+1}]$, and compute each $\\sin(w_T(c_1, c_2), v_1)$ privately, each with privacy budget $\\varepsilon_i = \\frac{\\varepsilon}{2^{i+1}(2i-1)}, \\delta_i = \\frac{\\delta}{2^{i+1}(2i-1)}$. We terminate the algorithm once there there is a $w_T(c_1, c_2)$ satisfies $\\sin(w_T(c_1, c_2), v_1)\\le \\zeta$. It is clear that this search meta-algorithm terminates in logarithmic rounds, and the total sample complexity only blows up by a poly-log factor.” (line 252-261)\n\n The choice of the learning rate is inherited from Theorem 2.2 on the analysis of non-private PCA. It is an adaptive choice, as it decreases with the iteration $t$. This is critical in achieving the near-optimal sample complexities. This sheds light on the type of learning rates that efficiently performs private streaming PCA. We note that we have essentially assumed an oracle which sets the learning rate in this paper. An important question for future research in numerical investigations of various differentially private PCA algorithms is how to set the learning rate in a robust, private, and data-driven manner. We will clarify our choice of the learning rate in the final version. \n\n\n\n- Re: experiments\nThe main contribution of this paper is the introduction of the novel algorithm and its theoretical analysis. We believe the numerical investigation of various private PCA approaches is outside the scope of this paper and we leave it for future research. \n", " - Response to question 1: It is a correct observation that since we primarily care about the error rate’s dependence on various parameters (and not the constant), we do not attempt to optimize the constants (like those that can be gained by a better splitting of the data). \n\n- Response to question 2: PCA for the heavy-tailed case is an exciting problem. We know that we need to “add more noise” since the fundamental trade-off is different for heavy-tailed samples, which are known to be achievable with non-efficient algorithms [“Differential privacy and robust statistics in high dimensions”, Xiyang Liu, Weihao Kong, and Sewoong Oh.] but no efficient algorithm is known. The main difference will be in carefully analyzing the clipping threshold, which will govern how much larger the noise is going to be for the heavy-tailed case. We believe our framework will still be applicable with heavy-tailed mean estimators, and we leave it for future research. \n\n- Response to question 3: For the case where there is no spectral gap (i.e. $\\lambda_1 \\simeq \\lambda_2 $), the fundamental statistical trade-offs are known in [“Differential privacy and robust statistics in high dimensions”, Xiyang Liu, Weihao Kong, and Sewoong Oh.]. 
Combining the techniques from the heavy-tailed lower bound from this reference with the lower bound technique in our manuscript, we believe one can prove a heavy-tailed lower bound for the case of interest in our manuscript where there is a spectral gap (i.e. $\lambda_1-\lambda_2 >0$). An educated conjecture for this lower bound is \n$\sin(v,\hat v) \geq C \min \{ \kappa \Big(\sqrt{\frac{d}{n}} +(\frac{d}{n\varepsilon})^{(p-2)/p} \Big) , 1\}$, for $p$-th moment bounded distributions. \n", " We thank the reviewer for the references and added them to the related work section in Section A of the supplementary materials. Comparisons to Imtiaz and Sarwate are also provided. The newly added parts are indicated by blue fonts. ", " This paper studies principal component analysis (PCA) under differential privacy (DP) constraints. The authors improve the sample complexity from prior SOTA to a near-optimal rate by proposing a new single-pass algorithm. This algorithm is novel, and it is based on a minibatch SGD algorithm where each step contains a private top eigenvalue estimation and a private mean estimation. Although the authors have not shown any simulation or experiment for their algorithm, they provide both upper bounds and lower bounds under various assumptions with solid theoretical analysis. Major strengths:\n\n1. The authors point out current issues for the existing results performing PCA with privacy constraints and use a totally different approach to tackle this problem. They identify the two challenges of the intuitive Private Oja’s Algorithm and solve them by two subalgorithms which perfectly match the need.\n2. The authors provide both the upper bound and lower bound, which vastly improves the understanding of the PCA problem under the DP constraint. The analysis of DP-PCA under Gaussian distributions shows that their algorithm is near-optimal.\n\nMinor strengths:\n\n1. This paper is well-written and greatly organized to help readers get familiar with the problem they are solving and the reason behind their solution. Section 6 is very helpful for understanding what is happening during the proof.\n2. The assumptions and the corresponding reasons are clearly provided.\n\nMajor weakness:\n\n1. I have checked most of the theorems with the proof in the appendix, and found all of them are solid and accurate, except for the lower bound, Theorem 5.3, which may invalidate the near-optimality: \n 1. In Section C.1, Line 635, the very last term in the equation is incorrect where from [3] Corollary 4, $\frac{\log(|V|)}{n\phi\varepsilon}$ should be $\frac{|V|}{\exp(10n\phi\varepsilon)}$; \n 2. In Line 640 Lemma C.3, the final part $c_1 d$ should contain $\alpha$. \n 3. In Line 644, the usage of Lemma F.4 seems to have wrong constants. \n 4. In Line 647, the choice of $\alpha$ depends on $d$, which is not permissible if we want to use Lemma C.3. \n\nMinor weakness: \n\n1. This paper only considers PCA as computing the first principal component. The paper [23], which is the tightest result they have compared with in Line 115, can be used to compute the top $k$ principal components under the DP constraint. Therefore, the description that the rate of DP-PCA is better than existing results is correct but may not be very fair. It would be better if the authors provided a discussion on how to use DP-PCA to find the top $k$ principal components.\n2. Theorem 2.2 typo: $\beta$ should be $\xi$\n3. Line 178 typo, $x_i$ should be $s_i$ 1. 
In Theorem 6.1, the rate on $B$ contains $d\log(1/\delta)$, but in Theorem 5.1, the rate on $n$, which I think should be almost the same as $B$ since $B=O(n/log^2 n)$ in Line 196, does not contain $d\log(1/\delta)$ as a whole term. Is there any typo? The authors have discussed the limitations of their analysis and algorithm, e.g., Line 121-123, Line 235-244.", " Principal Component Analysis (PCA) is a fundamental statistical tool with multiple applications, and differential privacy (DP) is a widely accepted mathematical notion of privacy. This paper presents an algorithm called DP-PCA and shows that it achieves a bounded error for sub-Gaussian-like data.
 The authors make Oja’s algorithm private and show that it achieves near-optimal error if the data is strictly from a Gaussian distribution. Then they overcome the two remaining challenges: (i) restricted range of \epsilon=O(\sqrt{log(n/\delta)/n}); (ii) excessive noise for privacy. To overcome each challenge, the method critically relies on two techniques: minibatch SGD and private mean estimation. The authors proposed DP-PCA based on the two ideas above. They use minibatch SGD of minibatch size B=O(n/log^{2}n) to allow for a larger range of \epsilon=O(1) and use Private Mean Estimation to add an appropriate level of noise chosen adaptively according to Private Eigenvalue Estimation.\nThis paper considers using private estimation within an optimization in the DP-PCA setting, while the idea has already been proposed in abstract settings. \n See below for the weakness: DP-PCA requires choices of the learning rates. I think the authors should better explain their choice of learning rate.\n\nThere are no experiments to validate the effectiveness of the proposed methods in the current paper.\n", " This paper studies the PCA task under a differential privacy guarantee and proposes private algorithms which achieve the optimal sample complexity. Strengths:\n1. This paper is well written and I enjoyed reading it. The motivations for each proposed technique and improvement, though simple, are clear.\n2. This paper gives a DP PCA algorithm with an asymptotically optimal sample complexity. Concrete examples for the lower bound are presented, especially for the effect of concentration assumptions. \n\nWeakness:\nThere is some unclarity in the algorithm and proof. Please see below. 1. In Algorithm 3, why not apply subsampling to the input A_i but instead implement it only on split data? Is it because it only concerns the asymptotic properties, so it does not matter if the mini-batch is n/log n for log n iterations? \n\n\n2. Can you further drop the Gaussian tail assumption (A4) but instead apply heavy-tailed mean estimation as in Algorithm 5 to implement Algorithm 3? It seems that changes in A4 only produce a worse dependence on the variance of the data, but some bound can still be derived, though not necessarily optimal. \n\n3. Can the techniques to construct the lower bound be generalized to the case when only the second or fourth moment of the tail is assumed? I am curious, for heavy-tailed data, can the authors claim anything interesting on the utility bound? Please see above. ", " The authors propose a Differentially Private PCA (DP-PCA) based on a minibatch GD method to overcome the limitations of existing approaches. Nearly optimal statistical error rates are provided. A lower bound shows that a sub-Gaussian-style assumption is necessary\nto obtain the optimal error rate.\n Strengths:\n1) The paper is well organized.\n2) The proposed Private Oja’s Algorithm and DP-PCA are well explained and reproducible.\n\nWeaknesses:\n1) There are missing key references:\nH. Imtiaz, A. Sarwate, \"Differentially Private Distributed Principal Component Analysis\", IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2018, pages 2206-2210, 2018.\nH. Imtiaz, A. Sarwate, \"Distributed Differentially-Private Algorithms for Matrix and Tensor Factorization\", IEEE Journal of Selected Topics in Signal Processing, December 2018.\nS. Wang, J. Chang, \"Differentially Private Principal Component Analysis Over Horizontally Partitioned Data\", IEEE Conference on Dependable and Secure Computing, DSC 2018, 2018.\nA. Gang, B. 
Xiang, W. Bajwa, \"Distributed principal subspace analysis for partitioned big data: Algorithms, analysis, and implementation\", IEEE Transactions on Signal and Information Processing over Networks, Volume 7, pages 699–715, 2021.\n Can the authors position their work in comparison to the works of H. Imtiaz and A. Sarwate published in 2018? The authors have adequately addressed the limitations of the proposed method." ]
[ -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, 2, 5, 3, 3 ]
[ "wA1BdJ6ksp_", "RldcRcKcgyS", "VEqz6QKIykl", "HWBCtvlj7vo", "_r08z6kju59", "nips_2022_rZalM6vZ2J", "nips_2022_rZalM6vZ2J", "nips_2022_rZalM6vZ2J", "nips_2022_rZalM6vZ2J" ]
nips_2022_a3ooPbW0Jzh
Differentially Private Learning with Margin Guarantees
We present a series of new differentially private (DP) algorithms with dimension-independent margin guarantees. For the family of linear hypotheses, we give a pure DP learning algorithm that benefits from relative deviation margin guarantees, as well as an efficient DP learning algorithm with margin guarantees. We also present a new efficient DP learning algorithm with margin guarantees for kernel-based hypotheses with shift-invariant kernels, such as Gaussian kernels, and point out how our results can be extended to other kernels using oblivious sketching techniques. We further give a pure DP learning algorithm for a family of feed-forward neural networks for which we prove margin guarantees that are independent of the input dimension. Additionally, we describe a general label DP learning algorithm, which benefits from relative deviation margin bounds and is applicable to a broad family of hypothesis sets, including that of neural networks. Finally, we show how our DP learning algorithms can be augmented in a general way to include model selection, to select the best confidence margin parameter.
Accept
After the internal discussion, all reviewers agreed that the paper should be accepted. Please take into account the reviewers' comments while preparing the camera-ready version of the paper.
train
[ "9NM6G1OnHy", "tUFG5kVi4F3", "hkGplcPVb7", "EYyz0gD-35", "cEd9xL_o2pt", "spzIy5XVtLO", "w2m6G-OMaP", "M7eRxNAyGmR" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the review. Please, see our response to your comments and questions below.\n\n- “There is no experiments to enhance the theoretical conclusions. It will be better even if the authors can run toy experiments on some small datasets.”\n\n\nOur contributions are theoretical and we believe that the fundamental nature of the problem we address makes this work interesting to the community. We will report empirical results in future work.\n\n\n- “Is it possible to apply advanced composition during the privacy leakage analysis?”\n\n\nFirst, regarding Algorithms 2 and 3 (in Sections 3 and 4, respectively), note that the privacy guarantee follows from the privacy guarantee of the DP-ERM subroutine (Algorithm 4 in Appendix B.2 (invoked in step 4 in Algorithm 2)). Algorithm 4 consists of $M$ rounds, where $M=\\log(1/\\beta)$ and $\\beta$ is the confidence parameter. As discussed in Appendix B.2, the privacy analysis of Algorithm 4 entails three main ingredients: i) we first show that each round is $(\\frac{\\varepsilon}{2M}, \\frac{\\delta}{M})$-DP by combining the privacy guarantee of Algorithm $\\mathcal{A}_{\\mathsf{GLL}}$ from [BGM21] (invoked in Step 6) together with the group privacy property of DP, ii) we then apply basic composition over the $M$ rounds, and finally iii) compose the resulting privacy guarantee with the privacy guarantee of the exponential mechanism invoked in step 7. Note that since $M$ is logarithmic in $1/\\beta$, applying advanced composition over the $M$ rounds does not buy us any meaningful improvement in the final bound. In particular, if we apply advanced composition, then we have $\\epsilon \\approx \\hat{\\epsilon}~\\sqrt{M \\log(1/\\delta)}$, and hence, the $\\log(1/\\beta)$ term in the final bound becomes $\\sqrt{\\log(1/\\beta) \\log(1/\\delta)}$. Note also that the latter bound is worse than the former (our bound) when $\\delta < \\beta$ (which is a common setting since $\\delta$ can be viewed as the probability of a catastrophic privacy breach). \n\nThe remaining algorithms in the paper (the pure DP algorithms) are based on private selection from a finite cover of the hypothesis class via the exponential mechanism. Applying advanced composition does not make sense in this case since there are no rounds to compose over.\n", " Thank you for the careful review and for your comments. Please, see our response to your comments and questions below. \n\n- “The bound in Theorem 5.1 suffers from an exponential dependency on the depth.”\n\nPlease note here that standard learning bounds based on uniform convergence also include a term of the form $\\Lambda^L$. We also note that recent margin bounds (in the non-private setting) such as those in [BFT17] involve a similar dependency (particularly, the product of the norms of the weight matrices of all the layers). \n\n[BFT17]: Peter Bartlett, Dylan Foster, Matus Telgarsky. Spectrally-normalized margin bounds for neural networks. NeurIPS 2017.\n\n \n\n\n\n- “For the pure DP algorithm, the bounds have a quadratic dependency…”\n\nThe mentioned dependency appears in the error term due to privacy, which scales as $1/(\\rho^2 m)$. Please, note that this term is essentially the square of the first term $1/(\\rho \\sqrt{m})$, which is the standard non-private bound. That is, the error term due to privacy is essentially dominated by the standard non-private error (assuming $\\epsilon = \\Omega(1)$). \n\n\n- “The paper presents a series of results. 
While these results are comprehensive, they may make the paper not focused.”\n\nDeriving dimension-independent guarantees in privacy-preserving learning is a fundamental problem and our results and solutions are new. Thus, we aimed to provide a more comprehensive set of results to show how different aspects can be tackled. Nevertheless, we are happy to follow any suggestion from the reviewer to improve our presentation.\n\n\n- “… it seems that the paper considers a constrained problem setting... Therefore, it is a bit confusing to me whether the dimension-independent bounds contradict with existing results.”\n\nDespite having a bound $\\Lambda$ on the norm given as an input parameter to the algorithm, we do not subject our algorithm’s final output to that constraint. In particular, note that in the final step of our algorithms, we return a predictor in the original $d$-dimensional space by applying the transpose of the original projection matrix $\\Phi^{\\top} \\tilde{w}$. The final output $\\Phi^{\\top} \\tilde{w}$ has expected norm $\\sqrt{d}\\rho/r$ and may not lie in $B^d(\\Lambda)$. Nevertheless, the margin guarantee obtained only depends on $\\Lambda/\\rho$, not on the norm of the output. This enables us to circumvent the dependency on the dimension.\n\n\n- “The paper uses the technique of margin and random projection. It is not clear to me which one is essential to develop dimension-independent guarantee…”\n\nTo answer this question precisely, let us emphasize first that, even in the non-private setting, margin bounds are critical in classification problems to derive dimension-independent (and VC-dimension-independent) guarantees. The challenge here has been to derive private algorithms benefiting from such guarantees. Random projection is one of the essential techniques used in our algorithms and analyses to achieve that goal. As an alternative to random projection, as indicated in section 4, we can use sketching techniques for that purpose. As in the non-private setting, margin guarantees provide us with a trade-off between the empirical margin loss and complexity, depending on the confidence parameter $\\rho$. In summary, both margin guarantees and random projection play a critical role here, indeed.\n\n- We also thank the reviewer for pointing out these typos. We will fix them in the final version of the paper.\n", " We thank the reviewer for the valuable feedback. Below, we respond to the questions raised by the reviewer. \n\n- \"What is $r$ in Theorem 5.1?\"\n\nThe parameter $r$ is defined in line 311 in the beginning of Section 5. Namely, $r$ is the norm bound on the feature vectors. This is also consistent with the notation we use for this parameter throughout the paper. Please, note that the presence of that parameter is also standard in non-private learning bounds.\n\n\n- \"Can the algorithm for learning the NN be efficiently implemented?\"\n\nThis is a good question. The fundamental limitation here stems from the computational hardness of the non-convex optimization problem underlying NN learning (even non-privately). This is due to the non-convexity of the empirical objective in the case of NN learning (even if we replace the zero-one loss with a more well-behaved function such as the hinge loss). There are alternatives that seek either a first-order or second-order stationary point of the non-convex objective; however, since our main focus is to obtain an information-theoretic generalization bound for private NN learning, we did not consider such alternatives here. 
\n\n\n- \"What is the significance of the bounded norm assumption?\"\n\nWe note here that the bound $\\Lambda$ on the norm is a measurable and, more importantly, a tunable quantity, for which one can select an optimal setting using model selection techniques. Existing non-private margin bounds (whether for linear and kernel-based classifiers [MRT18] or NN learning [BFT17]) fundamentally entail a similar dependency. For our margin bound for NN learning, we can treat $\\Lambda^L / \\rho$ as a tunable parameter as discussed in the paragraph given by lines 198-202. \n\n[MRT18]: Mehryar Mohri, Afshin Rostamizadeh, and Ameet Talwalkar. Foundations of machine learning. MIT press, 2018.\n\n[BFT17]: Peter Bartlett, Dylan Foster, Matus Telgarsky. Spectrally-normalized margin bounds for neural networks. NeurIPS 2017.\n\n\n- We also thank the reviewer for pointing out these typos. We will fix them in the final version of the paper. \n", " We thank the reviewer for their comments and appreciation of our work.", " The paper presents differentially private algorithms with dimension-independent margin guarantees for various classification problems such as linear classification, kernel-based classification, and feed-forward neural network classification. The key to the algorithm for linear classification is using the FAST JL transform for feature space reduction and then using the DP-ERM algorithm from [BGM21] for solving the convex optimization problem (along with standard boosting techniques). The key to the algorithm for private kernel-based classification is to first approximate the feature map by a finite-dimensional map determined via Random Fourier Features. Finally, for Feed Forward Neural Networks classification, the authors rely once again on the JL transform to reduce the dimensionality before each layer and apply the exponential mechanism to achieve pure differential privacy.\n\nOverall, this is a compressive and systematic study on differentially private classification and I recommend it be expected. The paper provides a bunch of elegant algorithms for differentially private classification. While the technical components used in the algorithms are well known, the literature review, as well as results, are comprehensive. None. Yes.", " This paper introduces several new differentially private learning algorithms with performance guarantees that are independent from the data input dimension and instead given in terms of a confidence margin parameter.\nAlgorithms are presented for linear classification, kernel classifiers, and neural networks. One of the key tools of the analysis is to use the JL transform to map the hypotheses to a lower-dimensional space, perform learning there, and then to “invert” the transform. This paper provides interesting results regarding a relevant topic. While many of the main ideas appear to be similar to prior work, the analysis requires non-standard technical extensions, and the resulting bounds are clear improvements.\nThe models that are studied cover a wide variety of use cases, and both inefficient algorithms, but which are stronger in terms of privacy guarantees, as well as computationally efficient algorithms, at the cost of weaker privacy guarantees, are considered.\n\nIn terms of weaknesses, the practicality of the results and limitations could be discussed to a further extent. The bound in Theorem 5.1 could be further discussed to elucidate the role of the parameters—it appears that $r$ is not even defined here. What is $r$ in Theorem 5.1? 
It appears undefined in the main text. \nCan the algorithm for learning the NN be efficiently implemented? What is the significance of the bounded norm assumption?\n\nMinor comments: The sentence in lines 102-104 should be fixed \n111 and 112: yields/makes: remove s \n128: “gave construction” \n138 and 139: the term “$\rho$-hinge loss” is overloaded \nAlg. 1 line 4: “an” \n186: “empirical $\rho$-margin” should be “empirical $\rho$-margin loss”? The impact of the assumptions that are made is not extensively discussed in the paper.", "This paper develops several differentially private algorithms with dimension-independent utility guarantees. The results include a pure DP algorithm and an efficient private algorithm for linear classification. The paper further extends these results to kernel-based hypotheses with shift-invariant kernels and feed-forward neural networks. For all the algorithms, the paper develops margin guarantees independent of the dimension. Strength:\n\nThe existing DP algorithms suffer from a bound with a dependency on either the dimension or the norm of the optimal model, which would not be effective if the dimension or the norm of the optimal model is large. This paper shows that these dependencies can be removed and develops several algorithms based on margin guarantees. Therefore, the results are interesting to the differential privacy community.\n\nWeakness:\n- The bound in Theorem 5.1 suffers from an exponential dependency on the depth. Therefore, the result is not efficient if $L$ is moderately large.\n- For the pure DP algorithm, the bounds have a quadratic dependency on $1/\rho$. The bounds would not be effective if $\rho$ is small.\n- The paper presents a series of results. While these results are comprehensive, they may make the paper not focused.\n Question\n- In the introduction, the authors say that a learning guarantee necessarily admits a dependency on $d$ in the constrained setting. However, it seems that the paper considers a constrained problem setting: Algorithm 2 involves a minimization over $B^k(2\Lambda)$, and the linear predictors are also constrained as $w\in B^d(\Lambda)$. Therefore, it is a bit confusing to me whether the dimension-independent bounds contradict with existing results.\n- The paper uses the technique of margin and random projection. It is not clear to me which one is essential to develop a dimension-independent guarantee. It would be helpful if the authors could comment on how the margin technique is useful in improving the DP guarantee.\n\nTypos:\n\nSection 4: $(\tilde{\Psi}(x),y))$\n\nSection 4: can designed\n\nSection 4: builds on on\n\nLemma A.2: the meaning of fat is not rigorous Yes.", "This paper proposes a series of DP algorithms for learning linear classifiers, kernel classifiers, and neural-network classifiers with dimension-independent and confidence-margin guarantees. It leverages the JL transform and a faster DP-ERM algorithm to design the faster DP linear classifier with margin guarantees. It combines kernel approximation with the use of the JL transform to design the DP kernel classifiers with margin guarantees. And in the end, the authors propose a DP neural network learning algorithm for a family of feed-forward neural networks for which they prove a confidence-margin bound that is independent of the input dimension via an embedding-based ``network compression'' technique. This enables a broader analysis of DP neural networks in the future. Strengths:\n1. 
This paper introduces its DP algorithms step by step, from the DP linear classifier, to the DP kernel classifier, and finally the DP neural network with a very large input dimension and margin guarantees, which makes it easier to understand.\n2. Using the fast JL transform to reduce the high dimensionality and improve the time complexity is a nice optimization.\n3. Using the embedding-based network compression technique to improve the DP neural network learning algorithm.\n\nWeakness:\n1. There are no experiments to support the theoretical conclusions. It would be better if the authors could run toy experiments on some small datasets.\n2. It may be possible to improve the privacy-leakage bound via advanced composition. 1. Is it possible to apply advanced composition during the privacy leakage analysis? Yes. The authors have addressed potential negative societal impact." ]
[ -1, -1, -1, -1, 7, 7, 6, 5 ]
[ -1, -1, -1, -1, 2, 2, 4, 3 ]
[ "M7eRxNAyGmR", "w2m6G-OMaP", "spzIy5XVtLO", "cEd9xL_o2pt", "nips_2022_a3ooPbW0Jzh", "nips_2022_a3ooPbW0Jzh", "nips_2022_a3ooPbW0Jzh", "nips_2022_a3ooPbW0Jzh" ]
nips_2022_SbHxPRHPc2u
Oracle-Efficient Online Learning for Smoothed Adversaries
We study the design of computationally efficient online learning algorithms under smoothed analysis. In this setting, at every step, an adversary generates a sample from an adaptively chosen distribution whose density is upper bounded by $1/\sigma$ times the uniform density. Given access to an offline optimization (ERM) oracle, we give the first computationally efficient online algorithms whose sublinear regret depends only on the pseudo/VC dimension $d$ of the class and the smoothness parameter $\sigma$. In particular, we achieve \emph{oracle-efficient} regret bounds of $ O ( \sqrt{T d\sigma^{-1}} ) $ for learning real-valued functions and $ O ( \sqrt{T d\sigma^{-\frac{1}{2}} } )$ for learning binary-valued functions. Our results establish that online learning is computationally as easy as offline learning, under the smoothed analysis framework. This contrasts the computational separation between online learning with worst-case adversaries and offline learning established by [HK16]. Our algorithms also achieve improved bounds for some settings with binary-valued functions and worst-case adversaries. These include an oracle-efficient algorithm with $O ( \sqrt{T(d |\mathcal{X}|)^{1/2} })$ regret that refines the earlier $O ( \sqrt{T|\mathcal{X}|})$ bound of [DS16] for finite domains, and an oracle-efficient algorithm with $O(T^{3/4} d^{1/2})$ regret for the transductive setting.
Accept
As summarized very well in the reviews, this is a well-written paper that makes a solid and elegant contribution to the recently active line of work on smoothed online learning. The authors have successfully addressed the main concerns brought up in the discussion. I genuinely agree the paper should be accepted. As a side note to the authors: I honestly found your reaction to Reviewer gcMM’s comments rather aggressive and incongruous. Disagreements naturally arise in a discussion and should not be automatically considered as an attempt to “greatly harm the review process and the community at large”.
train
[ "P6He2jHprMG", "gz3N1lFcTkq", "ignCG8v_m6l", "p7s6E9aojN", "BcuClpFEO3u0", "3WhmVtaEchW", "XujFwR782kF", "wEVblKHIr1B", "_x8U29614BM", "2nGRVS1kuXx", "1MbunLhFtS", "skrB4rQe-e", "FpFk2N60bgG", "ULh1pjjqov0", "hKKHir1J_zD", "yfn2bHxH4z", "5fWU8BiM6b9", "_VOOMWKtTd", "mOzoo8DfMVU", "1vjBDQ3vIP" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed response on the connection with classical settings! I think explaining how your results recover/improve regret bounds in classic settings is very important: it convinces audiences that the smoothed analysis setting is legitimate, and I believe it's worth using a separate sub-section discussing these (the current response is already good with some polishing). \n\nAbout the overlapping part: this was my minor concern. Now that you have addressed my main concern, I will raise my score accordingly.", " Thank you for the response. \n\n**In response to “it would be great if you can recover the regret bound in the classical online learning setting using smooth analysis …”:** Our techniques indeed recover and improve upon several classical results in online learning. We will restate these below. We also emphasize that the connection to these classical settings is unique to our work — existing and concurrent works in this space are not capable of achieving these results, in part due to the weaker nature of their results.\n\n1. **Classical Online Learning with Finite Domain**: In the finite domain case, our Poissonized FTPL algorithm achieves a regret of $\sqrt{T(d|\mathcal{X}|)^{1/2}}$, which improves upon the state-of-the-art regret bound of $\sqrt{T|\mathcal{X}|}$ for oracle-efficient algorithms by [DS16], because the VC dimension $d$ is bounded by $|\mathcal{X}|$ in the worst case and can be significantly smaller for most hypothesis classes.\n2. **Classical Transductive Learning**: In the transductive learning setting, our algorithms recover and improve upon the best known results. In particular, \n - **For the relaxation-based algorithm (Algorithm 1)**, as discussed in Section 4.3 of our paper, the relaxation we use to design this algorithm is a more general form of the relaxation that characterizes the standard transductive learning setting. Therefore, if we directly set $S^{(t)}$ to be the union of unlabeled instances for the future rather than a set of uniformly sampled instances from the base distribution, our Algorithm 1 reduces to the standard transductive learning algorithm proposed by [RSS12], and achieves the regret of $\sqrt{Td}$.\n - **Poissonized FTPL algorithm (Algorithm 2)** can be combined with the observation that transductive learning is equivalent to having a finite domain of size $|\mathcal{X}|=T$. Using our $\sqrt{T(d|\mathcal{X}|)^{1/2}}$ regret bound for finite domains, we achieve a $T^{3/4}d^{1/4}$ regret bound for the transductive setting. This improves upon the regret $T^{3/4}d^{1/2}$ of [KK06] by improving the dependence on the VC dimension. This improvement is significant for several reasons. On the historical front, up to now the FTPL approach for solving transductive learning was considered to be inferior to RSS12’s relaxation-based approach since they shared the same $d^{1/2}$ dependence but FTPL had a worse dependence on $T$. Our work shows that these two approaches both have their strengths, since Poissonized FTPL demonstrates a better dependence on $d$ compared to RSS12’s relaxation-based approach. This is also welcome news for applications where the “proper-learning” aspect of FTPL is particularly important. \nThe improvements above are only possible because our novel techniques and approach are capable of achieving the very strong regret bound of Theorem 3.2 with $\min\left(\sigma^{-1/4}\sqrt{dT}, \sqrt{T(d|\mathcal{X}|)^{1/2}}\right)$ dependence. 
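\n(To spell out the arithmetic behind the transductive rate above: setting $|\mathcal{X}| = T$ in the finite-domain bound gives $\sqrt{T(dT)^{1/2}} = \sqrt{d^{1/2}\,T^{3/2}} = T^{3/4} d^{1/4}$.)\n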
The regret bounds with dependencies of $\\sigma^{-1/2}\\sqrt{dT}$ and $\\sigma^{-1}\\sqrt{dT}$, such as those in [BDGR22], are not capable of improving upon the existing results, or as in the case of [DS16], recovering these bounds.\n\n3. The aforementioned results are highlighted in Table 1 and Cor 3.3 for more detail. We will expand on the discussion of Cor 3.3 in the final version of the paper to emphasize the significance of these results more.\n", " **In response to “the significance of the overlapping part somewhat degrades”:** We believe that the overlap with concurrent results are minor, on the other hand, there are several results and techniques that are unique to our work and have significant implications. Above, we highlighted two of the important consequences of the fact that our work achieves a regret of $\\min\\left(\\sqrt{dT\\sigma^{-1 / 2}},\\sqrt{T(d|\\mathcal{X}|)^{1/2}}\\right)$. Furthermore, these stronger results and consequences are only enabled by the novel techniques we introduced in this work, i.e., the modified generalization error and poissonization techniques. We believe these techniques are of independent interest and will impact the field more broadly.\n", " Thanks for your response.\n\n* Your reply about adapting to sigma is fully convincing.\n* I also agree with your view on the relevance of oracles to practical ML. It would be nice to include some of this discussion in the paper, which I believe would make it stronger.\n* Thanks for clarifying the role of the lower bound.", " Thank you for your clarifying comments. I agree with them, and I am maintaining my score.", " About technical stuff: thank you for the very detailed response! Though the assumption still feels strong to me, I agree it's an established notion and really appreciate the introduction of the history of smoothed analysis, and why it makes sense in online learning. As you mentioned that \"In the online binary classification setting, our bound in Theorem 3.2 shows that efficient learning is possible even when the smoothness parameter is linear in $T$, which is exactly the case of transductive online learning.\", it would be great if you can recover the regret bound in the classical online learning setting using smooth analysis (I believe one can approach the $\\sqrt{T}$ rate by smoothing the domain and use a $\\sigma$ with dependence on $T$), which would be a very convincing argument supporting the assumption.\n\nAbout comparison with BDGR22: this was my minor concern, I was very aware it was a concurrent work and I had no \"implicit\" intent to treat it as a follow-up. The fact is, there's already a published paper with very similar results. Whether it was a concurrent work or not, now the significance of the overlapping part somewhat degrades, which one can't pretend to ignore. I understand this situation (having a concurrent work with worse results accepted) is upsetting and irritating, but I feel it's offensive to imply subjective opinions of a reviewer from a few sentences.", " We thank the reviewer for the comments. However, we disagree with the reviewer on the importance of smoothed analysis and its characterization. We also address the comparisons with concurrent works.", " **Standard beyond the worst-case model in algorithm design** There is a long and rich line of work in TCS and machine learning that uses smoothed analysis to justify the empirical performance of algorithms that are used in practice. 
A celebrated example of this is the analysis of the simplex algorithm by Spielman and Teng. This notion has gone on to be influential in various areas such as combinatorial optimization and game theory. See [Part 4, Roughgarden 2021] for a more detailed look into smoothed analysis. In particular, the specific notion of smoothness we use in online learning has been around at least since [Rakhlin, Sridharan, &Tewari, 2011], and has played a key role in several other important works [Gupta & Roughgarden 2017, Balcan et. al, 2021, Cohen-Addad & Kanade 2017, Haghtalab, Roughgarden & Shetty 2020-2021, etc]. Using an upper bound on the density of the distribution is the modern (and more general) interpretation of smoothed analysis. Please refer to [Part 4, Roughgarden 2021] for more details.\n\n**The value of our work**: What is unique and interesting about our work is not the introduction of smoothed analysis, but rather enabling computationally efficient algorithms to exist. We fulfill the promise of smoothed analysis in online learning by establishing that we simultaneously get improved regret bounds and efficient algorithms under smoothness.\n\n**Why smoothed analysis is particularly compelling for online learning**:\nSmoothness generalizes various notions of noise, because the distribution of the perturbed instances is a convolution with the noise distribution. For example, in R^n, if we consider Gaussian noise, it is easy to check that the input at each time step then becomes smooth (i.e. with bounded density that depends on dimension $n$ and standard deviation). In fact, the reason that noise makes this setting learnable is because it introduces anti-concentration to the input distribution, which is exactly what smoothness aims to capture.\n\nSmoothness is a general notion that works equally well in both combinatorial and continuous (geometric) settings, without having to worry about details of the noise generating process. Moreover, the smoothness parameter $\\sigma^{-1}$ need not be constant and can depend on the problem parameters such as dimension (our algorithms still get non-trivial regret in such settings), and can even depend sublinearly on $T$. In the online binary classification setting, our bound in Theorem 3.2 shows that efficient learning is possible even when the smoothness parameter is linear in $T$, which is exactly the case of transductive online learning. Therefore, we believe that working in this general setting allows us to design algorithms that are robust to assumptions about noise. ", " Our improvement in regret is significant: Note that without the smoothness assumption, even any hypothesis class that is expressive enough to contain thresholds in one dimension is unlearnable (due to infinite Littlestone dimension), which means that no algorithm can get regret of o(T). However, our algorithms show that, under mild smoothness assumption, these classes become statistically and computationally learnable. This is a significant improvement because the dominating term decreases from $\\Theta(T)$ to $O(\\sqrt{T})$.\n\n**Why we should not expect a better than sqrt{T} regret using assumptions similar to that of Smoothed Analysis**: The primary feature of Smoothed Analysis in online learning is to restrict the harmful correlations that an adaptive adversary can cause over time. 
Even if the adversary is completely removed and replaced by full stochasticity, e.g., in an extreme case where instances come from a fixed distribution that is uniform over the domain, the regret of $\Theta(\sqrt{T})$ is tight in an information-theoretic sense. This is a well-established result in PAC learning theory. Therefore, a further improvement on the $T$ term cannot be hoped for.\n\n**General notions of beyond worst-case analysis**: In some sense, we can think of smoothness as a stochastic version of predictable sequences. This is because the coupling argument guarantees that with high probability, any sequence of $T$ instances generated by adaptive smoothed adversaries can be seen as a subset of $T/\sigma$ uniformly random instances. This formulation is especially useful if we want to work on general spaces while not making any specific assumption on a sequence that depends heavily on the particular representation. Furthermore, it is not clear if one can define notions such as predictability of sequences in representation-independent ways, and essentially all previous work in that space uses hypothesis classes on normed spaces.", " We disagree that there is not enough improvement and novelty upon the concurrent work of BDGR22. Moreover, we disagree with the reviewer’s implicit assessment that our work should be treated as a follow-up to BDGR22 rather than a concurrent work.\n\n**Comparison with BDGR22**: First, our results are stronger in terms of both the regret dependence on smoothness parameters and the number of oracle calls required in each time step. In the binary classification setting, our Poissonized FTPL algorithm achieves regret $O(\sqrt{Td\sigma^{-1/2}})$, while their algorithm only obtains the worse bound of $O(\sqrt{Td\sigma^{-1}})$. For the case of real-valued functions, we obtain regret of order $O(\sqrt{Td\sigma^{-1}})$ using 2 oracle calls per round, while their corresponding relaxation-based algorithm only achieves regret $O(\sigma^{-1}\sqrt{Td})$ with $O(T)$ calls per round. After the first versions of both papers were made available online, BDGR22 improved their relaxation technique and now demonstrate the same regret bound as ours, while still using more oracle calls. BDGR22 also developed an FTPL-based algorithm for real-valued functions that calls the oracle only once per round, but the regret bound of this algorithm is $O(T^{2/3}\sigma^{-1/3})$, which is significantly suboptimal in $T$. \n\nSecond, our paper uses a different and novel approach. In order to control the stability of FTPL in the binary setting, we use the novel Poissonization technique, which gives us an additional degree of independence and allows us to use information-theoretic tools. More importantly, we introduce an entirely new framework, called modified generalization error, which takes a different perspective on the generalization and stability analysis of FTPL-type algorithms. It looks into the generalization error of ERM classifiers when they are trained on uniformly generated training samples and tested on smoothly distributed fresh instances. We believe it will be of independent interest for transductive and other beyond-worst-case learning settings. 
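\n\nFor intuition, the shape of such a Poissonized perturbation step can be sketched as follows. This is only our schematic reading of the description above, not the paper's exact Algorithm 2; `erm_oracle`, `sample_uniform`, the mean count `m`, and the generator `rng` (e.g., `np.random.default_rng()`) are hypothetical interfaces:\n```python\nimport numpy as np\n\ndef poissonized_ftpl_round(history, erm_oracle, sample_uniform, m, rng):\n    # Hallucinate a Poisson number of uniform samples with random labels;\n    # the Poisson count supplies the extra independence exploited in the\n    # information-theoretic stability analysis described above.\n    n = rng.poisson(m)\n    fake = [(sample_uniform(rng), int(rng.choice([-1, 1]))) for _ in range(n)]\n    # A single offline ERM call in this sketch, on real history plus noise.\n    return erm_oracle(list(history) + fake)\n```\nHere `m` would be tuned as a function of $d$, $\sigma$, and $T$; the point of the sketch is only that the oracle is invoked on a randomly perturbed sample each round.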
\n\n**On treating concurrent works in review processes**: Setting aside the fact that our results are stronger, novel, and of independent interest (even conditioned on the publication of BDGR22), the publication of the concurrent works in other conferences is only evidence that the community is very interested in this line of work. Our work has been conducted concurrently and independently from BDGR22 which is evident by the fact that the first versions of these papers were made public within days of each other. We urge the reviewer to not treat concurrent works as follow-ups to one another. We believe doing so will greatly harm the review process and the community at large by encouraging treating publication as competition. Our work and BDGR22 have independent insights and contributions and their publication will draw independent readership and interest from the field.\n", " We thank the reviewer for the positive review. Below we answer your questions.\n\n**Unknown the value of sigma**: Please see our general comments for a detailed explanation. In particular, we do not need to know the exact value of sigma. \n", " We thank the reviewer for the thoughtful review and the recognition of the importance of our work. We have added the references on transductive learning and made notational changes according to your comments in the revision of our paper. We discuss the other mentioned weaknesses below.\n\n**On use of Distribution-dependent Complexity Measures**: Essentially, all our results can be stated in terms of Rademacher complexity which is data dependent. In fact, our bounds depend on the Rademacher complexity of $T/\\sigma$ i.i.d. instances with respect to the base measure, which is usually taken to the uniform measure on some structured set. We have clarified this in the revision.\n\n**On the motivation and applicability of the ERM oracle**: Please see our general comments for a detailed explanation.", " We thank the reviewer for the positive feedback and detailed comments. We have addressed the minor comments in the revision. Below we address specific comments and discuss the mentioned limitations.\n\n**On the efficiency and relevance of oracles to practical ML**: Please see our general comments for a detailed explanation.\n\n**About running times**: The notion of running in our Theorem 5.1 and 5.2 refers to the total number of operations performed by our algorithm, in which each oracle call takes unit time, and maintaining each element in the input to the oracle also takes unit time. In this model, the running time of Algorithm 1 is $\\widetilde{O}(T^2/\\sigma)$, and the expected running time of Algorithm 2 is $\\widetilde{O}(T^2/\\sqrt{\\sigma})$. \n\nIn Theorem 5.1, the reason that running-time requirement $\\omega(\\sqrt{d/\\sigma})$ does not depend on $T$ is because we are more interested in the regime $d/\\sigma \\gg T^2$, where regret lower bound is sublinear. In this regime, the running times of our Algorithm 1 and 2 are $\\widetilde{O}(d/\\sigma^2)$ and $\\widetilde{O}(d/\\sigma^{3/2})$ respectively. We remark that Theorem 5.1 shows a poly($d, 1/\\sigma$) computational lower bound to achieve the statistical regret $O(\\sqrt{Td\\log(1/\\sigma)})$, while the $\\epsilon$-net argument in [HRS21] requires an exponential poly($\\sigma^{-d}$) computational time. 
From this observation, we pose an open question about whether $\\Omega(poly(T, 2^d, 1/\\sigma))$ computational time is unavoidable to achieve $o(T)$ regret in the smoothed analysis setting, even given access to an oracle.\n\n**Unknown the value of sigma**: Please see our general comments for a detailed explanation. ", " We thank all the reviewers for their thoughtful feedback. In general comments below, we discuss two questions raised by multiple reviewers. The more specific reviewer comments are addressed separately below.", " We clarify that the exact knowledge of $\\sigma$ is not needed by our approach. Our algorithms and regret bounds can work with any approximation of sigma value that is a lower bound of the real $\\sigma$ up to constant multiplicative factors. This corresponds to settings where the world is more smooth than we give it credit. \nEven when we have extremely poor upper and lower bounds, we can use hedging (as suggested by reviews) to still get non-trivial regret with only a minor blow up in computation. We will provide more details next as to how we work with knowledge of approximate sigma.\n\nIn general, given (loose) upper and lower bounds on the exact value of $\\sigma$ (call them $\\sigma_{\\max}$ and $\\sigma_{\\min}$), we can use a geometric doubling approach to deal with the unknown $\\sigma$. To be specific, one could construct $\\log(\\sigma_{\\max}/\\sigma_{\\min})$ experts, where each expert runs a local version of our algorithm with parameter $\\sigma_i=2^i \\sigma_{\\min}$. We then run Hedge on these experts. Note that the parameter of the best expert satisfies $\\sigma/2 \\le \\sigma_{i*} \\le \\sigma$, so its regret matches the regret of the same algorithm that runs on true $\\sigma$ up to a constant factor. Therefore, the expected regret of this meta algorithm is comparable to the bound in Theorem 3.1 and 3.2, with an additive term of order at most $\\sqrt{T\\log(\\log(\\sigma_{\\max}/\\sigma_{\\min}))}$. \nThe number of oracle calls also blows up only by $ \\log ( \\sigma_{\\max} / \\sigma_{\\min} ) $ per round. This could potentially be improved using a more aggressive step size for the Hedge meta algorithm.\n\nIn addition, it is natural to assume knowledge of $\\sigma_{\\max}$ and $\\sigma_{\\min}$. For instance, when smoothness comes from Gaussian perturbations, $\\sigma$ directly relates to the standard deviation of the Gaussian distribution as well as the dimension of the instance domain. In this case, it is reasonable to assume that confidence intervals of the variance estimation are known in advance. ", " **Applicability of oracle-efficiency to practical ML**: The oracle-efficient framework is practically important because it allows us to directly tap into existing deployed algorithms, without having to design and implement an algorithm from scratch. These sub-routine algorithms can be heuristics and do not have to be provably efficient. Modern computer science is full of such heuristics that perform exceedingly well in practice even when NP hardness barriers exist in theory; a great example of this is the poster child for NP hardness, SAT. The oracle-efficient method for designing online algorithms has been extremely popular recently and has seen a lot of use in varied contexts such as contextual bandits and reinforcement learning (see https://vowpalwabbit.org/) and is even used in production. We see our work as putting forth a general framework on the instances under which we can design such online algorithms (for general learning problems). 
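\n\n(Connecting this back to the unknown-$\sigma$ wrapper described in the general comment above, as a schematic recap: with $K = \log_2(\sigma_{\max}/\sigma_{\min})$ experts and $\sigma/2 \le \sigma_{i^*} \le \sigma$, the meta-algorithm's regret decomposes as $O(\sqrt{T \log K}) + \mathrm{Regret}_{\mathrm{base}}(\sigma_{i^*})$, matching Theorems 3.1 and 3.2 up to constants and the small additive Hedge term, at the cost of a factor-$K$ blowup in oracle calls per round.)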
\n\n**Sufficiency of weaker Oracles and circumventing Hardness**: Note that the oracle in our algorithm is called on either “smoothed” instances given by the adversary, or random instances sampled from the uniform distribution. In such settings, NP hardness results usually do not hold since they are proven mostly for worst case instances. Therefore, when implementing our algorithms in practice, instead of using an oracle that is provably efficient for all worst-case inputs, it suffices to have a weaker oracle that performs reasonably well on “average” case instances. In fact, NP hardness should not be a prime concern for assessing our results. Beyond the worst-case settings, the complexity landscape is in fact much richer — with hardness much harder to come by — and remains an active area of research (see https://simons.berkeley.edu/programs/si2021 ). This is due to the fact that “average” case instances are much easier to handle, and can be seen as implicitly being the driver of the current machine learning revolution. This gives us hope that our methods can even be provably efficient for many hypothesis classes.\n\n**History of Oracle-efficiency**: The oracle efficiency framework has a rich history in the online learning literature beginning with the work of Kalai & Vempala 2007, Awerbuch & Kleinberg 2007. The original motivation was problems such as online shortest paths, tree update and adaptive Huffman coding where the “set of experts” was combinatorially large but solving the corresponding offline problem was efficient. See Kalai-Vempala 07 for a great discussion on the original motivation. This framework has been built on and extended in various settings such as auctions [Dudik et. al, 2017] , contextual learning [Syrgkanis, Krishnamurthy & Schapire 2016, Foster & Rakhlin 2020, Simchi-Levi & Xu 2021], approximation algorithms [Kakade, Kalai & Ligett 2007], reinforcement learning [Foster, Kakade, Qian, Rakhlin 2021] etc. Note that, since the offline problem is at least as difficult as the online problem, in some sense an efficient oracle for the offline problem is the minimum requirement to hope to have an efficient online algorithm. \n", " For online learning with smoothed adversaries, optimal statistical\nregret rates are known, but there are no efficient algorithms. Assuming\naccess to an offline ERM oracle, the present paper develops new\nonline algorithms that are computationally efficient (making 1 or 2\ncalls to the oracle per round, with the input size to the oracle also\nunder control). These algorithms achieve expected regret rates that are\noptimal except for their dependence on the parameter sigma that measures\nthe smoothness of the adversary. 
There is also a lower bound that holds\nfor the expected regret for any algorithm with run-time below a certain\nthreshold.\n Strengths:\n* Smoothed adversaries are seeing a lot of recent interest, but to move\n towards practically relevant results it is essential to have efficient\n algorithms, which is what the current paper tries to address.\n* The algorithms are elegant reductions to calls to the oracle.\n* Their analysis is clean, and clearly explained.\n\nWeaknesses:\n* I am not sure that the proposed algorithms will be so efficient that\n they can actually be used in practice, because implementing the oracle\n might still be a bottleneck.\n* The lower bound in Thm 5.1 seems rather weak because the run-time\n requirement of omega(sqrt{d/sigma}) does not grow with the total\n number of rounds T.\n\nRemarks:\n* Line 125: mathcal{Y} cannot be an arbitrary subset of [-1,+1], because\n the predictions of Algorithm 1 can span the whole range [-1,+1].\n\nMinor comments:\n* Line 101: \"unaviodable\"\n* Line 125: mathcal{Y} should not be an element of [-1,1], but a subset\n* Line 128: For Y = {-1,+1}, convexity and Lipschitzness make no sense.\n (I assume these requirements do not apply in this case.)\n* Paragraph below Definition 2.1: Equivalence between the two versions\n of the protocol is not obvious, because it seems to depend on whether the\n adversary and the learner are allowed to randomize, which is not\n specified explicitly.\n* Multiple uses of \"arginf\" throughout the paper. Use \"argmin\" instead,\n because it only makes sense if the minimum is achieved.\n* Line 179: Please specify distribution of \mathcal{E}^(t)\n* Line 181: \"labled\"\n* Multiple places: \"\tilde{O} hideS factors\"\n* Theorem 3.2 and Corollary 3.3: regret -> expected regret.\n* Section 4.2: historically, the perturbations for Follow the Perturbed\n Leader were not of the type described here, so perhaps clarify that\n your description is a special case.\n* Line 282: please point out that beta is o(1/T). I initially thought it\n was of order T.\n* Theorem 5.1: please explain how \"total running time\" is measured\n (e.g., copy explanation from the appendix)\n* Line 522: I think you meant that beta = o(1/T) instead of o(T).\n* Line 841, \"both lower bounds\": there is no lower bound on the regret\n in Theorem 5.1\n* Lemma E.2: \"supreme\" -> \"supremum\"\n * Could you give examples of offline oracles that are sufficiently\n efficient to make your algorithms practically feasible?\n* For comparison with the result from Theorem 5.1: what is the running\n time of Algorithms 1 and 2 in your model of \"running time\"?\n\n * The efficiency of the algorithms relies on the availability of an\n efficient offline oracle, but the paper does not discuss for which\n hypothesis classes such offline oracles are available. In particular\n for the binary classification case, this might be an issue, since\n computing the ERM is NP-hard for many common classes.\n\n* The algorithms require knowledge of sigma, which would typically not\n be available in practice.", "This work provides so-called oracle-efficient algorithms for online learning, in the smoothed adversarial setting. This problem corresponds to: given access to an ERM oracle, design an algorithm whose worst-case regret under distributions which are $\sigma$-smooth is as small as possible. 
The results of the paper provide $O(\sqrt{T})$ regret algorithms in this setting, where also the noise parameter $\sigma$, as well as the VC dimension $d$, appear in the bounds. The paper focuses on the setting of convex Lipschitz losses and binary cases separately. For the former, a relaxation framework (a technique tracing back to [RSS12]) is proposed, which also leverages a coupling technique that exploits the smoothness of the adversary to establish that any smooth sample will be contained, with high probability, in a larger sample of iid uniform random variables. For the binary case, tighter bounds are obtained via a follow-the-perturbed-leader (FTPL) approach, which also leverages the aforementioned coupling. Several other ideas, regarding the use of a form of regularized Rademacher complexity, stability analysis, etc., are used. The paper also provides some computational lower bounds, showing that for small values of $\sigma$, the improved regret bounds of previous work necessarily require polynomial running time in $1/\sigma$. Strengths: \n\n1. Strong results, that compare favorably even to concurrent works.\n2. Technically solid paper. Many interesting ideas which may be of independent interest.\n3. Although the motivating question is more conceptual than practical, I still think it is worthy of the attention of the machine learning community.\n\nWeaknesses:\n\n1. I find the idea of an ERM oracle quite unappealing. The optimization problems that ERM entails can be quite difficult to solve (especially for binary classification). So, although I believe that the reduction question is interesting on its own, it is hard to believe this model has any concrete consequences for ML.\n2. Bounds depending on combinatorial parameters, such as VC dimension, can be quite pessimistic, and can be majorly improved by distribution-dependent quantities, such as margin conditions. Maybe the results of the paper can easily transfer to these more benign characterizations, but it is not evident from the analysis. 1. Can the authors add some references on transductive learning, just to point interested readers to the relevant literature, to better grasp the results?\n2. Page 15, equation (5). ${\cal Q}_t$ is used as notation in the equations before it is introduced. I suggest mentioning this before the chain of equations, rather than after.\n3. Page 17, line 568. What is $I$? Also I am inferring (but it is not clear from the writing) that ${\cal E}=(\epsilon_1,\ldots,\epsilon_I)$?\n4. Page 22, Lemma C.6. It was unclear to me (until reading the proof) that $X_I$ is being conditioned on $E,X_{\setminus I}$. I would suggest writing it instead as $X_I | (E,X_{\setminus I})$. Yes, I think the limitations of the paper are discussed, and the paper even points to an interesting open problem. Social impact is out of the scope of the paper.", "\nThis paper considers online learning with smoothed adversaries, which is a special case of online learning. Online learning is a sequential setting in which in round $t = 1, \ldots, T$ a learner issues a prediction, suffers a loss corresponding to that prediction, and subsequently observes some feedback. The goal is to control the regret, which is the difference between the cumulative loss of the learner and the cumulative loss of the best fixed prediction in hindsight. Normally, the losses are chosen by an adversary or sampled from a distribution. 
However, in this paper the authors consider a setting in which the adversary generates losses from a distribution that is bounded by $1/\sigma$ times the uniform distribution. \n\nNon-efficient algorithms for this setting obtain regret bounds of order $\sqrt{dT\ln(\sigma^{-1})}$, where $d$ is a problem-specific parameter. The authors provide an efficient algorithm with $\sqrt{d T \sigma^{-1}}$ regret for real or binary losses and an algorithm with $\sqrt{dT\sigma^{-1/2}}$ for binary losses. \n\nLower bounds for any efficient algorithm are of order $\sqrt{d^{1/2}T\sigma^{-1/2}}$, and for algorithms in the same class as the algorithms that are used to obtain the new upper bounds they are of order $\sqrt{dT\sigma^{-1/2}}$. \n Strengths \n\nThe paper is well written.\n\nThe techniques to obtain the upper bound are nice and could prove useful beyond this paper. \n\nThere are immediate consequences beyond the setting at hand: the regret bounds imply improvements in worst-case binary classification in terms of the domain size and in transductive binary classification. \n\nThe paper is well contextualised.\n\nThe jump in regret bounds from inefficient to efficient algorithms is explained by the lower bounds, although it is not clear whether these lower bounds are tight. \n\n\nWeaknesses\n\nThe setting seems to be a natural interpolation between adversarially and stochastically generated losses. However, it also seems to be quite a strong assumption to assume knowledge of $\sigma$. Is there a reason why in general one can assume knowledge of $\sigma$?\n\nThe algorithm needs knowledge of $\sigma$ to draw samples from $1/\sigma$ times the uniform distribution, which makes me believe that even having a slightly inaccurate $\sigma$ would lead to profound consequences for the regret bound. Naively trying to learn $\sigma$ by choosing a grid of possible $\sigma$ and then running an expert algorithm seems quite costly in terms of oracle calls. Is there a more clever approach?\n See the weaknesses. The authors have adequately addressed the limitations and potential negative societal impact of their work", "This paper considers the oracle-efficiency of smoothed online learning. In this setting, the output distribution of nature is constrained to have a constant upper bound. Assuming an offline optimization oracle, they give an $O(\sqrt{T d \sigma^{-1}})$ regret bound for real-valued losses and an $O(\sqrt{T d \sigma^{-1/2}})$ regret bound for binary-valued losses as their main results. The algorithms achieving these regret bounds are based on FTPL. The results are sound and solid, and the algorithms and proof ideas are very intuitive. The Poissonization technique is novel and may be of independent interest.\n\nThe major concern I have is with the setting of smoothed online learning. There are many ways to characterize online learning beyond the worst case, and smoothed online learning doesn't seem natural to me. I don't see any practical scenario supporting the smoothed setting, and in theory there are other more intuitive ways: for example assuming noise or predictable sequences. The way that smoothed analysis restricts the distribution of nature's outputs doesn't make much sense, and the constant upper bound seems a very strong assumption.\n\nThe improvement over regret bounds of classical online learning doesn't seem significant: the dominating term $O(\sqrt{T})$ isn't improved. One may expect a more significant improvement from the strong additional assumption. 
A (not very relevant) example is that one can improve the regret bound to $O(\\log T)$ assuming an abstention option.\n\nThe paper is well-written and easy-to-follow. It spells out the assumptions, algorithms and intuitions clearly.\n\nThough the results of this paper are solid, due to the concurrent work of BDGR22 which is already accepted at COLT2022, there doesn't seem to be enough improvement and novelty upon it. For this reason (along with my concern on the smoothed analysis setting), unfortunately I tend to reject this paper. Can you give more justification on the smoothed analysis setting? Or any concrete example? Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 9, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 2, 3 ]
[ "gz3N1lFcTkq", "3WhmVtaEchW", "3WhmVtaEchW", "FpFk2N60bgG", "skrB4rQe-e", "XujFwR782kF", "1vjBDQ3vIP", "1vjBDQ3vIP", "1vjBDQ3vIP", "1vjBDQ3vIP", "mOzoo8DfMVU", "_VOOMWKtTd", "5fWU8BiM6b9", "nips_2022_SbHxPRHPc2u", "nips_2022_SbHxPRHPc2u", "nips_2022_SbHxPRHPc2u", "nips_2022_SbHxPRHPc2u", "nips_2022_SbHxPRHPc2u", "nips_2022_SbHxPRHPc2u", "nips_2022_SbHxPRHPc2u" ]
nips_2022_qZUHvvtbzy
Systematic improvement of neural network quantum states using Lanczos
The quantum many-body problem lies at the center of the most important open challenges in condensed matter, quantum chemistry, atomic, nuclear, and high-energy physics. While quantum Monte Carlo, when applicable, remains the most powerful numerical technique capable of treating dozens or hundreds of degrees of freedom with high accuracy, it is restricted to models that are not afflicted by the infamous sign problem. A powerful alternative that has emerged in recent years is the use of neural networks as variational estimators for quantum states. In this work, we propose a symmetry-projected variational solution in the form of linear combinations of simple restricted Boltzmann machines. This construction allows one to explore states outside of the original variational manifold and increase the representation power with moderate computational effort. Besides allowing one to restore spatial symmetries, an expansion in terms of Krylov states using a Lanczos recursion offers a solution that can further improve the quantum state accuracy. We illustrate these ideas with an application to the Heisenberg $J_1-J_2$ model on the square lattice, a paradigmatic problem under debate in condensed matter physics, and achieve state-of-the-art accuracy in the representation of the ground state.
Accept
This submission proposes to apply Lanczos step improvements over a shallow neural quantum state based on restricted Boltzmann machines (RBM). The authors report highly competitive empirical results with the state of the art on one challenge benchmark: the J1-J2 model. The reviews are mixed for this contribution: while the provided empirical results look promising, a clear and comprehensive comparison with SOTA's results was lacking in the initial submission, which was later improved during the interaction between the authors and the reviewers. We think these suggested changes should be included in the revision. Most of the reviewers see the novelty of using Lanczos step improvements and feel this technique could be generally applicable. However, the authors should also discuss the limitation of this technique, especially the size consistency of the approach and the fact that returns will be smaller and smaller on larger lattices, explicitly in the revision.
train
[ "zHb3ePrRKgh", "v6z0-I7UhkM", "Q536r-Byego", "yi-yGJaN5PC", "IMFcS_QeO2t", "V_3rjcJFXNe", "CmEWNTjSg5k", "WRMzXWWcnwk", "roPhHarGQ0f", "AGhB3pdfPs5", "1dv4nm4lbFv", "YcXEikoiORm", "c1JqXXBGtCm", "sSG5VDTRell", "5EufKSw693", "GyurOK0ZrjP", "jgbrbPQ889k", "E1g6mE7PZUD", "6bMujXr9d_n" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for bringing this data to our attention. It again shows the superiority of our method on both the square and triangular lattices. But due to the approaching deadline, we don't have enough time to work on kagome and honeycomb lattices. We hope to encourage other researchers to use similar strategies on other quantum systems and lattices. In addition, adding other geometries would imply a significant re-write of the manuscript and distract the reader from the main message. As stated before, we think that in order to focus on the method we should focus on only one particular problem, which is already significantly challenging and relevant to the physics community.\n\nUsing Lanczos recursion to systematically improve NNQS is the main contribution of our work, and we select a few $J_2$ values with the highest relative error to illustrate the effect of the Lanczos recursion. Therefore, it is appropriate to compare our Lanczos results with the \"RBM+PP\" results. **But we did not carry out Lanczos steps for $J_2=0.4, 0.45$; hence, our results for those particular data points are not representative of the accuracy of the Lanczos method**. For $J_2/J_1=0.5, 0.55$ on a 6x6 square lattice, our results are slightly better than the \"RBM+PP\" results (see Table 2). For a 10x10 lattice, they reported the energy for $J_2/J_1 = 0.5$, which is $8\times10^{-4}$ lower than our result (see Table 4). Our method yields the best results except for this particular data point. In this case, if the reviewer insists that this approach is not SOTA, we are willing to concede this point though we have different opinions. **We added these results in data Tables 2 and 4 in the revision, and also highlighted the fact in the discussion that our variational energy at $J_2/J_1 = 0.5$ on a 10x10 lattice is second to the \"RBM+PP\" result**. We hope this can address the reviewer's concern. ", " We appreciate the reviewer's suggestion, and we apologize for not submitting a revision in a timely manner, as reading revisions is optional for reviewers. We have updated the manuscript accordingly, where the reviewer can attest to the changes addressing their concerns. In the revised manuscript we added Nomura's data in Table 4 and as \"RBM+PP\" results for $J_2/J_1 = 0.5, 0.55$ on a 6x6 lattice in data Table 2. **We also highlighted the fact in the discussion** that we obtain slightly better variational energy at $J_2/J_1 = 0.5, 0.55$ on a $6\times6$ lattice while **for a $10\times10$ lattice at $J_2/J_1 = 0.5$, their variational energy is $8\times10^{-4}$ lower than ours**. Our method yields the best results except for this particular data point. In this case, if the reviewer insists that this approach is not SOTA, we are willing to concede this point if that satisfies the reviewer, although we have different opinions. We also added comments on the limitations of the Lanczos recursion, and we hope this can address your concern. We again thank the reviewer for their appreciation of our work and valuable suggestions. ", " I thank the authors for their response addressing my concerns. Given the other reviews, I'm still not fully convinced by the significance of this work and, hence, will not change my score.", " I thank the authors for their replies. 
I think the paper is interesting and stand by my initial assessment; however, the quality of the presentation has not been improved: \n\nConcerning my remark on ref [28], I believe that by not reporting the energy of Nomura et al (−0.497629) in their table 4, the authors are doing a bit of \"cherry-picking\" of the existing results, which is not up to the standards one would expect from this conference. Table 4 now contains all energies that are higher than those obtained in the presented work, while neglecting a result where better energies are obtained. The authors must include this (and should have done so already) and discuss more explicitly that the results they obtain are not state of the art, as otherwise claimed throughout the text.\n\nThe authors also should mention in the text that the Lanczos step approach proposed here is going to improve the wave function only marginally on larger systems. It is indeed well known that in the thermodynamic limit the Lanczos corrections will become smaller and smaller. ", " Thanks for the authors' reply. I understand there's not enough time for the authors to do experiments on all different lattices with various J2 values. But if possible, I still would like to see the performance of your method on the 32-site honeycomb lattice with J2=0.2, and the 36-site kagome lattice with J2=0 (compare with the values in [1], Table IV).\n \nAnother concern is that the authors claim the presented work to be SOTA, but it seems the results of RBM+PP are often better than the presented work. In this paper, the authors only compare their results with RBM+PP on the 6x6 square lattice with J2=0.55 but don't compare against RBM+PP for the 6x6 square lattice with J2=0.4, J2=0.45 or the 10x10 square lattice with J2=0.5, where RBM+PP is better than this work. So I think it is misleading to claim SOTA for the presented work.\n", " We are truly grateful for the reviewer's insightful comments and suggestions. As summarized by the reviewers, by restoring symmetry for simple RBM neural network wave functions, we obtain highly impressive results that improve previous best neural network estimations by a large factor on an exceptionally difficult problem. Moreover, through the first application of Lanczos-step-style improvements to neural quantum states, we bring a new approach that produces SOTA results on the famous $J_1-J_2$ model in 2D.", " We appreciate the suggestions and we tend to agree with this assessment. We have properly addressed this comment in the manuscript, but we also highlight the fact that this cost is lower than or comparable to that of other approaches.", " 2: Technically, translational invariance in quantum mechanics actually means that f(Tx)=K(f(x)), where K is a complex phase. This means that after a symmetry operation the wave function remains invariant up to a phase. It is true that the wave function picking up a phase is an example of equivariance, but in the context of this quantum mechanical problem there is no distinction and it becomes a matter of semantics. In general, our wave function is equivariant. This is explicitly accounted for by our construction, where the phase is determined by the lattice momentum and the eigenvalues of the rotation operation. This is achieved by taking appropriate linear combinations of translations and rotations of the RBM as described by our Eq.(6). \n\n3, 4: We appreciate the suggestions, and we will fix them in the next update. \n\n", " We thank the reviewer for their insightful suggestions and comments. We do not have a clear understanding of why our RBM wave functions outperform ConvNets. 
Even though ConvNets are invariant by construction, Choo et al (Ref. [9]) include the rotational symmetry by averaging over rotations, as we do. Unveiling the reasons for this is beyond the scope of our work, but this could be a novel research topic for the NNQS community.\n\nEarlier works, such as the original paper by Carleo and Troyer (Ref. [4]), have attempted inherently symmetric RBMs. However, imposing a translationally invariant W matrix (weights) reduces the expressivity of the wave function, implying that a much larger number of hidden variables is required, and it is not even clear that this would improve the performance. Consequently, this approach has largely been abandoned in the literature. \n\nWe actually tried RBM wave functions without symmetry or with only translational symmetry, and applied Lanczos steps to these. But we haven't tried Lanczos on ConvNets. Based on the three different wave functions, we conclude that Lanczos works on all three and produces a larger improvement on poorer variational wave functions. Being worried that these data would distract from the main contribution of our work, we didn't report them. We will consider putting these data in the appendix in the next update. Now we attach some of our results for $J_2=0$ below to highlight the effects of restoring the symmetries.\n\n| Lattice Size | 6x6 | 8x8 | 10x10 |\n| :-------- | :------- | :------ | :------ |\n| No symmetry | -0.677625 | -0.671543 | -0.66737 |\n| Translational symmetry | -0.678844 | -0.673425 | -0.671288 |\n| Translational symmetry + Point group symmetry | -0.678873 | -0.673487 | -0.671519 |", " This is a general problem with variational Monte Carlo (VMC), and it is even worse for other approaches such as tensor networks or DMRG. Regardless, we can safely say that VMC is currently one of the most promising techniques able to tackle (relatively) large system sizes of the order of hundreds of spins.", " Quantum Monte Carlo methods and, in particular, variational Monte Carlo, are low-memory methods. The number of double-precision floating-point numbers required to be stored in memory is of the same order as the number of variational parameters ($N^2$), where $N$ is the system size. The natural gradient descent method demands much more memory, proportional to the product of the number of samples and parameters, $O(N_{sample}\times N_{parameter})$. To be more specific, for a 10x10 square lattice, the memory used in our method ranges from a few hundred megabytes to one thousand megabytes depending on the number of samples and variational parameters. As a comparison, exact diagonalization on a 6x6 lattice may use more than 10 gigabytes of memory. The memory usage of the CNN-based VMC method will have a similar dependence on samples and parameters as ours. Thus, the difference in memory usage between variational ansätze depends on their number of parameters. Compared to the CNN method of Ref. [34], we use a similar number of parameters for the 6x6 lattice and three times more for the 10x10 lattice.\n\nAs for the computing cost, our method scales as $O(N_{sample}\times N \times N_{parameter})$. For other VMC methods that restore translational symmetry by averaging, the time complexity is the same as ours. Thus, the difference depends on the number of parameters used.", " Thanks for pointing out the concern about the significance of the \"small\" improvement. We hereby again explain the meaning of pursuing extremely accurate ground state calculations. 
The nature of the ground state in the J1-J2 model on the square lattice is one of the longest-standing open problems in condensed matter physics. The existence of a large number of competing states in a small energy window around the ground state makes it particularly challenging, and states with very similar energies may have significantly different physical properties representing different phases, such as dimer or plaquette states. As we showed in the \"model\" section, there have been, for decades, numerous research efforts on developing numerical algorithms capable of capturing the true ground state. **Thus, an incremental improvement of even one significant digit in the energy can have important consequences and help answer open questions**. Moreover, as stated by reviewers ZGgm and PiLy, we improved the previous best result significantly/by a large factor on an exceptionally difficult/challenging problem.", " In the spin liquid region, the CNN method doesn't provide cutting-edge results, as stated in Ref. [9] itself. In the frustrated region on a 6x6 lattice, our relative error is 10 to 100 times smaller than that of the best CNN-based NNQS method. As stated by reviewers ZGgm and PiLy, we improved the previous best result significantly/by a large factor on an exceptionally difficult/challenging problem. We thank the reviewer for, and agree with, the suggestion of showing the advantage of our method by applying it to different quantum systems and different lattices. But by showing better ground state energy for the entire range of the J2/J1 parameter, and specifically in the frustrated region, for the $J_1-J_2$ model in 2D, which is a challenging benchmark as agreed by reviewers ZGgm and PiLy, our conclusion has firm support.\n\nOur method can be seamlessly applied to different types of lattices like triangular, honeycomb, and kagome lattices, or even random graphs. The only thing that needs to be modified is the lattice point symmetry, which depends on the symmetry properties of the target lattice. Due to the time limit, we tried it on a 6x6 triangular lattice for two different $J_2$ values, 0 and 0.125, used in the G-CNN paper (Ref. [34]). We obtained variational energies of -0.55994 and -0.51415 for $J_2 = 0.0, 0.125$, respectively, without using Lanczos, as a comparison with -0.55922 and -0.51365 reported in the G-CNN paper. Our method still performs better. We do not include these results in the current version of the paper because it would represent a significant revision that would distract the readers from the main message, and we focus on the J1-J2 model instead, as originally presented.", " We thank the reviewer for pointing out those references. We actually cite Szabo's paper and mention this work in Section 4.1, and we will add the other suggested references in the next update. Xiao's paper is a pioneering work in this area, but its performance is much worse than ours. On the 10x10 cluster at $J_2=0.5$, a variational energy of $-0.4736$ was obtained, as compared to our result of $-0.4968$. Dmitrii's paper doesn't provide specific numbers for us to compare. But from their figure 4, we can see that their accuracy is of the same magnitude as Choo's paper (Ref. [9]), which we make a full comparison with. As shown in figure 1 of our manuscript, we can conclude that our relative error is at least 10 times smaller than theirs. Since we are presenting our method and results as state-of-the-art, we need to compare to the most accurate results available in the literature obtained by competing techniques or variational wave functions. 
Therefore, we compared the best QMC (Ref. [36]), VMC (Ref. [20]), and DMRG (Ref. [15]) results available. For NNQS, we selected Choo's paper (Ref. [9]) and its follow-up work (Ref. [7]) because it still has the best accuracy among recent CNN work and offers many specific numbers to compare with. In addition, one important contribution of our work is precisely that, by restoring similar symmetries, relatively simple NNQS such as RBMs can outperform deeper and more complex NNQS, which is an important finding.", " Thanks for your thoughtful comments. \\\n1: Nomura's paper (Ref. [28]) is a very relevant work that we cite repeatedly. However, the paper does not include many numbers to compare. For a 10x10 lattice, they only reported the energy for $J_2 = 0.5$, which is $8\times10^{-4}$ lower than our result (see Table 4). Considering the other energies reported, our method is sometimes better, sometimes worse, and thus comparable in accuracy.\\\n2: Carleo and Troyer attempted this in their original work (Ref. [4]). However, imposing a translationally invariant W matrix (weights) reduces the expressivity of the wave function, implying that a much larger number of hidden variables is required, and it is not even clear that this would improve the performance. Hence, this approach has largely been abandoned in the literature.", " The paper demonstrates that by leveraging symmetrization of variational states and employing Lanczos recursion, neural network quantum states can surpass the current state-of-the-art accuracy for representing the ground state of the J1-J2 model on the square lattice. The main finding is that instead of using more intricate NN ansätze (e.g., ConvNets, Graph NN, or Transformers), the paper proposes that using the much simpler RBM ansatz coupled with symmetrization and Lanczos steps is sufficient to represent complex wave functions. With that method they show that they can obtain much better ground state energy estimation across the entire range of the J2/J1 parameter and specifically near the point of highest frustration (~ 0.5). The paper is well written and offers a simple introduction to the subject matter that is a bit outside of the common domain of ML. The main strength of the paper is its impressive results, improving on the previous best results by a large factor on a problem that is considered exceptionally difficult. Moreover, it is noteworthy that these improvements are obtained by using relatively simple RBMs, as opposed to more intricate architectures (e.g., ConvNets or Transformers).\n\nHowever, the presented methods are not novel and are based on previous works. Specifically, it appears that the symmetrization method is identical to those used by prior papers, including NN-based approaches [1,2], and to the best of my knowledge it is a well-known, widely used technique in the field. Similarly, Lanczos iteration for improving the accuracy of a given ground-state approximation was suggested by other works and was specifically used in the past to improve results on the J1-J2 model [3]. It is worth noting that, to the best of my knowledge, this is the first use of this method with NN quantum states.
While the authors do properly cite prior works, it is not sufficiently emphasized that they merely implement these ideas for NN quantum states -- something that should have been more clearly articulated in the introduction and abstract.\n\nAs a final note, while the paper clearly shows the superiority of the proposed method on the given task, not much effort is spent on attributing these gains to the various choices. Some ablation study (even if done only on the 6x6 case) could go a long way toward clarifying the contribution of each element (e.g., employing Lanczos on a non-symmetric RBM and the ConvNet model from [1], or using RBMs with a symmetric matrix to account for translation symmetry vs. brute-force symmetrization).\n\n[1] - Choo et al., Two-dimensional frustrated J1−J2 model studied with neural network quantum states, PRB, 100:125124, Sep 2019.\n\n[2] - Sharir et al., Deep Autoregressive Models for the Efficient Variational Simulation of Many-Body Quantum Systems, PRL 124:020503, Jan 2020.\n\n[3] - Iqbal et al., Spin liquid nature in the Heisenberg J1-J2 triangular antiferromagnet, PRB 93:144411, Apr 2016. * On the topic of symmetries, it would have been helpful for the authors to discuss approaches other than brute-force symmetrization, e.g., symmetric constructions of RBMs / ConvNets / Graph NN. It is especially interesting that applying brute-force symmetrization to RBMs worked better than the ConvNet used in [1], which follows translational symmetry by construction and uses the same brute-force symmetrization only for the C4 symmetries. It would be great if the authors could comment on why that might be the case, given that both models follow the same symmetries and one (the ConvNet) is supposedly more expressive (prior to using Lanczos). Moreover, given that the set of methods advocated by the paper could be applied to any ansatz, have the authors considered using ConvNets or any other NN architecture to verify whether the improved results are due to the use of simpler RBMs, or comparing the use of brute-force symmetrization vs. inherently symmetric models (whether using symmetric RBMs or symmetric ConvNets)?\n\n* There is a slight mismatch between the usual meaning of invariance to symmetries and how you define your symmetrized ansatz. Specifically, one usually refers to a symmetry-invariant function as one where f(x) = f(Tx) for all T in some set of symmetry operators, but in this case it appears that this requirement is softened to equality in amplitude while allowing for a shift in phase, so it is more akin to an equivariance property. Could the authors please elaborate on this point to clarify what property they wish to enforce and why?\n\n* \"point group symmetries\" might not be clear to all readers, and it would be great to be specific about what it means in this context (rotations + reflections).\n\n* While not mandatory, given that it is a method defined by prior works, it would be helpful to include a derivation of equations 14-18 in the appendix of this work. The authors have properly addressed the limitations of their approach, and the difficulty of scaling it to larger lattices. My only comment would be that the limitations paragraph at the end should also include the cost of symmetrization in the training step (so it should be N^3, not N^2), and that they should also restate the cost of the Lanczos step.
I would also add that it might be challenging to scale this method to larger NNs, which might be necessary to solve certain cases with high accuracy.", " This paper proposes a symmetry-projected variational solution in the form of linear combinations of simple restricted Boltzmann machines, which allows one to explore states outside of the original variational manifold and thus increase representational power. Also, an expansion in terms of Krylov states using a Lanczos recursion is used to further improve the accuracy of the quantum state. Experiments are conducted on the Heisenberg J1-J2 model on the square lattice. **Strengths:**\n- *clarity*: This paper is overall well written and easy to follow. The method and results are clearly presented.\n- *significance*: The proposed neural network quantum state can obey the internal symmetries of the quantum model and the point group symmetries of the lattice. This could make the neural network more effective at representing quantum states by satisfying the internal constraints of the quantum many-body problem. Also, the Heisenberg J1-J2 model has the infamous sign problem due to frustration, which cannot be handled well by traditional numerical methods. So applying neural networks to this problem is of great importance to quantum physics.\n\n**Weaknesses:**\n- *originality*: Based on my background and the paper itself I cannot judge if the method of this work is novel, but apparently this work misses some relevant works, such as the following:\n - *Liang, Xiao, et al. \"Solving frustrated quantum many-particle models with convolutional neural networks.\" Physical Review B 98.10 (2018): 104426.*\n - *A. Szabó and C. Castelnovo. Neural network wave functions and the sign problem. Physical Review Research, 2(3):033075, 2020.*\n - *Kochkov, Dmitrii, et al. \"Learning ground states of quantum Hamiltonians with graph networks.\" arXiv preprint arXiv:2110.06390 (2021).*\n\n Also, the authors don't explain the difference between the proposed method and those works (mentioned in related works) that also try to restore symmetry, nor how the performance compares with these other symmetry-aware methods, rather than with just one CNN method.\n\n\n- *quality*: The authors primarily compare the results with a specific CNN method. Could the authors explain why they compare only with this specific deep learning method? Also, experiments are performed only on the square lattice. Since the compared CNN method has performed so well on the square lattice, the proposed method can only show very small improvements. I think the authors could show the advantage of the proposed method over other methods on some other difficult quantum systems, such as several different lattices where the ground state is harder to learn, etc.\n\n I am aware of another recent work on solving the quantum many-body problem over various lattices [1]. Have the authors tried the proposed methods on other types of lattices, such as triangular and honeycomb, etc.? How is the performance of the proposed neural network wave function on these lattices?\n\n[1] Kochkov, Dmitrii, et al. \"Learning ground states of quantum Hamiltonians with graph networks.\" arXiv preprint arXiv:2110.06390 (2021).\n The limitation of scaling up to larger systems is well addressed. Could the authors explain more about the generality of this method for different types of lattices or even random graphs?", " In the present paper, the authors introduce a method to model the wave function of quantum systems, in particular spin systems in the $J_1$-$J_2$ model.
They use a restricted Boltzmann machine (RBM) and form linear combinations of them in order to incorporate certain symmetries of the investigated system. Furthermore, the authors introduce a method to determine the ground state utilizing the Lanczos recursion to iteratively improve the estimate.\n\nThey compare their method on spin lattices of various sizes to baseline methods such as Quantum Monte Carlo and a CNN-based method, on the basis of comparing the ground state energies and the spin structure factor obtained through the different procedures. Their method is competitive with or outperforms the baselines. **Strengths**\n\nIn general, the paper is concise and well written, making it pleasant to read and understand. \n\nThe authors utilized physical knowledge about the symmetries of the system and incorporated it into the model, which is a common theme in applying machine learning models to problems in the natural sciences.\n\nDifferent concepts from machine learning (RBMs), advanced statistics (Fubini-Study metric), and numerics (Lanczos recursion) are combined to form a new algorithm capable of computing the ground state energy of a spin system, which is competitive with or outperforms its baselines.\n\n**Weaknesses**\n\nI am not familiar with algorithms for estimating the ground state energy of quantum systems, so I cannot judge how big or small the improvement of this procedure over existing baselines is. However, the difference in the results is often very small, only manifesting in the fourth or fifth digit, so it is not clear to me whether this is actually significant.\n\nThe authors mention that the key for such algorithms is that they use a small amount of memory and compute. A naive implementation would use an exponential amount of memory, while this procedure has a controllable number of variables. However, they do not discuss in detail how much compute and memory competing procedures use. They only mention in the conclusion that their method is slightly more expensive than the CNN-based method while having better accuracy, but it is unclear what this means. \n\n**Conclusion**\n\nGiven my criticism and my lack of knowledge in the field, I am unsure whether to accept or reject this article. For me, it is important to clarify the amount of memory and compute needed by different methods, putting their respective performances in perspective. I am willing to raise my score if this concern is addressed appropriately. * How much memory and compute do other methods use?\n* How do they scale with the system size? The authors state that the computational cost of their method scales quadratically with the system size, which limits their ability to tackle larger systems.", " The paper applies Lanczos-step improvements over a shallow neural quantum state based on RBMs. \nSymmetries are also exploited successfully, thanks to projections onto the relevant symmetry sectors. \nThe authors report results that are highly competitive with the state of the art on a challenging benchmark (J1-J2 model in 2d). To my knowledge, this is the first application of Lanczos-step style improvements to neural quantum states. This idea has been applied in the past to other variational states, and carries a cost that is exponential in the number of Lanczos steps. \n\nThe technique reported here allows one to improve significantly on previously reported \"pure\" neural quantum states results.
\nThe main limitation of the approach is the known lack of \"size extensive\" scaling of the Lanczos iterations, which makes them ineffective for larger systems approaching the thermodynamic limit. However, there could be cases of finite clusters where the improvement offered is still important. \n\nMy main criticism is the lack of comparison with the current best-known result on the model (more in the questions), reported in Nomura and Imada, PHYSICAL REVIEW X 11, 031034 (2021). \n\n 1. While the authors report a comparison on the 6x6 model, it is crucial to understand what happens on the larger 10x10 model if they want to claim new SOTA results on the J1-J2 model. The paper by Nomura and Imada reports the relevant energy to compare to in Table 2. How does the Lanczos-step approach compare? \n\n2. The effect of symmetries seems absolutely crucial to improve on the bare RBM results. What happens if one imposes translation symmetries in the weights instead of summing over the group explicitly? yes
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 5 ]
[ "IMFcS_QeO2t", "yi-yGJaN5PC", "roPhHarGQ0f", "5EufKSw693", "sSG5VDTRell", "nips_2022_qZUHvvtbzy", "GyurOK0ZrjP", "GyurOK0ZrjP", "GyurOK0ZrjP", "E1g6mE7PZUD", "E1g6mE7PZUD", "E1g6mE7PZUD", "jgbrbPQ889k", "jgbrbPQ889k", "6bMujXr9d_n", "nips_2022_qZUHvvtbzy", "nips_2022_qZUHvvtbzy", "nips_2022_qZUHvvtbzy", "nips_2022_qZUHvvtbzy" ]
nips_2022_HjwK-Tc_Bc
Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering
When answering a question, humans utilize the information available across different modalities to synthesize a consistent and complete chain of thought (CoT). This process is normally a black box in the case of deep learning models like large-scale language models. Recently, science question benchmarks have been used to diagnose the multi-hop reasoning ability and interpretability of an AI system. However, existing datasets fail to provide annotations for the answers, or are restricted to the textual-only modality, small scales, and limited domain diversity. To this end, we present Science Question Answering (ScienceQA), a new benchmark that consists of ~21k multimodal multiple choice questions with a diverse set of science topics and annotations of their answers with corresponding lectures and explanations. We further design language models to learn to generate lectures and explanations as the chain of thought (CoT) to mimic the multi-hop reasoning process when answering ScienceQA questions. ScienceQA demonstrates the utility of CoT in language models, as CoT improves the question answering performance by 1.20% in few-shot GPT-3 and 3.99% in fine-tuned UnifiedQA. We also explore the upper bound for models to leverage explanations by feeding those in the input; we observe that it improves the few-shot performance of GPT-3 by 18.96%. Our analysis further shows that language models, similar to humans, benefit from explanations to learn from fewer data and achieve the same performance with just 40% of the data. The data and code are available at https://scienceqa.github.io.
Accept
The paper introduces a large new multimodal dataset for science question answering, and thoroughly evaluates a range of models, including a version of chain-of-thought. Reviewers agree that the paper is generally solid and well written, and the dataset is potentially useful. The major concerns are around the technical novelty of the contributions, which are somewhat incremental extensions of chain of thought (e.g., with fine-tuned and multimodal models). Some reviewers are also confused about why generating the answer first gives better chain-of-thought results, because this appears inconsistent with the step-by-step reasoning explanation of chain of thought, and this point could be better explained. Overall I think the submission is borderline, leaning towards acceptance.
train
[ "54ZdVR6yTRU", "q3skGccT4Kt", "j_ea9h4KkgSh", "kLXCh4yjFt", "PBf92acWI7K", "TOEZOy0L7s", "d9W1k42Ufiv", "plB92G82cJz", "FKEKiUXwSAg", "z8Jbgx0jTnj", "NvUg0ERHlkv", "AFO-egfAKG", "d16dvdCKVFt", "AC-siDjbjuQ", "m_Oz40h5RW", "RCR_IgwBocc", "lTNCLQCUTC0", "I6PJn9oScNg", "j0S21Fs_2QN", "JTuc_wW0p4S" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for your great efforts and time!\n\nIt is a great encouragement to our work, and we are glad to see that we have addressed most of your concerns. And many thanks for your feedback to make our paper stronger and more solid!", " Thank you for the extensive and thoughtful reply and additional results. I think you addressed most of the issues I had with this work and would be happy to see it in the conference.", " Thank you so much for improving your rating!\n\nYes, language large models like GPT-3 could perform better in the few-shot setting if the provided in-context examples follow the in-distribution format of the training corpora than the out-distribution format. This phenomenon is also supported by recent work where the authors find the *human-like content effects*—language model predictions often reflect human knowledge and beliefs about the world.\n\n[24] Language models show human-like content effects on reasoning, https://arxiv.org/abs/2207.07051\n\nIt is a great encouragement to our efforts, and we are happy to see that we have addressed your comments. We appreciate all your suggestions!", " Thanks for the additional experiments and the nice discussions! \n\nThe empirical results are quite convincing for putting answer before explanations! It would be nice to see that the models assign higher probabilities to answer-explanation than explanation answer to justify your point that this is more in-distribution.\n\nI have improved my rating to 6. ", " Dear reviewer, thank you for the insightful comments and we appreciate your time and effort. We are excited that you evaluate that our dataset is interesting, covers multiple domains, and fills gaps in existing datasets. We are glad that you recognize we establish extensive baselines and our models lead to performance improvements in both fine-tuning and few-shot settings. We are willing to address your concerns below.\n\nIf you have any further questions, please feel free to let us know. We are grateful to have the chance to resolve these questions with you in the discussion phase!\n\n> **W1: The originality of the work.**\n\nPlease see the first point, “**Novelties and contributions of our work**” in the **general responses to all reviewers** above.\n\n> **W2: Details of evaluation.**\n\nThe evaluation metrics and key implementation details are introduced in Section 5.1. More details can be found in Appendix B.1, where we discuss the fine-tuning settings, batch sizes, the newline character, the captioning model tool, compute resources, and GPT-3 settings. We will release the dataset, scripts, and prediction results for easy reproduction of our results when the camera-ready version is ready.\n\n> **W3: To explore more experimental settings.**\n\nThank you for your suggestion! We did try more experimental settings, however, owing to the space limitation, we had to omit some exploration experiments that did not work. We have included more settings, including QCM→EA, QCM→LA, QCM→ELA, and QCM→LEA in the revised paper now (please refer to Table 6 and Table 7). The results are listed below as well:\n- QCM→EA, 56.0% (added)\n- QCM→LA, 60.6% (added)\n- QCM→ELA, 51.5% (added)\n- QCM→LEA, 55.4% (added)\n- QCM→ALE, 55.4% (final)\n\n> **W4: A round of proofreading of the paper.**\n\nWe have incorporated your mentioned notes and fixed some typos we found in the revised paper. 
We will definitely do several rounds of careful proofreading to make sure the paper is technically sound before it is released.\n\n> **W5: Renaming the SQA dataset.**\n\nWe really appreciate your helpful suggestions, and we agree with you completely. We are considering renaming SQA as \"ScienceQA\"; fortunately, \"ScienceQA\" has not been used by released datasets on https://paperswithcode.com/datasets. To avoid confusion for other reviewers, we will tentatively keep the name SQA in the rebuttal phase and update it in the camera-ready version. Thanks again for your great efforts in reviewing our paper!\n", " > **Q1: What is the evidence that images are actually necessary for answering the questions in VQA experiments?**\n\nFirst, we conducted ablation studies to validate the importance of each input element on top of the Top-Down model and report the results in Table 3. Q+$C_T$+M refers to the Top-Down model that does not take the image context as input. Results show that Q+$C_T$+M achieves an accuracy of 52.80% for the questions with image context, a decrease of 2.06% compared to the full model (Top-Down).\n\nSecond, visualization of the SQA examples (please check the Appendix or the visualization tool we provide in the supplementary material for more information) shows that image contexts like charts, maps, and tables are critical and necessary for answering the questions correctly (e.g., “What is the capital of the country highlighted on this map?” cannot be answered without the image by definition).\n\nThird, all VQA baselines in Table 3 achieve larger performance improvements over the random guess baseline for questions without context than for questions with image context. For example, VisualBERT achieves an improvement of 24.88%, from 33.66% to 58.54%, for questions without context (“NO”), while this improvement is only 20.09% for questions with image context (“IMG”). This shows that the image context poses additional challenges for existing VQA baselines. For example, other than natural images, the image context includes diagrams with diverse formats, such as tables, maps, and illustrations. Current VQA baselines still struggle to understand the diagrams well because they are mostly trained on natural images.\n\n> **Q1.a: Are questions that require an image to be answered included in the evaluation of text-only models and why does that make sense? How do we know the captions capture the necessary information?**\n\nYes. For fair comparisons, we need to make sure that different baselines are evaluated on the same set of questions. For example, the text-only models (e.g., Q+M only, UnifiedQA, GPT-3) are all evaluated on the whole test set, and the results for the different classes are listed in the corresponding columns, respectively.\n\nThe image context could provide either complementary information that is helpful for question answering or critical visual information that is necessary for the models. According to rough estimates, about 30% of questions feature critical images for question answering. Even though our GPT-3 (CoT) is able to achieve promising average prediction performance, it fails many times because of information gaps associated with the captioning system. We applied the SOTA captioning model to extract the textual descriptions for GPT-3 (CoT). But this captioning model fails to capture fine-grained semantic information in images such as maps, diagrams, tables, and illustrations.
Therefore, this issue with the captioning model results in prediction failures for GPT-3 (CoT), which are discussed in Appendix B.4.\n\nOne prospective solution to this problem is to propose unified vision-and-language large models with a more powerful ability to capture both textual and visual information for question answering and explanation generation. We are glad to see that recent work [20] has been exploring this direction and has achieved promising results.\n\n[20] Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks, 17 Jun 2022.\n\n\n> **Q2: Why no QCM→(L)EA?**\n\nWe had the result for QCM→(L)EA, but unfortunately we didn't keep it in the submitted paper due to limited space. The result has been added to the revised paper now (please refer to Table 7)!", " > **Q3(i): The segments in Table 3 are not labeled.**\n\nThe segments in Table 3 are listed below:\n- Segment 1: Random chance\n- Segment 2: Ablation studies on top of the Top-Down model\n- Segment 3: VQA baselines\n- Segment 4: UnifiedQA baselines and UnifiedQA with CoT\n- Segment 5: GPT-3 baselines and GPT-3 with CoT\n- Segment 6: Average human performance\n\nThank you for your suggestion. We have added them to the revised version.\n\n> **Q3(ii): When evaluating VQA models, are all the questions used (even the ones without image context, and does this make sense)?**\n\nYes. For the results in Table 3, we evaluate the questions of the whole test set, and the last column, "Avg", reports the average accuracy over all questions. To further analyze how different baselines perform across different classes, we also report the accuracy in columns 4 to 11. When evaluating VQA models, all the questions are used. \n\nNote that the VQA models can also operate without image inputs. The embeddings of the image context for those questions without image context are set to zero vectors, which is widely used in previous VQA work when conducting blind studies [21,22].\n\n[21] VQA: Visual Question Answering, 2015\n\n[22] CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning, 2016\n\n> **Q3(iii): What do the TXT and IMG columns mean? What about questions without either?**\n\nThe TXT column refers to results for the set of questions with a text context, while IMG refers to results for questions with an image context. We added the results for questions without either in the "No" column in the revised paper. For most baselines in Table 3, the performance gaps from the random guess baseline are larger for questions without context than for those with context. This indicates that the text and image contexts in SQA are critical for answering the questions.\n\n> **Q4: What is the variance in the reported numbers?**\n\nWe visualized the error bars for the ablation studies in Figure 6. As GPT-3 is expensive to run for repeated experiments on the entire test set, we ran the ablation studies on a subset of the test set and ran the model with the optimal parameters once for the final results reported in the main table (Table 3).\n\n> **Q5: Details of using the caption model.**\n\nWe only used the caption model for text-only baselines and models. We adopted a caption model to generate a natural language description for the image and then concatenated the description with other texts as the inputs of the model, roughly as sketched below.
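As an illustration of this captioning step, here is a minimal sketch (an assumed setup for clarity only, not our exact released script; it uses the off-the-shelf checkpoint mentioned below through the Hugging Face `image-to-text` pipeline):

```python
from transformers import pipeline

# Minimal sketch of the captioning step (illustrative setup, not the released script).
captioner = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning")

def image_to_text_context(image_path: str) -> str:
    # The pipeline returns a list of dicts with a "generated_text" field.
    caption = captioner(image_path)[0]["generated_text"].strip()
    # The caption is then concatenated with the other textual input elements.
    return f"Context: {caption}"
```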
The image description can provide the semantic information in the image and further enables the text-only models to work on the SQA dataset. The caption model we used is a SOTA pre-trained image captioning model, and we applied an online API developed by Huggingface (https://huggingface.co/nlpconnect/vit-gpt2-image-captioning) to generate captions. These captions can be easily reproduced by others using this easy-to-use online tool. We will also release the captions in the dataset and the generating scripts for easy reproduction.", " Dear reviewer, thank you for the insightful comments. We appreciate your time and effort. We are glad that you recognize that our large-scale dataset features rich domain diversity and aids the development of model reasoning in related areas. We address your questions below.\n\n> **W1: The novelty of the proposed dataset and baselines.**\n\nPlease see the first point, “**Novelties and contributions of our work**” in the **general responses to all reviewers** above.\n\n> **W2: Better automatic evaluation methods specifically for CoT.**\n\nIt is really a good point to introduce new automatic evaluation methods specifically for CoT that can produce evaluation results consistent with human evaluators. Existing automatic evaluation metrics such as BLEU-1, ROUGE-L, and Sentence Similarity prefer generations that are "similar" to the training data, while human beings prefer predictions that are relevant, correct, and complete. Thanks for your valuable suggestions! We are looking forward to more consistent automatic evaluation methods in follow-up research work.\n\n> **W3: Discussion of the correlation between question answering performance and explanation generation performance.**\n\nOverall, the correlation between question answering performance and explanation generation performance is positive. For example, 2-shot GPT-3 (CoT) with the QCM→ALE prompting type achieves the highest QA accuracy and, at the same time, obtains the highest proportion of gold explanations among all the predictions. But we do have some special cases where the generated explanation is gold while the predicted answer is wrong, and vice versa. These failure cases are discussed in Appendix B.4 in the case study and limitations.\n\n> **Q1: Why aren't all examples annotated with lectures and explanations?**\n\nWe automatically collected the raw data for the SQA dataset from online resources. Lectures and explanations could be missing due to the lack of annotations. We kept the examples without lectures or explanations to maximize the domain diversity of the constructed SQA dataset.\n\n> **Q2a: Why are CoTs lectures and explanations instead of explanations? Shouldn't the lecture be part of the model's input?**\n\nWe believe that lectures, providing external background knowledge, are a general kind of explanation for the questions and answers. We conducted extensive ablation studies for different prompting formats and methods and further discussed the results in Section 5.4 (Table 6). It is found that the format of QCM→ALE performs the best among various settings. \n\nWe studied the upper bound of GPT-3 (CoT) by feeding the lecture or solution into the model's input. The results show that with the gold lecture in the input, QCML*→AE performs worse than QCM→ALE, with an accuracy decrease of 1.58%.
This is probably because, in pre-training data such as science and math textbooks, solutions appear after the answer, so the QCM→ALE representation is more familiar to the model.\n\n> **Q2b: Are there any experimental results for QCML->A and QCML->AE?**\n\nYes, we had these results. We added them to the revised paper in Table 7.\n\n> **Q3: In Table 3, why does GPT perform worse in the 2-shot setting than in the zero-shot setting?**\n\nFor the standard GPT-3 models, the accuracy difference between these two settings is 0.07%, which could result from random deviation. It shows that, without CoT prompting, GPT-3 might not benefit from the in-context examples in the SQA dataset. Please also note that a similar observation has been made in other works, where it has been shown that adding examples does not really help in the in-context setting for GPT-3 [23]. \n\n[23] Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm, 2021\n", " Dear reviewer, we appreciate your time and thank you for your helpful comments. We address your concerns below. \n\n> **W1: Why would the proposed chain-of-thought (QCM→ALE) be helpful when generating answers before generating explanations (especially for the few-shot GPT-3 examples)?**\n\nFirst, note that the success of prompting depends on how similar the task is to the pre-training data [14]. For example, instead of asking "who is the president of the US?", if I ask "The president of the US is _," it becomes easier for the model because filling in the blank is much closer to the next-word prediction loss function used in pre-training. Similarly, in our case of multimodal scientific reasoning, the model performance depends on how similar our formulated task is to the text seen during pre-training. In textbooks, specifically for mathematical reasoning questions, we usually have an answer field right after the question, followed by a detailed solution, not the other way around. Since the big language models are pre-trained with textbooks and internet data that follow this format of an answer and then a solution, our representation (ALE) is similar to the pre-training data, and that helps the model improve performance.\n\n[14] Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing, 2022\n\nSecond, pre-trained large language models learn to generalize to unseen instances via few-shot learning from a couple of in-context examples. Provided with in-context examples in the format of QCM→ALE, pre-trained GPT-3 would learn the intrinsic relationship between the input question and the final answer, and understand the corresponding lectures and explanations required for the correct answer. Therefore, GPT-3 (CoT) could benefit from in-context examples to improve its reasoning ability, thus leading to better performance on question answering and CoT generation on the test examples in SQA.\n\nThird, to recall, our SQA dataset features long annotations of lectures and explanations. There could be huge performance drops for formats such as QCM→LEA, where lectures and explanations appear before answers. Note that GPT-3 (CoT) addresses the SQA task as a text generation problem. If the pre-trained GPT-3 is asked to generate the long lecture and explanation first (e.g., QCM→LEA), since the lecture is long, the answer at the end can often be truncated.
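For concreteness, a single in-context example under the two orderings could be assembled roughly as follows (a schematic sketch with illustrative field names, not our exact released code):

```python
def build_example(ex: dict, fmt: str = "QCM->ALE") -> str:
    # Shared input part: question (Q), context (C), and multiple-choice options (M).
    qcm = (f"Question: {ex['question']}\n"
           f"Context: {ex['context']}\n"  # text context and/or an image caption
           f"Options: {' '.join(ex['choices'])}\n")
    answer = f"Answer: The answer is {ex['answer']}."
    rationale = f"BECAUSE: {ex['lecture']} {ex['explanation']}"
    if fmt == "QCM->ALE":    # answer first, then lecture and explanation
        return qcm + answer + " " + rationale
    if fmt == "QCM->LEA":    # lecture and explanation first, answer last
        return qcm + rationale + " " + answer
    raise ValueError(f"Unknown format: {fmt}")
```

Under QCM→LEA, the answer sits at the very end of the target string, which is exactly the part that gets cut off when the generation budget is exhausted.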
Similarly, at prediction time, the model can just predict the lecture and may run out of tokens (note that the maximum output token size is limited by the GPT-3 API) before generating the actual answer.\n\n> **W2: Comparisons to QCM→LEA or QCM→ELA.**\n\nThank you for the suggestion. We did have these results; we have added them to the revised paper, and we'd like to include them below.\n- QCM→ELA, 51.5% (added in the revised paper)\n- QCM→LEA, 55.4% (added in the revised paper)\n- QCM→ALE, 73.6% (our final model in the submitted paper)\n\nThere are large performance drops for QCM→ELA and QCM→LEA because the GPT-3 model stops early if it generates the stopping symbol or uses up the tokens while generating the long text of lectures and explanations before arriving at the final answers.\n", " > **W3a: The chain-of-thought is not used in its most powerful form.**\n\nWe respectfully disagree with you on this. We conducted extensive ablation studies in Section 5.4 (Lines 262-306) to discuss the performance of GPT-3 (CoT) on SQA with respect to different numbers of in-context examples, different prompting formats, and different sampling methods. Please take a look at Table 7, where we try other forms of CoT like QCM→LEA, QCM→ELA, etc., and they do not work. The analysis shows that GPT-3 (CoT) performs best in the format of QCM→ALE with two in-context examples via dynamic sampling (same topic). Finally, we report the result of GPT-3 (CoT) with the optimal setting in Table 3.\n\n> **W3b: Novelty of the chain-of-thought idea.**\n\nChain-of-thought reasoning in language models is still understudied after Wei et al. and Wang et al. first found, in 2022, that the chain of thought has the potential to improve the performance on complex QA tasks like math word problem solving. In this paper, we extensively explore CoT prompting on SQA and show that CoT benefits large language models in both few-shot and fine-tuning settings by improving model performance and reliability via generating explanations. To be more specific,\n\n(1) CoT has been shown to be helpful in a few-shot setting (Wei et al.). We extend CoT to the fine-tuning setting and show that it improves the performance consistently.\n\n(2) CoT in the original paper [15] was shown to improve performance on the task of question answering. We show that, along with this performance, it also helps generate reliable explanations for the answer.\n\n(3) The original CoT paper [15] is in the text-only domain. We show that CoT and its variants also work in the multi-modal setting, provided we extract the visual information in natural language from the input images.\n\n(4) We also see evidence that CoT in our final setup (QCM→ALE) helps the model learn from fewer data. This is analogous to how humans being taught with explanations learn quickly with just a few examples.\n\n(5) We further explore the upper bound of GPT-3 and find that its performance improves substantially if the GPT-3 model is fed with the gold lecture and explanation in the inputs.\n\n(6) We also investigate the effect of various parameters: (a) prompt format, (b) number of examples, and (c) sampling methods on model performance with CoT to better understand the sensitivity and statistical significance of the performance gains.\n\n[15] Chain of Thought Prompting Elicits Reasoning in Large Language Models, 2022\n
The improvement direction that the lectures could be retrieved rather than generated is based on the assumption that lectures can be correctly retrieved and that the retrieved lectures can further improve the answer accuracy. \n\nHowever, our initial experimental results in Tables 5 and 6 indicate that retrieving lectures might not be helpful on the SQA dataset:\n\n- In Table 6, we study the upper bound of GPT-3 (CoT). We use gold, perfect lectures in the input (QCML*→A, QCML*→AE), and they do not help. For example, the performance of QCML*→A decreases from 73.97% (QCM→A) to 73.59%. Retrieval will result in imperfect lectures, so we are not sure how it can help with SQA. \n\n- In Table 5, we investigate whether GPT-3 (CoT) could benefit from more similar in-context examples. For example, if we adopt dynamic sampling with the same skill, it is more likely that the lectures in the in-context examples are similar to or the same as the lecture of the test example. But the results show that similar lectures in the in-context examples do not improve the QA performance.\n\n> **Q2: The captioning system that converts images to text might be suboptimal. I think you might want to train end-to-end, or find a system that's focused on image description, rather than image captioning.**\n\nWe only convert images to captions when evaluating large language models such as UnifiedQA and GPT-3. For VQA baselines, the raw images are fed into the neural networks. And it is found that current SOTA VQA baselines consistently perform worse when answering questions with visual context than questions with textual context. \n\nWe respectfully disagree with you on "finding a system that's focused on image description, rather than image captioning." In the area of vision-and-language learning, generating a description for an image is the same task as image captioning. For example, [16] defines the task of image captioning as "automatically generating a natural language description of an image". \n\nWe totally agree that large unified vision-language models for general purposes are good options for a better joint understanding of both image and text contexts. We believe that it is a worthwhile direction for follow-up research work [17].\n\n[16] Image Captioning With Semantic Attention, 2016\n\n[17] Unified-IO: A Unified Model for Vision, Language, and Multi-Modal Tasks, 17 Jun 2022.\n\n\n> **Q3: Report standard deviations in the main table.**\n\nThe GPT-3 API is very expensive, and it is not practical to repeat the experiments multiple times on the whole test set over the different settings in the main table. Instead, following recent work [18,19] on large models via prompting, we conduct extensive ablation studies on a subset of the test set and report the final results for the models with optimal hyperparameters. For example, in Figure 6, we discuss the accuracy of GPT-3 (CoT) across different prompt types and numbers of training examples, with deviation ranges reported.\n\n[18] Self-Consistency Improves Chain of Thought Reasoning in Language Models, 2022\n\n[19] Large Language Models are Zero-Shot Reasoners, 2022\n\n> **Q4: Discussion of broader, higher-level limitations of this class of methods.**\n\nWe appreciate your feedback on the broader discussion of this class of methods. Although large pre-trained language models such as GPT-3 can achieve amazing results on a lot of downstream tasks like science question answering, they still suffer from several limitations that researchers are working to address.
First, they are often inconsistent—failure cases in the appendix show that one can get correct answers with wrong explanations. Second, GPT-3 parrots the patterns in its huge amount of training data, which could lead to generation bias. ", " Dear reviewer, we really appreciate your effort in reviewing our paper, and thank you for your helpful comments. We are glad that you recognize that our dataset is novel, large-scale, and covers diverse science topics. We are encouraged to see that you note that our designed models boost the overall SQA performance via CoT. We address your questions below:\n\n> **Q1: How can we know the reasoning performance of QA models with this dataset? Does predicting the correct explanation guarantee that the model's prediction (answer) is based on the correct reasoning processes?**\n\nIt is really a good question! As discussed in Table 1 and Lines 122-129, SQA is a new large-scale question answering dataset with multi-modal contexts across a wide range of scientific domains. To answer the SQA questions well, QA models are required to jointly reason over multi-modal inputs, perform commonsense reasoning and knowledge acquisition for domain-specific topics, and carry out multi-hop reasoning. Besides, as SQA contains detailed annotations of lectures and explanations, it serves as an important benchmark to diagnose a model's ability to provide evidence that reveals the reasoning steps leading to the answers.\n\nThe consistency and coherence of large pre-trained language models are still understudied, and some recent work [6] has been proposed to investigate these issues. The failure cases visualized in Figures 16 and 17 in the appendix show that GPT-3 (CoT) can predict the correct answers with wrong explanations or generate the wrong answers with gold explanations.\n\n[6] Are NLP Models really able to Solve Simple Math Word Problems? 2021\n\n> **W1/Q2: Comparison of SQA and other science QA datasets. How much improvement can we make with other science QA datasets?**\n\nThank you for your suggestion! In this work, we construct a large-scale science QA dataset with detailed annotations of lectures and explanations across diverse scientific topics. We further compared various baselines on SQA and designed models with chain-of-thought prompting in both few-shot and fine-tuning settings. It would be meaningful to evaluate how large language models perform on other science QA datasets. \n\nHowever, SQA is the only multi-modal science question answering dataset that features lectures and explanations as evidence for the answers. We are unable to do this experiment because other multi-modal science datasets don't contain explanations. We hope SQA will encourage the development of more similar resources, which will eventually facilitate such dataset comparisons.\n\n> **Q3: Are current VQA models not generalized to process the challenging questions in SQA? Discuss reasons for the poor performance of current VQA models.**\n\nThanks for your insightful comments! Based on the current experiments on SQA, current SOTA VQA models such as VisualBERT and ViLT don't perform well in either text-only contexts (the highest accuracy is 66.96%) or multi-modal contexts (the highest accuracy is 62.17%). \n\nThe main reason is that current VQA models are mostly trained on VQA (e.g., VQA, GQA) and image captioning datasets (e.g., COCO). These datasets usually feature natural images as the visual context and short lengths for both the inputs and outputs.
Instead, SQA contains multimodal contexts (i.e., text, images, diagrams) and input questions with a wider length distribution (question lengths range from 3 words to 141 words). Furthermore, SQA consists of multi-modal scientific questions with multi-modal contexts, and it features a diverse range of scientific domains across 3 subjects and 379 skill categories. All of these features make it difficult for existing SOTA VQA methods. Moreover, as we feed the image captions as the visual context input, VQA methods might fail to answer a set of vision-critical questions due to the missing visual information. Last but not least, textual models like UnifiedQA and GPT-3 can take advantage of explanations in the few-shot or fine-tuned setting, which VQA models can't.\n", " > **Q4: Evaluation metrics for the generated lectures and explanations.**\n\nThanks for your suggestion of using rank-based metrics. Recently, BERT and its variants have been widely used in tasks of sentence-pair regression and semantic textual similarity [7,8,9,10,11]. These models are capable of computing the similarity score of two sentences in terms of semantic meaning instead of exact match or n-gram match. Thus, we report the similarity score between the generated explanations and the annotated explanations using the Sentence-BERT model [8]. \n\nWe also report metrics like BLEU-1/4, ROUGE-L, and human evaluation scores because they are widely used in text generation tasks like machine translation, question answering, and image captioning. Rank-based metrics such as top-k might have some limitations in our work because the explanations are generated by sequence-to-sequence models instead of by ranking explanation candidates. As there is no exact match between generated explanations and annotated explanations, the top-k scores for different baselines are 0. However, it is still possible to convert our similarity scores into ranks. We would appreciate it if you could clarify your suggestion a bit further, and we would love to add this metric to our next version.\n\n[7] SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation, 2017\n\n[8] Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks, 2019\n\n[9] RoBERTa: A Robustly Optimized BERT Pretraining Approach, 2019\n\n[10] XLNet: Generalized Autoregressive Pretraining for Language Understanding, 2019\n\n[11] SimCSE: Simple Contrastive Learning of Sentence Embeddings, 2022\n\n\n> **Q5: What is the reason for choosing social science and language science?**\n\nInspired by the definition of science on Wikipedia [12], there are several branches of science: natural science, social science, formal science, and so on. Besides, many questions of language science overlap with domains in the social sciences, life sciences, culture, and humanities, thus making language science a multidisciplinary field [13].
We believe that incorporating questions of social science and language science into the proposed SQA dataset makes SQA a more comprehensive dataset for diagnosing the multi-modal reasoning abilities of existing models over a wide range of topics.\n\n[12] Science, https://en.wikipedia.org/wiki/Science\n\n[13] The Science of Linguistics, https://www.linguisticsociety.org/resource/science-linguistics\n\n> **Q6: What is the difference between “Unique” questions and “Total” questions (line 133)?**\n\n"Unique" questions refer to distinct questions, where duplicates are counted only once, while "Total" is the overall number of questions in the dataset. Normally, the number of unique questions is smaller than the total number of questions. Note that examples that share the same question appear with different contexts, choices, and answers in the SQA dataset. As shown in the table, there are 21,208 questions in total, and 9,122 unique questions. We have clarified this in the revised paper.\n\n> **Q7: In Table 4, what is the reason for using the BLEU-1 score instead of BLEU-4, a more widely used evaluation metric?**\n\nWe have the BLEU-4 scores and have added them in the revision: \n- UnifiedQA_BASE (CoT), QCM→ALE: 0.370\n- GPT-3 (CoT), QCM→AE: 0.048\n- GPT-3 (CoT), QCM→ALE: 0.052\n\nHowever, these BLEU-4 scores are inconsistent with those of other automatic metrics like BLEU-1, ROUGE-L, and (semantic) Similarity. The main reason is that UnifiedQA_BASE (CoT) is fine-tuned on the SQA dataset, so it tends to learn N-gram patterns similar to those in the training data. UnifiedQA_BASE (CoT) can generate "plausible" lectures and explanations that are similar to the annotations in the training data. Thus, the score for UnifiedQA_BASE (CoT) is unusually high compared with that for GPT-3 (CoT). Therefore, BLEU-4 might not be an ideal evaluation metric for our setup.\n", " Dear reviewer, thank you for the constructive comments; we appreciate your time and effort. We are glad that you recognize that our dataset 1) is novel and diverse, 2) fills a gap in existing datasets, and 3) can facilitate future research on CoT. Furthermore, it is helpful that you recognize our experimental findings in the few-shot learning setting and that our paper is clearly written. We address your questions below:\n \n> **Q1: What is the motivation to concatenate evidence and explanation after the answer instead of before it?**\n\nThe short answer is that we chose the QCM→ALE format for our final model because it performs best for both question answering and explanation generation according to extensive experiments, as supported by the numbers in our submitted/revised paper.\n\nWe hypothesize that generating the answer before generating the more general lecture and explanation helps GPT-3 in the CoT setting to generalize better to unseen examples. Empirically, we find this to be true and observe an increase in performance on 1,000 test examples when the answer is generated before the evidence (lecture, explanation):\n- QCM→EA, 56.0% vs QCM→AE, 67.6%\n- QCM→LA, 60.6% vs QCM→AL, 73.0%\n- QCM→ELA, 51.5% vs QCM→AEL, 73.5% \n- QCM→LEA, 55.4% vs QCM→ALE, 73.5%\n\nTo recall, our SQA dataset features long annotations of lectures and explanations. In our final model (QCM→ALE), we prompt GPT-3 with CoT, and the generated output consists of the answer, lecture, and explanation, in that order.
We do this for the following reasons:\n\n- There is an implementation-level hurdle in generating the answer after the evidence. As we only have access to GPT-3 via the OpenAI API, there is a chance that GPT-3 stops generating or uses up the maximum number of tokens while generating the evidence, before it can produce the answer. This leads to an obvious drop in answer accuracy whenever the answer prediction is cut off.\n\n\n- Note that the success of prompting depends on how similar the task is to the pre-training data [2]. For example, instead of asking "who is the president of the US?", if I ask "The president of the US is _," it becomes easier for the model because filling in the blank is much closer to the next-word prediction loss function used in pre-training. Similarly, in our case of multimodal scientific reasoning, the model performance depends on how similar our formulated task is to the text seen during pre-training. In textbooks, specifically for mathematical reasoning questions, we usually have an answer field right after the question, followed by a detailed solution, not the other way around. Since the big language models are pre-trained with textbooks and internet data that follow this format of an answer and then a solution, our representation (ALE) is similar to the pre-training data, and that helps the model improve performance.\n\n[2] Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing, 2022\n\n> **Q2: What is the insight that few-shot GPT-3 is better than finetuned UnifiedQA?**\n\nGPT-3 has 175 billion parameters, almost 1,000 times more than the UnifiedQA model (200 million parameters for the version we used in the paper). Recent studies have shown that the performance of large pre-trained models increases roughly linearly with the logarithm of the model size [3,4].\n\nBesides, GPT-3 is trained with about 500 billion tokens for a wide range of downstream language tasks, including question answering and long text generation. Also note that OpenAI's default engine for GPT-3 has recently been changed to a more powerful version, *InstructGPT*, which is a version of GPT-3 fine-tuned on many datasets [5]. In contrast, UnifiedQA is trained on eight datasets specifically for question answering tasks. Thus, few-shot GPT-3 has advantages over most existing fine-tuned models, including UnifiedQA.\n\n[3] Language Models are Few-Shot Learners, 2020\n\n[4] OPT: Open Pre-trained Transformer Language Models, 2022\n\n[5] Aligning Language Models to Follow Instructions, https://openai.com/blog/instruction-following/\n\n> **Q3: For the questions without any context, how were they handled in the evaluation?**\n\nAs discussed in Section 4.1 (Lines 170-171, Lines 178-182), the textual elements of the input are concatenated and then fed to the baseline models. For example, the format of QCM→A takes the concatenation of the tokens of the question text (Q), the context text (C), and the multiple options (M) as input. \n\nFor questions without any context, the context text is replaced with an empty string (clarified in Lines 578-579 in the Appendix). The evaluation results of various baselines over different classes are presented in Table 3. The column NO is added in the revised paper (Table 3), showing the accuracy of different baselines for questions without any context.", " We would like to thank the reviewers for providing us with thoughtful comments and constructive feedback.
\n\nWe are encouraged that our constructed **dataset** SQA is considered new/novel (R1, R2, R3, R4), large-scale (R2, R5), promising (R3), and interesting (R4), and that it features diverse topics/questions/domains (R1, R2, R4, R5). \n\nWe are pleased that our **experimental evaluations** on the SQA dataset are considered extensive, reasonable, and illuminating (R4, R5); that our designed **method** is validated by increased answer prediction performance in both few-shot and fine-tuning settings (R2, R3); and that our **paper** is considered clearly written and easy to follow (R1, R2, R4, R5).\n\nWe appreciate that R1, R4, and R5 recognize that, owing to the included lectures and explanations, our proposed **dataset** can fill gaps in the existing datasets and further facilitate future research on model reasoning.\n\nWe have incorporated the feedback and highlighted the updates in blue in the revised paper. We address the general concerns below.\n\n**(1) Novelties and contributions of our work.**\n\nFirst, to fill the gaps in existing datasets in the scientific domain, we built Science Question Answering (SQA), a new dataset containing 21,208 multimodal science questions with rich domain diversity. To the best of our knowledge, SQA is the first large-scale multi-modal science question answering dataset that features detailed lectures and explanations.\n\nSecond, chain-of-thought reasoning in language models is still understudied after Wei et al. and Wang et al. first found, in 2022, that the chain of thought has the potential to improve the performance on complex QA tasks like math word problem solving. In this paper, we extensively explore CoT prompting on SQA and show that CoT benefits large language models in both few-shot and fine-tuning settings by improving model performance and reliability via generating explanations. To be more specific,\n\n1) CoT has been shown to be helpful in a few-shot setting (Wei et al.). We extend CoT to the fine-tuning setting and show that it improves the performance consistently.\n\n2) CoT in the original paper [1] was shown to improve performance on the task of question answering. We show that, along with this performance, it also helps generate reliable explanations for the answer.\n\n3) The original CoT paper [1] is in the text-only domain. We show that CoT and its variants also work in the multi-modal setting, provided we extract the visual information in natural language from the input images.\n\n4) We also see evidence that CoT in our final setup (QCM→ALE) helps the model learn from fewer data. This is analogous to how humans being taught with explanations learn quickly with just a few examples.\n\n5) We further explore the upper bound of GPT-3 and find that its performance improves substantially if the GPT-3 model is fed with the gold lecture and explanation in the inputs.\n\n6) We also investigate the effect of various parameters: (a) prompt format, (b) number of examples, and (c) sampling methods on model performance with CoT to better understand the sensitivity and statistical significance of the performance gains.\n\nThird, our extensive experiments across VQA and pre-trained language models show that SQA is a challenge for state-of-the-art models in the multi-modal setting.
This indicates that there is significant room for future work in this direction, and SQA provides a platform to facilitate those studies.\n\n[1] Chain of Thought Prompting Elicits Reasoning in Large Language Models, 2022\n\n**(2) Comparisons to other prompting formats.**\n\nWe did conduct comparisons of other formats such as QCM→A, QCM→AE, QCM→ALE, QCML→A, QCME→A, and QCMLE→A. However, results show that these formats are less likely to work. That is why we omitted these exploration experiments in the submitted paper to save space on the main page.\n\nWe have added these results in the revised paper, and we’d like to include them below.\n\n- QCM→EA, 56.0% (added)\n- QCM→LA, 60.6% (added)\n- QCM→ELA, 51.5% (added)\n- QCM→LEA, 55.4% (added)\n- QCM→AEL, 73.6% (added)\n- QCM→ALE, 73.6% (our final model in the submitted paper)\n- QCML*→AE, 73.3% (The upper bound experiment suggested by R3)\n\nSettings for the results above: \n- Q: question, C: context, M: multi-choice options, A: answer, L: lecture, E: explanation.\n- Experiments were done on 1,000 test examples.\n- We adopted 2-shot learning with a sampling seed of 3.\n", " - The paper presents a Science Question Answering (SQA) dataset, which contains evidence and explanations in multimodalities and covers a diverse category of questions. \n- Authors evaluate standard baselines as well as chain of thoughts (COT) baselines on the established dataset. Empirical result shows that COT can improve the few-shot performance. Due to the introduction of lectures and explanations annotation in SQA, it is also possible to evaluate COT under the supervised finetuning setting. And it could be shown that COT also helps.\n- Human performance is measured as the upper bound of the task. Strengths:\n\n- The paper introduces a novel dataset that fills in the gap of the existing datasets as shown in table 1. Lectures and explanations annotation are introduced to facilitate future research on COT.\n- Empirical of SOTA models with and without COT is evaluated to set baseline performance of this dataset\n- The paper is clearly written and easy to follow\n\nWeakness:\n\n- See the question section - What is the motivation to concatenate evidence and explanation after the answer instead of before it?\n- What is the insight that few-shot of GPT-3 is better than finetuned UnifiedQA? It is surprising that a few-shot model outperforms a supervised one, so more analysis is expected.\n- For the questions without any context, how were they handled in the evaluation? The authors adequately addressed the limitations and potential negative societal impact", " This paper provides a novel dataset for science question answering, SQA. Specifically, this paper studies a dataset for evaluating the reasoning/explanation performance of QA models. The authors claimed that existing science question answering datasets either 1) lack annotations for explanations for the answers or 2) has a small data scale/set of topics. SQA consists of 21K examples, a comparably large-scale dataset compared to existing science QA datasets. SQA covers more topics than existing datasets, and SQA is a multi-modal dataset; this dataset provides image context as well as textual context. This paper shows the validity of SQA by showing that the answer prediction performance increases when QA models are fine-tuned on SQA. Also, this paper conducts explanation prediction experiments. In this experiment, QA models predict the annotated explanations/lectures, and humans evaluate the predictions. 
The experimental results indicate that training QA models on SQA increases the explanation prediction (reasoning) performance of QA models. Strengths\n\n1. The proposed dataset, SQA, is a large-scale dataset and covers more diverse science topics. Also, SQA consists of more features such as detailed lectures and explanations related to the given question (line 129).\n2. This paper showed that predicting explanation increases the overall SQA performance (Table 3).\n3. This paper is well written and easy to follow.\n4. This paper provides a sufficient analysis of SQA.\n\nWeaknesses\n\n1. This paper claims that SQA is a better dataset for evaluating the reasoning performance of QA models. However, this paper does not compare SQA and other science QA datasets (see Q2 in the “Questions” section for more details).\n2. I have left my concerns in the “Questions” section. Q1: This paper provides SQA for better evaluation of the reasoning performance of QA models (lines 29-30). How can we know the reasoning performance of QA models with this dataset? Does predicting the correct explanation guarantee that the model’s prediction (answer) is based on the correct reasoning processes? Although some dataset papers in this field claim similar arguments (evaluating the reasoning performance of QA models can be done by explanation generation), the explanation prediction task is insufficient to argue that SQA enables evaluating QA models’ reasoning performance.\n\nQ2: Line 58-59: How much improvement can we make with other science QA datasets? Please compare SQA with other datasets to show SQA's efficacy. One possible experiment is fine-tuning QA models on the SQA and TQA datasets, then comparing how much improvement the QA models get.\n\nQ3: The image captioning model also affects the models’ performance. How can you conclude that current VQA models are not generalized to process the challenging questions in SQA (lines 229-230)? Would it be possible that the poor performance came from the poor performance of the image captioning model used in this paper?\n\nQ4: In Table 4, the authors reported the semantic similarity scores computed by Sentence-BERT. However, reporting the absolute similarity scores is not a standard way to show semantic similarity. In semantic textual similarity, ranking-based evaluation metrics are usually used. Please use rank-based metrics such as top-k, recall@K, and MRR.\n\n**Minor questions from here**\n\nQ5: What is the reason for choosing social science and language science?\n\nQ6: What is the difference between “Unique” questions and “Total” questions (line 133)?\n\nQ7: In Table 4, what is the reason for using the BLEU-1 score instead of BLEU-4, a more widely used evaluation metric? The authors have addressed the limitations and potential negative social impact of their work.", " The paper introduces a new dataset ScienceQA which consists of multiple choice questions with knowledge source and explanations. The dataset covers three domains of elementary through high school questions. The authors also conduct experiments using chain-of-thought methods, which prompts GPT3 and unified QA with a concatenation of question context, answer and lecture+explanation. Strength: \n1. The dataset looks promising, given that it covers both image and text modality and various science domains. \n\nWeakness: \n1. I don't understand why the proposed chain-of-thought (QCM-ALE) would be helpful when you generate answer before generating explanations (especially for the few-shot GPT3 examples). 
I think this might improve the quality of explanations, but given that GPT-3 is left-to-right autoregressive, the generated answers cannot take advantage of generated explanations that appear to their right; this seemingly weakens the power of CoT relative to orderings like QCM-LEA or QCM-ELA. \n\n2. Should compare to QCM-LEA or QCM-ELA. I think generating answers conditioned on the explanations would probably yield more performance gains, because the answers can condition on the generated explanations. \n\n3. The method is not novel enough. Beyond my concern in (1) that the chain-of-thought is not used in its most powerful form, the chain-of-thought idea itself is not novel, and nor is explanation/rationale generation. \n\n Possible improvement directions: \n1. I think the lectures could be retrieved rather than generated, assuming you have a large collection of lecture documents. Using a retrieval-based model to retrieve the lectures, then having GPT-3 condition on the retrieved lecture to generate explanations and finally the answers, could boost model performance. \n\n2. The captioning system that converts images to text might be suboptimal, and could also explain why the accuracy for image questions seems to be significantly lower than for the text questions. I think you might want to train end-to-end, or find a system that's focused on image description, rather than image captioning. \n\n3. Report standard deviations in the main table. I think there are high variances for in-context learning, but the table doesn't show it. The authors discussed limitations but in a quite narrow and empirical way; it would be nice to see broader, higher-level limitations of this class of methods. ", " In this work, the authors propose a new multi-modal multiple-choice question answering dataset for science questions, with explanations. The dataset covers multiple domains, and has questions with text and image contexts.\n\nThe authors evaluate several baselines in different setups: VQA, fine-tuning UnifiedQA, prompting GPT-3, and find that the explanations provided in the dataset lead to improved accuracy on the dataset, both when used in a fine-tuning context and when used in prompting GPT-3.\n ## Strengths:\n* (mostly) well-written\n* new dataset appears to fill a few gaps from existing datasets\n* extensive evaluation with VQA models, as well as text-based finetuned and prompt-based models (UnifiedQA and GPT-3)\n\n## Weaknesses:\n* mediocre originality; not clear what genuinely new questions or lessons emerge\n* confusing evaluation presentation and confusing to use\n* some experimental settings are not explored (QCM→(L)EA)\n* a round of proof-reading is needed to fix some mistakes and improve sentence construction\n* I think it needs renaming because SQA already exists: https://paperswithcode.com/dataset/sqa\n\n## Detailed review:\nThe paper is mostly well-written, and presents an interesting dataset for the QA community, and establishes reasonable baselines in various scenarios. Even though the contribution is technically solid (once a few concerns are addressed), there doesn’t appear much new to be learned from this work. It appears to be an interesting dataset for science QA, in particular because it provides explanations. The problem is rather that it’s easy to dismiss this work as showing that chain-of-thought based training works better with some baselines in a similar set-up to what was already explored for different domains. 
The addition of the visual modality is unfortunately not tied to CoT experiments and appears as another loose dimension of the dataset.\nSome concerns that I had while reading the work were that the way evaluation results are presented is quite confusing, which could lead to confusion in practical use by others.\nAll main results (VQA and text-based) are crammed into Table 3 and there are several issues: (i) the segments in the table are not labeled (ii) when evaluating VQA models, are all the questions used (even the ones without image context, and does this make sense)? (iii) what do the TXT and IMG columns mean? What about questions without either?\nI think the main evaluation needs more explanation and improved presentation.\nThen, some important settings are missing from evaluation, in particular QCM → (L)EA (generating explanation before answer) hasn’t been included. I think there could be a significant difference between QCM→A(L)E and QCM→(L)EA for auto-regressive decoders because the reasoning steps generated before producing the answer can contribute to predicting the answer more correctly (see also Fig. 5 in [48]).\nIn addition, it would have been nice to provide some numbers for VQA with CoT. The currently tested VQA models are using a classifier to choose between the multiple choices, but it seems a baseline can be made by replacing that with a decoder? I think improvements in VQA due to CoT could also be an interesting additional selling point for this work.\n\nConclusion: if the above concerns regarding evaluation presentation and missing evaluation settings are addressed, I think it’s a solid work that presents an interesting dataset for the QA community. However, I wish there were more interesting new questions raised or new lessons learned.\n\n(Note: I tried to formulate all of these concerns in the questions section as well for easier answering)\n Q1: what is the evidence that images are actually necessary for answering the questions in VQA experiments? Do you have questions referring to an image without having textual context that contains all the necessary information? Can you include at least one such example in the paper somewhere because I’m not sure the baby example is clear in this regard?\n\nQ1.a: are questions that require an image to be answered included in the evaluation of text-only models and why does that make sense? How do we know the captions capture the necessary information?\n\nQ2: why no QCM → (L)EA?\n\nQ3: questions (i) - (iii) regarding table 3 (see above)\n\nQ4: what is the variance on the reported numbers?\n\nQ5: Are you always using a caption model ? How is the caption added to the context? How is the caption model trained? What is the effect of including generated captions for your text-based experiments? I think the captioning part should be explained in more detail, and evaluated in an ablation. Are these captions provided in the dataset for use by others?\n\nSmall notes:\n* table 1 columns should be explained\n* l. 82: “multi-model”?\n* l. 97: “for examples”?\n* l. 97-99: sentence incomplete\n* l. 219: “will be stopped” → “is stopped”\n* l. 246: “on SQA so far (75.17%) on SQA”\n* Fig 3,a: the lines are a bit too thick, it is a bit difficult to read this figure\n* Fig 3,b: I think this figure is unnecessary because it repeats what was already mentioned in the text, and in table 2, and takes up a lot of space relative to the contained info (unlike Fig. 3,a for example). 
I would add the % of questions without any context in Table 2 though.\n QCM→(L)EA not evaluated and not clear why.\n\nCoT not evaluated with VQA.\n\nThe authors includes an impact statement in appendix.\n", " This paper aims to aid the development of a reliable model that is capable of generating a coherent chain of thought (CoT) when arriving at the correct answer to reveal the multi-step reasoning process. Since existing science question datasets either lack annotated explanations for the answers or are restricted to the text modality with small data scales and limited topics, the paper presents Science Question Answering (SQA), a large-scale multi-choice dataset that contains multimodal questions with explanations and features rich domain diversity. It further designs language models to learn to generate lectures and explanations as to the chain of thought to mimic the reasoning process. Experimental results show that CoT benefits large language models in both few-shot and finetuning learning. And analysis shows that CoT helps language models learn from fewer data.\n Strengths:\n1. The presentation is good. The paper is easy to read.\n2. The proposed datasets may aid the development of model reasoning in science QA.\n3. Experiments are extensive and enlightening.\n\nWeaknesses:\n1. The novelty of the proposed dataset and baselines are limited.\n2. The evaluation of the generated CoTs (lectures and explanations) still relies heavily on human judgment. Since the automatic metrics used in the paper are less consistent with human judgment, better automatic evaluation methods specifically for CoT are desired.\n3. The correlation between question answering performance and explanation generation performance is not discussed.\n 1. Why aren't all examples annotated with lectures and explanations?\n2. Why are CoTs lectures and explanations instead of explanations? Shouldn't the lecture be part of the model's input? Are there any experimental results for QCML->A and QCML->AE?\n3. In table 3, why does GPT perform worse in the 2-shot setting than in the zero-shot setting?\n The authors have discussed the limitations of this paper." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3, 4 ]
[ "q3skGccT4Kt", "PBf92acWI7K", "kLXCh4yjFt", "NvUg0ERHlkv", "j0S21Fs_2QN", "j0S21Fs_2QN", "j0S21Fs_2QN", "JTuc_wW0p4S", "I6PJn9oScNg", "I6PJn9oScNg", "I6PJn9oScNg", "lTNCLQCUTC0", "lTNCLQCUTC0", "RCR_IgwBocc", "nips_2022_HjwK-Tc_Bc", "nips_2022_HjwK-Tc_Bc", "nips_2022_HjwK-Tc_Bc", "nips_2022_HjwK-Tc_Bc", "nips_2022_HjwK-Tc_Bc", "nips_2022_HjwK-Tc_Bc" ]
nips_2022_rBCvMG-JsPd
Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning
Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input. ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm where a small set of parameters are trained to enable a model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs. Along the way, we introduce a new PEFT method called (IA)^3 that scales activations by learned vectors, attaining stronger performance while only introducing a relatively tiny amount of new parameters. We also propose a simple recipe based on the T0 model called T-Few that can be applied to new tasks without task-specific tuning or modifications. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state-of-the-art by 6% absolute. All of the code used in our experiments will be publicly available.
Accept
This paper demonstrates that Few-shot Parameter-Efficient Fine-Tuning (PEFT) is more accurate and dramatically less computationally expensive than in-context learning (ICL), and introduces a new PEFT method that rescales activations with learned vectors, achieving high performance while adding only a small number of parameters. In addition, this paper proposes a simple recipe for applying it to the T0 model and shows that the proposed fine-tuning method performs better than the baselines. This paper is well written. The proposed method provides a simple and practical recipe for few-shot learning and shows strong performance on popular and challenging benchmarks. The reviewers had similarly positive comments on this paper. Thus the meta-reviewer recommends it for acceptance.
train
[ "0cklhcyPO7A", "cKx9nkSQ9iQ", "qax_IGgtP0s", "lD6e6Kqq7tP", "YOj_EjKI5u", "YfzRP0Jhtfn", "Iyirer7BypB", "Hty0BCjtHa7", "jrwvckediRi" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks very much to all of the reviewers for their constructive suggestions. We have responded to all reviewer comments and questions below and have updated our draft accordingly. Changes made include:\n\n- Ran experiments on all the PEFT methods ablating the additional losses and added them to appendix D. SAID takes 3 days and so we weren't able to run all experiments during the weeklong rebuttal period, but they will be done in 2 days and can be included in future revisions. \n- Describe the ablation more in the main text section 4.4 \n- Clarified the introduction of the length-normalized loss\n- Emphasized that we use the same recipe for all experiments after section 3\n- Noted that the ingredients of our recipe don't always produce gains on every individual task but do consistently and significantly provide gains on average across tasks\n- Added additional related work\n\nIf the reviewers have additional suggestions, we would be happy to incorporate them into an updated draft. Thanks again for your input.", " Thank you for your suggestions and questions. We have addressed each below and updated our submission accordingly.\n\n> However, I have the following concern regarding evaluation. Although the proposed method clearly outperforms baselines, I'm worried that the comparison is not fair. For instance, IA^3 is powered by two new training objectives and mix-task batch training. All these have been proved effective in improving few-shot performance. However, in the evaluation, those tricks are not applied on baselines (especially new losses), resulting in an unfair comparison.”\n\nMixed-task batches is something that is possible with IA3 during during inference, but we don’t use mixed-task batches during training since the T-Few recipe has no shared trainable parameters between different downstream tasks and it would confer no practical benefits. Per your suggestion, we have run additional experiments of using the different training objectives across all PEFT methods and included them in our revision. Unlikelihood training and length normalization improve most of the methods, but IA3 is still the highest performing method. In addition, we note that all the results in Figure 2 have all the additional losses, so the comparison is fair. \n\n>There are also some mistakes in the description of the methodology, not sure if it's a typo or not. e.g., L212 we should minimize L_{LN} instead of maximizing it.\n\nThanks for noting this. We have clarified in our revision that we maximize the length-normalized log probability by minimizing L_{LN}.\n", " Thanks for your thorough review and suggestions. We've updated our submission and responded below.\n\n> Although fig. 2 presents an extensive comparison, it is unclear whether the authors accounted for the difference in model architectures, pre-training data, etc.”\n>In Fig 2, how do you account for the difference in architecture, difference in pre-training data, etc.? Ideally, we would want to compare IA3 to other approaches for the same base model.\n\nAll the results in fig. 2 correspond to different PEFT applied to the same backbone model with the same pre-training data, so there are no such differences to account for. We have updated our submission to make this more clear.\n\n> How were hyperparameters and other design choices made? Did you stick to the ‘true few-shot learning setting’, or were there any validation sets involved? 
I would recommend making this clear to make sure that comparisons with baselines are fair.”\n\nWe use the validation set of the T0 held-out tasks to help design T-Few on T0-3B. However, we used the same set of hyperparameters across all tasks and did not extensively tune them. To test the true few-shot ability of T-Few, we applied the T-Few recipe to T0-11B and RAFT (which has no validation set and is \"true few-shot learning\" by design) without any hyperparameter tuning. The fact that T-Few on T0-11B works so well on the T0 tasks and achieved SoTA and superhuman performance on RAFT suggests to us that our submission indeed satisfies the real-world requirements of few-shot learning.\n\n> Since the main method applies IA3 to T0, it is difficult to isolate the performance contribution of each. I would recommend presenting an analysis of IA3 applied to plain language models (no instruction tuning).\n\nOur primary goal in this paper was to present a single recipe (including a fixed model and set of hyperparameters) that worked well on any unseen few-shot NLP task. While we agree that it would be interesting to see how IA3 performs on other (plain) language models, we are interested in maintaining our focus on designing a specific recipe rather than developing a PEFT method alone. We would be interested in exploring applying IA3 to different models and developing improved PEFT methods in future work.\n\n> Did you also try baseline methods (adapters, prefix tuning, etc.) with/without unlikelihood training, length normalization, etc.?\n\nThanks for suggesting this additional set of experiments. We have run these experiments and have added them to appendix D. SAID takes 3 days and so we weren't able to run all experiments during the weeklong rebuttal period, but they will be done in 2 days and can be included in future revisions.", " Thank you for your detailed review. We have responded to your comments and questions below and have updated our draft accordingly.\n\n> One main weakness is the training cost of T-Few. More specifically, when training the rescaling vectors on a new task, T-Few involves forward and backward passes through the whole model (though only the rescaling vectors require gradients for updates), which will be computationally costly for each task. If there are a large number of tasks, the training cost is considerable.\n\nWhile the forward and backward cost could add up if there are many tasks to be fine-tuned on, the fine-tuning cost is only incurred once and will be amortized as the model is used more and more during inference, unlike ICL (Line 136). We found that the break-even point with GPT-3's computational costs occurs when GPT-3 has done inference on only about 20 examples. We think it's highly likely that most models will be used for inference on more than 20 examples. Table 1 and Section 4.2 on training costs both include additional information about the computational cost of computing gradients.\n\n> Also, applying the proposed method to larger models can also increase the cost, e.g. applying to GPT-3 with 175B parameters.”\n\nWhile the T-Few recipe was specifically designed for T0, we agree that it could be applied to larger models. We note that even with a larger model, using T-Few will be dramatically cheaper than using ICL since the inference cost will be k times smaller (where $k$ is the number of in-context examples). For example, if T-Few was applied to GPT-3, the inference cost would still be 32-times smaller for a few-shot dataset with 32 examples. 
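As a rough, illustrative back-of-envelope calculation of this gap (the token counts below are assumptions for the sake of the example, not measurements from the paper):

```python
# Illustrative per-query processed length for ICL vs. a PEFT'd model.
# All lengths are assumed round numbers, not measured values.
k = 32             # number of in-context training examples (assumed)
len_example = 100  # assumed tokens per training example
len_query = 100    # assumed tokens per test input

icl_tokens_per_query = k * len_example + len_query  # examples re-processed on every query
peft_tokens_per_query = len_query                   # fine-tuned model sees only the query

print(icl_tokens_per_query / peft_tokens_per_query)  # -> 33.0, i.e. ~33x more tokens for ICL
```

The one-time fine-tuning cost therefore pays for itself after a handful of predictions.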
We also would argue that applying T-Few to larger models may not be necessary given that it already outperforms GPT-3 while being trainable on a single GPU.\n\n> Some related literatures are missing [ref1,2,3].\n\nThanks for pointing these out. We've added them to our paper.", " Thank you for your review and suggestions. We have responded to your comments below and updated our submission accordingly.\n\n> Some of the ablations in the appendix do not show reliable improvements and standard deviation seems too high (assuming the subscript numbers in the appendix table cells refer to stdev).\n\nJust for clarification, the subscript numbers refer to the interquartile range. We have made this more clear in the revision. While we do not always achieve a significant improvement on every dataset, we do consistently see a significant improvement when averaging across datasets. Removing pre-training decreases accuracy by 1.6%, removing unlikelihood training and length normalization decreases accuracy by 4.1%, and removing both pre-training and our additional loss terms reduces accuracy by 2.5%. Our goal in creating T-Few was to make a general-purpose recipe that could be applied as-is to many datasets, and we believe the consistent average-case gains support this goal.\n\n> Figure 2 shows the benefits of IA3 compared with prior work when fine-tuning T0-3B. But does the entire “T-Few recipe” fail if another parameter-efficient model (like adapters) is used instead of IA3? I.e., how important is IA3 within the proposed framework?\n\nThank you for making this interesting point. To clarify, fig 2 shows the performance for all parameter-efficient methods. We have added an ablation of unlikelihood training and length normalization for all parameter-efficient methods in appendix D. SAID takes 3 days and so we weren't able to run all experiments during the weeklong rebuttal period, but they will be done in 2 days and can be included in future revisions. However, swapping IA3 with other PEFT methods in the full T-Few recipe would involve pre-training each PEFT method. Since pre-training is computationally expensive beyond our means, we are unable to include this ablation. \n\n> I encourage the authors to move important ablations to the main text. If space is a concern, related work could be shortened.\n\nUnfortunately, since the related work section is already in the appendix (Appendix B), we aren't able to include all ablation results tables in the main text due to space constraints. However, per your suggestion, we have included additional summary and emphasis of ablation results in the main text. While the gains are not always significant across datasets, we consistently do better on average across datasets.\n\n> The only ICL baseline considered is GPT-3. There have been several follow-up works on improving ICL. More recent baselines like Chinchilla [Hoffmann et al., 2022] could have also been considered.\n\nIn principle, we would love to directly compare with other few-shot ICL results from models like Chinchilla. Unfortunately, most large language models are not publicly available, and published results for non-public models (like Chinchilla) have only a small overlap with the held-out tasks considered by T0, T-Few, and GPT-3. Comparing on a small number of tasks would likely not be statistically meaningful. 
We hope that more publicly-available LLMs will be released so that more baseline methods can be compared to in future work.", " The paper introduces a parameter-efficient fine-tuning method that works for few-shot learning. The main contributions of the paper include: extending T0 to work for few-shot learning, a single recipe for few-shot fine-tuning, and a new parameter-efficient approach that outperforms prior work like prompt tuning and adapters. Strengths:\n\n- The paper is extremely well written. It was a joy to read.\n- The new parameter-efficient approach (IA3) achieves strong results compared with other methods.\n- The paper achieves strong few-shot results with moderate language model sizes (up to 11B), outperforming more expensive models like GPT3.\n- A single hyperparameter and model configuration setting for fine-tuning on all tasks.\n\nWeaknesses:\n\n- Some of the ablations in the appendix do not show reliable improvements and standard deviation seems too high (assuming the subscript numbers in the appendix table cells refer to stdev). \n- Figure 2 shows the benefits of IA3 compared with prior work when fine-tuning T0-3B. But does the entire “T-Few recipe” fail if another parameter-efficient model (like adapters) is used instead of IA3? I.e., how important is IA3 within the proposed framework?\n- I encourage the authors to move important ablations to the main text. If space is a concern, related work could be shortened.\n- The only ICL baseline considered is GPT-3. There have been several follow-up works on improving ICL. More recent baselines like Chinchilla [Hoffmann et al., 2022] could have also been considered. Please see above. There is no discussion about limitations. For example, it is still an open question what makes IA3 work better than methods like Adapters or LoRA. ", " The paper focuses on few-shot learning in NLP, shows the limitations of few-shot in-context learning (high computational cost and so on), and proposes an adaptation strategy that adds light-weight learnable parameters (rescaling vectors) into a frozen pre-trained model and fine-tunes only the learnable parameters on a few labeled samples for few-shot learning. The experimental results show the proposed strategy obtains much better performance than existing few-shot ICL methods and PEFT methods with lower inference computation cost, while it requires the largest training computation cost, disk cost, and higher memory cost during training T-Few. Strengths:\n\n1. Overall, the paper is well-written.\n\n2. The idea of reusing the pre-trained network and only learning additional light-weight parameters to adapt the pre-trained model for downstream tasks with few samples is interesting and shown to obtain promising performance on downstream tasks.\n\n3. The introduced rescaling vectors are light-weight.\n\nWeaknesses:\n\n1. One main weakness is the training cost of T-Few. More specifically, when training the rescaling vectors on a new task, T-Few involves forward and backward passes through the whole model (though only the rescaling vectors require gradients for updates), which will be computationally costly for each task. If there are a large number of tasks, the training cost is considerable.\n\n2. Also, applying the proposed method to larger models can also increase the cost, e.g. applying to GPT-3 with 175B parameters.\n\nOverall, I think the idea is interesting and promising in the context of few-shot learning. 
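For concreteness, a minimal sketch of how I understand the learnable rescaling vectors to work; this is my own illustrative reconstruction in PyTorch, not the authors' implementation, and the class name is made up:

```python
import torch
import torch.nn as nn

# Illustrative reconstruction (not the authors' code): a frozen linear layer
# whose output activations are element-wise rescaled by a small learned vector.
class RescaledLinear(nn.Module):
    def __init__(self, frozen_linear: nn.Linear):
        super().__init__()
        self.frozen = frozen_linear
        for p in self.frozen.parameters():
            p.requires_grad = False  # the backbone stays frozen
        # Initialized to ones so training starts from the pre-trained behavior.
        self.scale = nn.Parameter(torch.ones(frozen_linear.out_features))

    def forward(self, x):
        return self.frozen(x) * self.scale  # only `scale` receives gradients

layer = RescaledLinear(nn.Linear(16, 32))
out = layer(torch.randn(4, 16))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # torch.Size([4, 32]) 32 -> only 32 trainable params
```

This makes clear why the added storage per task is tiny (one vector per rescaled activation), even though each training step still needs a full forward and backward pass.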
However, I am not an expert on NLP field and I can not assess the novelty of the paper in NLP. Some related literatures are missing [ref1,2,3].\n\n[ref1] Requeima, James, et al. \"Fast and flexible multi-task classification using conditional neural adaptive processes.\" Advances in Neural Information Processing Systems 32 (2019).\n\n[ref2] Triantafillou, Eleni, et al. \"Learning a universal template for few-shot dataset generalization.\" International Conference on Machine Learning. PMLR, 2021.\n\n[ref3] Li, Wei-Hong, Xialei Liu, and Hakan Bilen. \"Universal representation learning from multiple domains for few-shot classification.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. The limitations of the method are discussed.", " This paper presents a new parameter efficient fine-tuning (PEFT) method for pre-trained transformer models in NLP. The authors argue that, with a careful choice of parameters to be fine-tuned, this approach can be computationally much cheaper than in-context learning (ICL) and perform better than in-context learning approaches. The paper proposes a new method called IA3, where the trainable parameters are vectors that scale the activations in each layer. When applied to the T0 model, the authors show that the proposed fine-tuning method performs better than prior ICL and PEFT methods. In addition, a careful analysis is presented that shows the computation consumed by the proposed method and other approaches. Strengths\n* Clarity on experimental details, helps reproducibility. \n* Extensive experiments and strong performance.\n* Offers a practical recipe for few-shot learning. \n* Clearly presented motivations.\n* Computationally advantageous and better than in-context learning and other few-shot/parameter efficient learning methods.\n\nWeaknesses\n* The presentation can be reorganized so that the main contributions are highlighted better. For instance, IA3 could be introduced early on. The current presentation is dense and could use better structuring. \n* Although fig. 2 presents an extensive comparison, it is unclear whether the authors accounted for the difference in model architectures, pre-training data, etc.\n\nOverall this is an interesting paper with extensive evaluation of the proposed approach. I appreciate the careful analysis of computational requirements. The following issues have to be addressed by the authors.\n* How were hyperparameters and other design choices made? Did you stick to the ‘true few-shot learning setting’, or were there any validation sets involved? I would recommend making this clear to make sure that comparisons with baselines are fair. \n* Since the main method applies IA3 to T0, it is difficult to isolate the performance contribution of each. I would recommend presenting an analysis of IA3 applied to plain language models (no instruction tuning). \n* Did you also try baseline methods (adapters, prefix tuning, etc.) with/without unlikelihood training, length normalization, etc.? \n * In Fig 2, how do you account for the difference in architecture, difference in pre-training data, etc.? Ideally, we would want to compare IA3 to other approaches for the same base model. Seems adequate", " This paper introduces T-Few, a parameter-efficient few-shot learning protocol that achieves STOA few-shot performance on many tasks, and also outperforms full-model finetuning and in-context learning with huge LMs. 
They rescale inner activations of LMs with learned vectors, such that only those vectors are updated during training. Their design is slightly different from existing parameter-efficient methods to enable mix-task batch training. \nThis paper demonstrates good engineering practice of parameter-efficient learning, but there are flaws in the evaluations. The proposed method demonstrates strong few-shot performance on popular benchmarks and the challenging RAFT benchmark. Those engineering practices will be a good contribution to the community and serve as bases for future research on parameter-efficient tuning. \nThe presentation is clear and easy to follow, and the motivation is well-justified and supported. \n\nHowever, I have the following concern regarding evaluation.\nAlthough the proposed method clearly outperforms baselines, I'm worried that the comparison is not fair. For instance, IA^3 is powered by two new training objectives and mix-task batch training. All these have been proved effective in improving few-shot performance. However, in the evaluation, those tricks are not applied on baselines (especially the new losses), resulting in an unfair comparison. \n\nThere are also some mistakes in the description of the methodology, not sure if it's a typo or not. e.g., L212: we should minimize L_{LN} instead of maximizing it.\n\n yes" ]
[ -1, -1, -1, -1, -1, 7, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "nips_2022_rBCvMG-JsPd", "jrwvckediRi", "Hty0BCjtHa7", "Iyirer7BypB", "YfzRP0Jhtfn", "nips_2022_rBCvMG-JsPd", "nips_2022_rBCvMG-JsPd", "nips_2022_rBCvMG-JsPd", "nips_2022_rBCvMG-JsPd" ]
nips_2022_3LMI8CHDb0g
Reproducibility in Optimization: Theoretical Framework and Limits
We initiate a formal study of reproducibility in optimization. We define a quantitative measure of reproducibility of optimization procedures in the face of noisy or error-prone operations such as inexact or stochastic gradient computations or inexact initialization. We then analyze several convex optimization settings of interest such as smooth, non-smooth, and strongly-convex objective functions and establish tight bounds on the limits of reproducibility in each setting. Our analysis reveals a fundamental trade-off between computation and reproducibility: more computation is necessary (and sufficient) for better reproducibility.
Accept
The paper studies how the noise inherent in optimization affects “reproducibility,” which the authors measure by the Euclidean distance between two independent runs of the algorithm. The results of the paper reveal fundamental tradeoffs between computation (in terms of gradient oracle complexity) and the proposed notion of reproducibility. The reviewers have reached a clear consensus toward accepting this paper, citing its novelty and technical depth. I concur, and recommend acceptance as a spotlight presentation.
train
[ "RySvrFNvX4o", "pETK5pjI2Qv", "Zuk2LfL4Imw", "oDeuqcTdvoV1", "B-ZGNkFR8VS", "FIg92pwf0QG", "YWl0KASRF4q", "DFzycLzkCN", "ab1kjBP2hZ1" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for raising the score, and for giving us very useful feedback for improving the presentation of the paper (specifically, the discussion about $\\delta$, and the discussion about the role of $||x_f - x^*||^2$ in various settings). We’ll reflect your comments in our final version.", " Thank you for the response. This response has basically addressed my concerns, and I have upgraded my score accordingly. I think that the paper would benefit from some of this discussion being added (e.g. describing how delta might be very large even just from floating point arithmetic errors if you're training some huge model) to make things more concrete.\n\nI also take your point that my claim about Theorem 1 holding for the distance to x^* can't be true exactly, and I didn't mean to suggest that Theorem 1 or its proof are trivial. That said, I think || x_f - x_f' ||^2 is at least qualitatively similar to the conventional wisdom about || x_f - x^* ||^2, i.e. that the latter will be small for strongly convex functions, not necessarily small for non-strongly convex functions, that it can be monotonically non-increasing for smooth objectives (at least when the stepsize is sufficiently small) but can be completely unpredictable for non-smooth objectives, etc. I think it might be interesting for the reader and potentially helpful for building intuition if this analogy was made in the paper, but perhaps the authors disagree, in which case fair enough!", " I have read the authors’ rebuttal and I appreciate their answers. I don’t have any further questions. Good work!", " Thank you for your questions! We answer them one by one below.\n\n1. We study an exhaustive characterization of $(\\epsilon,\\delta)$-deviation for different settings in order to identify for which case the irreproducibility might be an issue. Although your interpretation is largely correct because $\\delta$ is quite small, we’d like to mention that in modern giant machine learning models with ~100 billions of parameters, $\\delta$ which roughly scales as (machine precision)$\\times$(dimension)$^{1/2}$ could be quite large indeed. Moreover, our results show that for non-smooth costs, even very small $\\delta$ can lead to large irreproducibility.\n\n2. For strongly convex costs, we agree that the deviation is much smaller than other cases due to the existence of a unique minimizer. We’ll modify our prose as per your comment. However, we’d also like to highlight that it is still nontrivial to get the exact dependency on $\\delta$, $\\epsilon$, $T$ and provide matching upper and lower bounds, as we did in our paper. In fact, some of the lower bound proofs require more delicate constructions than those of the non-strongly-convex cases. \n\n3. Regarding the results in Section 6 “Optimization for Machine Learning”: We agree that fixing the order of minibatches would ensure higher reproducibility. However, in practice, fixing the order is not a practical choice: many works have shown that the stochasticity in minibatch helps improve generalization in practice. Moreover, in applications like federated learning, it is hard to control the order of examples since clients connect to the server at random times that are beyond our control (see e.g. [Kairouz et al.](https://arxiv.org/abs/1912.04977)). Hence, we believe that studying reproducibility under the stochasticity in minibatch is still an important question.\n\n4. 
We would like to highlight that $x^*$ may not even be unique without strong convexity, and many of our analyses do not rely on the relation between $||x_f - x^* ||^2$ and $|| x_f - x'_f ||^2$. For instance, in the proof of Theorem 6, we directly lower bound $var(x_T)$ instead of $|| x_f - x^* ||^2$. For the same reason, we disagree with your assessment “Theorem 1 holds as stated also for the quantity $|| x_f - x^* ||^2$” because again for non-strongly convex costs the minima may not be unique. We are surprised that the reviewer found our results obvious since obtaining tight upper and lower bounds was quite challenging in several cases (for instance, see Appendices B and F) and the form of the deviation bounds was not obvious to us. If the reviewer has an easier way to derive our bounds, we would appreciate it if they could provide the easy analysis in an update to the review.\n\n Moreover, while the upper bound analyses might seem similar to stability analyses of gradient based algorithms (in the context of generalization bounds and differential privacy), the lower bounds certainly are novel and it is in fact surprising that simple algorithms like GD and SGD are already optimal in terms of the convergence rate vs reproducibility tradeoff.\n\n5. We haven’t found any results in the numerical analysis literature that theoretically study the deviation in terms of $\\epsilon$, $\\delta$, and $T$. To the best of our knowledge, the primary focus in the numerical stability community has been on important linear algebraic algorithms (e.g., solving linear systems, matrix inversion, exponentiation, square root etc) and less on general purpose convex optimization.\n\n Apart from the numerical stability literature, some works from the optimization literatures (e.g. [d'Aspremont](https://arxiv.org/abs/math/0512344) or [Devolder et al.](https://link.springer.com/article/10.1007/s10107-013-0677-5)) study similar inexact gradient oracle models but their main result is about characterizing the convergence rate under inexact oracle instead of the deviation in the iterates as we did in our work. Also, we noticed a similar formulation of deviation from control/dynamical systems literature in the form of input-to-state stability, but those results usually deal with general dynamical systems with different motivations.\n\n ", " Thank you for your very encouraging comment! We agree that our results help us understand the irreproducibility in ML and open many interesting directions for future study.\nLet us answer your questions one by one.\n\n1. Intuitively, you are correct that reproducibility should improve when $\\epsilon$ increases because it gets easier to reach the suboptimality level. For example, consider the extreme case where $\\epsilon \\approx 1$. Then one can simply output an initialization itself (or an output of gradient descent after some small number of steps). Hence, one would expect high reproducibility. \n\n On the other hand, we’d like to clarify that as the target accuracy $\\epsilon$ tends to zero, this intuition no longer holds: the main reason is the appearance of $T$ in the denominator of the reproducibility guarantee. For example, let’s consider the smooth case bound of $\\Theta(\\frac{\\delta^2}{T\\epsilon^2})$. Although it may seem that the appearance of $\\epsilon$ in the denominator suggest better reproducibility when $\\epsilon$ is larger, here note that $T$ has to be at least $\\Omega(\\frac{1}{\\epsilon^2})$ in order to achieve the $\\epsilon$-accuracy in the cost. 
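Spelling out this substitution (a direct plug-in of the iteration requirement into the stated bound, using the same notation as above):

```latex
% Plugging the minimal iteration count T = Theta(1/epsilon^2) into the
% smooth-case deviation bound Theta(delta^2 / (T * epsilon^2)):
\[
  T \;\asymp\; \frac{1}{\epsilon^{2}}
  \quad\Longrightarrow\quad
  \frac{\delta^{2}}{T\,\epsilon^{2}}
  \;\asymp\; \frac{\delta^{2}}{(1/\epsilon^{2})\,\epsilon^{2}}
  \;=\; \delta^{2}.
\]
% The epsilon-dependence cancels: enlarging the target epsilon alone does not
% shrink the deviation; only taking T larger than 1/epsilon^2 helps.
```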
Hence, merely increasing the target suboptimality $\epsilon$ in such a regime does not improve reproducibility. In that case, one way to improve the reproducibility is to increase the number of iterations $T$ relative to the target accuracy $\epsilon$. This can be achieved, for instance, by using smaller step sizes.\n\n2. We agree that an extension to the general non-convex setting might require a careful formulation precisely due to the issue of multiple stationary points. In the worst case, we agree that the nonconvexity would result in very large irreproducibility. One possible concrete first step is to consider benign non-convex functions like those satisfying the PL inequality. This would help us understand the non-convex case when we initialize near a single basin.\n", " Thank you for your encouraging comments!", " This paper studies the notion of reproducibility in convex optimization, defined as the requirement that two runs of the algorithm on the same input should produce the same (or similar) output. The authors define a quantitative notion of irreproducibility (the ($\epsilon,\delta$)-deviation) which measures how much an algorithm (that successfully minimizes the objective up to error $\epsilon$) can change the output when the algorithm has an error of up to $\delta$. The errors can come from stochastic gradient oracles, or numerical error, or inexact initialization. For each source of error, the authors compute tight bounds on the ($\epsilon,\delta$)-deviation for first-order methods, in the cases of smooth and non-smooth, convex and strongly convex functions. The authors also show results for optimization problems motivated by machine learning, including finite-sum optimization and stochastic convex optimization, and show the bounds are tight. This is a nice paper that studies an interesting conceptual question and proposes a quantitative analysis and answers. The paper is well-written and well-motivated, with good discussion of alternative notions. The setup and definition is clear and concise. The results on the tight bounds on the deviations are very nice, and some of the results surprising. \n\nThe paper studies a rather simple setting of convex optimization, but in a comprehensive way. - Yes", " This paper studies the reproducibility in optimization and provides the first theoretical framework to analyze the irreproducibility of optimization algorithms in modern machine learning problems. The authors have provided the $(\epsilon, \delta)$ deviation of first-order stochastic optimization algorithms in different problems (strongly convex, convex smooth, convex non-smooth), and support the bounds with information-theoretic lower bounds. The paper is mainly theoretical. As a theory person, I like the paper very much. The fundamental problem of irreproducibility in machine learning is common and cannot be ignored. However, in real life, people simply re-train the models and use the one with the highest performance. This paper is, as far as I know, the first one that studies the problem rigorously and I really appreciate the results.\n\nStrengths.\n 1. The paper is very well-written, with clear notations, nice results, good presentations and easy-to-understand words and sentences. I thank the author for writing such a good paper. \n 2. The results are new and very interesting, which help us to understand why ML models are often irreproducible.\n 3. 
The authors have nice analysis results in different cases (strongly convex, convex smooth, convex non-smooth), and support all of them with lower bounds.\n 4. I believe this work opens many new directions for future study.\n\nWeaknesses.\n I cannot find any obvious weakness of this paper. I believe as a theory paper, it is already good enough. If the authors can have some simple experiments in convex optimization to support their claims, it would be even better. However, I believe the paper is good as it is. I do have some questions for the authors though.\n\n1. Do the results in Table 1 imply that reproducibility is better (i.e., the deviation is smaller) when we do not converge to the neighborhood of the optimum (i.e., when $\epsilon$ is large instead of small)? I find it a little bit hard to interpret the results in this direction and I wonder what the authors think. Please correct me if I have this wrong.\n\n2. Although the authors mention that irreproducibility for non-convex optimization is another potential direction, I do not think it is quite possible to derive an upper bound in the non-convex case since convergence in nonconvex problems is (usually) measured by the stationarity of gradients, and any two stationary points can be far away from each other. Therefore, I cannot think of an easy way to adapt the results in this paper to the non-convex case. It's worth discussing though. yes.", " This paper considers the reproducibility of convex optimization algorithms, meaning roughly how much the parameters output by an optimization algorithm can change when the algorithm is run a second time (on the same data). The paper studies the reproducibility of first-order convex optimization algorithms under three different types of noise/errors: stochastic gradient noise, inexact gradient computations, and inexact initialization. In all cases, the paper shows that such noise/errors can lead the output of two independent runs of the same algorithm to differ by an amount which depends mainly on (1) the amount of noise/errors, (2) the final optimization accuracy, and (3) the number of iterations used by the algorithm. Tight upper and lower bounds are proven in a number of settings, and one of the main upshots is that in order to match the reproducibility lower bounds, it is necessary to run an optimization algorithm for more iterations than are needed just to reach the desired accuracy. Strengths:\nThis is an interesting paper with a somewhat new perspective on convex optimization algorithms, and the paper is well-written and clear in how it presents its ideas. I read through the proofs of the theoretical results, and they appear correct to the best of my knowledge. The identified tradeoff between reproducibility and computation is quite intriguing---I have not seen anything like this before, and it seems like something that is worthy of additional investigation. Overall, I think this paper brings some interesting new ideas to the optimization community. \n\nWeaknesses:\nI think that at several points in the paper, it would be useful to make things more concrete with some examples. I feel this most strongly in the discussion of inexact initialization and the (non-stochastic) inexact gradient oracle. In particular, it seems from the motivation that the main source of inexactness would come essentially from numerical errors? If so, it seems to me that \delta in this case should be extremely small, roughly machine precision, or perhaps machine precision times the square root of the dimension. 
If this is the case, the irreproducibility level shown in Theorems 2/3 in all cases excluding the non-smooth, non-strongly convex case seem like they would be quite small, and it's not clear to me that we would have anything to worry about here. \n\nIn the strongly convex settings, I feel that the level of irreproducibility is somewhat exaggerated in the prose below the theorems. E.g. around line 283, I don't think that it is the least bit surprising that irreproducibility can manifest at all---of course deviations in the gradients will affect the solution *some* amount, although presumably not too much. And indeed, the level of irreproducibility shown is at most epsilon, which seems quite small, and probably nothing to worry about!\n\nI am also a little skeptical about the formulation of the stochastic optimization oracle and the \"optimizing the population loss\" settings. In particular, if you are someone that is worried about reproducibility, then wouldn't you want to fix the order in which you use your samples? In this case, although there still might be some discrepancy between different runs (due to, e.g., non-stochastic inexactness in the gradient computations, inexact initialization, etc), it would likely be at a much smaller level since corresponding gradients calculated by each run of the algorithm would be based on the same sample. \n\nFinally, I have one slightly unfair complaint, which I don't weigh too heavily since it is a little unfair, but I want to mention it in case there might be a way to address this in future versions of the paper. Essentially, I found all of the results in the paper to be fairly obvious. Although there is a bit of a difference between looking at the iterate distance from the solution \\| x_f - x* \\|^2 and the reproducibility \\| x_f - x'_f \\|^2, these are very similar quantities, and unless I am mistaken, all of the results in this paper can basically be rephrased as saying that these quantities are within a constant factor of each other in the worst case (e.g. I'm pretty sure that Theorem 1 holds as stated also for the quantity \\mathbb{E} \\| x_f - x* \\|^2). \n\nTypos:\nline 249-250: \"(epsilon)-deviation bounds for parametter reproducibility immediately transform into (epsilon, delta)- deviation bounds for parameter reproducibility\" \nline 297: the lower bound is 1/eps + delta^2/eps^2 in the smooth setting, not 1/eps^2 This work looks at something similar to the classic idea of numerical stability. I am not personally super familiar with this literature, but I would be shocked if the numerical stability of various optimization algorithms had not been studied before, and if it has been, it would be good to discuss similarities/differences in the related work section. Have you searched for this type of paper? Is there just nothing relevant? See weaknesses above." ]
[ -1, -1, -1, -1, -1, -1, 6, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "pETK5pjI2Qv", "oDeuqcTdvoV1", "B-ZGNkFR8VS", "ab1kjBP2hZ1", "DFzycLzkCN", "YWl0KASRF4q", "nips_2022_3LMI8CHDb0g", "nips_2022_3LMI8CHDb0g", "nips_2022_3LMI8CHDb0g" ]
nips_2022_RNZ8JOmNaV4
Unsupervised Image-to-Image Translation with Density Changing Regularization
Unpaired image-to-image translation aims to translate an input image to another domain such that the output image looks like an image from that domain while important semantic information is preserved. Inferring the optimal mapping with unpaired data is impossible without making any assumptions. In this paper, we make a density changing assumption where image patches of high probability density should be mapped to patches of high probability density in another domain. Then we propose an efficient way to enforce this assumption: we train the flows as density estimators and penalize the variance of density changes. Despite its simplicity, our method achieves the best performance on benchmark datasets and needs only $56-86\%$ of the training time of the existing state-of-the-art method. The training and evaluation code are available at $$\url{https://github.com/Mid-Push/Decent}.$$
Accept
This paper addresses the density-mismatch problem in image-to-image translation by introducing a patch-wise variance constraint regularization. The approach is simple and effective, according to the reviewers. There were some general concerns about the validity of the assumption, but the authors appear to have sufficiently addressed those concerns. I would encourage the authors to make it clear that this is an inductive bias that they're relying on to make their method work: it's a valuable contribution but I think it's worth being extra clear that this is a reasonable assumption they built into their model but it might not be the best one. I therefore recommend acceptance of this paper to NeurIPS. There was one negative review from zLZj that had some useful content, but the authors seemed to address those concerns fairly well. The reviewer showed skepticism towards the method that wasn't entirely clear to me and wanted to look at the code themselves but never followed through. I wasn't terribly convinced by the score being so low after the discussion and the author rebuttals, and I don't see evidence that reviewer looked at other reviews or discussion, so I believe the score does not accurately represent the paper's quality and I will treat the score (the discussion was good) as an outlier. Both wXXZ and o4oL did well as far as discussion and engagement.
train
[ "IMeUn7C1fjh", "800cDC3LSnG", "kp5SnwIlqLO", "9RRf8OzKfNx", "U0AxNPcPvO", "nLWMCrKxSL7", "CaStmvkeXHw", "b_X9-Sn88fO", "rlD7GGaewHD", "CoOhu0VGvQo", "JJMMXmuiXDn", "Vk5bZBehgYX", "qoGAoZhH7Yd", "UjK5pXvv39", "ZHcUB4MHQpD", "dsvHdbAIGI9" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi, we tried to download the checkpoints of selfie2anime of AttentionGAN to add the results in Table 2, but unfortunately we encounter the size mismatch problem, which is the same as https://github.com/Ha0Tang/AttentionGAN/issues/20 \n\nWe have included the results of horse2zebra in table 1. But the pretrained models for label2city and cat2dog are unavailable right now. So we are unable to report them right now.\n\nThanks,\nAuthors\n\n", " 1. **the evaluation code**\n\nSure! We have uploaded our evaluation code to https://github.com/anonymous-aisubmission/Neurips2022 \nWe also provide the generated results by our method, QS-Attn and MoNCE in the zip file. Reviewer can also verify the results by downloading the pretrained model checkpoints from QS-Attn and MoNCE.\n\n\n2. **how to evaluate**\n\nSo we report the FID scores as in previous papers. We checked the results of previous method with the FID and the results are consistent with the numbers they report. We use the public torch-fidelity package. \n\n3&4. **user study and qualitative results**\n\nSorry for the confusion, but we don't have enough time to provide you the results now. We will add them in our updated paper. \n\n5&8. **why U-GAT-IT and AttentionGAN not in two tabless**. \n\nThe reason is that we use the numbers reported the paper. The result of U-GAT-IT is taken from SpatchGAN and the result of AttentionGAN is taken from the original paper. U-GAT-IT only provides checkpoints for selfie2anime, so we cannot include it in table 1. \nAttentionGAN provides results of selfie2anime in KID (the KID protocol is same as U-GAT-IT but different from the KID in SpatchGAN). If needed, we can download the checkpoints of AttentionGAN and report more results in Table 1 and 2. We also would like to remind that we already included the most recent methods (CVPR2022) in table 1 and 2 while U-GAT-IT was published in ICL2020 and Attention-GAN is published in 2021. \n\n6. **highlight the difference**\n\nThanks for your suggestion. We will add the highlight in the figure. But for label2city, we think the difference is significant. The QS-Attn clearly frequently flips the building and tree as shown in the main paper and supplementary. \n\n7. **why not compare all methods in the same figure**\n\nThe reason is that the images in label2city are much more complex than images in other dataset. If we put all methods in the same figure, users have to zoom in to tell the difference. We put four images in each row just for the ease of reading. We provide enough examples for comparison in the supplementary. We beat SOTA methods quantatively and qualitatively. We provide the 500 generated results by our method, QS-Attn and MoNCE in the above github link. Feel free to compare them. \n\n\n \n\n\n", " Thanks! Your suggestion of the justification of the assumption really benefits our paper a lot! We are also glad that your concerns have been addressed to some extent. \n\nRegarding the coefficient, the Pearson correlation measures the linear correlation, so the real (non-linear) correlation should be stronger between the two domains across datasets, which is also demonstrated by the successful experiments. \n\nWe agree that the assumption can be violated sometimes. But it is nearly impossible to get a very general assumption that works across all datasets since we are only given two marginal distributions. 
For example, the preliminary paper \"CycleGAN\" also faces failure when the optimal mapping is not one-to-one (e.g., the label$\\rightarrow$city dataset in our paper). We believe that our method is a good complement to existing i2i methods and the density changing regularization can be further combined with other methods to handle very complex scenes.\n\nAs for Figure 4, yes, the densities decrease from red to blue, so the density of light blue (building) is higher than the density of dark blue (tree). Since we have to compute the density changes, we also have to estimate densities in the source domain (first row, first column). Yes, the second row, first column shows the estimated density of the input image. We can observe that the road is of the highest density, while the density of the building (light blue) is higher than the density of the tree (dark blue). So the estimated densities of the source domain are in line with human perception. We will adjust the contrast and add a color bar to indicate this. Thanks! \n", " >1. The results of mAP, pAcc, and cAcc are not consistent with those in the original QS-Attn paper.\n\nThis is a bit tricky; please upload your evaluation code, I want to check it out.\n\n>2. The results of SWD on both Cat2Dog and Horse2Zebra should be included and compared in Table 1.\n\nIt seems that the evaluation metrics of this task are not very uniform, so how should the performance of the proposed model be judged?\n\n>3 & 4. User study results and qualitative ablation results should be provided.\n\nNo user study results were provided. Because the evaluation metrics are not uniform, the user study may be the best evaluation method. However, the authors did not provide the user study results. Moreover, Figure 12 in the supplementary material is just the ablation study results for the hyper-parameter lambda. What I would like to see more is the ablation study results of the proposed model, which means I want to know which part of the model has the most impact on performance and training speed. The authors do not provide experimental results in this regard.\n\n>5. More attention-based GAN methods such as [1,2] should be included and compared in Tables 1 and 2.\n\nWhy not add the results of U-GAT-IT in Table 1 and the results of Attention-GAN in Table 2?\n\n>6. From Figure 3, the proposed method does not significantly improve the performance of the label2city, horse2zebra, and anime datasets.\n\nIt is difficult to recognize that the results generated by the proposed method are significantly better than those of other methods, which could be highlighted in the figure.\n\n>7. Results of more methods should be provided in Figure 3, especially the label2city and anime datasets.\n\nThe authors use their own evaluation codes to evaluate all the methods in Table 1. Therefore, the authors have the visualization results of all the methods in Table 1. Then why does Figure 6 in the supplementary material only compare with QS-Attn, Figure 7 compare only with MoNCE, and Figures 8, 9, and 10 only compare with four existing methods? Why not compare with all the methods in Table 1? I guess that these results may be carefully selected, and the method proposed in this paper is not comparable to some methods in some tasks, so the author makes a selective comparison.\n\n>8. 
Results of SOTA methods should be provided and compared in Figure 4.\n\nWhat I want to see is the result of comparing all the methods in Table 1.\n\n``Since the author did not address my concerns well, I keep the original score.``\n\n\n\n\n", " Regarding the violation of the main assumption, I am still not completely convinced because it seems too vulnerable. In addition, I don't think this can be applied to more general conditions with various objects, which is partly shown by the lowest correlation on the \"label2city\" dataset. Still, the authors empirically showed that this assumption works in practice, which I also value. \n\nRegarding figure 4, I still do not get this. In line 284, the authors say that \"For example, human can easily tell that gray patches should have more neighbors than green patches in the first row. The densities of gray patches shown in the second row are higher than the green patches.\" \n\nDoes this mean that the gray patches with lighter sky blue have a higher density than the green patches with (very slightly darker) sky blue? If so, please change the example to make this clear, giving more contrast (e.g., compare them to purple patches (road), which are colored red, showing much higher density). What is the first column image of the second row? Is that also the output of the density estimation or a ground truth? (Similarly for the last one) How should it be ideally? Please clarify this in the main content. \n\nAll in all, although I am not fully happy with the soundness of the main assumption, the authors have addressed my concern and showed that it works in practice. Thus, I raise my score from borderline reject to borderline accept.", " Dear Reviewer o4oL:\n\nThanks a lot for your efforts in reviewing this paper. We tried our best to address your concern about the assumption. We would highly appreciate it if you could provide some feedback on our added justifications.\n\nBest, Authors.", " Dear Reviewer zLZj:\n\nThanks a lot for your efforts in reviewing this paper. We tried our best to address the mentioned concerns. Are there unclear explanations here? We could further clarify them.\n\nBest, Authors.", " Thank you all for these helpful suggestions and comments! We have updated our manuscript and supplementary accordingly.", " 1. **The limitation should be stated up front and discussed already in the introduction.**\n\nThanks for your kind suggestion. Accordingly, we also put this limitation discussion in the introduction section in lines 48-53 of the updated manuscript. \n\n2. **An ablation study into the size of patches**\n\nThis sounds like a great idea! We have run experiments on different patch representation layers, and the results are as follows (we also included it as Table 1, and more details are discussed in Section 3 of the appendix): \n \n| PatchSize | mAP | PixAcc | ClsAcc |\n|-----------|-------|--------|--------|\n| Base | 21.86 | 53.85 | 28.81 |\n| 1 | 26.03 | 62.40 | 34.24 |\n| 9 | 28.33 | 72.41 | 35.91 |\n| 15 | 24.18 | 58.80 | 32.36 |\n| 35 | 23.52 | 55.42 | 32.19 |\n| 99 | 28.07 | 67.63 | 36.30 |\n| Full | 30.97 | 72.93 | 39.30 |\n\n We can observe that our regularization works across different patch sizes and that a patch size of 9 works pretty well. If we adopt all patch layers, we obtain the best result. \n\n3. **An assessment of the limitations, such as examples or analysis of the cases where the assumption does not hold.**\n\nThanks for your suggestion. 
Due to space constraints, we put the discussion and failure samples in Section 9 and Figure 5 in the appendix. We added this clarification in line 301 of the revised main paper.\n\n\n\n\n", " 1. **The results of mAP, pAcc, and cAcc are not consistent with those in the original QS-Attn paper.**\n\nWe feel sorry about the confusion. The main reason is that there is no publicly available evaluation model for the label2city dataset (see this link: https://github.com/taesungp/contrastive-unpaired-translation/issues/138 ). Different methods and DRN models can cause huge differences in the results (https://github.com/taesungp/contrastive-unpaired-translation/issues/104). For example, for the two CVPR 2022 papers (MoNCE, QS-Attn), the values for CUT are also different (MoNCE reports that CUT can achieve 78.22 PixAcc while QS-Attn adopts the reported number 68.8 from CUT). For fair comparison, we evaluate all methods and report the results with our code. We are also planning to publish the evaluation code upon the publication of this paper. We added some clarification in the revised version to explain the inconsistency in lines 196-198 of the updated manuscript. \n\n2. **The results of SWD on both Cat2Dog and Horse2Zebra should be included and compared in Table 1.**\n\nThanks for your suggestion. We considered SWD in our initial paper. It is worth mentioning that, as you may notice, there is no widely adopted SWD implementation. The reported SWD results differ across papers. For example, MoNCE reports that the SWD of CUT is 32.02 while QS-Attn reports 31.5. There have also been some problems with the reported SWD results (https://github.com/sapphire497/query-selected-attention/issues/2 ). \n\n3 & 4. **User study results and qualitative ablation results should be provided.**\n\nThanks for your suggestion. We have included qualitative results in Figure 12 in the supplementary. We can clearly observe that the generations by Base-GAN often flip the building and tree. By contrast, our method is robust under different\nvalues of the hyper-parameter $\lambda$.\n\n5. **More attention-based GAN methods such as [1,2] should be included and compared in Tables 1 and 2.**\n\nThanks for your kind suggestion. We agree that these two papers should be included since attention-based methods are also an important branch in I2I. We included U-GAT-IT in Table 2 and AttentionGAN in Table 1.\n\n6. **From Figure 3, the proposed method does not significantly improve the performance of the label2city, horse2zebra, and anime datasets.**\n\nWhen compared to the Base-GAN model, our method achieves a clear performance gain, which highlights the effectiveness of our proposed regularization. We admit that the performance gain is not so big when compared to the most recent methods, e.g., QS-Attn (CVPR 2022) and NEGCUT (ICCV 2021). But it is worth noting that all of them are improved versions of the CUT method, while our method only contains a single regularization. Our method beats these complex SOTA methods with a simple regularization on various datasets and is the fastest method among recent methods. In particular, our method only needs approximately 56% of the training time of NEGCUT. We hope that our method could be a potential foundation for a set of methods and believe that further, larger performance gains can be obtained by improving our regularization in the future.\n\n7. 
**Results of more methods should be provided in Figure 3, especially the label2city and anime datasets.**\n\nDue to space limitations, we are unable to put more results in the main paper. We put more samples in the appendix; please check Figures 6-10.\n\n8. **Results of SOTA methods should be provided and compared in Figure 4**\n\nThanks for your suggestion. We have added the results by QS-Attn in Figures 1 and 11 in the supplementary. Without our density changing regularization, the SOTA method QS-Attn still generates low-density objects (tree) in\nthe high-density region (building). The reason is that there is no explicit density changing regularization as in our method.\n\n\n\n\n\n\n", " 1. **Justification of the assumption is needed**\n\nThanks for your great suggestion! We added statistical tests to justify our assumption. For the label2city dataset, we have ground truth pairs. For other datasets, we use generations by the most recent method as pseudo pairs. Then we randomly crop paired images to get paired image patches and fit kernel density estimators for each domain. For each pair of image patches $(x,y)$ cropped from the same location in paired images $(X,Y)$, we feed them into nonparametric Gaussian kernel density estimators $f_x$ and $f_y$. Then we can obtain a pair of densities $(f_x(x), f_y(y))$. Finally, we compute Pearson correlation coefficients for the two sets of estimated densities (a code sketch of this test is given below). \n\nWe observed that p-values for all datasets are 0, which allows us to safely reject the null hypothesis that the densities of patches are uncorrelated. The coefficients are significantly greater than 0 across datasets (0.540 for label2city, 0.837 for cat2dog, 0.511 for horse2zebra, 0.779 for selfie2anime), which highlights the positive correlation between the densities of paired image patches. We also provide visualization in the revised version. Please check Section 5.2 of the updated manuscript for more details. In summary, the statistical tests align well with our assumption.\n\nWe agree that there are cases where our assumption can be violated, as discussed in Section 6. At the same time, from our successful experimental results and statistical tests, we humbly believe that our method works well when preserving neighboring information is needed, a problem that has not been well addressed by existing cycle consistency and contrastive learning methods. We have explicitly claimed the possibility of violating our assumption in some problems in lines 48-53 of the updated manuscript.\n\n2. **Typos and term “unsupervised”**\n\nThank you very much for sharing it! We have changed the term and corrected the typos.\n\n3. **density estimator training**\n\nThe density estimators are trained on patch representations and the gradient will not flow back to the patch representations. Only when we compute the density changing loss does the gradient flow back to the generator network. Therefore, training the estimators is just a likelihood maximization problem and is stable. In addition, we use the exponential moving average (EMA) of the estimator when computing the density changing loss. The EMA model produces a more stable gradient for our regularization. \n\n4. **how to read figure 4**\n\nFigure 4 shows the learned densities. For each image patch representation extracted by the CNN generator, we can obtain a density by feeding it to the density estimator. Then we can trace back which patch in the input image generates this representation. 
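As a brief aside to point 1, here is a minimal sketch of the patch-density correlation test described above (purely illustrative: the function name, the use of scikit-learn's `KernelDensity` and the fixed bandwidth are our own choices for exposition, not fixed details of our implementation):

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.neighbors import KernelDensity

def patch_density_correlation(patches_x, patches_y, bandwidth=1.0):
    # patches_x[i] and patches_y[i] are flattened patches cropped from the
    # same location of a paired image (X, Y); both arrays have shape (m, d).
    f_x = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(patches_x)
    f_y = KernelDensity(kernel="gaussian", bandwidth=bandwidth).fit(patches_y)
    # score_samples returns log-densities; exponentiate to obtain densities.
    dens_x = np.exp(f_x.score_samples(patches_x))
    dens_y = np.exp(f_y.score_samples(patches_y))
    return pearsonr(dens_x, dens_y)  # (coefficient, p-value)
```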
Returning to Figure 4: we can thus compute the density for each patch in the input image. Red color indicates that the estimated density of the patch is high (Line 282). \n", " Thanks a lot for your very helpful suggestion! We hope that the following comments and the revision in our paper have addressed your concerns.\n\n1. **Including the Conclusion**. \n\n We added Section 7 as a conclusion in our revised version, to address your concern.\n\n2. **move deduction from Line 126-133 to appendix**\n\nWe agree this short paragraph may not be enough to explain it. We added more explanation and moved it to the appendix now (Lines 2-11).\n\n3. **what is** $L$\n\nIt is the number of layers we use to extract the patch representations. We added the explanation in the revised version (Line 175).\n\n4. **More explanation of PatchDist**\n\nDistanceGAN proposes to preserve pairwise distances within images. We then build a stronger baseline – PatchDist – which is trained to preserve pairwise distances within image patches. The major difference between our method and PatchDist is that our method uses density information while PatchDist uses pairwise distance as a proxy for neighboring information. The clear performance gain suggests that the density function contains more information relevant to our task than the pairwise distance quantity. \n\n5. **Why choose variance rather than KL**\n\nOur assumption states that image patches of high density should be mapped to patches of high density (and low-density patches to low-density patches). Therefore, we would like the density changes of all patches to be close to each other. The variance is a direct yet simple measure of the spread of the density changes: if the density changes of some patches are too high or too low, the variance reflects it, and minimizing the variance encourages the density changes of all patches to be close. Computing the KL divergence between the distribution of density changes and the Dirac distribution located at the mean of the density changes is another way to achieve this goal, but we would have to estimate the distribution of density changes, which may cause extra computational cost and error. We added this clarification in our revised paper (L 127).\n\n\n\n\n\n\n", " The paper introduced a novel method for unsupervised image-to-image translation with density changing regularization. Different from most recent work, which either assumes cycle consistency or employs contrastive learning, the paper proposed a simple yet efficient method that adds density distribution regularization atop a simple GAN model. More specifically, the density changing regularization is to minimize the change of patch density distribution after translation, where the distribution is estimated from generator layers. The paper provides examples and numerical results which show improvements over baselines on both the quality of the images and training speed. Strengths:\n1. The paper is generally well written. The authors provide detailed explanations of methodology and results.\n2. The proposed method is simple yet efficient. Such a method can be easily replicated to build a strong baseline.\n3. The proposed method shows improvement over baselines on most of the tasks.\n\nWeaknesses:\n1. The results on the horse-to-zebra task are not as good as the SOTA model's. The authors discuss a potential issue and solution.\n2. The ending of the paper is abrupt. 
While section 6 includes decent discussion on potential issues and solutions, a proper conclusion section is still recommended.\n 1. Why choose variance as the distance function between distributions instead of other functions such as KL divergence? It would be good to see how the KL divergence would affect the model.\n2. The equation on line 121 has no number.\n3. The deductions from line 126 to line 133 are not super clean to me; could you provide more details in the appendix? I also recommend moving this part to the appendix.\n4. What is L in line 180?\n5. The description of PatchDist in table 3 is too brief. Could you add more details in the appendix?\n The authors have addressed the limitations and potential negative societal impact of their work.\n\nI would suggest the authors investigate potential solutions in future work.", " This paper proposes a density changing regularization for better image translation under the assumption that the regions of high probability density of two domains must be mapped to each other. To do so, the authors define a density constraint as the variance of the density ratio of two domains. Here, they train density estimators for each domain, numerically compute the value, and minimize it to reduce the ratio gap. The idea is simple and effective. When the assumption holds, the results show that the density changing regularization improves the translation performance. \n\n\nMy major concern with this paper is its main assumption. In fact, I could not agree with the argument that the patches with a high (low) density in one domain should be mapped to patches with high (low) density in another domain. I could think of many trivial examples where this is violated (as also noted by the authors). As the authors also noted in the last section, the key to the proposed method is whether the assumptions are met. However, despite its importance, there is no justification for this assumption and no theoretical or empirical observations are provided. The authors simply state the assumption and use it without any explanation, leaving a big logical jump. There must be some experiments or analyses to validate this assumption (at least in part, for the datasets used in the paper). Or, one must verify this with enormous amounts of experiments on various benchmark datasets so readers can agree that this constraint would generally work in many practical situations. If this is not resolved, I cannot give high scores. \n\nAnother issue is the confusing usage of the terminology “unsupervised”. In fact, this is an “unpaired” setup, not “unsupervised”. Many recent papers have addressed this misusage, the first of which is TUNIT in ICCV 2021. Please revise the term accordingly.\n\n\nThe subscript index “i” is used without defining it. One needs to wait until seeing eq (4) to know what it is. \n\nThere are several typos in the paper: \n\nLine 104, there are can -> there can\n\nLine 147, {b_i^l}-> {c_i^l},\n\nLine 153 Please note that a^l and c^l represents -> b^l represents.\n\nLine 189 L_mle -> L_nll \n\nLine 285, Table 2 -> Table 1\n\nIt is not clear if the density estimators are trained together or separately. Looking at line 189, it seems like the estimators are trained together. Please clarify this more. If this is the case, wouldn’t this be too unstable? What happens in the early phase of the density estimator working poorly? \n\nFigure 4 and its description in the “learned densities” paragraph are not easy to understand. How should it be read? 
It is not clear what the densities under the input semantic maps (first column, second row) in Figure 4 mean (similarly for Truth, last column, second row).\n\n--\nAfter the rebuttal, I changed my score from borderline reject to borderline accept. Please see the above comments. Please see the above comments.", " In this paper, the authors make a density-changing assumption where image patches of high probability density should be mapped to patches of high probability density in another domain. Then the authors propose an efficient way to enforce this assumption: they train the flows as density estimators and penalize the variance of density changes. Strengths: \nThis paper is well written and easy to understand.\n\nWeaknesses:\n1. The results of mAP, pAcc, and cAcc are not consistent with those in the original QS-Attn paper.\n2. The results of SWD on both Cat2Dog and Horse2Zebra should be included and compared in Table 1.\n3. User study results should be provided.\n4. Qualitative ablation results should be provided.\n5. More attention-based GAN methods such as [1,2] should be included and compared in Tables 1 and 2.\n6. From Figure 3, the proposed method does not significantly improve the performance of the label2city, horse2zebra, and anime datasets.\n7. Results of more methods should be provided in Figure 3, especially the label2city and anime datasets.\n8. Results of SOTA methods should be provided and compared in Figure 4.\n\n[1] Kim, Junho, Minjae Kim, Hyeonwoo Kang, and Kwanghee Lee. \"U-GAT-IT: Unsupervised generative attentional networks with adaptive layer-instance normalization for image-to-image translation.\" ICLR 2020.\n[2] Tang, Hao, Hong Liu, Dan Xu, Philip HS Torr, and Nicu Sebe. \"AttentionGAN: Unpaired image-to-image translation using attention-guided generative adversarial networks.\" IEEE Transactions on Neural Networks and Learning Systems (2021). See Weaknesses. The authors do not discuss the potential negative societal impact of their work.", " The paper proposes a solution for the unsupervised image-to-image translation problem which is based on matching the distribution of generated patches in the output domain Y to that of the input patch distribution in the source domain X. To this end, the paper proposes a loss that aims to minimize the variance between the density of patches in X and the corresponding density in domain Y. The loss is based on density estimation using an auto-regressive flow model applied on patch representations at different layers. The paper demonstrates superiority on a number of datasets in comparison to baseline methods. Strengths:\n1. The paper is well written and clearly illustrates the motivation behind the proposed solution for the unsupervised image-to-image problem. \n2. The main technical contribution is the density loss. The motivation for using flow models to match the distribution of patches from the input distribution and the generated ones makes sense. I find the simple realization of the main idea through a single loss term to be a strength of the paper, especially as it leads to clear improvements. \n3. The evaluation is relatively thorough, comparing to recent methods and direct alternatives on a range of datasets, both quantitatively and qualitatively. \n\nWeaknesses:\n1. The paper makes an assumption regarding the fact that patch distributions should be matched. While this is discussed in the limitations, I think it should be stated up front and discussed already in the introduction. 
The method will only work where the assumption holds. \n2. An ablation study into the size of patches would be useful to understand their effect. In the limit, one could consider pixels or the entire image. \n\n 1. An assessment of the effect of patch size on the output. \n2. An assessment of the limitations, such as examples or analysis of the cases where the assumption does not hold. The authors adequately addressed the limitations and potential negative societal impact. However, it may be useful to further illustrate, qualitatively or visually, the cases where the assumption made does not hold. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 4 ]
[ "9RRf8OzKfNx", "9RRf8OzKfNx", "U0AxNPcPvO", "CoOhu0VGvQo", "JJMMXmuiXDn", "UjK5pXvv39", "ZHcUB4MHQpD", "nips_2022_RNZ8JOmNaV4", "dsvHdbAIGI9", "ZHcUB4MHQpD", "UjK5pXvv39", "qoGAoZhH7Yd", "nips_2022_RNZ8JOmNaV4", "nips_2022_RNZ8JOmNaV4", "nips_2022_RNZ8JOmNaV4", "nips_2022_RNZ8JOmNaV4" ]
nips_2022_aXf9V5Labm
Network change point localisation under local differential privacy
Network data are ubiquitous in our daily life, containing rich but often sensitive information. In this paper, we expand the current static analysis of privatised networks to a dynamic framework by considering a sequence of networks with potential change points. We investigate the fundamental limits in consistently localising change points under both node and edge privacy constraints, demonstrating an interesting phase transition in terms of the signal-to-noise ratio condition, accompanied by polynomial-time algorithms. The private signal-to-noise ratio conditions quantify the cost of privacy for change point localisation problems and exhibit a different scaling in the sparsity parameter compared to the non-private counterparts. Our algorithms are shown to be optimal under the edge LDP constraint up to log factors. Under the node LDP constraint, a gap exists between our upper and lower bounds, and we leave it as an interesting open problem, echoing the challenges in high-dimensional statistical inference under LDP constraints.
Accept
This paper considers the important problem of change point detection in networks under local differential privacy. The paper provides bounds for the problem under both node and edge privacy constraints. While the bounds are not matching in all cases, the results are timely and will be interesting to many researchers.
train
[ "c1pE1HQ2aB0", "zmOcyMkRRn", "wDEc_QZttEX", "2XUT2z39O6mB", "VCIBL4GbXDu", "Ls1YITo1ECa", "0TLJBwgDntU", "_O_8NHs8Y6" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your appreciation and constructive comments. We reply to all your comments and questions point-by-point in the following. We have submitted revised main text file and supplementary materials.\n\n**On the network models**\n\nThanks for giving us the opportunity to further elaborate on our motivation. In this paper, we consider two types of network models. Both of them are inhomogeneous which means that each edge can be connected with different probabilities. \n\nThe inhomogeneous Bernoulli network (IBN) detailed in Definition 1 is a fairly general definition, which includes a wide range of important network models as its special case, including Erd\\H{o}s--R\\'enyi random graph models, stochastic block models, mixed membership block models, random dot product graph models and general random dot product graph models. Our results can serve as a benchmark for future work on these models with special structures under local differential privacy constraints. In our paper, we focus on the case that IBNs are undirected networks, but the analysis can also be extended to directed networks. We use this model to emphasise the edge LDP concept, where a pair of nodes/individuals jointly own a unit of information. In application, sensitive networks such as sexually transmitted disease networks motivated us to adopt this model, as described at the beginning of Section 2.1. \n\nAnother commonly encountered type of network appears in recommender systems, where a network encodes the relationship between a set of users and a set of commodities. In particular, each edge here is owned by a single user rather than a pair of users. This is widely seen in the context of Netflix and Amazon types data, which motivated our study as described at the beginning of Section 2.2. This type of network can be modelled by a bipartite IBN model (Definition 2) where dependence among each user's choice over products can also be accommodated, and we argue in the last paragraph of Section 2.2 that it is convenient to consider this type of network under node LDP constraint which is stronger than the notion of edge LDP. \n\n**Numerical results**\n\nFollowing your suggestions, we provide some representative simulation results here, with more details in the revised supplementary materials, including plots of the results. \n- Setting. We generate a sequence of $T$ independent IBNs or bipartite IBNs when considering node LDP, with the network size $n_1 = n_2 = n = 50$ and entrywise sparsity level $\\rho = 0.4$. There is one and only one change point with a balanced spacing, i.e.~the change point $\\eta = \\Delta = T/2$, where $\\Delta$ is the minimal spacing. The expectations of the adjacency matrix before and after change point are $\\Theta_{\\text{pre}} = 0.1 \\times 1_{n \\times n}$ and $\\Theta_{\\text{post}} = 0.4\\times 1_{n \\times n}$, respectively, where $1_{n \\times n} \\in \\mathbb{R}^{n \\times n}$ has all entries being one. The normalised jump size is therefore $\\kappa_0 = \\|\\Theta_{\\text{post}} - \\Theta_{\\text{pre}}\\|_{\\text{F}}/(n\\rho) = 0.75$. We consider different minimal spacing $\\Delta$ and privacy budget $\\alpha$ in the simulations.\n- Method. We use a simplified version of NBS algorithm based on the binary segmentation procedure. For small number of change points, our theory still holds for this computationally less demanding algorithm. 
The thresholding tuning parameter, above which change points are declared, is fixed to be $n\\log^{1.5}(T)/10$, $n\\log^{1.5}(T)/30$ and $n^2\\log^2(n^2T)/10$ in the no privacy, edge LDP and node LDP cases, respectively. \n- Metric. Let the estimated set of change points be $\\{\\widehat{\\eta}_i\\}_{i=1}^{\\widehat{K}}$. We use $\\max_i |\\widehat{\\eta}_i - \\eta|/\\Delta \\in [0,1]$ to evaluate the performance. If no change point is returned, we set this error to one. \n- Result. The result is collected in the table below, each cell of which is the **median** over 100 repetitions. Without any privacy constraint, i.e. using raw data, the change can be easily detected with $\\Delta$ as small as $7$. Imposing privacy guarantees requires a larger $\\Delta$ to consistently localise the change points. We can see that for the same sample size, the performance deteriorates as $\\alpha$ decreases under edge LDP. The node LDP is a more stringent requirement, compared to the edge LDP. From the last three columns, we can see that with the same sample size the change can be perfectly localised with no error in the no privacy case, and very well localised under edge LDP with $\\alpha = 0.1$, but to obtain a reasonable estimator, the node information can only be protected at level $\\alpha = 1$. \n\n| $\\Delta$ | 7 | 15 | 23 | 700 | 1100 | 1500 |\n| ---- | --- | --- | ---- | ----- | ---- | ---- |\n| No privacy | 0.143 | 0.091 | 0.091 | 0.000 | 0.000 | 0.000 |\n| Edge LDP $\\alpha = 0.5$ | 0.429 | 0.429 | 0.273 | 0.000 | 0.000 | 0.000 |\n| Edge LDP $\\alpha = 0.1$ | 1.000 | 1.000 | 1.000 | 0.018 | 0.007 | 0.003 |\n| Node LDP $\\alpha = 1$ | 1.000 | 1.000 | 1.000 | 0.897 | 0.175 | 0.084 |\n", " Thank you very much for your appreciation and constructive comments. We reply to all your comments and questions point-by-point in the following. We have submitted a revised main text file and supplementary materials.\n\n**Novelty**\n\nYou are indeed right that the methods we adopt in the paper are from existing literature. We in fact see this as a strength rather than a lack of novelty. We adopt existing methods to study a new and important problem, while deriving new insights for existing popular privacy mechanisms along the way (e.g. Lemma 5). The message that simple methods such as randomised response can be minimax optimal is useful for the community. \n\n**Numerical results**\n\nFollowing your suggestions, we provide some representative simulation results in the responses, with more details in the revised supplementary materials, including plots of the results. \n\n- Setting. We generate a sequence of $T$ independent IBNs or bipartite IBNs when considering node LDP, with the network size $n_1 = n_2 = n = 50$ and entrywise sparsity level $\\rho = 0.4$. There is one and only one change point with a balanced spacing, i.e.~the change point $\\eta = \\Delta = T/2$, where $\\Delta$ is the minimal spacing. The expectations of the adjacency matrix before and after the change point are $\\Theta_{\\text{pre}} = 0.1 \\times 1_{n \\times n}$ and $\\Theta_{\\text{post}} = 0.4\\times 1_{n \\times n}$, respectively, where $1_{n \\times n} \\in \\mathbb{R}^{n \\times n}$ has all entries being one. The normalised jump size is therefore $\\kappa_0 = \\|\\Theta_{\\text{post}} - \\Theta_{\\text{pre}}\\|_{\\text{F}}/(n\\rho) = 0.75$. We consider different minimal spacings $\\Delta$ and privacy budgets $\\alpha$ in the simulations.\n- Method. We use a simplified version of the NBS algorithm based on the binary segmentation procedure. 
For a small number of change points, our theory still holds for this computationally less demanding algorithm. The thresholding tuning parameter, above which change points are declared, is fixed to be $n\\log^{1.5}(T)/10$, $n\\log^{1.5}(T)/30$ and $n^2\\log^2(n^2T)/10$ in the no privacy, edge LDP and node LDP cases, respectively. \n- Metric. Let the estimated set of change points be $\\{\\widehat{\\eta}_i\\}_{i=1}^{\\widehat{K}}$. We use $\\max_i |\\widehat{\\eta}_i - \\eta|/\\Delta \\in [0,1]$ to evaluate the performance. If no change point is returned, we set this error to one. \n- Result. The result is collected in the table below, each cell of which is the **median** over 100 repetitions. Without any privacy constraint, i.e. using raw data, the change can be easily detected with $\\Delta$ as small as $7$. Imposing privacy guarantees requires a larger $\\Delta$ to consistently localise the change points. We can see that for the same sample size, the performance deteriorates as $\\alpha$ decreases under edge LDP. The node LDP is a more stringent requirement, compared to the edge LDP. From the last three columns, we can see that with the same sample size the change can be perfectly localised with no error in the no privacy case, and very well localised under edge LDP with $\\alpha = 0.1$, but to obtain a reasonable estimator, the node information can only be protected at level $\\alpha = 1$. \n\n| $\\Delta$ | 7 | 15 | 23 | 700 | 1100 | 1500 |\n| ---- | --- | --- | ---- | ----- | ---- | ---- |\n| No privacy | 0.143 | 0.091 | 0.091 | 0.000 | 0.000 | 0.000 |\n| Edge LDP $\\alpha = 0.5$ | 0.429 | 0.429 | 0.273 | 0.000 | 0.000 | 0.000 |\n| Edge LDP $\\alpha = 0.1$ | 1.000 | 1.000 | 1.000 | 0.018 | 0.007 | 0.003 |\n| Node LDP $\\alpha = 1$ | 1.000 | 1.000 | 1.000 | 0.897 | 0.175 | 0.084 |\n\n**Presentation**\n\nThank you for the suggestion of adding some informal theorems in the introduction. After some thought, we added a table summarising the main results in the Conclusion section rather than in the Introduction section, due to the difficulty of introducing any form of results without formal definitions of different privacy concepts. The notions of local differential privacy on network data are still ambiguous in the existing literature and our paper contributes to this discussion. \n\n**Edge dependence**\n\nThis is indeed a very valuable suggestion. In the network statistics literature, even without privacy concerns, incorporating the dependence among edges is an important yet open task. Due to the lack of a natural distance among edges, it is often hard to directly adopt the dependence assumptions used in time series or spatial statistics. One way to impose edge dependence is to assume a hierarchical structure in modelling, for instance, random dot product graphs, where each node is associated with a latent position. The statistical analysis involved usually requires a singular value decomposition of the whole adjacency matrix, and corresponding results under local differential privacy need to be developed. We will consider such models in our future work. ", " Thank you very much for your appreciation and constructive comments. We reply to all your comments and questions point-by-point in the following. We have submitted a revised main text file and supplementary materials.\n\n**Numerical results**\n\nFollowing your suggestions, we provide some representative simulation results here, with more details in the revised supplementary materials, including plots of the results. \n- Setting. 
We generate a sequence of $T$ independent IBNs or bipartite IBNs when considering node LDP, with the network size $n_1 = n_2 = n = 50$ and entrywise sparsity level $\\rho = 0.4$. There is one and only one change point with a balanced spacing, i.e.~the change point $\\eta = \\Delta = T/2$, where $\\Delta$ is the minimal spacing. The expectations of the adjacency matrix before and after the change point are $\\Theta_{\\text{pre}} = 0.1 \\times 1_{n \\times n}$ and $\\Theta_{\\text{post}} = 0.4\\times 1_{n \\times n}$, respectively, where $1_{n \\times n} \\in \\mathbb{R}^{n \\times n}$ has all entries being one. The normalised jump size is therefore $\\kappa_0 = \\|\\Theta_{\\text{post}} - \\Theta_{\\text{pre}}\\|_{\\text{F}}/(n\\rho) = 0.75$. We consider different minimal spacings $\\Delta$ and privacy budgets $\\alpha$ in the simulations.\n- Method. We use a simplified version of the NBS algorithm based on the binary segmentation procedure. For a small number of change points, our theory still holds for this computationally less demanding algorithm. The thresholding tuning parameter, above which change points are declared, is fixed to be $n\\log^{1.5}(T)/10$, $n\\log^{1.5}(T)/30$ and $n^2\\log^2(n^2T)/10$ in the no privacy, edge LDP and node LDP cases, respectively. \n- Metric. Let the estimated set of change points be $\\{\\widehat{\\eta}_i\\}_{i=1}^{\\widehat{K}}$. We use $\\max_i |\\widehat{\\eta}_i - \\eta|/\\Delta \\in [0,1]$ to evaluate the performance. If no change point is returned, we set this error to one. \n- Result. The result is collected in the table below, each cell of which is the **median** over 100 repetitions. Without any privacy constraint, i.e. using raw data, the change can be easily detected with $\\Delta$ as small as $7$. Imposing privacy guarantees requires a larger $\\Delta$ to consistently localise the change points. We can see that for the same sample size, the performance deteriorates as $\\alpha$ decreases under edge LDP. The node LDP is a more stringent requirement, compared to the edge LDP. From the last three columns, we can see that with the same sample size the change can be perfectly localised with no error in the no privacy case, and very well localised under edge LDP with $\\alpha = 0.1$, but to obtain a reasonable estimator, the node information can only be protected at level $\\alpha = 1$. \n\n| $\\Delta$ | 7 | 15 | 23 | 700 | 1100 | 1500 |\n| ---- | --- | --- | ---- | ----- | ---- | ---- |\n| No privacy | 0.143 | 0.091 | 0.091 | 0.000 | 0.000 | 0.000 |\n| Edge LDP $\\alpha = 0.5$ | 0.429 | 0.429 | 0.273 | 0.000 | 0.000 | 0.000 |\n| Edge LDP $\\alpha = 0.1$ | 1.000 | 1.000 | 1.000 | 0.018 | 0.007 | 0.003 |\n| Node LDP $\\alpha = 1$ | 1.000 | 1.000 | 1.000 | 0.897 | 0.175 | 0.084 |\n\n**On the randomised response mechanism**\n\nThanks for pointing out this relevant literature, which we have cited in our revision.\n\n> [1] Mohamed, M. S., Nguyen, D., Vullikanti, A., & Tandon, R. (2022, June). Differentially Private Community Detection for Stochastic Block Models. In International Conference on Machine Learning (pp. 15858-15894). PMLR.\n\nIn the following, we first comment on the use of the randomised response (RR) mechanism in our paper and then discuss the connection with [1].\n\nIn our paper, the simple RR mechanism is shown to be optimal in the edge privacy case, but *sub-optimal* in the node privacy case. The sub-optimality of RR in the node privacy case motivates our study of the more involved sampling mechanism in change point analysis. 
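For concreteness, a minimal sketch of the entrywise randomised response privatisation discussed here (illustrative only; the function and variable names are our own and not from the paper):

```python
import numpy as np

def randomised_response(A, alpha, rng=None):
    # Privatise a binary (bi)adjacency matrix entry by entry: keep the true
    # bit with probability e^alpha / (1 + e^alpha) and flip it otherwise,
    # so that each released entry satisfies alpha-edge-LDP.
    rng = np.random.default_rng() if rng is None else rng
    p_keep = np.exp(alpha) / (1.0 + np.exp(alpha))
    flip = rng.random(A.shape) >= p_keep
    return np.where(flip, 1 - A, A)
```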
To be specific, applying RR with parameter $\\alpha/n_2$ to each entry of the network can achieve $\\alpha$-level node privacy. We consider the simple case that $n_1 = n_2 = n$ and ignore logarithmic factors. Similar arguments as those in the proof of Theorem 3 show that, to consistently localise change points, an RR-based method would require the condition\n\\begin{align*}\n \\kappa_0^2 \\rho^2 \\gtrsim \\frac{n}{\\Delta\\alpha^2}. \n\\end{align*}\nOur proposed method improves the previous condition by a factor of $n$ to \n\\begin{align*}\n\\kappa_0^2 \\rho^2 \\gtrsim \\frac{1}{\\Delta\\alpha^2}.\n\\end{align*}\n\nIn [1], RR is shown to be sub-optimal under edge *central* differential privacy for community detection. In our paper, RR is shown to be optimal under edge *local* differential privacy for change point analysis. Under central differential privacy, a central data curator has access to all the raw data. As for local differential privacy, the raw data can only be accessed by the owners of the data. The existence of a central data curator enables algorithms to borrow information from other data points, which changes the fundamental limit of the problem.", " Thank you very much for your appreciation and constructive comments. We reply to all your comments and questions point-by-point in the following. We have submitted a revised main text file and supplementary materials.\n\n**Presentation of privacy definitions**\n\nIn the revision, we have changed the notations in Section 2 to improve readability.\n\n**Q1: Assumptions 1 & 2**\n\nYou are indeed right that these two assumptions contain notation definitions and overlap substantially. In the revision, we have merged these two assumptions following your suggestion. We however keep the notation definition within the Assumption to save space. \n\n**Q2: the number of change points**\n\nWe seek consistent estimators satisfying that \n\\begin{align*}\n \\Delta^{-1} \\max_{k = 1}^K |\\widehat{\\eta}_k - \\eta_k| \\to 0 \\quad \\mbox{and} \\quad \\widehat{K} = K,\n\\end{align*}\nas the sample size $T$ grows unbounded. The number of change points $K$ is also allowed to be a function of $T$, which means that when $T$ grows unbounded, $K$ is also allowed to diverge. The only condition required regarding $K$ can be inferred from the signal-to-noise ratio condition. We use the edge privacy case as an example to explain. In eq.~(9) in Theorem 3, we require that \n\\begin{align*}\n \\kappa_0^2 \\rho^2 n \\Delta \\alpha^2 \\geq c_0 \\log^{2+\\xi}(T).\n\\end{align*}\nThis means that we require the minimal spacing between two consecutive change points $\\Delta$ to satisfy\n\\begin{align*}\n \\Delta \\gtrsim \\log^{2+\\xi}(T)/(\\kappa_0^2 \\rho^2 n \\alpha^2).\n\\end{align*}\nSince $K \\leq T/\\Delta$, this implies that the number of change points needs to be bounded by\n\\begin{align*}\n K \\lesssim T \\kappa_0^2 \\rho^2 n \\alpha^2/\\log^{2+\\xi}(T).\n\\end{align*}\n\n**Q3: notation $A$**\n\nThanks for pointing this out. In the revision, we have changed the notation for the set $A$ to $S$, while leaving $A$ to denote adjacency or biadjacency matrices.\n\n**Q4.1: privacy mechanism $Q$**\n\nThe privacy mechanism $Q$ is a *randomised* mechanism, so it is indeed a probability measure. It is in fact a conditional distribution which, conditioning on the raw data, outputs privatised data. 
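As a toy numerical illustration of this point (our own notation, not from the paper), the binary randomised response channel is such a conditional distribution, and the likelihood ratio bound can be checked directly:

```python
import numpy as np

def Q(z, x, alpha):
    # Q(Z = z | X = x): binary randomised response reports the true bit
    # with probability e^alpha / (1 + e^alpha).
    p_keep = np.exp(alpha) / (1.0 + np.exp(alpha))
    return p_keep if z == x else 1.0 - p_keep

alpha = 0.5
worst_ratio = max(Q(z, x, alpha) / Q(z, xp, alpha)
                  for z in (0, 1) for x in (0, 1) for xp in (0, 1))
assert worst_ratio <= np.exp(alpha) + 1e-12
```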
The local differential privacy definition imposes directly on the likelihood ratio, as stated in eq.~(1), that\n\\begin{align*}\n Q_i(Z_i \\in S | X_i = x)/Q_i(Z_i \\in S | X_i = x') \\leq e^{\\alpha},\n\\end{align*}\nwhere $X_i$ is a piece of generic notation that refers to the data point provided by user $i$. \n\n**Q4.2: notation $X$ in Section 2**\n\nThanks for pointing this out. The notation $X$ and its related quantities indeed correspond to the adjacency matrix, and we did use $X_i$ to denote the $i$-th row of the adjacency matrix. In the rebuttal revision, we have changed $X$ and its related quantities to $A$ and its related quantities to improve readability. We still use $X_i$ in equation (1), since it denotes a general LDP definition. \n\n**Q4.3.1: previous work and eq. (3) in our paper**\n\nThanks for the opportunity to elaborate on this subtle point. Previous work allows user $i$ to privatise the information owned jointly by themselves and other users, and this requires trust in user $i$ not to leak information about others. The core of local privacy is to allow minimal trust among users. To overcome the issue that a single corrupted user could affect other users' privacy guarantees, in our edge LDP part, we privatise each edge individually, with the consent/decision of both involved individuals $i$ and $j$. \n\n**Q4.3.2: edge vs. node privacy**\n\nOne way to explain the difference between edge LDP and node LDP is to investigate how the privacy budget is spent. The edge LDP definition (Definition 3) requires that\n\\begin{align*}\nQ_{ij}^{(t)}(Z_{ij}^{(t)} \\in S | A_{ij}^{(t)} = x)/ Q_{ij}^{(t)}(Z_{ij}^{(t)} \\in S | A_{ij}^{(t)} = x')\\leq e^{\\alpha}\n\\end{align*}\nfor any $x, x' \\in \\{0,1\\}$, which says that each single edge is protected at level $\\alpha$. In the bipartite node LDP definition (Definition 4), it requires $n_2$ edges to be protected at level $\\alpha$. In other words, in the node privacy case, the privacy budget needs to be split among $n_2$ edges. To be specific, one can, although inefficiently, privatise each edge at level $\\alpha/n_2$ to satisfy the node LDP definition at level $\\alpha$. Since smaller $\\alpha$ means a stronger privacy guarantee, each edge is *more* private under node LDP than under edge LDP. A more intuitive explanation is as follows. Under edge LDP, one wants to make any two graphs that differ by one edge look similar after privatisation. Under node LDP, one wants to make any two graphs that differ by one node's connections look similar after privatisation. The level of similarity is quantified by $\\alpha$.", " The paper looks at the problem of finding change points of network sequences for two network models under edge and node LDP. For each setting, it shows the feasible regime and an algorithm that guarantees LDP. Strength: The paper looks at an important problem which hasn't been worked on before.\n\nWeakness: The privacy definitions are a bit hard to follow. I mainly have some questions regarding the setup.\n\n- I'm a bit confused by Assumption 1 & 2. I think they define the change point and some properties of the change point for the two networks. But isn't there a unified definition for any network? Why would we need two? (And why are they \"assumptions\" and not \"definitions\"?)\n\n- Line 118 says we want the estimated change points to be close to the actual change points as sample size T grows. Is that under the assumption that the number of change points K does not grow with T?\n\n- I think $A$ is used for both the adjacency matrix and the set $A \\subset Z$. 
Maybe you can consider changing the notation.\n\n- For the privacy definitions in Section 2\n - what is Q? If it is a mechanism (as is stated in Line 144), I feel like it should be a function that operates on the input, so the expression should be something like Z = Q(X)? But the way it is used in the formula makes it seem like some probability measure.\n - what is X? Is that the adjacency matrix (so the same as A in Section 1)? And what is $X_i$? Is that one row of the adjacency matrix?\n - Regarding the edge-LDP definition:\n - You mentioned that some prior work is looking at $X_i$ instead of $X_{ij}$ and it requires trust between nodes. I didn't follow that part. Does that refer to the case where user $i$ is supposed to privatize the edge (or lack of edge) between $i$ and $j$? But if so, wouldn't (3) also have the same problem?\n - I don't quite follow the definition. I think what you mean is that for user i, even if every {i, j} changes, we would still be able to hide the change. But if so, isn't that the same as node-LDP? Yes.", " This paper studies the problem of change detection (or change point localization as mentioned by the authors) in dynamically evolving graphs subject to differential privacy. Focusing on two generative models, namely, inhomogeneous Bernoulli networks (IBNs) and bipartite IBNs, conditions are provided under which change localization is infeasible. Subsequently, a simple randomized response mechanism is studied (applied together with a CUSUM-type statistic for change detection) and analyzed. + This is a technically strong and very well written paper, which I enjoyed reading. I very much appreciated the care taken by the authors in describing the similarities/differences in techniques which they borrowed/adapted from prior works. In addition, some of the discussions along with the technical results (such as the impact of privacy constraints on the graph sparsity) were quite intriguing. Overall, I believe this could be a useful contribution to the broader ML community working on the area of privacy-preserving graph-based algorithms. \n\nSome minor weaknesses and suggestions for the authors:\n\n- While I understand that this is a theoretically oriented paper, and the technical results are solid, I would have still liked to see some numerical results to actually see the finite sample performance of the proposed algorithm (RR + CUSUM) on some dynamic networks in practice. \n\n- I would like to point out a somewhat related work on \"Differentially Private Community Detection For Stochastic Block Models\", ICML 2022, which also derives some tradeoffs between privacy and the phase-transition boundary (for community recovery). In contrast to the conclusions of this submission (which show that RR is almost optimal for edge LDP), the above reference shows that RR may not be a good choice for private community detection with edge DP, as it significantly increases the average node degree (and other mechanisms, such as exponential and stability-based methods, can fare better and provide a better privacy-recovery tradeoff). This is a complementary viewpoint, which the authors may wish to discuss in the paper, i.e., what (if any) are the limitations of RR-based perturbations. \n See comments above Yes, limitations were appropriately discussed. ", " This paper studies the problem of change point detection in sequences of networks/graphs with respect to local differential privacy constraints (respectively edge and node LDP). 
Under assumptions on the random structure of the graphs (IBN for instance), the authors establish minimax rates for change point detection using Le Cam’s method. Then, the authors show that using randomized response to obtain edge privacy alongside the NBS algorithm obtains the minimax rate for change point detection. The authors show similar results for node LDP for bipartite IBNs. Strengths: To the best of my knowledge this problem has not been attempted before. Moreover, the need for edge/node LDP is clearly presented in the paper with practical examples. The problem is concisely contained, as both lower and upper bound rates are presented for the change point estimation. The paper was clearly presented, although presenting the key results/rates as an informal theorem, e.g. in the introduction, may be useful. There is also thorough discussion in the conclusion of why the techniques do not generalize to the interactive setting.\n\nWeaknesses: The problem discussed and approaches used in the paper are not particularly novel, as the paper is concerned with taking existing statistical procedures and studying them through the lens of (local) privacy. Moreover, it would be interesting to have some experimentation to demonstrate the empirical performance of these estimators when run on privatized networks. Lastly, the random structure assumed on the families of graphs (IBN, bipartite IBN) seems somewhat restrictive, especially when considering practical network data (e.g. is it realistic to assume that network connections are independently generated?). Is there a way to incorporate correlation between the network edge indicators into this result (outside of that included in the bipartite IBN)?\n As aforementioned, it may be valuable to introduce an informal theorem for your main results in the introduction of the paper. Other questions and comments can be found above. The authors discuss the limitations of their paper in the conclusion. In particular, they point out that their paper and arguments do not go through in the setting of interactive LDP. Investigating the problem in this setting seems important, especially for network data where one would expect the edge/node inclusion probabilities to be correlated.", " In this paper, the authors consider consistently localizing change points under both node and edge local differential privacy (LDP). The authors propose an algorithm under edge LDP that is shown to be optimal. For node LDP, the authors show that a gap exists between the upper and lower bounds. Strengths:\nS1. The work seems solid. I didn't check all the technical details, but the formal definitions and results seem to be well executed.\nS2. The authors provide theoretical results for the problems.\n\nWeaknesses:\nW1. I did not fully get the reason why we should focus on these problems. I can understand that network data is important, but I do not see why we focus on these two network models. The paper could benefit from better motivating why these problems are important in practice, giving more concrete examples of the two network definitions, and explaining how closely these two definitions model real-world scenarios.\n Q1. Is it possible to provide some empirical evaluation since you have provided an algorithm? The authors adequately addressed the limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, 6, 7, 6, 5 ]
[ -1, -1, -1, -1, 2, 5, 3, 2 ]
[ "_O_8NHs8Y6", "0TLJBwgDntU", "Ls1YITo1ECa", "VCIBL4GbXDu", "nips_2022_aXf9V5Labm", "nips_2022_aXf9V5Labm", "nips_2022_aXf9V5Labm", "nips_2022_aXf9V5Labm" ]
nips_2022_qfC1uDXfDJo
Annihilation of Spurious Minima in Two-Layer ReLU Networks
We study the optimization problem associated with fitting two-layer ReLU neural networks with respect to the squared loss, where labels are generated by a target network. Use is made of the rich symmetry structure to develop a novel set of tools for studying the mechanism by which over-parameterization annihilates spurious minima. Sharp analytic estimates are obtained for the loss and the Hessian spectrum at different minima, and it is shown that adding neurons can turn symmetric spurious minima into saddles through a local mechanism that does not generate new spurious minima; minima of smaller symmetry require more neurons. Using Cauchy's interlacing theorem, we prove the existence of descent directions in certain subspaces arising from the symmetry structure of the loss function. This analytic approach uses techniques, new to the field, from algebraic geometry, representation theory and symmetry breaking, and rigorously confirms the effectiveness of over-parameterization in making the associated loss landscape accessible to gradient-based methods. For a fixed number of neurons and inputs, the spectral results remain true under symmetry breaking perturbation of the target.
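Editorial gloss on this abstract: a sketch of the population objective it refers to, reconstructed from the reviews and author responses recorded below (committee-machine reduction with second-layer weights fixed to ones, Gaussian input, target weights $v_1,\dots,v_d$). The exact normalization and notation here are assumptions, not quoted from the paper:

$$\mathcal{L}(W)\;=\;\mathbb{E}_{x\sim\mathcal{N}(0,I_d)}\Big[\Big(\sum_{i=1}^{k}\max(\langle w_i,x\rangle,0)\;-\;\sum_{j=1}^{d}\max(\langle v_j,x\rangle,0)\Big)^{2}\Big],\qquad k\ge d.$$

The "loops"-free structure of $\mathcal{L}$ as a function of the student rows $w_i$ is what carries the $S_k\times S_d$ symmetry discussed throughout the thread.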
Accept
Thank you for your submission to NeurIPS. This paper is on the structure of critical points and local minima in over-parameterized two-layer ReLU neural networks. The reviewers and I, after the author response, are in agreement that there are interesting contributions in this work. Four knowledgeable reviewers recommend accept/borderline accept, and I concur, in light of the contributions made. However, the reviewers noted several significant weaknesses in the presentation: (1) the technical terms and notation used in this paper are hard to follow, which makes the paper not easily accessible; (2) the paper assumes prior knowledge from previous works and is not highly self-readable; (3) the theoretical assumptions are strong; and (4) it is not clear whether the results can give insights on practical neural networks. Moreover, the analysis techniques are non-standard (to most ML theorists, in my opinion). The reviewers most likely did not check the proofs, but feel confident about the mathematical rigor. The statement of Theorem 1 looks informal; it should either be made more rigorous or be explicitly flagged as an informal version of a formal result that appears later. Please take into account the updated reviewer comments when preparing the final version to accommodate the requested changes.
train
[ "CCTR6v2SVH", "qSphkv8c1E8", "ijxoEutrwl", "39VAmhwLhpO", "SAI2Rbc8dgO8", "FZXi_9BDoEw", "5bwg4Z9GpNW", "AszRdcIRv_", "oUF3J8JUMQs", "F7AEQ9qasMT", "XICjZLZTIZ", "62F1HRKHbgt", "rKPihoqlTHf", "N6Jg8TxcbqH", "GwXQfc4OBjd" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed responses.\n\n&nbsp;\n\nStatistical methods, often from theoretical physics, have so far not been particularly successful in explaining phenomena seen in neural networks. For example, explaining how adding neurons can remove spurious minima---the main focus of our paper. Our approach is to start from an analytically tractable case---one already acknowledged as difficult: ``As far as we know, an analytical expression for the roots might not even exist'' (Safran & Shamir, [Introduction, 29]).\n\n&nbsp;\n\nAs indicated in the concluding comments, the results proved hold under symmetry breaking perturbations: \n*we are describing phenomena that are robust and so already our results have the power to disprove or support general conjectures in the field* (for example, on Hessian spectra [5,6]). At this point, we feel it is premature to attempt a `general theory' since our focus is still partly exploratory though guided by properties of the symmetric models that can be expected to hold in more general situations where symmetry cannot be invoked. \n\n&nbsp;\n\nAt this stage, we want to try and keep things relatively simple but certainly agree with the need to expand the scope to include neural networks trained on finite data. Following a recent series of works [4, 26, 35, 29] and Hardt et al., 2016; Hardt & Ma, 2016, we focus on the generalization error. Various properties of the training loss can be readily deduced by concentration of measure arguments, e.g., using generalized vector-valued Rademacher complexity (e.g., Foster et al. 2018, Mei et al. 2017).\n", " I again thank the authors (! this time correctly addressed) for their clarifying reply. \n\nIf I understand correctly, the generalization to trainable second layers only applies if all second layer converge duuring training to positive weights, a limitation that I think should be mentioned. Secondly, the necessary theoretical assumptions on the symmetries in the underlying input distribution still seem rather strong to me. The example of (isotropy groups of) local minima under consideration reflect (and require) the symmetries in the input distribution. The theoretical setting is therefore different to neural networks trained on finite data and it may be difficult to relate the findings to local minima and saddle points in the more practical setting; a relation I was hoping for when asking for a comparison to related work. \n\nAs a result, my evaluation of the paper remains unchanged. As a theoretical paper, the submission certainly offers novel techniques to prove novel, theoretical results. However, the applicability of the theory seems to require strong theoretical assumptions, and it remains unclear to me how the techniques and results can be carried over to a more realistic setting. Taken together, my evaluation remains that this is a \"technically solid paper where reasons to accept outweigh reasons to reject\". \n\nI appreciate the author's intent to extend and polish the presentation to make the paper more self-contained, which can contribute to the understanding of readers not sufficiently familiar with previous work on symmetry-breaking.", " > I thank the authors for their detailed reply. A few points that may be worth discussing:\n\nThank you for your questions and remarks. They have given us an opportunity to better understand how different parts of our paper are perceived, and to hopefully improve the presentation of our work. \n\nAll points below shall be clarified in the text. 
\n\n&nbsp;\n\n> I do not understand the clarification in the main comment 3. The activation function is only positive homogeneous, so the second layer can only be reduced to a vector of (plus) ones (using homogeneity) if all weights are positive.\n\nFor the concrete families of minima investigated in [6], the weights of the second layer are positive, and can therefore be rescaled to plus ones. Critical points with negative weights in the second layer are not covered in [6]. (Indeed, more and more critical points with different structure and isotropy are being discovered and classified as investigation progresses; see also general comment 4 above.) All families of minima studied in the present work have positive second-layer weights, and are therefore amenable to the same (straightforward) reduction used in [6]. \n \n&nbsp;\n\n> I would argue that it is important to distinguish between invariance to irrelevant variations of the input with a physical meaning (similar to the translation-equivariance of convolutional layers), and symmetries of the loss function based on permutations of input neurons. Am I missing an intuitive understanding of these types of symmetries?\n\nAt this time, it is not clear whether a conclusive answer to your question exists. Indeed, the results shown in the present work can be interpreted as suggestive evidence in favor of a different explanation, namely: it is the very intricate interplay between symmetries that are inherent to data distributions and those that are (“forcibly”) incorporated into neural network models that makes such blends successful.\n\nConcretely, in our setting:\n1. Invariance properties of the loss function are determined by symmetries of the neural network model and the underlying distribution. \n2. These properties are reflected in the symmetry (isotropy) of spurious minima. \n3. Since increasing the number of student neurons does not affect the isotropy type, spurious minima must, provably, transform into saddles.\n\n&nbsp;\n\n> I understand that the paper does not aim to show existence of non-global local minima, but it could nonetheless be placed into related work on non-global minima and generalized saddle points. If the (isotropy groups of) local minima are well-understood, then they can be related to other local optima found in related work.\n\nFamilies of minima of the loss landscape considered in the present work are classified by their isotropy type and their type (the latter being determined by the sign of the diagonal entries). This taxonomy has been kept largely consistent throughout the line of work on symmetry-breaking. Therefore, in all existing works, ``type II $\Delta S_{d-1}$-minima'' indicates the same family of minima. In particular, minima relate to one another across *different* works by their very classification.\n \nThe loss landscape exhibits a \"zoo\" of minima and critical points which, owing to symmetry, can nonetheless be organized in a systematic way (just described). This zoo has yet to be completely understood and characterized, especially in the case when $k > d$. For example, there can exist distinct families of critical points with the same limiting behaviour but different Hessian spectrum (e.g., a family of minima and a family of saddles). 
Further, the path-based techniques on the fixed point space allow us to identify regular families with isotropy $\Delta (S_q \times S_q)$ - here $d = 2q$ will be even and, for fixed $q$, the critical point with isotropy $\Delta( S_q \times S_q)$ can appear in a family $\Delta(S_{d-q} \times S_q)$ where now $q$ is fixed and $d$ is varied. \n\nIn contrast, the structure of matrices belonging to a given isotropy subgroup *is well-understood*. For example, any matrix of isotropy $\Delta S_d$ is necessarily a linear combination of the identity and the all-ones matrix. See [4, 7] for a complete account. \n\nWe are happy to elaborate more on this point if we misunderstood your question. \n\n&nbsp;\n\n> One minor point aside, I don't think the presentation is hard to follow in some parts because the reviewers are not trained in algebraic geometry and representation theory, but because the paper assumes knowledge of the research papers which the current submission builds upon.\n\nIt is true that summarizing the “prerequisites” has caused us some problems on account of the required background (representation theory, FPS techniques, and path-based methods - giving dependence on the real parameter $d$). As well as working on the preliminaries in the main paper, we will make the supplementary material broader in scope so as to make the submission more self-contained.\n
Orthogonality of the target matrices is not required by the symmetry-breaking framework: other choices of target matrices, distributions, activation functions and architectures have been addressed in previous works, and are a topic of our current research. As mentioned briefly in the concluding comments and in this response, when $d$ is fixed and the symmetry of $V$ is broken the phenomena we describe persist. Of course, for a general asymmetric $V$ there is not much quantitative one can say about spurious minima. Our expectation is that the analysis of symmetric case and symmetry breaking perturbations [cf. [6]) will help inform and guide the formulation of the right questions in the general case. For example, the existence of different types of spurious minima with different asymptotics of the decay of the loss, or spectral properties of the Hessian (see [4,5] for more on the latter point). \n\n\n> ... Why it would be interesting to consider permutations of the dimensions in the input space at all.\n\nInvariance properties of the input space are widely-believed to be one of the key factors for the success of deep learning (e.g., the influential paper ‘Convolutional networks and applications in vision’ by LeCun et al.). Any symmetry group related to the input space can be embedded in a permutation group (Cayley's theorem). In our setting, the invariance to subgroups of the permutation group considered in the paper is reflected in the structure of critical points, making possible the use of powerful methods for analytically characterizing the loss landscape. \n\n\n> Are all spurious local minima (possible under the given setting) covered by the theory? \n\nMost likely not - if by all you mean all isotropy types. Certainly, we do not expect the case where there is no symmetry or very small symmetry groups. For fixed $d$, when the symmetry of $V$ is broken the minima will all persist (assuming the Hessians all have non-zero eigenvalues) but there will be no symmetry. Please see also general comment 4 above.\n\n\n> Existence.. have been shown previously.\n\nOur goal in this work was not to formally establish the existence of minima, but rather to study the mechanism which makes the loss landscape accessible to simple optimization methods — despite being highly nonconvex. To this end, we use the rich symmetry structure to characterize the Hessian spectrum at different families of critical points and investigate how increasing the number of model parameters (i.e., hidden neurons) turn spurious minima into saddles. Our results are also precise - for example, exact spectrum up to given order in $1/d$.\n\n> Translate the results into statements on specific weight configurations.. ?\n\nThe structure of weight matrices for the class of isotropy groups considered in the paper is well-understood. For example, matrices of isotropy $\\Delta S_d$ lie in the two-dimensional space {$ a I_d + b I_d I_d^\\top | a,b \\in \\mathbb{R}$}. A detailed study is given in [4, 7].\n\n\n> “overparameterized“... is in conflict with existing works\n\nTerminology in our field has yet to stabilize. Over-parameterization, as an indication for richer student models, has been used for example in (but certainly not only) https://arxiv.org/pdf/1901.09085.pdf and https://arxiv.org/abs/1712.08968. Other (plenty of) examples exist. We will add a remark clarifying the possible naming collision. 
\n\n> Line 116 only holds if the same permutation as applied to the output weights\n\nPlease see general comment 3 above.\n\n> Line 120, the meaning of the equality sign is not defined\n\nThe sign $\\approx$ indicates a group isomorphism. \n\n\n> Line 126:.. I suppose, the authors mean that the teacher network is invariant.\n\nNote, the teacher network is *not* $S_k \\times S_d$-invariant. Rather, the loss function *is* $S_k \\times S_d$-invariant. It is perhaps somewhat surprising at first glance.\n\n\n> Line 138: what exactly is meant by the terminology “nondegenerate“ and “spurious“?\n\n‘Spurious’ (line 18) is a widely used to indicate a non-global minimum (E.g., https://arxiv.org/abs/1712.08968 and https://papers.nips.cc/paper/2016/file/7fb8ceb3bd59c7956b1df66729296a4c-Paper.pdf)\n\n‘Nondegenerate’ (line 133): “non-degenerate critical points (no zero eigenvalues)”\n\n\n> Questions:\n\nPlease see the answers above.", " Typos\\suggestions, including those not listed below, have all been adopted. \n\n> Presentation and background. \n\nPlease see general comments 1 above.\n\n> In Line 152, what does $i_g:\\Delta S_d \\subset \\Delta S_{d+1}$ mean?\n\nPlease see general comments 2 above.\n\n> Some notations are not defined. For instance, what is in line 150 and what is in line 165?\n\nThe symbol $F|_X$ is conventionally used to denote the restriction of the domain of $F$ to the set $X$. Fixed point spaces $M(k,d)^G$ are defined in Section 3. \n\nPlease let us know if this addresses your concerns properly. We have also made changes in notation in the supplementary materials.\n\n> The optimization variable is conflicted... the optimization variable.\n\nSince the weights of the second layer are assumed to be normalized to one (allowed by homogeneity), the families of minima we study can be described only by the weights corresponding to the first layer. Our goal is then to investigate how the extremal properties of a given family of minima varies when the number of student neurons is increased. We found that adding neurons turns minima into saddles, thus indicating why gradient-based methods might succeed in detecting minima with an improved (lower) loss. Please also see general comment 3 above. \n\n> The paper states that the over-parameterization annihilates spurious local minima... the regime of over-parameterization. \n\nQuite remarkably, already adding one and two neurons affects the loss landscape in a dramatic way, transforming certain families of minima into saddles. In fact, it is unclear whether bad local minima exist at all if more than two neurons are added. This has been also observed by [29]. Our work is the first to provide a rigorous explanation of this peculiar phenomenon for minima of certain symmetry, and indicate why this might be the case for all minima. We discuss the case where $k-d>2$ in more detail in general comment 5 above.\n\n\n> Can the results in the paper be extended to the empirical loss, i.e., the loss function is only evaluated at limited number of data points?\n\nThat is a very interesting question: current theoretical bounds for the number of samples required for the training loss to uniformly converge (UC) to the population loss are worst-case and are known to become completely vacuous when applied to parameter regimes encountered in practice (cf., [https://arxiv.org/abs/1703.11008]). We hope that the symmetry-based approach might provide a different perspective (not based on UC) as to why SGD succeeds in these highly non-convex problems nonetheless. 
Preliminary numerical experiments we conducted confirm that minima of the training loss are also (approximately) highly symmetric. Thus, potentially, rather than worst-case notions of sample complexity measured over the space of all possible weight matrices, improved generalization bounds may be obtained by focusing on a restricted set of highly symmetric matrices.\n\n", " Typos\\suggestions, including those not listed below, have all been adopted. \n\n> I can't understand why the loss function is in invariant by the group of row permutations (line 116). shouldn't the weights of 2nd layer play a role here? why do you just consider $W$? I think I'm convinced by looking at line 124 where you set 2nd layer to be all ones, but this should come before line 116 to avoid confusion.\n\nPlease see general comment 3 above. If all weights of the second layer are set to one, the network output is simply the sum of the values of the hidden neurons. Summing the values in a different order (i.e. permuting the rows) does not affect the output. That’s the easy part. The second part of the invariance follows from the properties of the underlying data distribution and the structure of the teacher matrix. See [4] or [7] for a direct derivation.\n\n\n> What will be the main barrier in using these methods for analyzing $k-d>2$?\n\nPlease see general comment 5 above.\n\n> For line 198, where do you show that the loss at type II minima decays as $\\Theta(\\frac{1}{d})$? and is $\\Theta(1)$ loss at type I minima characterized by equations in lines 210-211?\n\nFor $k=d$, this is established in [5] and [6] and uses the fractional power series (FPS) expansion substituted into the loss (only initial terms of the FPS are important in determining initial terms of the FPS of the loss at a minimum). For $k = d+1$, this is given in Figure 1 and indeed lines 210-211.\n\n\n> Regarding Definition 2, are there only two types of critical points for all k, d or just the cases you consider? I can see in Theorem 2 you prove there are 1 or 2 family of critical points for specific isotropies. Also for the remarks in lines 337-340, is there any evidence that the 2 families mentioned are representative of other symmetric spurious minima?\n\nFor the isotropy types considered in the paper, the only types that occur are type I and type II. Empirical evidence suggests that this applies rather generally. In the extensive numerical experiments we have conducted minima (and critical points which are minima for the loss restricted to the fixed point space) were always type I or type II, regardless of their isotropy type. \n\n> for line 200, why it is reasonable for the loss at initialization to be compared to loss in minima?\n\nWhile type II minima are detected by SGD, type I are not. Since the loss at type II, $\\Theta(\\frac{1}{d})$, is smaller than the loss at type I, $\\Theta(1)$, one might argue that the latter is not detectable by SGD since the (expected) loss at initialization is $\\ll \\Theta(1)$. However, this turn out not to be the case as we prove that the initial loss is $\\Theta(d)$, which is larger than the loss at both types of minima. \n\n\n> Regarding line 347, can you point out where in the paper/proofs you assumed $V$ having high symmetry? is it the assumption of $V=I_d$ in line 126?\n\nIndeed, that has been clarified.\n\n> In section B1, what is an eigenvalue transition matrix?\n\nFor each copy of a given irreducible representation, a representative vector has been chosen. 
As the Hessian matrix is stable on isotypic components, its action on representative vectors belonging to the same irreducible representation may be described in terms of a matrix, namely the transition matrix. A complete account of the technique is given in section 3 “The method: a symmetry-based analysis of the Hessian” in [5].\n\n> What is the meaning of subscripts 1,2,3 in $\\nabla \\mathcal{L}$ in beginning of page 19?\n \nThe subscript denotes the entries of the gradient map. We have worked to make the notation clearer throughout the paper. In particular, A1 - A6 are completely rewritten. \n\n", " Typos\\suggestions, including those not listed below, have all been adopted. \n\n> Identifying all minima.\n\nPlease see general comment 4.\n\n> line125: I don't think it is trivial at all whether the two-layers setup can be studied in the same way as the one-layer setup studied in the paper. Can the authors please comment on this? As far as I see, also in the reference [6], only the one-layer case is studied.\n\nPlease see general comment 3.\n\n> line 60-61: adding one neuron results in these minima ... unclear sentence \n\nAdding one hidden neuron to the student network. I.e., k = d + 1.\n\n> line 152: definition of inclusion is unclear to me line\n\nPlease see general comment 2.\n\n> line 367: what is meant by non-integer values of $d$?\n\nThe symmetry-based technique used in the paper yields a family of minima which depends continuously on the dimension parameter d. These minima, and the associated detailed analysis, lie in a fixed point space of dimension independent of $d$. Integer values of $d$ correspond to critical points lying in $M(k, d)$. However, $d$ may also assume non-integer (that is, fractional) values for which the spectrum analysis is still applicable - this makes essential use of symmetry and representation theory. For example, the determinant of the Hessian is well defined as a function of real $d$ for these families! In particular, we found that certain families of critical points change (i.e., *bifurcate*) from saddles to minima at non-integer values of $d$. ", " We thank the reviewers for their time, detailed feedback and helpful critical comments. \n\n1. **Presentation and background**. The mathematical tools used in the work are currently not part of standard training programs of researchers in machine learning. Indeed, it is only recently that certain tools from algebraic geometry and representation theory were found to be effective in the context of the theory of deep learning. Balancing between the presentation of our results on annihilation of minima under overparameterization, a decent account of relevant background material and the symmetry-based technique developed in the recent line of work, is therefore a subtle task. We put a lot of thought into how to present the new methods and appreciate the generally positive response of the reviewers to our efforts. The reviews have been helpful in identifying some points which require further elaboration. We will revise accordingly. \n\n&nbsp;\n\n2. **Inclusions**. Formally speaking, $\\Delta S_{d+1}$ does not contain $\\Delta S_d$ as a subgroup, but rather isomorphic copies thereof. For example, the group $S_4$ can act on a set of five elements by leaving (say) the last element fixed and permuting the first four elements, thus embedding $S_4$ in $S_5$. Likewise, inclusions are injective group homomorphisms which embed $\\Delta S_d$ in $\\Delta S_{d+1}$. We will make this clearer in the text.\n\n&nbsp;\n\n3. 
**Trainable vs non-trainable second-layer**. Reference [6] directly addresses the case where the second-layer is trainable (see Theorem 1). For families of minima investigated in [6], the homogeneity of the network allows one to carry out the analysis by first reducing to the case where the weights of the second layer are normalized to one, and then rescaling. The loss landscape considered in the paper exhibits symmetry breaking minima and is therefore amenable to the same methods. We felt that pursuing this generalization would reduce readability, especially in view of the use of novel mathematical techniques. We shall clarify this point in the paper.\n\n&nbsp;\n\n4. **Identifying all minima**. Families of minima exist of isotropy $\\Delta S_{d-p} \\times S_p$ for $p>2$. All the families of minima we have encountered so far have large symmetry groups, typically maximal proper subgroups of $\\Delta S_d$, and are amenable to the same symmetry-based analysis used in the paper. While we cannot yet rule out the existence of families of minima with no symmetry (trivial isotropy), empirical work shows their existence is unlikely even for large values of d. The use of symmetry enables the description and detailed analysis of many families of minima and critical points in a problem which would be intractable without the symmetry assumption. If $d$ is *fixed*, and the symmetry of the target $V$ is broken, these critical points persist with spectrum depending continuously on the perturbation of $V$. In summary, we are restricting to a subspace of $M(k,d)$ independent of $d$. Keeping $k - d$ fixed, and letting $d \\rightarrow \\infty$ (so $k \\rightarrow \\infty$), we then prove the existence of an analytic curve of minima emanating from the limiting point pair at infinity. We analyze spectra using tools from group representation theory and analysis\n\n&nbsp;\n\n5. **Extending the analysis to $k-d>2$.**\nThere is no barrier, but the analysis gets more complex in two ways. Firstly, as we increase $k$, keeping $d$ fixed, new families of critical points appear (emphasis here on families, rather than one family). Secondly, the original critical point is still there but now becomes part of a simplicial complex of degenerate critical points (we call this process fossilization). Once there, the fossilized set remains as we increase $k$ but it never contributes new local minima (the global minimum $V$ becomes part of a connected fossilized set which gives the global minimum). Each additional neuron increases the dimension of the fixed point space by 2 (for isotropy $S_{d-p}\\times S_p,~p > 1$). Generally, as we increase $k-d$, we expect to see more families of critical points. In short, the analysis gets more complex but does not appear so far to be intrinsically different from what we see when $k-d \\le 2$. In particular, fractional power series for minima (or critical points) will exist for new regular families \n\n", " The authors extend the theory of [6] for mild overparameterization where the student has one or two more neurons than the teacher (ReLU activation function).\nFor several chosen families of local minima, the authors give an analytic description in terms of the loss at these minima and their Hessians. The neuron additions can turn minima into saddles, \"minima of lesser symmetry\" needs more neurons for turning them into saddles. Originality: The paper extends the theory of [6]. 
Although the existence of local minima in the asymptotic regime and characterization in [6] is exciting as an example neural network landscape with provable local minima families, I find the contributions of the current paper somewhat incomplete, and therefore less attractive. \n\nQuality: The analysis techniques are non-standard. I did not have time to check the proofs, but the results look rigorous. No experiments are presented in the main text, for example, to justify the claim of why type-II minima are more attractive. \n\nClarity: Although individual sentences usually can be understood, the technical terms used in this paper are hard to follow. Table I is incomprehensible; the caption refers to the material in the later sections instead of explaining the table. The spectrum column cannot possibly represent the full Hessian spectrum since it contains a scalar value; does it refer to the min. eigenvalue of the Hessian? What are the r and n representations? \n\nSignificance: I think it is an important problem to classify all the critical points of the neural network landscape in the paper. This analysis can pave the way to explaining the difficulties in training mildly overparameterized networks. However, the paper only classifies a few types of local minima points in the asymptotic limit, and it is not even clear whether the dynamics would converge to these points or not. \n\n\n\n Are there any local minima of type $\\Delta S_{d-p} \\times \\Delta S_{p}$ where $p>2$? As far as I see, only $p=\\{0,1,2\\}$ are discussed (also, some overparameterization). \n\nline 125: I don't think it is trivial at all whether the two-layer setup can be studied in the same way as the one-layer setup studied in the paper. Can the authors please comment on this? As far as I see, also in the reference [6], only the one-layer case is studied. \n\nTypos/detailed comments:\nline 60-61: adding one neuron results in these minima ... unclear sentence\nline 101: citation needed\ncaption of Table1: I don't understand how lines 4-6 and lines 10-12 are related to the table. \nline 116: $\\mathcal{L}(\\sigma W, \\sigma \\alpha) = \\mathcal{L}(W, \\alpha)$ for two-layer networks. However, the authors only study the committee machine setup where the second layer is fixed at $1$. Why not introduce the setup already in this form on page 1 (Eqs~1 and 2)?\nline 152: definition of inclusion is unclear to me \nline 289: (typo) so turn minima -> to turn minima \nline 367: what is meant by non-integer values of $d$? \n\n The paper is written in a neutral way, and the written text reflects the findings in the theorems. ", " This paper concerns the structure of critical points and how local minima change to saddles in the over-parameterized regime, in the case of two-layer ReLU networks, square loss, and labels created by a planted model. Using the symmetry structure of the loss function and considering certain symmetric types of critical points, it is shown that adding neurons can turn non-global minima into saddles, addressing over-parameterized models. Assuming the reader has enough knowledge of existing work and the tools being used, the paper is well-written and addresses the effect of over-parameterization on certain symmetry types of minima and their transformation to saddles. The FPS method for deriving the spectrum of the Hessian looks powerful for analyzing the loss and Hessian at local minima, and Theorem 3 is a good example of Hessian spectrum characterization at a certain family of critical points. 
I like the arguments where adding neurons changes the nature of critical points, like in lines 280-284 and 601-603, as well as Theorem 4, and how different families of minima require different numbers of additional neurons to turn into saddles. I couldn't digest most of the proofs, except for pages 19-21, as I am not familiar with this line of research revolving around representation theory.\n\nThe paper also assumes prior knowledge from previous works and is not highly self-readable; however, the authors try to familiarize readers with existing work in the introduction section.\nFor example, regarding lines 72-76, the authors indicate a previous work and pinpoint that considering the symmetry type of minima is important. Also, following line 77, the symmetry breaking phenomenon observed in previous works is outlined. Typos:\n\nlines 599 and 686, table 5.2 should be replaced by table 2. line 527, $M(d+2,d)^{\\Delta S_{d-1}}$. line 538, $\\Omega_1$. line 606, restricted \"to\" the. line 619, $M(d+2,d)$. line 250, by section A.7 do you mean section B.1 where you prove Theorem 3?\n\n\nQuestions:\n\nI can't understand why the loss function is invariant under the group of row permutations (line 116). Shouldn't the weights of the 2nd layer play a role here? Why do you just consider $W$? I think I'm convinced by looking at line 124 where you set the 2nd layer to be all ones, but this should come before line 116 to avoid confusion.\n\nWhat will be the main barrier in using these methods for analyzing k-d>2?\n\nFor line 198, where do you show that the loss at type II minima decays as $\\Theta(\\frac{1}{d})$? And is the $\\Theta(1)$ loss at type I minima characterized by the equations in lines 210-211?\n\nRegarding Definition 2, are there only two types of critical points for all k, d or just the cases you consider? I can see in Theorem 2 you prove there are 1 or 2 families of critical points for specific isotropies. Also, for the remarks in lines 337-340, is there any evidence that the 2 families mentioned are representative of other symmetric spurious minima?\n\nFor line 200, why is it reasonable for the loss at initialization to be compared to the loss at minima?\n\nRegarding line 347, can you point out where in the paper/proofs you assumed $V$ to have high symmetry? Is it the assumption of $V=I_d$ in line 126?\n\nIn section B1, what is an eigenvalue transition matrix?\n\nWhat is the meaning of the subscripts 1,2,3 in $\\nabla \\mathcal{L}$ at the beginning of page 19?\n\n I don't see any negative societal impact, except for misuse\\illegal use of neural networks. ", " This paper focuses on the setting of fitting two-layer networks to a target network with respect to the square loss. The authors apply novel tools from algebraic geometry to exploit the symmetric structure and characterize how over-parameterization annihilates spurious local minima.\n Strength:\n- This is a strong theoretical paper discussing the spurious local minima of two-layer ReLU networks.\n- The tools from algebraic geometry and representation theory are novel for the study of local minima in two-layer neural networks. The analysis of the families of local minima and Hessian spectra can bring meaningful results for the theoretical study of two-layer networks.\n\nWeakness:\n- The paper should be self-contained. However, the notation is confusing, and this makes it difficult for the reader to understand the contribution of the paper. \n - In Line 122, it is said $V=I_d\\in M(k,d)$ but here $V$ is a d-by-d matrix and we need to append zero rows to $V$. 
It would be better to use another notation. \n - In Line 152, what does $i_g:\\Delta S_d\\subset \\Delta S_{d+1}$ mean?\n - Some notations are not defined. For instance, what is $\\mathcal{L}|M(k,d)^G$ in line 150 and what is $\\mathcal{L}|\\mathbb{R}^n$ in line 165?\n- The optimization variable is conflicting. $(W,\\alpha)$ is the pair of optimization variables for the loss $\\mathcal{L}$, and a critical point should be in $M(k,d)\\times \\mathbb{R}^k$. In Line 150, why is $c \\in M(k,d)$ a critical point? It seems that the results of the entire paper focus on optimizing $W$ alone. Please specify what the optimization variable is.\n- The paper states that over-parameterization annihilates spurious local minima, but the main theorems (Theorems 2-4) only discuss the cases when $k=d,d+1,d+2$, which is not quite related to the regime of over-parameterization. Can the results in the paper be extended to the empirical loss, i.e., when the loss function is only evaluated at a limited number of data points?\n\nLet $N$ be the number of data points. It is assumed that the labels are generated by a target network with $d$ neurons. Is it possible to generalize the results to a target network with more neurons, say $N+1$? By Caratheodory’s theorem, an optimal neural network fitting arbitrary labels needs at most $N+1$ neurons. Thus, the results will be more influential if they can be generalized to an arbitrary target network.\n Yes.", " The paper presents theoretical results on two-layer regression networks with ReLU activations and squared loss. It builds upon a line of work that studies local minima based on symmetries of the loss function (as a function of the weights of the hidden layer). This paper applies and extends previously developed techniques to the setting where the hidden layer contains more neurons than the input dimension. The new techniques enable the authors to classify local minima in dependence on their symmetry, to provide sharp analytic estimates for the loss at these minima and the spectrum of the loss Hessian, and to thereby understand when local minima of certain symmetries turn into saddles when more neurons are added to the network. Strengths:\n\n- The paper applies interesting and novel techniques by combining a variety of mathematical tools. It builds upon a recent line of work, but applies the approach to a novel setting (where the hidden layer contains more neurons than input dimensions) and thereby requires new ideas and techniques. This yields novel analytical results that provide sharp estimates on the loss at certain local minima and the spectrum of the loss Hessian. Interesting results can then be deduced on how minima turn into saddles in dependence on their symmetry class. (Some minima turn into saddles when a single neuron is added, while others require more neurons.) The analytic results not only provide the existence of certain families of local minima, their loss and Hessian, but also their frequency of appearance. \n\n- While the paper assumes knowledge of the line of papers it builds upon, all theoretical results are proved and detailed calculations are appended in the supplements.\n\n\nWeaknesses:\n\n- The paper makes a number of assumptions, which questions the applicability of the results to networks in practice:\nThe authors consider only two-layer regression networks with squared loss in a teacher-student setting, where labels are generated by a target network. 
The target network is fixed to a weight matrix equal to an identity matrix (stacked with zeros if the hidden layer contains more neurons than input dimensions) and the weights of the output layer equal a vector of ones. (The paper claims that an extension to trainable output layers is possible, but no details or further explanation is given. Similarly, the authors write that, while the results assume a target of high symmetry, this would not be necessary, but again without evidence or further discussion.) This restricted setting can be slightly generalized to a hidden weight matrix given by an orthogonal matrix of size equal to the input dimension, with the remaining rows zero, but only because of another strong (unrealistic) assumption of a Gaussian input. \n\n- It is the latter assumption that seems crucial to the entire theory, while also implying that the setting is quite different from a realistic setting. In particular, local minima are classified into groups of symmetry, defined by being invariant under permutations of hidden and input neurons. In particular, the authors want the target network to be invariant under permutations of the input neurons, which seems to require strong assumptions on the input distribution. (Aside, I would appreciate a clarification from the authors why it would be interesting to consider permutations of the dimensions in the input space at all.)\n\n- A second perceived weakness is the lack of clarification of how exhaustive the results are. Are all spurious local minima (possible under the given setting) covered by the theory? It seemed to me that the answer is no, and the paper only considers very specific families of critical points.\n\n- The paper should also be better placed in related work. Existence of spurious local minima in networks has been shown previously (using other techniques) and it remains unclear how the results of this paper relate to these. To give a possibly strongly related example, a classical paper by Fukumizu and Amari (“Local minima and plateaus in hierarchical structures of multilayer perceptrons”, Neural Networks 13, 2000) provides specific weight combinations (with a certain symmetry) that lead to local minima. To better place the given submission into the related work, the authors could discuss the specific form of weight matrices that are covered by the symmetry classes. (Since the group actions are very specific, they translate to specific weight configurations, which could be made explicit.) Related to this: If one were to translate the results into statements on specific weight configurations in a network, which certainly seems possible, are the techniques to study the local minima necessary, or can the results be interpreted more easily?\n\n-------\nAdditional Comments:\n\nI appreciated the detailed calculations in the appendix and detailed arguments in some parts. However, I also found that the paper too often assumes implicit knowledge and is not too precise in its use of terminology. I append a list of some of these points, where the presentation could be improved. Despite strong efforts, I was therefore not able to follow all of the arguments in the paper (I did look into the line of work this submission builds upon, but did not study it in detail.)\n\nThe authors call their setting “overparameterized” when the hidden layer has more neurons than the number of input dimensions. 
This is in conflict with existing works that call a network overparameterized when it has more parameters than input patterns to learn.\n\nLine 116 “The loss function is always invariant...“ This only holds if the same permutation is applied to the output weights \n\nLine 120, the meaning of the equality sign is not defined\n\nLine 126: “L is invariant“. I suppose the authors mean that the teacher network is invariant. \n\nLine 134: what does “fixed“ mean? $V$ was a fixed value to start with. Probably the authors mean “invariant under“?\n\nLine 138: what exactly is meant by the terminology “nondegenerate“ and “spurious“?\n\nLines 138-139 are missing a reference for the empirical evidence\n\nLine 143, a reference is missing for the “recent work“\n\nLine 163, the notation G_c(d) is not defined\n\nLine 185: Def 2 requires more explanation “corresponding to the action of delta S“, \nPossibly it should also be discussed in more detail why the diagonal elements converge to either +1 or -1.\n\nLine 198 “Since the loss at type II minima decays...“ Please provide an explanation or a reference\n\nLine 267 Typo: Adding one neuron should change the second subscript\n\nLine 270: “This isotypic component may be written as...“ Please provide an explanation or a reference\n\nLine 357: This second part of the conclusion lacks the context needed to understand it. In particular, please revise or explain the following terminology: \n“sink“, “source“, “index“, “local deformation of the landscape geometry“, “bifurcation theory“, “forced symmetry breaking leads to great complexity near the transition but minimal models of complexity can be given“\n - Why is it reasonable to study permutations of the input neurons?\n\n- Is it possible to provide specific weight matrices that cover the different symmetry classes?\n\n- Are all spurious local minima (possible under the given setting) covered by the theory? The paper makes strong assumptions. It is unclear whether the local minima can be found in practical neural networks. The necessity of some strong assumptions and their consequences are not discussed in detail." ]
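Editorial gloss on the symmetry debate above: a worked instance of the fixed-point-space structure that the author responses assert for $\Delta S_d$ (the "identity plus all-ones matrix" statement in the thread), written here as an illustration rather than a quotation from the paper; the notation $M(d,d)^{\Delta S_d}$ for the case $k=d$ is an assumption based on the thread's conventions:

$$M(d,d)^{\Delta S_d}\;=\;\{\,aI_d+b\,\mathbf{1}_d\mathbf{1}_d^{\top}\;:\;a,b\in\mathbb{R}\,\},$$

so a $\Delta S_d$-symmetric critical point is pinned down by just two scalars, which is one reason the sharp spectral estimates debated above become tractable.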
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3, 2 ]
[ "qSphkv8c1E8", "ijxoEutrwl", "SAI2Rbc8dgO8", "oUF3J8JUMQs", "5bwg4Z9GpNW", "AszRdcIRv_", "GwXQfc4OBjd", "N6Jg8TxcbqH", "rKPihoqlTHf", "62F1HRKHbgt", "nips_2022_qfC1uDXfDJo", "nips_2022_qfC1uDXfDJo", "nips_2022_qfC1uDXfDJo", "nips_2022_qfC1uDXfDJo", "nips_2022_qfC1uDXfDJo" ]
nips_2022_4lw1XqPvLzT
Will Bilevel Optimizers Benefit from Loops
Bilevel optimization has arisen as a powerful tool for solving a variety of machine learning problems. Two current popular bilevel optimizers AID-BiO and ITD-BiO naturally involve solving one or two sub-problems, and consequently, whether we solve these problems with loops (that take many iterations) or without loops (that take only a few iterations) can significantly affect the overall computational efficiency. Existing studies in the literature cover only some of those implementation choices, and the complexity bounds available are not refined enough to enable rigorous comparison among different implementations. In this paper, we first establish unified convergence analysis for both AID-BiO and ITD-BiO that are applicable to all implementation choices of loops. We then specialize our results to characterize the computational complexity for all implementations, which enable an explicit comparison among them. Our result indicates that for AID-BiO, the loop for estimating the optimal point of the inner function is beneficial for overall efficiency, although it causes higher complexity for each update step, and the loop for approximating the outer-level Hessian-inverse-vector product reduces the gradient complexity. For ITD-BiO, the two loops always coexist, and our convergence upper and lower bounds show that such loops are necessary to guarantee a vanishing convergence error, whereas the no-loop scheme suffers from an unavoidable non-vanishing convergence error. Our numerical experiments further corroborate our theoretical results.
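Editorial gloss on this abstract: since the loop sizes are the whole story here, a minimal numpy sketch of the AID-BiO template may help readers of the record: $N$ gradient steps on the inner problem, $Q$ fixed-point steps for the Hessian-inverse-vector product, then one hypergradient step. This is an illustration written for this summary, not the authors' implementation; all oracle names and the toy problem are assumptions, and setting $N=Q=1$ gives the no-loop variant discussed in the rebuttal below.

```python
import numpy as np

def aid_bio(grad_f_x, grad_f_y, grad_g_y, hvp_gyy, jvp_gxy,
            x0, y0, K=100, N=10, Q=10, alpha=0.1, eta=0.1, beta=0.1):
    """AID-BiO sketch. User-supplied oracles:
    grad_f_x, grad_f_y: partial gradients of the outer objective f(x, y);
    grad_g_y: partial gradient of the inner objective g(x, y);
    hvp_gyy(x, y, v): (d^2 g / dy^2) @ v, a Hessian-vector product;
    jvp_gxy(x, y, v): (d^2 g / dx dy) @ v, a cross-derivative product."""
    x, y = x0.copy(), y0.copy()
    for _ in range(K):
        for _ in range(N):                      # inner loop (warm-started)
            y = y - alpha * grad_g_y(x, y)
        v, b = np.zeros_like(y), grad_f_y(x, y)
        for _ in range(Q):                      # solve (d^2g/dy^2) v = b
            v = v - eta * (hvp_gyy(x, y, v) - b)
        x = x - beta * (grad_f_x(x, y) - jvp_gxy(x, y, v))  # outer step
    return x, y

# Toy check: g(x,y) = ||y-x||^2/2 and f(x,y) = ||y-t||^2/2 give y*(x) = x,
# so the hyper-objective is minimized at x = t.
t = np.array([1.0, -2.0])
x_hat, _ = aid_bio(lambda x, y: np.zeros_like(x), lambda x, y: y - t,
                   lambda x, y: y - x, lambda x, y, v: v,
                   lambda x, y, v: -v, np.zeros(2), np.zeros(2))
```

In this template, larger $N$ and $Q$ tighten the hypergradient estimate at the cost of more gradient and matrix-vector evaluations per outer step, which is exactly the tradeoff the abstract's complexity comparison quantifies.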
Accept
There seems to be a clear consensus among reviewers about the paper being well written and addressing relevant research questions pertaining to bi-level optimization, with a particular focus on the two popular bilevel optimizers AID-BiO and ITD-BiO. Furthermore, Reviewer Kzzc stressed that this work provides several interesting convergence results, with a practical echo in applications such as meta-learning, NAS, some HO problems, etc. Kzzc also pointed out that the convergence for ITD has not been well studied, and that the results on upper and lower bounds in this work can be a good complement. rZSp joined Kzzc by pointing out that the authors first establish unified convergence analyses that are applicable to all implementation choices of loops for both AID-BiO and ITD-BiO. While initially critical of some aspects of the work (notation, references, tests), Reviewer rZSp went on to increase their score following the constructive discussion with the authors. The most critical reviewer was KmZV, who questioned the originality of the contributions compared to [17]. While the authors did give a detailed response during the discussion, this reviewer did not react. Overall, based on the reviews and the discussion, I assess the paper to be a valuable addition to the existing literature and recommend that it be accepted to NeurIPS 2022.
train
[ "iHtw9VFg_wR", "imXUZi8KBv", "CEt44Hf2HRB", "qWoEcKvZ8Aw", "Z14wm-tyVO0", "hMp_5ISSXZ8", "I8xYLV-H6xk", "Yf8kCaCuTnT", "raMhF83h_9", "2vdQ0vzNGz9", "9Z1WMQ-k08Bq", "bEQN7jz5xT2T", "mnDaV64Qv-f", "mRmOUHVWT9_", "HlY7w-_oVsW", "ujHGtMk21IS", "syBQsRgfTjB" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We first truly thank all reviewers’ insightful and constructive suggestions, which helped to significantly improve our paper! We also thank the area chair very much for great efforts into handling our paper during the review process! \n\nUnfortunately, we regret that we have not received any response from Reviewer **KmZV** during this discussion period. Despite the absence of discussion engagement, we humbly believe that our detailed pointwise responses have clarified Reviewer **KmZV**’s questions. In particular, Reviewer **KmZV** had three major comments. **(1)** The submission is a summary of previous convergence results in [17]. In our response, we have elaborated in detail our new contributions beyond [17] in terms of new technical developments, much broader parameter regimes, relaxed assumptions, result generalizability, etc. **(2)** Comparison among the upper bounds of single- and double-loop algorithms rather than lower bounds. The comparisons of upper bounds (i.e., comparison of different $Q$ and comparison of different $N$ under $Q=1$) are new to the bilevel literature, which we believe are important steps in understanding their algorithmic designs. Such a type of comparisons have been widely used in optimization literature. **(3)** Choosing $N,Q$ as $1,20$ does not separate single- and double-loop and plot MV and GC. In our revision, we have added a new experiment in Fig. 2 with $N,Q$ chosen from $\\{1,50\\}$, and a new experiment in Fig. 4 comparing in MV and GC, both of which are consistent with our theoretical results.\n\nWe thank all reviewers’ and ACs' time and efforts again!", " Dear Reviewer jDxR,\n\nWe thank the reviewer very much for providing further feedback and suggestions! The open problems in Q2 and Q7 require substantial efforts, which we would like to explore in the future study! As suggested by the reviewer, we will move Appendix B, D, and G to the main paper when the page space is allowed. \n\nBest,\nAuthors", " Dear Reviewer KmZV:\n\nAs the author-reviewer discussion period ends soon, we will appreciate very much if your could check our response soon. In particular, our response has explained in detail about your concerns on the contribution of the paper beyond [17] (Q1) and comparison among upper bounds (Q2). We also added new experiment plots regarding your concern on the experiment (Q3). If our response resolves your concerns, we kindly ask you to consider raising the rating of our work. We are also more than happy to answer your further questions. Thank you very much for your time and efforts!\n\n ", " We thank the reviewer very much for recognizing our efforts and for increasing the rating! \n\nBest, authors\n\n\n", " Dear authors,\n\nThanks a lot for the efforts put in the rebuttal. I went through the rebuttal and find the responses to Q1, Q3, Q4, Q5, Q6, and Q8 are satisfactory but those to Q2 and Q7 are hand-waiving. In the final version, I suggest authors move Appendix B, D, and G to the main paper since they directly support the main claim of the paper. ", " Thanks for the authors' rebuttal and their efforts to improve this work. The response has addressed my questions. I have raised my score to 7. Best wishes.", " Dear Reviewer KmZV: \n\nSince the author-reviewer discussion period has started for a few days, we will appreciate if you could check our response to your review comments soon. This way, if you have further questions and comments, we can still reply before the author-reviewer discussion period ends. 
If our response resolves your concerns, we kindly ask you to consider raising the rating of our work. Thank you very much for your time and efforts!\n\nThanks,\nAuthors", " Many thanks for providing the helpful review. In the revised version, we have made the changes based on the reviewer’s comments. All the changes are highlighted by the blue-colored texts.\n\nQ1: The major concern I have is that the majority of the theory in this submission is a summary of previous convergence results in [17]. Indeed, this paper and [17] focus on the same setting, the same algorithms, and the same convergence results but now with explicit dependence on the number of steps in the inner $N$ and outer loops $Q$ . While this is a good point to discuss, the technical content of this paper seems to be incremental. Therefore, this paper is more like a supplementary to [17]. \n\nA: We respectfully disagree with this comment. Our paper has significant new technical contributions beyond those in [17], as clarified below. \n\n1. More general results: Our results hold for all parameter ranges of $Q$ and $N$ (i.e., all types of looped algorithm implementations including single-loop, double-loop, triple-loop), whereas the results in [17] have restrictive constraints on $Q,N$ (which require large $Q$ and $N$ and hence study only the triple-loop algorithms.) Such much more general results necessitate novel developments and tighter error analysis (as we explain in the following item 2) in order to eliminate the restrictions in [17]. For example, in eq. (30) of [17], it can be seen that (using our notations) $\\delta_{N,Q}=C_1\\kappa^5(1-\\frac{1}{\\kappa})^N + C_2\\kappa(1-\\frac{1}{\\sqrt{\\kappa}})^Q<1$ ($\\delta_{N,Q}$ is defined in eq. (17) therein), which clearly means that $N$ and $Q$ have to be chosen at an order of $\\kappa\\log\\kappa$. \n\n2. In terms of technical developments, our analysis includes new developments and is much more challenging than that in [17], in order to accommodate the entire parameter range for $Q$ and $N$, which corresponds to different loops of implementations. We further elaborate this below. \n\n * To characterize the error of solving the linear system for AID, we devise a recursion-based analysis (see eq. (12),(13) in appendix) to bound the error between $v_k^Q$ (derived in eq. (10)) and $v_k^*$ (derived in eq. (11)). This development does not exist in the analysis [17] (see Lemma 3 therein), and provides a tighter characterization especially when $Q$ is small due to a tighter dependence on the stepsize $\\eta$ (see Lemma 1). Other big differences can be found in the error analysis of $y_k^N$ (compare our Lemma 2 with eq. (23) in [17]), the construction of error sequence (compare our eq. (22) with eq. (22) in [17]), and etc. \n\n * For ITD, the analysis in [17] assumes the initial gap $\\|y_k^0-y^*(x_k)\\|$ to be bounded (see eq. (40) therein). As a comparison, our analysis eliminates this assumption by constructing a decent sequence $\\delta_k$ shown in eq. (54), which does not exist in the analysis of [17]. Furthermore, the analysis in [17] leads to a much larger error $O(\\kappa^4)$ (see eq. (42) therein) for the choice of $N=O(1)$ due to a less tight characterization. \n\n3. For ITD-based algorithms, our characterization on the lower bound and the nonvanishing error is new, which is not studied in [17]. ", " Q2: The second major concern I have is that the theoretical comparison between the double-loop and the single-loop is in terms of the complexity upper bound. 
Note that the larger upper bound may not mean worse performance since the single-loop algorithm may be harder to analyze, leading to looser worst-case performance. For example, it would be interesting to compare the lower bound of the single-loop algorithm with the upper bound of the double-loop algorithms.\n\nA: Thanks! The optimization community does appreciate and benefit from the comparison of upper bounds in understanding different algorithmic designs and parameter selections, as seen, for example, in the progression from SGD and SVRG to SPIDER/SARAH/STORM in the well-established stochastic optimization literature. We do agree that the lower bound for single-loop algorithms is very interesting, but it is also non-trivial due to the upper-level nonconvexity and the nested structure. However, we provide some thoughts below. \n\nBased on our analysis of upper bounds, the single-loop algorithm (i.e., the No-loop AID with $Q=N=1$) has worse complexity because its hypergradient estimation contains an error term that is a multiplication of the $Q$ error (i.e., the error in solving the linear system) and the $N$ error (the error in solving the lower-level problem). Thus, for small $N=Q=1$, the outer stepsize is smaller than that of the $N$-$Q$-loop by an order of $\\kappa^2$ and hence results in a worse complexity. Based on this key insight, for the worst-case construction, we can choose a quadratic lower-level problem (similar to eq. (60) for the ITD-BiO lower bound), but carefully construct a nonconvex objective to guarantee that the per-iteration estimation contains this multiplication-type error. However, deriving a tight lower bound requires substantial effort, which we wish to leave for future study. \n\nIn addition, in Appendix C, we have added some discussions on the tightness of upper bounds from the proof and conceptual perspectives, which can help to understand the performance comparison and parameter selections of single- and double-loop algorithms. \n\nQ3: The third major concern is on the experiments. The experiments did not closely match the theoretical analysis. Since the separation between no-loop and double-loop is on $N=O(1)$ or $N=O(\\kappa)$, choosing $N, Q$ as $1, 20$ does not serve the purpose. In addition, it may be better to plot the dependence on $MV$ and $GC$ rather than the runtime or the number of iterations.\n\nA: Thanks! We have added new plots (see Fig. 2 in Appendix F in the revised pdf) with $N,Q$ chosen from $\\{1, 50\\}$, and new plots (see Fig. 4 in Appendix G in the revised pdf) of losses versus $MV$ (number of matrix-vector products) and $GC$ (number of gradients). It can be seen that these new empirical results are still consistent with our theoretical results. More comparison results will be added. \n\nQ4: A similar conclusion for AID-Bio (e.g., $N=O(\\kappa)$ is better than $N=O(1)$) has already been drawn in the stochastic bilevel algorithms; see ALSET (Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems). The connection and difference need to be discussed.\n\nA: Thanks for pointing this out! In ALSET, their comparison focuses only on the case $Q=\\kappa\\log K$, as seen from their choice of $Q$ (in their notations, they use $N$) after eq. (59) therein. In other words, they solve the linear system to a good accuracy of $\\epsilon$ with a large $Q$ loop. As a comparison, our theoretical comparison is more general by considering both $Q=O(\\kappa)$ and $Q=O(1)$.
In addition, we also provide a comparison between $Q=O(1)$ and $Q=O(\\kappa)$ given different $N$, which is not covered in the ALSET paper. We have added this discussion in the revised pdf. \n\nQ5: While the paper did provide a lower bound for ITD-Bio, it seems quite loose compared with the upper bound of ITD-Bio. They differ not only in the dependence on $\\kappa$ but also in the dependence on $K$.\n\nA: The point of this lower bound is to show that the nonvanishing convergence error in our upper bound (i.e., eq. (5)) for ITD-BiO fundamentally exists and is not crafted by our bounding techniques. Our lower bound does serve such a purpose. We believe closing the gap between upper and lower bounds is a much more challenging goal, which requires the construction of tighter worst-case objective functions and substantial effort due to the upper-level nonconvexity and the nested structure. We do think this open problem is an interesting topic for future study. ", " Q6: The main theorems (e.g., Theorems 1, 2, and 3) in the paper are presented in a very complicated way. They depend on many irrelevant constants such as $M,\\rho,r$, which make the main theorems difficult to interpret. \n\nA: Thanks! Theorems 1, 2, and 3 are general convergence results for AID-BiO and ITD-BiO with flexible hyperparameters such as loop sizes and stepsizes. Hence, we keep all constants such as $M,\\rho,r$ for completeness. For ease of interpretation, in the revised pdf, we have provided the simplified theorems by hiding constants in the notation $\\Theta(\\cdot)$, and relegated the complete theorems to the appendix. \n\nFinally, we thank the reviewer again for the helpful comments on our work. If our response resolves your concerns to a satisfactory level, we kindly ask the reviewer to consider raising the rating of our work. Certainly, we are more than happy to address any further questions you may have during the discussion period.", " Many thanks for providing the helpful review. In the revised version, we have made the changes based on the reviewer’s comments. All the changes are highlighted by the blue-colored text.\n\nQ1: Does the lower bound also depend on $K$? \n\nA: Yes, from the proof in Appendix R, it can be seen that it has a term decaying exponentially w.r.t. $K$ (see eq. (67)). However, we do not include it in the final lower bound because the main purpose of our lower bound is to demonstrate that the nonvanishing error in our upper bound for ITD-BiO fundamentally exists and is not crafted by our bounding techniques. We will further investigate the open problem of improving the dependence on $K$ in future study. \n\nQ2: Is there any possibility of improving the lower bound or upper bound?\n\nA: Yes, for the lower bounds, it is possible to improve the dependence on $K$ and $\\kappa$ via a tighter construction of nonconvex upper-level objectives. Our upper bound development treats inner and outer variables separately in the error analysis, which may be improved if we treat them jointly and construct a tighter error sequence different from that in Lemma 5. However, both directions require substantial effort due to the nested structure and nonconvexity of the objective function, which we wish to leave for future study. \n\nQ3: The paper only focuses on the comparison of different deterministic bilevel optimization algorithms. Does the conclusion in this paper also hold in the stochastic setting?
If not, it would be helpful to mention ``deterministic'' in the title or abstract.\n\nA: Yes, if the mini-batch size at each iteration of stochastic algorithms is chosen at an order of $\\epsilon^{-1}$, we have checked that our proof flow and comparisons still hold. We have clarified it in the revision.\n\nQ4: In practice, stochastic bilevel optimization algorithms may be used more frequently. How does the conclusion in this paper compare to the conclusion in the recent work on stochastic bilevel optimization algorithms; for example, Theorem 1 and Proposition 2 in [Chen et al. 2021]. It would be helpful to clarify. Chen et al. \"Closing the gap: Tighter analysis of alternating stochastic gradient methods for bilevel problems.\" Advances in Neural Information Processing Systems, 2021.\n\nA: Great point! In Chen et al. 2021, they made a similar conclusion that $N=O(\\kappa)$ is better than $N=O(1)$ under the choice of $Q=O(\\kappa\\log\\frac{1}{\\epsilon})$ (i.e., solving the linear system to a good accuracy $\\epsilon$). As a comparison, our theoretical comparison is more general by considering both $Q=O(\\kappa)$ and $Q=O(1)$. In addition, we also provide a comparison between $Q=O(1)$ and $Q=O(\\kappa)$ given different $N$, which is not covered in the ALSET paper. We have added this discussion in the revised pdf. \n \nQ5: It is not clear to me why the metric in the AID-Bio experiments is runtime, but that in the ITD-Bio experiments is iteration?\n\nA: Thanks! For ITD-BiO, the goal of our experiment is to show that No-loop ITD-BiO with $N=1$ induces a larger convergence error than $N$-$N$-loop ITD-BiO with $N=20$. In other words, we compare their losses after they converge, i.e., after $500$ iterations. Therefore, using the iteration as a metric serves the purpose of this comparison. \n\nQ6: Since the theory is mostly on the comparison of the number of iterations or MV and GC, comparing those metrics in both experiments will make more sense.\n\nA: Great point! We have added a new Fig. 4 in Appendix G in the revised pdf, which plots losses versus $MV$ (number of matrix-vector products) and $GC$ (number of gradients), as suggested by the reviewer. It can be seen that these empirical results are still consistent with our theoretical results. More results on such comparisons will be added. \n\nQ7: In the AID-Bio, is it possible to provide a lower bound similar to ITD-Bio?\n\nA: The lower bound for ITD-BiO is constructed particularly to demonstrate that the convergence error of ITD-BiO with $N=O(1)$ fundamentally exists. However, since AID-BiO does not contain convergence error, our instance used for ITD-BiO may not be tight enough. In general, the lower bound construction for AID-BiO is an interesting but very challenging task, and we would like to leave it for future study. \n\nQ8: The discussion on the setting with small response Jacobian (line 304) is not clear. What conclusion do the authors want to draw from this discussion? Is this consistent with the main results?\n\nA: Thanks for pointing this out for us! We want to convey that if we assume that the response Jacobian $\\frac{\\partial y^*(x)}{\\partial x}$ of our bilevel objective is sufficiently small at an $\\epsilon$ level, our analysis can further guarantee that the final convergence error is at an $\\epsilon$ level. This does not contradict our main results, because our main results do not make this extra assumption. ", " Many thanks for providing the helpful review.
In the revised version, we have made the changes based on the reviewer’s comments. All the changes are highlighted by the blue-colored text.\n\nQ1: The paper is limited to the deterministic setting, and I am wondering if the developed analysis can be further extended to the stochastic setting with data sampling. Can the authors have some comments or provide some guidance on this extension?\n\nA: Many thanks! Yes, if the mini-batch size at each iteration in the stochastic setting is chosen at an order of $\\epsilon^{-1}$, we have checked that our proof flow and comparisons still hold. We have clarified this in the revision.\n\nQ2: In AID-based methods, people sometimes use acceleration methods for both the lower-level and HIV approximations to achieve a better complexity. It would be great to discuss how the current analysis can be extended to such scenarios, and whether the corresponding comparison still holds?\n\nA: Great point! If we use acceleration methods for the lower-level and HIV approximations, we will achieve an improved $N=O(\\sqrt{\\kappa})$ and $Q=O(\\sqrt{\\kappa})$ for $N$-$Q$-loop, an improved $N=O(\\sqrt{\\kappa})$ for $N$-loop, and an improved $Q=O(\\sqrt{\\kappa})$ for $Q$-loop. However, it is not clear if an improvement can be obtained for No-loop. We will investigate this comparison under acceleration methods as an interesting future topic. \n\nQ3: Theorems 1, 2 and 3 seem to involve complicated parameters and relations. It would be good to provide some proof outlines for readers to better understand the technical idea of this paper.\n\nA: Good point! We have provided a proof sketch of Theorem 1 in Appendix H. For ease of interpretation, in the revised pdf, we have also provided the simplified theorems by hiding constants in the notation $\\Theta(\\cdot)$, and relegated the complete theorems to the appendix. ", " Many thanks for providing the helpful review. In the revised version, we have made the changes based on the reviewer’s comments. All the changes are highlighted by the blue-colored text.\n\nQ1: What are the exact definitions of the notations $\\Theta$ and $\\Omega$?\n\nA: Many thanks! We use $a(x)=\\Theta(b(x))$ if $cb(x)<a(x)<Cb(x)$ and $a(x)=\\Omega(b(x))$ if $a(x)>cb(x)$, where $c,C$ are universal constants. We have added the definitions in the revised pdf. \n\nQ2: The authors may want to provide references on the hypergradient in AID-BiO to help readers to understand the algorithms.\n\nA: Good point! We have added the following references [1,2] for understanding the hypergradient in AID-BiO.\n\n[1] Grazzi, R., Franceschi, L., Pontil, M., and Salzo, S. On the iteration complexity of hypergradient computation. In Proc. International Conference on Machine Learning (ICML), 2020.\n\n[2] Pedregosa, F. Hyperparameter optimization with approximate gradient. In International Conference on Machine Learning (ICML), pp. 737–746, 2016.\n\nQ3: The authors consider a hyperparameter optimization problem on MNIST in the experiments on AID-BiO (in Line 315), while you consider another hyper-representation problem in the experiments on ITD-BiO (in Line 332). Why do you consider different problems for the two optimizers?\n\nA: Many thanks for the question! We have added the other experiment for each optimizer. Specifically, for ITD-BiO, we have added a plot (Fig. 3 in Appendix F) on the hyperparameter optimization problem on MNIST in the revision with $N=1$ and $N=20$, where it can be seen that $N=20$ achieves a lower error and hence our theory is validated.
For AID-BiO, we have also added a plot (Fig. 4 in Appendix G) on the representation problem, and a conclusion similar to Fig. 1 can be observed. Both new experiments are consistent with our theory.\n\nQ4: In this paper, the authors only report the losses v.s. running time and the losses v.s. the number of iterations in experiments on AID-BiO and ITD-BiO, respectively. It would be more convincing if they report $MV(\\epsilon)$ (the total number of Jacobian- and Hessian-vector product computations), $Gc(\\epsilon)$ (the total number of gradient computations) or some more metrics in the experiments to support their theoretical results on the computational complexities.\n\nA: Great suggestion! For AID-BiO, since each matrix-vector (MV) computation takes almost the same time, our running time curve follows a trend very similar to that of the $MV(\\epsilon)$ (similarly for $GC(\\epsilon)$). To see this, we have added a new Fig. 4 in Appendix G in the revised pdf, which plots losses versus $MV$ (number of matrix-vector products) and $GC$ (number of gradients). It can be seen that these empirical results are also consistent with our theoretical results. \n\nFor ITD-BiO, the goal of our experiment is to show that No-loop ITD-BiO with $N=1$ induces a larger convergence error than $N$-$N$-loop ITD-BiO with $N=20$. In other words, we compare their losses after they converge, i.e., after $500$ iterations. Therefore, using the iteration as a metric serves the purpose of this comparison. ", " 1.\tIn this paper, the authors study two popular bilevel optimizers AID-BiO and ITD-BiO, whose implementations involve different choices of loops. The authors first establish unified convergence analyses that are applicable to all implementation choices of loops for both optimizers.\n2.\tThe authors specialize their results to characterize the computational complexities for all implementations, and they then provide an explicit comparison across different implementations.\n Strengths:\n1.\tThe authors first establish unified convergence analyses that are applicable to all implementation choices of loops for both AID-BiO and ITD-BiO.\n2.\tThe authors compare the computational complexities among all implementations of the optimizers. The numerical experiments further demonstrate their theoretical results.\n3.\tThe paper is easy to follow.\n\nWeaknesses:\n1.\tThe exact definitions of some important notations such as $\\Theta$ and $\\Omega$ are missing.\n2.\tThe authors consider a hyperparameter optimization problem on MNIST in the experiments on AID-BiO (in Line 315), while they consider another hyper-representation problem in the experiments on ITD-BiO (in Line 332). They may want to explain why they consider different problems for the two optimizers.\n3.\tIn this paper, the authors only report the losses v.s. running time and the losses v.s. the number of iterations in experiments on AID-BiO and ITD-BiO, respectively.
It would be more convincing if they report MV($\\epsilon$) (the total number of Jacobian- and Hessian-vector product computations), Gc($\\epsilon$) (the total number of gradient computations) or some more metrics in the experiments to support their theoretical results on the computational complexities.\n 1.\tWhat are the exact definitions of the notations $\\Theta$ and $\\Omega$?\n2.\tThe authors may want to provide references on the hypergradient in AID-BiO to help readers to understand the algorithms.\n3.\tThe authors consider a hyperparameter optimization problem on MNIST in the experiments on AID-BiO (in Line 315), while you consider another hyper-representation problem in the experiments on ITD-BiO (in Line 332). Why do you consider different problems for the two optimizers?\n Yes, the authors adequately addressed the limitations and potential negative societal impact of their work.", " This paper studies the fine-grained convergence of gradient-based algorithms for bilevel optimization problems. The focus is on establishing the unified convergence analysis for both AID-BiO and ITD-BiO and on the comparison between the rate of convergence under different numbers of steps in the inner and outer loops. Under reasonable assumptions, the paper shows that AID-BiO and ITD-BiO can benefit from double loops in a certain sense. **Comments - Strengths**\n\n(+) The unified convergence theory for AID-BiO is new, which captures all choices of the inner-level gradient steps N and the outer-level Hessian-inverse-vector steps Q. The lower bound on the nonconvex-strongly convex bilevel problem seems new. \n\n(+) The conclusion that bilevel optimization is in contrast to minimax optimization, where no-loop gradient descent ascent (GDA) with N = 1 often outperforms (N-loop) GDA, is interesting.\n\n**Comments - Weaknesses**\n\n*Major comments*\n\n(-) The major concern I have is that the majority of the theory in this submission is a summary of previous convergence results in [17]. Indeed, this paper and [17] focus on the same setting, the same algorithms, and the same convergence results but now with explicit dependence on the number of steps in the inner $N$ and outer loops $Q$. While this is a good point to discuss, the technical content of this paper seems to be incremental. Therefore, this paper is more like a supplement to [17].\n\n(-) The second major concern I have is that the theoretical comparison between the double-loop and the single-loop is in terms of the complexity upper bound. Note that the larger upper bound may not mean worse performance since the single-loop algorithm may be harder to analyze, leading to looser worst-case performance. For example, it would be interesting to compare the lower bound of the single-loop algorithm with the upper bound of the double-loop algorithms. \n\n(-) The third major concern is on the experiments. The experiments did not closely match the theoretical analysis. Since the separation between no-loop and double-loop is on $N={\\cal O}(1)$ or $N={\\cal O}(\\kappa)$, choosing $N, Q$ as $1, 20$ does not serve the purpose. In addition, it may be better to plot the dependence on $MV$ and $GC$ rather than the runtime or the number of iterations.\n\n*Minor comments*\n\n(-) A similar conclusion for AID-Bio (e.g., $N={\\cal O}(\\kappa)$ is better than $N={\\cal O}(1)$) has already been drawn in the stochastic bilevel algorithms; see ALSET (Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems).
The connection and difference need to be discussed. \n\n(-) While the paper did provide a lower bound for ITD-Bio, it seems quite loose compared with the upper bound of ITD-Bio. They differ not only in the dependence on $\\kappa$ but also in the dependence on $K$. \n\n(-) The main theorems (e.g., Theorems 1, 2, and 3) in the paper are presented in a very complicated way. They depend on many irrelevant constants such as $M, \\rho, r$, which make the main theorems difficult to interpret. \n See my comments in weakness. Yes, it addresses the limitations. ", " This paper provides a unified convergence rate analysis for two existing types of widely-used bilevel optimization algorithms via so-called approximate implicit differentiation (AID) and iterative differentiation (ITD), under all different choices of loop sizes. Some interesting theoretical findings are provided as well. In particular, for AID-based approaches, two major loops exist within the base loop, i.e., a loop of size N for approximating the lower-level solution and a loop of size Q for estimating the Hessian-inverse-vector (HIV) product in the hypergradient computation. In the comparison among different loop choices, they specialize their theories to several typical choices of interest in practice, including N = 1 or \\kappa and Q = 1 or \\kappa. Based on such comparisons, they show that the lower-level loop can improve the overall computational complexity w.r.t. both the matrix-vector and gradient computations, and the loop for HIV estimation can reduce the gradient complexity. For the ITD-based method, they show via upper and lower bounds that the loop size needs to be large for achieving a vanishing convergence error induced by the lower-level and HIV approximations. Some empirical results are also provided and seem to validate the theories well. This work is well written and the motivation is clear to me. Bilevel optimization has attracted significant attention recently from both the deep learning and optimization communities, where AID and ITD are two widely-used methods so far. Characterizing a unified convergence guarantee for both of these two types of methods is fundamentally important because different loop schemes have been used in practice, but only some of them have guarantees. This work has done a good job in providing a tight characterization and systematic study for all these cases, which is a good contribution. \n\nThe theoretical findings are interesting and provide useful insights for practical applications. By capturing different dependences on the condition number \\kappa, they demonstrate the necessity of lower-level optimization loops in both algorithms to improve the overall complexity, mainly in terms of the Hessian- and Jacobian-vector products. In particular, for the ITD-based method, the lower bound is a nice and new contribution to justify the fundamental challenge for the No-loop scheme in achieving a vanishing error. The theory seems to be well supported by the experiments. \n\nThe technical analysis introduces some new developments, which may be of interest to the bilevel optimization community. Existing works on the convergence rate of bilevel optimization mainly focus on the case when the HIV is solved at a good accuracy via large Q, but this paper allows small Q by showing that the coupled error is decreasing iteratively. This kind of characterization may be used in other settings with AID types of bilevel optimization.
\n The paper is limited to the deterministic setting, and I am wondering if the developed analysis can be further extended to the stochastic setting with data sampling. Can the authors have some comments or provide some guidance on this extension?\n\nIn AID-based methods, people sometimes use acceleration methods for both the lower-level and HIV approximations to achieve a better complexity. It would be great to discuss how the current analysis can be extended to such scenarios, and whether the corresponding comparison still holds? \n\nTheorems 1, 2 and 3 seem to involve complicated parameters and relations. It would be good to provide some proof outlines for readers to better understand the technical idea of this paper. \n the authors adequately addressed the limitations and potential negative societal impact of their work", " This paper provides a unified convergence theory to capture the computational differences among different implementations in bilevel optimization algorithms such as AID-BiO and ITD-BiO, with a focus on the different choices of inner and outer loops. By comparing different implementations, the paper draws the conclusion that, unlike in the minimax case, having double loops in bilevel optimization always has some provable benefit. [Strengths]\n* The paper is well written and the theoretical question studied in this paper is of practical interest.\n* The paper has done a good job of summarizing the theoretical results into two tables that have an explicit dependence on $\\kappa$ and $\\epsilon$. \n* The non-convergent result (both upper and lower bounds) of No-loop ITD-Bio is interesting, which suggests a gap between the performance of AID-Bio and ITD-Bio. \n\n[Weaknesses]\n* The lower-bound result in Theorem 4 is a bit loose. Not only because it has a large gap with the upper bound, but the worst-case example used in the proof also seems not carefully crafted. Does the lower bound also depend on $K$? Is there any possibility of improving the lower bound or upper bound? \n\n* The paper only focuses on the comparison of different *deterministic* bilevel optimization algorithms. Does the conclusion in this paper also hold in the *stochastic* setting? If not, it would be helpful to mention ``deterministic'' in the title or abstract. \n\n* It is not clear to me why the metric in the AID-Bio experiments is runtime, but that in the ITD-Bio experiments is iteration? Since the theory is mostly on the comparison of the number of iterations or MV and GC, comparing those metrics in both experiments will make more sense. \n * The lower-bound result in Theorem 4. See [Weaknesses]. It would be helpful to clarify. \n\n* Applicability of the results to the stochastic bilevel algorithms. See [Weaknesses]. It would be helpful to clarify. \n\n* In practice, stochastic bilevel optimization algorithms may be used more frequently. How does the conclusion in this paper compare to the conclusion in the recent work on stochastic bilevel optimization algorithms; for example, Theorem 1 and Proposition 2 in [Chen et al. 2021]. It would be helpful to clarify. \n\n Chen et al. \"Closing the gap: Tighter analysis of alternating stochastic gradient methods for bilevel problems.\" Advances in Neural Information Processing Systems, 2021.\n\n* The performance metrics in the experiments. See [Weaknesses]. It would be helpful to clarify. \n\n* In the AID-Bio, is it possible to provide a lower bound similar to ITD-Bio?
\n\n* The discussion on the setting with small response Jacobian (line 304) is not clear. What conclusion do the authors want to draw from this discussion? Is this consistent with the main results? The limitations and potential negative societal impact of their work have been properly discussed. " ]
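Editorial note for readers following the AID-BiO exchange above: a minimal sketch of the standard quantities under discussion, in generic bilevel notation. The specific equation numbers, constants, and stepsizes cited in the responses are the submission's own and are not reproduced here; this sketch only assumes a lower-level objective $g(x,\cdot)$ that is strongly convex in $y$.

```latex
% Bilevel problem: \min_x \Phi(x) = f(x, y^*(x)), with y^*(x) = \arg\min_y g(x, y).
% Hypergradient via the implicit function theorem:
\nabla \Phi(x) = \nabla_x f(x, y^*) - \nabla_x \nabla_y g(x, y^*)\, v^*,
\qquad v^* = \big[\nabla_y^2 g(x, y^*)\big]^{-1} \nabla_y f(x, y^*).
% AID approximates v^* with Q gradient steps on the quadratic
% \tfrac{1}{2} v^\top \nabla_y^2 g\, v - v^\top \nabla_y f :
v^{q+1} = v^{q} - \eta \big( \nabla_y^2 g(x, y^N)\, v^{q} - \nabla_y f(x, y^N) \big),
\quad q = 0, \dots, Q-1.
```

The $Q$-loop error debated above is the gap between $v^{Q}$ and $v^*$, and the $N$-loop error is the gap between the inner iterate $y^N$ and $y^*(x)$.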
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 5 ]
[ "nips_2022_4lw1XqPvLzT", "Z14wm-tyVO0", "HlY7w-_oVsW", "hMp_5ISSXZ8", "9Z1WMQ-k08Bq", "mnDaV64Qv-f", "HlY7w-_oVsW", "HlY7w-_oVsW", "HlY7w-_oVsW", "HlY7w-_oVsW", "syBQsRgfTjB", "ujHGtMk21IS", "mRmOUHVWT9_", "nips_2022_4lw1XqPvLzT", "nips_2022_4lw1XqPvLzT", "nips_2022_4lw1XqPvLzT", "nips_2022_4lw1XqPvLzT" ]
nips_2022_G4VOQPYxBsI
Algorithms that Approximate Data Removal: New Results and Limitations
We study the problem of deleting user data from machine learning models trained using empirical risk minimization (ERM). Our focus is on learning algorithms which return the empirical risk minimizer and approximate unlearning algorithms that comply with deletion requests that come in an online manner. Leveraging the infinitesimal jackknife, we develop an online unlearning algorithm that is both computationally and memory efficient. Unlike prior memory-efficient unlearning algorithms, we target ERM-trained models that minimize objectives with non-smooth regularizers, such as the commonly used $\ell_1$, elastic net, or nuclear norm penalties. We also provide generalization, deletion capacity, and unlearning guarantees that are consistent with state-of-the-art methods. Across a variety of benchmark datasets, our algorithm empirically improves upon the runtime of prior methods while maintaining the same memory requirements and test accuracy. Finally, we open a new direction of inquiry by proving that all approximate unlearning algorithms introduced so far fail to unlearn in problem settings where common hyperparameter tuning methods, such as cross-validation, have been used to select models.
Accept
Most of the reviewers agree that this paper is well written and provides a notable improvement over prior works on algorithms for data deletion. Some initial concerns regarding the proper motivation for the problem setting have been largely addressed.
train
[ "hgJ2qr4ArkX", "sDwLHN4J2A", "kMm0yuCHAp", "1ZlaaQH2tcR", "hfELfj2hR6W", "oJ9A6PKyB6X", "ogFRO7lcNSh", "Z9SO0ejEQQ8", "ZMfRt3PEL2i", "IhW1bi7qGJi", "zdN1NGChHmC", "y3M4bR0kRVd", "_GuFkNdzjPx2", "le39Jajh1Pi", "6gPRjpakWh-", "SKIq5VqZ3fI", "J1ylj7DYNWd" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for raising their score but we remain slightly confused. \n\n- The reviewer wanted us to add plots about the hyper-parameters and we have done so. What correctness is there to check? Shouldn't it be as easy as checking that we have done so? \n\n- If the reviewer hasn't checked correctness and are changing their score just because others have done so, should their confidence in their assessment be so high? \n\n- What outstanding concerns does the reviewer have? It seems the reviewer was confused about some key aspects of our work (our use of just one Hessian and how the prox operator bypasses the need for smoothness of the regularizer). Are those still concerns? \n\nWe would really like to engage with this reviewer more to understand their assessments, but nevertheless thank them for changing their score to reflect slightly more positively on the paper.", " I would like to thank the authors' detailed response for addressing my concerns, but I have no time to check the correctness.\n\nSince all other reviewers have positive comments, I raise my score to 5.\n", " Hi, we wanted to let you know that the paper has been updated and would like to know before the discussion period closes if we have addressed your concerns with our paper revisions and responses above.", " I would like to thank the authors for improving upon the motivations of their work. I now find the utility of their algorithm justified. For the convex setting, I also find their contributions to be significant and sound. The adoption of infinitesimal jacknife and prox operator makes solid contribution upon previous work. The hyperparameter issue is the highlight of the paper as it introduces a new perspective towards this line of work.\n\nThe scope of the problem (i.e. only convex loss functions) is the only reason why I’m not giving a 7. I would also like to encourage the authors to study non-convex settings in the future as that is potentially more impactful for the task of data deletion.", " We appreciate the reviewers willingness to adjust their score and we have just updated figures 1, 2, and 3 and (lines 235-238) with our remarks about GDPR compliance.\n\nWe still take slight issue with the characterization of our paper as performing “data removal on only convex models with online applications to medical domain.” We realize this impression as our sole contribution may have come from our paper’s title, where we have chosen to emphasize the online aspect of our algorithm. Regardless of acceptance, we plan to change the paper’s title to “Algorithms that approximate data removal: new results and limitations\" so that all three contributions will be on equal footing. It is challenging to surface published convex ERM models being used in industrial applications because most of these are proprietary — however having worked and collaborated in these settings, we can attest that not all industrial applications which use user data and are under the purview of GDPR are non-convex ERM problems and our methods will be useful in these cases. 
To remind the reviewer of all of our contributions succinctly, they are:\n\n- An online/less expensive batch algorithm for data removal in convex ERM problems\n- An algorithm for non-smooth convex ERM problems (which current approximate data removal algorithms cannot handle)\n- A counter-example which implicates almost all established unlearning algorithms when hyper-parameter tuning has taken place\n\nWe feel these contributions are more than enough to warrant acceptance at NeurIPS.\n\nThanks!", " Can the authors provide a pointer to the captions you mentioned above? All of the legislation the authors mention in the introduction has a period of removal.\n\nI apologize for not noticing the changes in the new draft earlier. I am not saying it must involve industrial-scale models. And I apologize for misspeaking earlier that the authors only study linear models. However, data removal on only convex models with online applications to the medical domain seems limited compared to the entire space of applications needing data removal. And if the non-convex data deletion problem does not get studied in an online fashion, I don’t see these regulations shifting to require online deletion in the near future. That being said, I do find the authors’ algorithm useful in some real applications and will raise my score once I see the GDPR comment properly included in the text/caption.", " Can the reviewer elaborate more on what we are \"overclaiming?\" Our work is in keeping with a line of work making the same claims with similar motivation. We are struggling to understand this reviewer's remarks given the following.\n\n1. As we have repeatedly clarified, **our paper does not study just linear models.** Our paper encompasses a large number of convex optimization models, unlike prior work. This means logistic regression, binary cross entropy, negative log likelihood, most generalized linear models, etc. We make what models our algorithm applies to clear at the beginning of the paper.\n\nEven if you find the Airbnb example to be old, we have provided several examples in healthcare and data pricing that are now included in the current draft. We feel there is no need for resubmission as our work is consistent with a long line of work that studies low-memory unlearning algorithms for convex ERM problems published at this same venue (and other top ML venues) with similar motivation. The problem of data removal is well motivated even in the convex setting! The reviewer seems to have the false impression that legislation and guidelines designed around the right to be forgotten should (or do) apply only to industrial-sized models (which appears to allude to neural networks); but this is not the only applicable use case for unlearning algorithms. Can the reviewer provide citations that this is the case? The right to be forgotten concerns models that use user data, and this includes several guidelines designed around datasets in healthcare, psychology, and information platforms that use convex models to train. We, and many philosophers, believe it should be applicable to most settings where user data is applied. \n\nWe also want to mention that our algorithms outperform when trained on high-dimensional datasets (not large n) as the savings are in d, which is not the canonical setting of neural networks. Please have a look at our updated draft as well as several of the other works in this space.\n\n2. There also seems to be a fundamental misunderstanding regarding our experiments.
As both examples are equivalent in both the batch and the online settings, the experiments shown are for the batch setting as well and therefore show the potential speedups after x months of data removal as well as the accuracy. Our experiments reflect settings in which current data removal mechanisms cannot be applied since they require smoothness. We have adjusted our captions to note this fact when it comes specifically to GDPR, but again, the right to be forgotten is not equivalent to GDPR and influences a lot more guidelines and policies.\n\nWe urge the reviewer to reconsider their position as our work includes several key contributions and is well motivated as it is in keeping with a growing body of work in this area. At the very least, can the reviewer explain why motivation for just industrial-sized models is necessary? We think this requirement from the reviewer is centered around their misunderstanding (or limited view) of the right to be forgotten and when it should be applied.", " I would like to thank the authors for the justifications towards the application of their methods. Unfortunately, in its current form, I find the authors to overclaim or to miss some key aspects of the paper. The paper seems to need significant changes.\n1. The motivating examples of tech industry regulations do not seem to justify the study of linear models. Although the authors provided a reference about Airbnb models, it is rather old and hard to believe this is still being used as their production model. I do find the medical applications interesting and I would encourage the authors to elaborate on such motivation perspectives for future versions.\n2. Although the algorithm is more efficient in the batched setting in the long run, no empirical results have been shown that the algorithm will outperform. In that setting, I would expect a much smaller gap between the authors' algorithm and the baseline algorithm.\n\nI strongly encourage the authors to submit again in the future with the proper motivating examples and precise use cases.\n", " We address a few more of the reviewer's specific points made about limitations of our work:\n\n- The reviewer writes that \"the paper has not addressed the limitations of their work. It states that the proposed method fails to run with hyperparameter tuning, which is a common issue existing in unlearning algorithms.\" \n\nWe would like to emphasize the three main limitations of our work: (1) our technique does not provide guarantees of unlearning for most forms of non-convex optimization; (2) the guarantees do not extend to hyperparameter tuning (which is a limitation of all existing unlearning algorithms); and (3) our algorithm applies to non-smooth regularizers but not non-smooth objective functions. We have added text to the paper making these limitations much clearer. \n\nIn particular, for (1) we have highlighted several potential use cases of our unlearning algorithm for industrial convex ERM problems (see comment for reviewer EwhA), but extending the influence function to work in nonconvex settings would have implications beyond privacy (as it is currently used for issues of explainability, robustness, and fairness) and would therefore be a significant step forward. For (2) we have noted that our result about hyperparameter tuning is not just a limitation of our algorithm, but of almost all unlearning algorithms that have been introduced so far.
Again, this is an important limitation that we are actively pursuing, but we also view pointing out this limitation as a contribution since it applies to more than just our algorithm. For (3) we view extensions of the influence function (aka infinitesimal jackknife) to non-smooth loss functions (and not just non-smooth regularizers) as an important direction and are currently pursuing this as well.\n\n- The reviewer writes that \"the paper mainly enables the unlearning algorithm to be applied in non-smooth regularized models, but does not explain it specifically.\" \n\nWhile we have added comments explaining how the non-smooth application through use of the proximal operator works, we note that our algorithm can be applied to smooth problems as well. In fact, our algorithm is more efficient than previously introduced methods for smooth problems after only two months of complying with GDPR (see response to reviewer EwhA).\n\n- The reviewer writes that the paper is \"lacking the comparison of runtime in different hyperparameter settings.\"\n\nWe hope we were able to address this concern with experiments we ran that are currently in the revision of our paper (see appendix E.1). The hyperparameter sweep makes no difference for the relative performance of the algorithms. \n\n", " **Comments on the usefulness of the online setting** While we believe we address several of these concerns in our summary response, we will answer each question specifically here. \n\n**The authors argue that the online setting is more practical. However, taking GDPR for example, the companies are obligated to delete user data only within a month instead of immediately. As a result, I find the batched setting to be more realistic where retraining could also be competitive:**\n\nWe appreciate the reviewer's comment on the need for an online data deletion algorithm, since the current format of GDPR allows deletion requests to be fulfilled up to one month after they are made. We note that even in the batch setting, our algorithm is less computationally intensive than previous methods after two months of complying with requests since it just needs to invert and store the Hessian once instead of every month, and therefore might *still* be preferable/useful to companies complying with monthly requests.\nAlso, while the current form of GDPR legislation allows quite a bit of time for companies to comply, we believe that showing a tool can provide the same empirical performance and theoretical guarantees with immediate deletion is beneficial toward encouraging companies to comply with GDPR requests more swiftly (and might encourage lawmakers to necessitate faster compliance). When individuals request to delete their data, it is sometimes because they are concerned about the risk of potential harm if their data remains available; this risk is compounded the longer it takes for the data to be deleted. By showing it can be done quickly, companies may be encouraged to act more expeditiously and less harm might occur. \n\n**As a result of the above comment, is it fair to compare IJ against RT and TA in the online setting? I believe it is more reasonable to compare against the methods for removing an entire batch.**\n\nWe prove in Appendix A.2 that the online setting of the IJ can be easily extended to a batch version. In the batch setting our method is still computationally more efficient than TA at the point where there is more than one batch request. After two months of complying with GDPR, our method will be more efficient than other low-memory methods.
Indeed, the relative efficiency of our method compounds the more batch delete requests there are. This makes our algorithm ideal for, but not restricted to, a streaming setting (this is why we chose to highlight this setting in our experiments). Furthermore, TA was not originally able to support non-smooth regularizers. Our use of the proximal operator and an equivalence we show in Remark 1 are what enable TA to now support non-smooth regularizers and perform these experiments. \n\n**More intuition**\n**The proposed Algorithm 1 would benefit from more detailed description and intuition:**\n\nWe apologize for the lack of intuition given and will provide the following intuition in the main text:\nIn equation 5a (i.e., the smooth regularizer variant), this algorithm is an IJ estimate of the leave-one-out model with some additional noise. The amount of noise added to this approximation (determined by $c$) is dictated by the unlearning guarantees targeted by the company and properties of the function. The variant designed for the non-smooth case simply introduces the proximal function to the IJ estimate to handle the non-existence of the Hessian of the regularizer. \n", " **Comments on motivation of unlearning in the convex setting**\n\n**The authors use data regulations for industrial sized models as motivation of the paper. However, most industrial models of the scale are deep neural networks. The study of only linear models seems ill-suited:**\nWe appreciate the reviewer's concern about the applicability of our method to industrial-sized models, specifically, neural networks. First, we point the author to several examples of industrial applications where convex (smooth and non-smooth) ERM is used. In healthcare, logistic regression models are used across the entire field for risk estimation [1]. In consumer apps such as Airbnb, convex ERM is used for optimizing pricing [2]. Our algorithm is applicable for a broad range of models generated from convex optimization problems, not just linear models.\n\nAs a practical matter the IJ can be applied to neural networks, without guarantees of unlearning except in some special settings. We remark on such a setting (line 144 in our paper) where we note it can be applied directly to non-convex models such as neural networks. The latter case was studied in [3], which also provided similar unlearning guarantees for their algorithm in this non-convex example. \n\n**Could you give a concrete example where online data deletion is necessary?** In healthcare, online data deletion is not only desirable but a requirement of some studies. For example, the UK Biobank (which is one of the largest longitudinal health data studies, with over 500,000 individuals in the UK) requires data deletion as soon as a participant requests that their data be removed [4]. In this setting, our algorithm (and, more broadly, online data deletion) is necessary to comply with the study regulations. Importantly, many of the models trained using UK Biobank data are *not* neural networks and fall within the scope of our unlearning guarantees.\n\nWe will provide these and further examples of where our algorithm can be readily applied in the revision. \n\n[1] C. Rudin and B. Ustun. Optimized scoring systems: Toward trust in machine learning for healthcare and criminal justice. Interfaces, 48(5):449–466, 2018.\n\n[2] P. Ye, J. Qian, J. Chen, C.-h. Wu, Y. Zhou, S. De Mars, F. Yang, and L. Zhang. Customized regression model for airbnb dynamic pricing.
In Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining, pages 932–940, 2018.\n\n[3] C. Guo, T. Goldstein, A. Hannun, and L. Van Der Maaten. Certified data removal from machine learning models. In International Conference on Machine Learning, pages 3832–3842. PMLR, 2020.\n\n[4] A. Ginart, M. Guan, G. Valiant, and J. Y. Zou. Making ai forget you: Data deletion in machine learning. Advances in Neural Information Processing Systems, 32, 2019.", " We thank you for your reviews and the helpful comments provided. Below we provide responses to comments in the review:\n\n**How can the proposed method be effective in non-smooth regularized models?**\nOur proposed method is effective on non-smooth regularized models because we use the proximal operator, which is commonly used to optimize non-smooth ERM problems. We rely on software packages such as glmnet and QUIC, which provide efficient computation of the proximal operator for $\\ell_1$ penalties, to show our method is very efficient. Notably, the proximal operator effectively optimizes the regularizer within a small domain close to the current iterate, which can be done efficiently for simple penalties. This is what allows us to compensate for the lack of a second derivative for the non-smooth penalty, which hopefully explains why the method works in non-smooth regularized models (a schematic sketch follows this response). \n \nOur algorithm does indeed work in this setting and we apologize for the extent to which the reasons why were not made clear. Our paper provides both theoretical guarantees and experimental evidence that our method is effective for non-smooth regularized models in Theorems 1 and 2 and Figure 2, respectively. Our work is the first to our knowledge to develop an algorithm which supports low-memory unlearning of non-smooth regularized models. Even many of the memory-intensive unlearning algorithms require smoothness [2]. Furthermore, our work is able to extend related work such as [1] to support non-smooth regularized models (Remark 1).\n\n**The proposed method only inverts the Hessian once, but how can it ensure that the total error is tolerable:**\nOur error guarantee is statistically the same as other methods. This is due to the fact that the leave-one-out Hessian and full Hessian are sufficiently close (in particular $O(1/n)$ close). This amount of closeness is enough to ensure that the model generated from our unlearning algorithm and the exactly unlearned model are $O(1/n^2)$ close. Adding noise on the order of $O(1/n^2)$ is what ensures the unlearning guarantee. The same kind of approximate unlearning argument is what is used in previous works. The statement of our unlearning guarantee for our IJ unlearning algorithm is in Theorem 1 and its proof is contained in Appendix A.3. We will provide additional text in the revision to make sure the structure of this argument is clear. \n\n**Comments about implementation**\n**Supplementary lacks the comparison of runtime in different hyperparameter settings:**\nWe did not originally include the runtime metrics for the different hyperparameter settings as they did not affect the relative difference in runtimes.
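To make the intuition above concrete (one stored Hessian inverse, added noise, and a proximal step for non-smooth regularizers), here is a minimal sketch assuming a smooth convex per-example loss. The function names, the noise calibration, and the exact prox placement are illustrative assumptions, not the paper's verbatim Algorithm 1.

```python
import numpy as np

def soft_threshold(v, tau):
    # prox of tau * ||.||_1: the closed-form proximal map for l1 penalties.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ij_unlearn_step(w, H_inv, grad_fn, z_del, n, noise_std, lam=None):
    """One hedged infinitesimal-jackknife deletion step.

    w         : current parameters, shape (d,)
    H_inv     : inverse Hessian of the full-data objective, computed once, shape (d, d)
    grad_fn   : per-example gradient of the smooth loss, grad_fn(w, z) -> (d,)
    z_del     : the training example whose influence is being removed
    n         : number of training points the model was fit on
    noise_std : Gaussian noise scale masking the O(1/n^2) approximation gap
    lam       : if not None, apply the l1 prox with threshold lam (non-smooth variant)
    """
    # First-order (IJ / influence-function) correction toward the leave-one-out ERM.
    w_new = w + H_inv @ grad_fn(w, z_del) / n
    if lam is not None:
        # Non-smooth variant: the prox stands in for the regularizer's missing Hessian.
        w_new = soft_threshold(w_new, lam)
    # Calibrated noise is what yields the approximate-unlearning guarantee.
    return w_new + noise_std * np.random.randn(*w.shape)
```

The point the response emphasizes is that `H_inv` is formed and stored once, so each deletion costs a matrix-vector product, $O(d^2)$, rather than a fresh $O(d^3)$ inversion.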
For completeness, we have included them in an updated version of the Appendix (please check our current revision) to confirm that the runtime remains the same across hyperparameters.\n\n**The candidates of the hyperparameter $\\lambda$ may be $\\{10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}\\}$ rather than $\\{1^{-3}, 1^{-4}, 1^{-5}, 1^{-6}\\}$:**\nThank you for pointing out this mistake; we have now fixed it in the main text. \n\n[1] A. Sekhari, J. Acharya, G. Kamath, and A. T. Suresh. Remember what you want to forget: Algorithms for machine unlearning. Advances in Neural Information Processing Systems, 34, 2021.\n\n[2] S. Neel, A. Roth, and S. Sharifi-Malvajerdi. Descent-to-delete: Gradient-based methods for machine unlearning. In Algorithmic Learning Theory, pages 931–962. PMLR, 2021.", " We thank you for your reviews and helpful comments on our paper and appreciate the important impact of our results. Below we provide responses to concerns:\n\n**While being interesting, I feel the jackknife, proximal updates, etc. are all well-established theoretical tools and there is not much that is fundamentally new:**\nWe appreciate that the reviewer found our application of the jackknife to be interesting. While these tools are well established in theory, our work is the first to demonstrate the applicability of the jackknife to privacy concerns such as unlearning. More broadly, our work is in keeping with a line of research on different ways the IJ can be used for issues of societal concern. For example, *notable* (both papers won best paper awards) applications of the IJ (aka influence function) have focused on robustness [1] and explainability [2] concerns, and more recently, the jackknife was introduced as a way to ensure fairness [3].\n\n**Using the wrong definitions:** In our experiments, we report ERM. We thank the reviewer for pointing out this mismatch. Our algorithm satisfies the definition of unlearning from [4], which we also prove in the Appendix. We have updated the paper to state that we focus on this definition so that our proofs and experiments are aligned.\n\n**Suppose we restrict ourselves to generalized linear models...What are the benefits of your algorithm in that case:** In this situation there will be little theoretical and practical difference between the Taylor approximation and the IJ approximation for unlearning datapoints. Our algorithm, however, will be beneficial for non-smooth GLM problems, which previous unlearning algorithms are not designed to handle. We demonstrate the theoretical benefits in Theorem 1 and the empirical runtime benefits in Figure 2. \n\n**Do you have any suggestions for mitigating the reported issue in Section 5 for hyperparameter-tuned models?** We are currently working on this. We think it might be possible to augment the memory requirements of our unlearning algorithm or add more noise to the estimator to preserve the unlearning guarantees for specific types of hyperparameter tuning, but we are still in the beginning stages of proving such adjustments could work.\n\n[1] R. Giordano, W. Stephenson, R. Liu, M. Jordan, and T. Broderick. A swiss army infinitesimal jackknife. In The 22nd International Conference on Artificial Intelligence and Statistics, pages 1139–1147. PMLR, 2019.\n\n[2] P. W. Koh and P. Liang. Understanding black-box predictions via influence functions. In International Conference on Machine Learning, pages 1885–1894. PMLR, 2017.\n\n[3] E. Black and M. Fredrikson. Leave-one-out unfairness.
In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency, pages 285–295, 2021.\n\n[4] A. Sekhari, J. Acharya, G. Kamath, and A. T. Suresh. Remember what you want to forget: Algorithms for machine unlearning. Advances in Neural Information Processing Systems, 34, 2021.", " In this general comment, we would like to first thank the reviewers for taking the time to review the submission and for the comments provided. Next, we would like to reiterate the contributions of this submission.\n\n Our work is in keeping with a line of research on different ways the infinitesimal jackknife (also called the influence function) can be used for issues of societal concern, including robustness, explainability, and fairness. To our knowledge, we are the first to introduce it in the context of privacy relating to data removal. For all prior works including ours, the IJ can be applied to non-convex settings but guarantees are virtually non-existent. Providing guarantees in a general non-convex setting would be a significant step forward for all these proposed applications but is outside the scope of our work. Our work notably provides guarantees for a broad range of models generated from both smooth and non-smooth convex ERM problems (e.g., logistic regression, hinge loss, cross-entropy loss, etc.) as well as specialized non-convex models (see line 144) as was done in previous works. In light of the feedback given, we elaborate on our three contributions:\n\n* **An online algorithm:** while the current form of GDPR allows up to a month for deletion so that companies have time to comply, showing it can be done efficiently and quickly could incentivize a narrower window for compliance. This, in turn, could mitigate the harm done from potential privacy attacks even further than current legislation. Notably, even in the batch setting our algorithm will be more efficient than all low-memory unlearning algorithms in the literature so far after only two months of complying with requests since it requires inverting a Hessian once and storing it while other methods will require inverting a Hessian monthly. This, we believe, would more than likely make the IJ approximation the standard technique used for compliance with data removal requests (at least when compared with the presently introduced techniques).\n* **Application to non-smooth models:** we provide theoretical guarantees of data deletion and experimental results for models with non-smooth regularizers such as the $\\ell_1$ penalty. To the best of our knowledge, no current approximate unlearning algorithm and very few memory-intensive unlearning algorithms can be applied in this popular setting. Our technique for handling non-smoothness can be applied to the previously introduced Taylor approximation (TA) and we provide theoretical guarantees for this.\n* **Counterexample:** Hyper-parameter tuning is common in most model generation pipelines. Our work is the first to point out that unlearning via all techniques presented so far does not work when this procedure takes place. While this is the last part of the paper, we feel this contribution alone is a noteworthy one for the community as it could likely prevent unintended harm.\n\nWe will make all these contributions clearer in the revision. Finally, there were additional concerns regarding motivation of our algorithm, clarity about why the proximal IJ works, and experimental details.
We give more detailed responses to each reviewer's concerns below.", " The paper provides an algorithm for online data deletion. They consider unlearning with convex / strongly convex loss functions (with additional assumptions standard in the literature). The key improvement in the algorithm over prior works in [15, 29] is to provide an update step that does not require recomputing a new Hessian inverse for every new deletion request (which they call the infinitesimal jackknife (IJ)). Thus, they improve the running time from O(md^3) to O(md^2), where \\(m\\) is the number of deletions that can be handled. The authors provide extensions of this update step when the underlying regularization is not smooth (but the objective function is still smooth). This is obtained using a proximal update. Finally, the authors show an interesting limitation of unlearning for ERM-based learning procedures. Strengths: The topic of machine unlearning is new and upcoming. The paper addresses the computational side of the problem in the online deletion setting, which is an important problem. The paper is well written! \n\nWeaknesses: \n1. I feel the key technical improvement from prior works is to come up with the update in 5a, in which H_l^{-1} does not need to be updated after every deletion request. While being interesting, I feel the jackknife, proximal updates, etc. are all well-established theoretical tools and there is not much that is fundamentally new, which is why I am not giving this paper a strong accept. \n\n\n2. I am a bit confused by the rationale to choose Definition 1 from [15] (instead of the similar definition from [29]). In order for this definition to hold, one needs to randomize the output of the learning algorithm (which would decrease performance). In the corresponding definition in [29], one does not need the learning algorithm to be randomized. Furthermore, in your experiments where you report RT, is this ERM + noise or just ERM? (I feel there is a mismatch between the two sections). \n\n3. Suppose we restrict ourselves to generalized linear models? In this case, the Hessian is the sum of rank-1 matrices (one corresponding to each data point), and online updating the Hessian as in [15, 29] is easy to do. What are the benefits of your algorithm in that case? \n\n4. Do you have any suggestions for mitigating the reported issue in Section 5 for hyperparameter-tuned models? \n Refer to the above comments. I do not see any negative societal consequences. In fact, the paper is trying to develop solutions for the important issue of the right to be forgotten. ", " The paper proposes an online approximate unlearning algorithm that is efficient in computation and memory. To begin with, the paper uses the infinitesimal jackknife to enable the proposed algorithm to delete data from models trained with non-smooth regularization. Furthermore, the paper theoretically proves the generalization and deletion capacity of the proposed method. Finally, extensive experimental results demonstrate that the proposed method can outperform retraining the model and the second-order Taylor approximation [1]. \nFrom my point of view, this paper has several significant drawbacks and might be improved by addressing the following issues:\n1. It’s not clear how the proposed method can be effective in non-smooth regularized models. \n2. The proposed method only inverts the Hessian once, but how can it ensure the total error is tolerable? \n3. The supplementary lacks the comparison of runtime in different hyperparameter settings. \n4.
The candidates of the hyperparameter \\lambda may be {10^{-3}, 10^{-4}, 10^{-5}, 10^{-6}} rather than {1^{-3}, 1^{-4}, 1^{-5}, 1^{-6}}. \n\n[1] Ayush Sekhari, Jayadev Acharya, Gautam Kamath, and Ananda Theertha Suresh. Remember what you want to forget: Algorithms for machine unlearning. Advances in Neural Information Processing Systems, 34, 2021.\n Strengths: \n1. The paper provides sufficient theoretical proof of the proposed method. \n2. According to the experimental results, the proposed method performs much better than the previously proposed unlearning algorithms. \nWeaknesses: \n1. The paper mainly enables the unlearning algorithm to be applied in non-smooth regularized models, but does not explain this specifically. \n2. Lacking the comparison of runtime in different hyperparameter settings.\n 1. How can the proposed method be effective in non-smooth regularized models? The paper has not addressed the limitations of their work. It states that the proposed method fails to run with hyperparameter tuning, which is a common issue in unlearning algorithms.", " In this paper, the authors study the data deletion problem under the online setting. For linear models, the authors propose a second-order online data deletion algorithm that enjoys better theoretical guarantees than existing methods. The algorithm can also be extended to non-smooth regularization objectives through a proximal gradient variant. Empirical studies have also shown the proposed algorithm to significantly improve over the time complexity of existing baselines while sacrificing some test accuracy. In the end, the authors propose a simple cross-validation setting where all existing algorithms fail to have any theoretical guarantees. Overall, I find the paper to exhibit reasonable improvement over existing work. I also find the discussion in the end of the paper regarding model tuning to be valuable and a highlight of the paper.\n\nOn the other hand, I find the setting motivation to be relatively unsupported:\n1. The authors use data regulations for industrial-sized models as motivation of the paper. However, most industrial models of that scale are deep neural networks. The study of only linear models seems ill-suited.\n2. The authors argue that the online setting is more practical. However, taking GDPR for example, the companies are obligated to delete user data only within a month instead of immediately. As a result, I find the batched setting to be more realistic, where retraining could also be competitive.\n3. As a result of the above comment, is it fair to compare IJ against RT and TA in the online setting? I believe it is more reasonable to compare against the methods for removing an entire batch.\n4. The proposed Algorithm 1 would benefit from more detailed description and intuition. 1. Could you give a concrete example where online data deletion is necessary?\n2. Are there motivating examples of linear models of scale significant enough that retraining underperforms the proposed data deletion algorithm? Or alternatively, do the authors find the algorithm easily generalizable to neural networks?\n I think the limitation of hyper-parameter tuning is very interesting.\n\nOn the other hand, please see the above questions regarding limitations of use cases of the proposed algorithm." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 2 ]
[ "sDwLHN4J2A", "kMm0yuCHAp", "ZMfRt3PEL2i", "hfELfj2hR6W", "oJ9A6PKyB6X", "ogFRO7lcNSh", "Z9SO0ejEQQ8", "IhW1bi7qGJi", "y3M4bR0kRVd", "zdN1NGChHmC", "J1ylj7DYNWd", "SKIq5VqZ3fI", "6gPRjpakWh-", "nips_2022_G4VOQPYxBsI", "nips_2022_G4VOQPYxBsI", "nips_2022_G4VOQPYxBsI", "nips_2022_G4VOQPYxBsI" ]
nips_2022_Epk1RQUpOj0
Online Minimax Multiobjective Optimization: Multicalibeating and Other Applications
We introduce a simple but general online learning framework in which a learner plays against an adversary in a vector-valued game that changes every round. Even though the learner's objective is not convex-concave (and so the minimax theorem does not apply), we give a simple algorithm that can compete with the setting in which the adversary must announce their action first, with optimally diminishing regret. We demonstrate the power of our framework by using it to (re)derive optimal bounds and efficient algorithms across a variety of domains, ranging from multicalibration to a large set of no-regret algorithms, to a variant of Blackwell's approachability theorem for polytopes with fast convergence rates. As a new application, we show how to ``(multi)calibeat'' an arbitrary collection of forecasters --- achieving an exponentially improved dependence on the number of models we are competing against, compared to prior work.
Accept
There is general agreement that this paper should be accepted.
train
[ "5j8F_Ty1j4J", "wpggp_zGXT", "KWanmtmTkRW", "WC0Xl9A64ZJ7", "XE5TjJTyXb3", "qBeMzf07gxN", "oJnL6CkRtzI", "HbgvZ59lU_", "0oirysySE9G" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank authors for their detailed explanation. I tend to keep my score.", " Thank you!", " Thanks for the response. This is a good paper, and I will continue supporting acceptance. Congrats for the good work! ", " Thank you for your review! We agree with you that the manuscript is long. However, in our view, showing that so many different problems can all be easily solved within the same simple framework is one of the most interesting parts of our contribution, and so we think it is important to keep all of the applications within the same paper. In our opinion, splitting this paper up by application would obfuscate this message. We are, however, reorganizing it (as suggested by Reviewer 2) to have simpler applications first so as to make the paper smoother to read. \n\nEspecially given its length, we don't think this paper needs experimental evaluation. The main contribution of our paper is not primarily any of the particular algorithms (many of which existed in one form or another already), but rather their common derivation. ", " \nThank you very much for your detailed and insightful review! Your suggestions for future directions and connections to the literature are very interesting, and we will think carefully about them. To address your specific points:\n\n1. We agree with your comments on the technical and aesthetic connections to EW, and will include additional discussion in our next revision.\n\n2. We agree it would be interesting to improve the constant to the minimax optimal one, and will think about ways to do so. It is worth mentioning that we stated the constant as 4 for the sake of simplicity, even though it can easily be further reduced (to at least below 3) with basically the same technique.\n\n3. We fully agree with this interpretation --- one could view the method we propose as a greedy strategy for breaking up a global Rakhlin et al. style DP problem (min max min max ... over all rounds) into local computations (min max in each round). \n\n4. This is an intriguing connection, thank you for the pointers to these papers. We will give it further thought and add a pointer to these papers and this future work direction in the revision.\n\n5. Your suggestion for improving the exposition by moving the simpler applications up front is also well taken, and we plan to do this as we revise the paper. The reason we did not do this in the submission is that we wanted to have space to present some of the more novel applications, but we are in complete agreement with you about the ordering of the results best for exposition. \n\n6. Thank you for spotting these typos, we will fix them. \n", " Thank you for your review! Algorithms for multicalibration existed prior to our work, but were derived using a specialized analysis. The main contribution of our paper is to derive --- among other things --- multicalibration algorithms as a simple application of a new common framework that can also be used to recover a large variety of other algorithms (e.g., many families of no-regret algorithms, and fast Blackwell approachability methods). \n\nIn addition to our new framework, our application to \"multicalibeating\" is new. Prior work on calibeating (by Foster and Hart) used different techniques and had error terms that depended polynomially on the number of models to be \"calibeat\". Our results give an exponentially improved dependence --- our algorithms have error terms depending only logarithmically on the number of models to be ``calibeaten''. 
Our algorithms are also the first that can simultaneously ``calibeat'' many models while being multicalibrated, which we call ``multicalibeating.''", " This paper considers an online adversarial multiobjective minimax optimization scenario, where at each round t, the adversary chooses an environment defined by a convex compact action set $\\mathcal{X}^t$ for the learner and a convex compact action set $\\mathcal{Y}^t$ for the adversary; then the learner chooses an action $x_t$ and the adversary chooses an action $y_t$. The d-dimensional loss function is defined such that each of its coordinates is convex in x and concave in y. We use the maximum of these coordinates as the final loss, i.e., $\\max_{j\\in [d]}\\ell_j^t(x_t,y_t)$. The authors propose an online exponential multiplicative algorithm to solve this minimax problem and show that it achieves a sub-linear regret bound, i.e., $\\sqrt{T}$. They further show direct applications of their algorithm: multicalibration and multicalibeating.\n This paper considers a novel online learning scenario, where the objective is the maximum coordinate of a vector loss, and borrows ideas from bandit learning to solve it. The proof looks good to me. They also give applications of their framework in the fairness field. However, I am not familiar with fairness, including multicalibration and multicalibeating, so I cannot judge the value of their application. I think there are some previous works on multicalibration and multicalibeating, so can the authors compare their algorithm with others? What is the advantage of their method? I did not see any negative societal impact of this work.", " The paper makes several theoretical contributions to multi-objective online learning and its applications. First, it proposes a new performance metric called \"Adversary-Moves-First\" regret, and a novel algorithm to control it via solving a convex-concave problem in each round. This general framework leads to three interesting applications:\n\n1. For the expert problem, it yields an algorithm with sublinear \"subsequence regret\", which subsumes many well-known performance metrics. Moreover, special versions of this algorithm recover classical solutions designed from first principles, like EW. \n2. It leads to an approachability algorithm for polytopes, whose approaching rate depends logarithmically on the number of constraints. This recovers a strong result from the approachability literature. \n3. For the calibration problem, it improves a recent result on \"calibeating\": the bound depends logarithmically, rather than polynomially, on the number of forecasters. Moreover, this can be combined with the \"multicalibration\" task to achieve a stronger goal called \"multicalibeating\". The proof strategy improves an existing level set analysis. This is a strong submission in my opinion, and overall a pleasant read.\n\nStrengths: \n1. The setting of the framework is simple but general. The proposed solution has an EW flavor but differs in some substantial ways that I am not aware of in existing works. This could be interesting to a large subset of the community. In particular, the algorithm \"upweights\" the coordinates with higher historical losses in the resulting convex-concave problem, which is natural but insightful. \n2. The paper provides an extensive discussion on existing works. Contributions are clear, and limitations are truthfully presented. \n3. Technical extensions and applications are thoroughly developed, with clear proofs. \n4. The paper is well-written in general.
It is a dense paper, but the authors delivered the key idea quite clearly. \n\nWeaknesses: \nI don't have any major complaints on the paper; its current form is already good. Some minor suggestions are provided in the following. \n 1. There seems to be an intriguing connection between the proposed multi-objective setting and the standard expert problem (Appendix F.1), and this intuitive similarity extends to the algorithms: Similar to EW, the analysis in Section 2.2 also argues that the log-sum-exp potential does not grow too fast, despite a quite significant deviation in Lemma 2.3. It might be helpful to add some discussions in Section 2.2 and emphasize the novelty of this reasoning compared to EW. \n2. From a somewhat aesthetic point of view, although the EW algorithm is recovered in Appendix F.1, the constant $4$ in the bound is not the asymptotically optimal one achieved by EW ($1/\\sqrt{2}$). Improving this constant might make the general framework even more appealing. \n3. Based on the connection to the expert problem, I think the proposed general algorithm might be interpreted as a regret-computation tradeoff. With a known $T$, we can use dynamic programming to obtain the absolutely optimal strategy for the expert problem. Apparently this is not computationally feasible, therefore EW serves as a proximal solution, which is fast and asymptotically optimal.\\\nAs for the multi-objective setting, if we fix the domains then DP can still be applied to obtain the absolutely optimal strategy. The proposed algorithm might be seen as an approximation of it, which only requires solving a stage-wise minimax problem online instead of a global one offline. \n4. There is a possibly interesting future direction, related to [Rakhlin et al., 2012] cited in Appendix A. The idea of [Rakhlin et al., 2012] is that we can first write down the conditional value function achieved by DP, then try to relax (upperbound) it by a tractable potential function. This argument has been improved recently [Drenska and Kohn, 2020; Kobzar et al., 2020], where potential functions are more easily obtained by solving a PDE. For the expert problem with few experts, this yields tighter bounds compared to EW. Also, it allows general terminal conditions: instead of $\\max_j R^T_j$, we can bound $\\phi(R^T_1,\\ldots,R^T_d)$ for more general $\\phi$. \\\nNot sure if this PDE approach extends to the setting of the present paper, but if it does, then the generalized terminal condition might also be useful in related applications, e.g., calibration, approachability, etc. \\\n\\\nDrenska, Nadejda, and Robert V. Kohn. \"Prediction with expert advice: A PDE perspective.\" Journal of Nonlinear Science 30.1 (2020): 137-173.\\\nKobzar, Vladimir A., Robert V. Kohn, and Zhilei Wang. \"New potential-based bounds for prediction with expert advice.\" Conference on Learning Theory. PMLR, 2020.\n5. Some minor suggestions on organization. Maybe it's just a personal preference, I feel the last two applications (No-regret expert & approachability) are easier to read compared to calibration, therefore might be moved forward (after Section 2) for better clarity. Section 3.2 (multicalibeating) is quite dense, so I suggest shortening it using high-level arguments rather than precise arguments (details are in the appendix anyway). In particular, the proof sketch of Theorem 3.2 and the statement of Theorem 3.3 (it's complete but really hard to parse). \n6. 
Very minor:\n- The setting of multicalibeating involves level sets (Line 240), but they are not defined until Line 275. \n- The predictor for multicalibeating is denoted as $a$. Instead, $\\mathbb{A}$ or $\\mathcal{A}$ might be clearer?\n- Line 91, $2^\\theta\\rightarrow 2^\\Theta$. Limitations are truthfully stated in the paper. This is a theoretical work, therefore the societal impact questions are not applicable.", " This paper introduces a simple and general online learning framework with adversarial play, and the authors show how to (multi)calibeat and multicalibrate at the same time. Generally, this work is well-written and solid. \nMy main concern is that this paper seems to be too heavy for a conference paper. It includes too many ingredients such as softmax, the main conclusion, applications, and extensions. \nI guess they should separate it into several papers and focus on only one topic at a time. Should you add some experiments to the main paper? line 228, introduces.\n" ]
[ -1, -1, -1, -1, -1, -1, 5, 8, 9 ]
[ -1, -1, -1, -1, -1, -1, 2, 3, 2 ]
[ "qBeMzf07gxN", "KWanmtmTkRW", "XE5TjJTyXb3", "0oirysySE9G", "HbgvZ59lU_", "oJnL6CkRtzI", "nips_2022_Epk1RQUpOj0", "nips_2022_Epk1RQUpOj0", "nips_2022_Epk1RQUpOj0" ]
nips_2022_dpYhDYjl4O
No-regret learning in games with noisy feedback: Faster rates and adaptivity via learning rate separation
We examine the problem of regret minimization when the learner is involved in a continuous game with other optimizing agents: in this case, if all players follow a no-regret algorithm, it is possible to achieve significantly lower regret relative to fully adversarial environments. We study this problem in the context of variationally stable games (a class of continuous games which includes all convex-concave and monotone games), and when the players only have access to noisy estimates of their individual payoff gradients. If the noise is additive, the game-theoretic and purely adversarial settings enjoy similar regret guarantees; however, if the noise is \emph{multiplicative}, we show that the learners can, in fact, achieve \emph{constant} regret. We achieve this faster rate via an optimistic gradient scheme with \emph{learning rate separation} \textendash\ that is, the method's extrapolation and update steps are tuned to different schedules, depending on the noise profile. Subsequently, to eliminate the need for delicate hyperparameter tuning, we propose a fully adaptive method that smoothly interpolates between worst- and best-case regret guarantees.
Accept
Reviewers are all positive and appreciate the theoretical contributions of the paper. Great work! Please make sure you address all the reviewers' comments and incorporate them (and any new experimental results, if applicable) in your camera-ready.
train
[ "4sMo6j_LdIZ", "i2Jz_6M291V7", "Qu4VD5MKAf", "malrQ5-GhzJ", "Iu87QUocjklV", "JjRHWX1LSsg", "BcpW4qYywMA", "nx0qsnpRpD", "qXQGVZvSxIH", "hm76YActITq", "bIyemohKNY1" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I see your point about why the crude approach I suggested would indeed not work. Thanks for clarifying.", " Dear Reviewer,\n\nWe would like to thank you again for your valuable feedback and positive evaluation. We truly appreciate it. Below we briefly reply to the two points mentioned in your response.\n\n1. We agree that whether the notion of (external) regret is relevant to the multi-player setting beyond its connection to convergence to equilibrium is an important and fundamental question. This probably deserves more attention from the community. For future research, a promising direction is then to investigate whether our techniques can be adapted to prove bounds for other notions of regret such as policy regret ([Arora et al. NIPS 2018](https://arxiv.org/pdf/1811.04127.pdf)).\n\n2. Yes it is possible to have superlinear regret here because the action set is unbounded. We will clarify this point in our revision. Thank you for bring this up.", " I'm already positive about the paper and believe it should be accepted. \n\nI still think that the regret result is the least exciting part of the paper, for the reasons detailed above (and below). On the other hand, the trajectory convergence to Nash is novel and exciting. This way or another, a paper where different readers can find different points of interest is certainly a good one. Highlighting some over others is a matter of presentation which is inherently subjective after all. \n\nFor the sake of completeness:\n\n1. I think that your response mixes two unrelated things. Worst case regret guarantees against an adversary is one thing, which I agree can be useful in practice. Improved regret for the case that the environment consists of other players is a separate thing, that makes sense only if the regret means anything as a benchmark against other players. Since this is not the case, your algorithm actually doesn't have improved performance guarantees in a (noisy) game setting. Your work doesn't improve the worst-case guarantees against an adversary either, so I'm not sure how this information is relevant to this argument. Convergence to Nash, however, is highly relevant to defend the impressive technical results of this paper. These are my two cents. \n\n2. One does not encounter superlinear regret on a daily basis. I guess this is possible in this setting since the action set is unbounded. Clarifying that in the text can avoid some confusion. ", " Dear Reviewer,\n\nThank you very much for your thoughtful input, detailed remarks, and positive evaluation. We reply to your main questions point-by-point below (and we will of course include your remarks in our revision).\n\n1. **On the use of regret in game-theoretic setting** \n\n We agree that the notion of regret is a fairly weak performance criterion for a game-theoretic setting: given that players are not facing an arbitrary environment but each other, convergence to a Nash equilibrium (or, at the very least, some approximation / slight relaxation thereof) is a much more meaningful target. At the same time, the minimization of regret remains a minimal worst-case requirement for any learning algorithm, as players who do not know if they will be facing other rational players or a dispassionate nature would like to be able to do well against both (and without prior knowledge of which environment they will be called to operate in). 
[The classical example of a routing game is particularly apt in this context, as congestion could be caused by a confluence of both agent-driven and environment-driven factors (choice of routes and weather conditions for example).]\n\n In this regard, we believe that developing efficient adaptive algorithms is a valuable endeavor. We acknowledge that our results can be tightened further in the adversarial setup, but we believe that they nevertheless shed light on several trade-offs that arise in this setting and can provide a stepping stone for further advances on the topic.\n\n1. **On the tightness of the $O(\\sqrt{T})$ regret bound** \n\n To the best of our knowledge, the tightness of the $\\mathcal{O}(\\sqrt{T})$ bound for online convex optimization with gradient feedback is due to Abernethy et al. (COLT 2008, \"Optimal strategies and minimax lower bounds for online convex games\"). In the multi-armed bandit case, similar lower bounds seem to be folklore, cf. the classical textbook of Cesa-Bianchi and Lugosi (2006). We are not aware of a lower bound for the specific learning model that we consider, but we conjecture that $O(\\sqrt{T})$ is still tight here in the genuine stochastic case.\n\n We will of course include a remark about the above -- thanks again for bringing it up.\n\n1. **On superlinear regret** \n \n Yes, by \"superlinear\" we mean that the regret grows faster than $\\Theta(T)$.\n\n1. **On trajectory convergence** \n\n Thank you very much for highlighting this point. In Appendix B, we have a paragraph discussing the works that prove trajectory convergence for learning in games with noisy feedback. As far as we are aware, there are no previous works proving that *no-regret* algorithms converge in all *variationally stable* games when the feedback is *noisy*. However, due to the discrepancy of the results that we manage to obtain for the three algorithms, we have made the decision to focus more on the regret bounds.", " Dear Reviewer,\n\nThank you very much for your detailed feedback, encouraging remarks, and positive evaluation. We reply to your main questions point-by-point below.\n\n\n1. **On the proposed interpolation meta-scheme** \n\n Concerning the \"meta-algorithm\" that you propose for achieving adaptivity, we mainly foresee two obstacles in implementing it in our problem. First, due to the presence of noise, the regret incurred by any given player is stochastic, and our bound only holds for the expected regret. This is a well-known obstacle and limitation encountered by meta-algorithms of this type, see e.g., the work of Bubeck and Slivkins \"The best of both worlds: Stochastic and adversarial bandits\" (COLT 2012). 
In this case, to ensure that an excess of the regret really implies a failure of the algorithm (and is not otherwise due to random fluctuations), we would first need to derive a high-probability version of our results using concentration inequalities -- and this could be highly challenging in the case of multiplicative noise.\n\n Second, the non-adaptive algorithms that we considered are only really effective when we have full knowledge of the various constants and parameters involved; in particular, the players must know beforehand that the noise is multiplicative and know the associated constant beforehand, a limitation which we feel would somewhat limit the desired interpolation result.\n\n That being said, *if* the constants are effectively known by the learner(s), we believe it is indeed possible to work out a method that retains the optimal guarantee in the adversarial case, possibly by adapting the more recent techniques of Zimmert et al. (\"Beating stochastic and adversarial semi- bandits optimally and simultaneously\", ICML 2019). We find this to be a very fruitful research direction for future work, and we will include it as such in our revision.\n\n1. **Minor remarks** \n\n Thanks a lot for this highly detailed input. We will fix the typos you spotted and we will make it clear from the introduction that our adaptive method does not recover the optimal rate in the adversarial setup.\n\nThanks again for the detailed input and positive evaluation!", " Dear Reviewer,\n\nThank you very much for your thoughtful comments and positive evaluation! We address below the individual points that you raised in your review.\n\n1. **On learning with noisy feedback in finite games.** \n\n We fully agree that our paper is not the end of the story as far as learning with noisy feedback is concerned. As you remark, learning with sampling- or bandit-based information in finite games is a very important topic, and it is definitely an area where one would like to apply the analysis and results of our paper. However, the current state of the art in game-theoretic learning is not yet there, and, in this regard, we believe that our paper is opening the door to a range of tools and techniques that have not yet been considered. \n\n1. **On the work of Lin et al.** \n\n To the best of our understanding, Lin et al. [24] do not provide any regret guarantees, but after inspecting their proof and setup, we concur that it is possible to derive constant regret from Theorem 4.4 of [24], under the additional assumption of cocoercivity. This assumption rules out our running example and several of our intended applications, but this is otherwise a very valuable observation -- thanks for bringing this point to our attention, we will add a remark along these lines in our revision.\n\n1. **On Assumption 4** \n\n Assumption 4 is necessary for proving our results in the adaptive case. This is a technical assumption that cannot be readily bypassed when dealing with adaptive algorithms and filtration-dependent learning rates.\n\n1. **On the necessity of the extrapolation step** \n\n Online gradient descent can indeed achieve $O(1)$ regret under multiplicative noise **in cocoercive games**: as discussed above, this can already be inferred from Theorem 4.4 of Lin et al. [24]. However, in the more general case of merely monotone (or variationally stable) games, even **deterministic** gradient descent fails to achieve low regret, as can be seen in the classic example of $\\min_{x}\\max_{y} xy$. 
In this regard, our work serves to highlight the algorithmic tweaks that need to be made in order to achieve constant regret in noisy non-cocoercive settings: extrapolation is required to overcome the lack of cocoercivity (just as in the deterministic case), and the separation of learning rates is required to supply an indirect variance reduction mechanism.\n\n1. **On interpolating between best- and worst-case guarantees** \n \n The situation where only a fraction of players deviate from the prescribed \"self-play\" policy is a very interesting one, but not one that can be handled with any of the techniques that we are aware of. We believe that this is a very fruitful direction for future research, and we will clearly identify it as such in our revision; we will also drop the term \"interpolate\" from our paper's abstract to avoid any ambiguity or confusion.", " Dear Reviewer,\n\nThank you for your encouraging comments and positive evaluation! We are delighted that you appreciate our work, and we reply to your main questions point-by-point below.\n\n1. **On the need for convexity and variational stability.** \n\n Convexity (and its variants) is a vital requirement in the literature on online learning; otherwise, it is not possible to transform iterative gradient bounds to bona fide regret guarantees. [By comparison, there are very few works on online *non-convex* optimization, and these works either drastically relax the definition of the regret, or they exploit ad hoc characteristics of the problem to work with a convex reformulation thereof] In a similar vein, variational stability can be seen as a variant convexity assumption for multi-agent environments, where unilateral convexity assumptions do not suffice to give rise to a learnable game. [For example, finite games are unilaterally linear, but finding a Nash equilibrium of a finite game is a PPAD-complete problem]\n\n In the specific context of our paper, variational stability allows us to establish tighter control on the agents' learning trajectory and, in a sense, to \"stabilize\" it. More precisely, variational stability provides us with the means to bound some weighted expected second-order path length, which in turn allows us to bound the regret of each player. Some recent works manage to bypass variational stability in deterministic settings and achieve low regret in \"merely convex\" games (i.e., games that are convex but not variationally stable), but these techniques are inextricably tied to the deterministic structure of the players' feedback. At this point, it is not clear if variational stability can be circumvented in a stochastic setting, but it seems crucial for the last-iterate convergence that we prove in Section 6. We will add a discussion on this point in a subsequent revision of our paper.\n\n1. **On learning in non-convex games.** \n\n Learning in non-convex games is a very actively researched topic, but also a very difficult one, with very few known convergence results. 
In particular, a series of recent results has shown that standard first-order methods can take exponential time to locate a first-order stationary point (which is a drastic relaxation of the notion of a Nash equilibrium) [Daskalakis et al., STOC 2021], or even be trapped in spurious limit cycles that don't contain any critical point of the game under study [Hsieh et al., ICML 2021].\n\n\n In this highly complex landscape, we expect that the learning rate separation techniques proposed in our paper could resolve convergence failures due to recurrence (e.g., as in the case of bilinear min-max games whose trajectories comprise a foliation of degenerate periodic orbits), but we do not believe it would be possible to overcome the convergence obstructions mentioned above.\n\nThank you again for your highly encouraging remarks and your positive evaluation. Needless to say, we remain at your disposal if you have any further questions.", " The paper studies simultaneous no-regret learning in a subset of convex games (satisfying a variational stability condition) when players observe noisy estimates of their gradients. An optimistic gradient algorithm is proposed which uses learning rate separation. This makes it possible to prove O(sqrt(T)) regret in the presence of additive noise, and O(1) regret in the case of multiplicative noise. This was not possible with existing optimistic gradient schemes. Moreover, the authors propose a primal-dual variant of their method which retains the same guarantees but also ensures O(sqrt(T)) regret in fully adversarial environments. The authors also consider adaptive learning rate selection methods that do not require a-priori global knowledge of the game, and finally study the last-iterate convergence of the aforementioned approaches under the considered noise models. Simultaneous learning in games is a relevant problem, which has received significant attention in recent years. Moreover, the study of such dynamics in the presence of noise is definitely important. The results presented are very impressive, original, and sound. Moreover, the authors did a very good job in putting them into the context of existing works and providing intuitions along the way.\nHence, I definitely recommend acceptance. \nI don't think the paper has major weaknesses. However, I feel additional experimental results would have made a stronger impact and could serve as an ablation for the different convergence bounds and noise models considered. I have the following questions/comments: \n- The authors make the blanket assumptions of convexity and variational stability. Perhaps a discussion about these, and why they are needed/helpful, would be beneficial. \n- Besides the lack of global optimality and regret guarantees, would the proposed time-scale separation be beneficial for non-convex games too? Why?\n I don't foresee potential negative societal impact.", " This paper studies no-regret learning in multi-player smooth games in the presence of noise. The paper analyzes variants of optimistic gradient descent, and shows that when the noise is purely multiplicative, i.e. when the strength of noise is proportional to the gradient norm, all players can achieve regret independent of $T$. An adaptive learning rate scheme is then proposed to eliminate the need to know the learning rates of other players. Strengths:\n- The algorithms are simple, tuning only the learning rates and extrapolation parameter in vanilla OGD. 
They are also very practical, since they enjoy guarantees in both cooperative (Theorem 3) and adversarial (Proposition 2) scenarios, and can be made parameter-free.\n- The results would be a nice addition to the online learning literature, where few results beyond the $O(\\sqrt{T})$ bound exist when the feedback is noisy.\n- The analysis of optimistic gradient descent with noise is novel.\n\nWeaknesses:\n- The setting is a bit restricted, since only the unconstrained case is handled, and the interesting results are for the multiplicative noise setting. Dealing with unconstrained space precludes application to matrix games (strategies constrained on simplices) and creates issues in the definition of regret (an artificial bounded set $\\mathcal{K}$ is introduced for that). While multiplicative noise is a common assumption in the optimization literature, it might not be the most suitable one for game settings, since arguably one of the most common sources of noise is the sampling noise from mixed strategies, which is additive rather than multiplicative. This work would greatly benefit if it can be applied to learning in multi-player matrix games with bandit feedback.\n- The authors claim the $O(1)$ regret bound in the presence of noise to be the first of its kind. It seems to me, however, that Theorem 4.4 in Lin et al. [24] also implies an $O(1)$ regret bound, albeit under a stronger setting. If this is the case, I would suggest rephrasing this claim. - Can Assumption 4 be avoided, since the final regret guarantees have some sort of boundedness assumptions?\n\n- Can it be shown that extrapolation is algorithmically necessary? In other words, is it possible for online gradient descent to achieve a similar $O(1)$ regret bound?\n\n- The abstract mentions a smooth extrapolation between worst- and best-case guarantees. However, based on the statement of Theorem 3, it seems that the deviation of any player from the Adapt scheduling would lead to the worse fallback regret bound. Is there an actual \"extrapolation\" between the two cases? In other words, can regret bounds between $O(exp(1/2q))$ and $O(T^{0.5+q})$ be provided, if only a small number of players deviate slightly from the Adapt scheme? Limitations of this work are appropriately discussed.
(It is also shown that, with a different\nstep size tuning, they can get tilde O(sqrt{T}) regret, but that would\nalready be achievable using standard results, so this seems of interest\nonly as a lead-up to the adaptive results in the next section.)\n\nIn section 5, adaptive step sizes are introduced, which achieve a type of\nbest-of-both-worlds result:\n* If some of the other players do not use the same algorithm, then they\n get O(T^{1/2 + q}) regret, where q is a hyperparameter of the\n algorithm. This is optimal for q=0, and slightly suboptimal otherwise.\n* If all players use the same algorithm with these adaptive step sizes,\n then they get:\n - constant regret for q>0 if the noise is multiplicative, where the\n constant depends on q.\n - O(sqrt{T}) regret otherwise.\n\nFinally, section 6 contains a trajectory analysis for the proposed\nmethods with non-adaptive step sizes.\n\nMinor remarks:\n\n* In Theorems 1 and 2, \"Reg_{p^i}(T)\" should be \"Reg_T(p^i)\" and the\n statement should contain a \"for all p^i\".\n* In Theorem 3: there should be parentheses around \"2q\" to show that q\n is in the denominator of the fraction and not in the numerator.\n* Line 102: \"widely solution\" is missing a word.\n* Assumption 1: add \"for all x^{-i}\"\n* Assumption 2: define cal{X}_* \n This is a well-written paper, with a nice new adaptive result. It\ncontains a novel approach of using learning rate separation to deal with\nmultiplicative noise, which is shown to stabilize the last iterate of\nthe dual averaging algorithm, and therefore constant regret becomes\npossible. The importance of studying multiplicative noise is\nsufficiently motivated.\n\n * Could the authors comment on whether the following simple crude\n approach would also work to get an adaptive method?\n - Run the non-adaptive method from Theorems 1&2 that guarantees\n constant regret if the noise is multiplicative. \n - If the regret ever exceeds the constant from the regret bound,\n restart with any standard method that gets O(sqrt{T}) regret.\n Clearly this is not as elegant as the approach in the paper, but it\n seems it would solve the suboptimal dependence on q.\n * The fact that the adaptive method does not recover the optimal rate,\n but O(T^{1/2 + q}) for case 2 should be mentioned much earlier. It\n should at least be mentioned in the introduction, and possibly also in\n the abstract.\n\n* The rates depend on the optimal comparator points p^i, but this\n dependence is not made explicit in the big-Oh notation, and no attempt\n is made to achieve an optimal dependence in the tuning of the step\n sizes. This limitation is clear from the results and not uncommon in\n the related work, so it is perfectly fine and requires no extra\n discussion.\n", " The paper considers learning in variationally stable games, with noisy gradient feedback. The cost function of each player is assumed to be convex and smooth. It is shown that optimistic gradients methods can achieve constant regret with multiplicative noise while still achieving O(sqrt(T)) for additive noise. The key novelty in the methods is to separate the step-size sequence that multiplies the current gradient with the sequence that multiplies the \"optimistic\" term. Two methods are proposed, where the second one is shown to also maintain O(sqrt(T)) regret against an adversary. An adaptive version of both is shown that doesn't need to be tuned. Finally, convergence to Nash equilibrium is shown if all players use one of these algorithms (even if with different sequences). 
The paper is very well-written, and the technical contribution is solid and significant. The paper advances the understanding of gradient learning methods in variationally stable games, which to the best of my knowledge are the largest class of games where gradient methods are known to converge. It was definitely interesting and enjoyable to read. I believe the paper should be accepted. \n\nMy main issue is with the presentation and positioning of the results. I'll admit this is highly subjective, but perhaps it can be useful when revising the paper: \n\nThe paper can easily be made more interesting to a broader audience. As written, the paper is mainly interesting for readers who already agree that the technical questions tackled here are interesting. I think that the main issue is that the notion of regret loses its meaning against other players since the benchmark is arbitrary - it is how much the best action in hindsight would achieve if for some reason the other players were forced to play the same thing they played against the algorithm, without responding to this best action. Indeed, the technical reason this regret is still interesting (at least to me) is that it gives convergence to equilibrium, which in this case is the Nash equilibrium, and here it is even last-iterate convergence rather than the immediate average-iterate convergence (given the regret). While the statement in Line 25 about the learner's objective makes sense against an oblivious adversary (or a stochastic environment), I'm not sure if it still does for multiplayer environments. Why would a learner concern itself with optimizing the regret against other players, if the benchmark has no meaning? Low regret doesn't show that \"the sequence is efficient\" in the multi-player case. Without the convergence rate to Nash as motivation, going beyond the O(sqrt(T)) lower bound therefore seems purely mathematical. Of course, I might have missed something here, so please see the question below. \n\nThe way the introduction is written, I got the impression that the main goal of the paper is to prove lower regret guarantees for a multiplayer environment with noisy gradient feedback. However, this is only done for multiplicative noise. While certainly impressive on its own, it gives an (unjustified) sense of disappointment as a partial answer to the question in line 52. I'm therefore not sure if this question properly motivates the great results in the paper. At the same time, the paper achieves something very impressive with no qualifiers - it provides the first gradient methods that converge to Nash in variationally stable games with noisy gradient feedback, even if they're not strict (to the best of my knowledge, so please verify). It also provides convergence rates. Along with the fact that regret is an arbitrary performance measure against other players, I find this to be the main contribution of the paper. Then, discussing the comparison with [40] is necessary to highlight this contribution. I would consider highlighting this aspect of the results. \n\nMinor comments: \n\n1) What do you mean by superlinear regret in line 51? Worse than O(T)? \n\n2) Please provide a reference for \"Nash equilibria coincide precisely with the zeros...\" in line 106.\n\n3) The V notation is easy to miss since it's in the text above Assumption 2, which doesn't state it's a definition. \n\n4) A table mapping the results in this paper can help. I don't mean the conversion table in the appendix, but one that gives a brief meaning for each result. 
\n\n5) The notation X compared to x is confusing since it's usually reserved for random variables, whereas here both are random. \n\n6) Line 146 can emphasize that the 1/2-shifted sequence is the actual sequence of actions.\n\n7) Lines 150-151: here the utilities don't change, or do you mean from a single-player point of view? It's better to explain why you're interested in these methods. Then, \"in certain classes of games\" is vague; please provide a reference. \n\n8) Lines 160-161 should mention that the regret is constant only for multiplicative noise. \n\n9) In Theorem 3, provide the variance parameters to supplement the statement \"if the noise is multiplicative\".\n\n10) Line 340 - probably a typo, this is Theorem 5.\n\n11) Line 345 - \"relative\" - relatively \n\n12) Line 768 - \"The proof is proved\" - the result is proved. \n\n13) Lemma 8 - \"holds\" - hold.\n\n14) Unify \"infinity\" and \"+infinity\". \n\n15) (24) - I think this should be \\eta_{t+1} 1) Do you see any meaning to the regret against other players beyond its connection with convergence to equilibrium? (average or last iterate)?\n\n2) Do you have a sense of whether the O(sqrt(T)) result for additive noise is tight? In other words, what is the answer to the main question in line 52 for noise that is more general than multiplicative? If unknown, I think it would be interesting to state that in the concluding remarks. This is a theory paper with no societal impact. The paper clearly states all technical limitations. " ]
[ -1, -1, -1, -1, -1, -1, -1, 8, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "Iu87QUocjklV", "Qu4VD5MKAf", "malrQ5-GhzJ", "bIyemohKNY1", "hm76YActITq", "qXQGVZvSxIH", "nx0qsnpRpD", "nips_2022_dpYhDYjl4O", "nips_2022_dpYhDYjl4O", "nips_2022_dpYhDYjl4O", "nips_2022_dpYhDYjl4O" ]
nips_2022_B3TOg-YCtzo
Physics-Embedded Neural Networks: Graph Neural PDE Solvers with Mixed Boundary Conditions
Graph neural network (GNN) is a promising approach to learning and predicting physical phenomena described in boundary value problems, such as partial differential equations (PDEs) with boundary conditions. However, existing models inadequately treat boundary conditions essential for the reliable prediction of such problems. In addition, because of the locally connected nature of GNNs, it is difficult to accurately predict the state after a long time, where interaction between vertices tends to be global. We present our approach termed physics-embedded neural networks that considers boundary conditions and predicts the state after a long time using an implicit method. It is built based on an $\mathrm{E}(n)$-equivariant GNN, resulting in high generalization performance on various shapes. We demonstrate that our model learns flow phenomena in complex shapes and outperforms a well-optimized classical solver and a state-of-the-art machine learning model in speed-accuracy trade-off. Therefore, our model can be a useful standard for realizing reliable, fast, and accurate GNN-based PDE solvers. The code is available at https://github.com/yellowshippo/penn-neurips2022.
Accept
The paper proposes a E(n)-equivariant neural PDE solvers that can satisfy boundary conditions provably. The reviewers acknowledged the importance of the studied problem setting and generally appreciated the results. The paper is nicely written and provides both strong experimental results and theory. Indeed, a range of interesting experiments demonstrate the effectiveness of the proposed method. I want to thank the authors for their detailed responses that helped in answering some of the reviewers' questions. (The reviewers have provided detailed feedback in their reviews, and we strongly encourage the authors to incorporate this feedback when preparing a revised version of the paper.) In summary, this paper is a clear accept. Well done!
train
[ "aLFezcWsT0", "dwnAOJI2Hs", "6_n0D2g4vln", "b0rX6FQLtix", "FIIVAGvhBWn", "V7MlegFgvMS", "MGz3GYDoc0ga", "1BcbRmxZvm4", "AHIS1AZtCtV", "jEYVG5qatn6", "E1uV2dOlHZ", "MJJCczQQeS", "Sl252AbE7tk" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the clarification. ", " We appreciate the feedback given by the reviewers. We have performed additional experiments and updated the manuscript. Here we summarize our main updates:\n\n* We changed the title to \"Physics-Embedded Neural Networks: Graph Neural PDE Solvers with Mixed Boundary Conditions\" (although it seems that the tile on the OpenReview cannot be changed for the moment).\n* We added detailed explanations regarding the machine learning model we constructed (Section 4.2.2, Appendix C.3)\n* We performed additional parameter studies for PENN, MP-PDE, and OpenFOAM for a more comprehensive insight into the speed-accuracy tradeoff (Figure 4, Appendix C.6).\n* We performed additional experiments using the advection-diffusion problems to demonstrate the capacity of the proposed model to predict time series data with various PDE parameters. (Appendix D)\n\nWe also note the minor updates we made:\n\n* The appendix is included in the PDF file of the main paper.\n* We added trainable weight in the definition of IsoGCN (Equation (10)) and NIsoGCN (Equation (14)), which was unintendedly missing in the first version.\n* We switched to using the abbreviated name \"NIsoGCN\" for \"NeumannIsoGCN\" to save space.\n\nWe hope that our revised manuscript and responses could address all the questions and uncertainties.", " We thank the reviewer for the constructive feedback and thoughtful questions.\n\n> Though the authors branded the novelty of this work on the E(n)-Equivariant properties of the model, it is not a unique contribution from this work since it is based off of the IcoGCN work which this work used as a backbone. Stressing this property (even in the title) feels like an oversell to me.\n\nThe title is fixed to eliminate $\\mathrm{E}(n)$-equivariance as advised. However, it is noteworthy that all the components we added (e.g., NeumannIsoGCN, the neural nonlinear solver) are fully compatible with $\\mathrm{E}(n)$-equivariance. It is easy to break equivariance because if a portion of the model is not equivariant, the entire model will not be equivariant.\n\n> Though the overall accuracy / runtime tradeoff for the proposed model is compelling (Fig. 4), I would like to see some thoughtful redesign of the model to allow flexibly adjusting accuracy / runtime also for this learned model. One suggestion is to use the results from this model to initialize a coupled PDE solver, so that with a good guess leveraging data prior, the model can achieve the same guaranteed accuracy at a faster speed.\n\nWe have added the results of the PENN models with varying numbers of parameters and iterations in the neural nonlinear solver, which adjusts the speed-accuracy tradeoff (Figure 4 and Table 5).\n\nThe suggestion made by the reviewer seems quite attractive, as it can guarantee accuracy at the same level as the classical solvers. However, it is beyond the scope of this study, as our primary purpose was to construct an end-to-end neural PDE solver.\n\n> In this work, the model does seem to require training on a given dataset (albeit a small dataset of only 203 examples), yet solves for a PDE using the Neural nonlinear solver in Sec 3.3. Please help me understand how the learned data prior is incorporated in this process?\n\nTwo main parts incorporate the first-principles physics into our machine learning model. Generally, we guarantee physics using inductive biases in the model architecture and utilize data to accelerate the prediction. 
Here are the details:\n\n* Model architecture: As discussed in Appendix C.3, which we updated with more details, we construct the PENN model to reflect the encoded governing equation. This works as a good inductive bias in the model, as it respects how physical quantities interact with each other and react with respect to coordinate transformation (i.e., the tensor transformation rule).\n* Training: As the reviewer pointed out, classical solvers do not require training data while keeping the prediction physical. However, classical solvers occasionally take a long time to compute because they always try to predict zero-based, i.e., utilizing no information. In extreme cases (even when we run the same analysis twice) a classical solver does not perform faster the second time. Our approach utilizes data for fast prediction while maintaining physics as much as possible. Thus, we encode the input physical quantities to higher dimensional space to utilize the neural nets' capability. During training, the model learns an effective way to encode and compute the PDEs in the encoded space.\n\n> Nit: Eqn (19) in the appendix has an extra parenthesis.\n\nThank you for pointing this out. We fixed the equation.\n\n> For the experimental evaluations, it would be interesting to see how the model generalizes to a different range of physical parameters, such as Reynolds number. Such evaluations seems to be lacking in the experiments.\n\nWe have added another experiment on the advection-diffusion dataset (Appendix D). In this experiment, we varied the velocity magnitude and diffusion coefficient. The results demonstrated that the model can learn and predict phenomena using various parameters. However, the parameters in the test dataset are within the domain of those in the training dataset.\n\nGeneralizing the model to a parameter range outside the training dataset remains an open question because the learned data may not help predict a solution for such a parameter domain, and this would be the next direction of the research. However, it is beyond the scope of this paper, as we do not claim generalization for unseen parameters. However, the proposed model is still helpful because it can predict various states in various shapes, as it successfully predicts the test dataset for fluid phenomena. These results are supported by the novelties of the present work, which are the reliable treatment of boundary conditions and the neural nonlinear solver.", " > How does the proposed method perform on domains that were not seen in the training set?\n\nOur model can accept any mesh as input, as discussed above, resulting in generalization with various analysis domains. In particular, the model has $\\mathrm{E}(n)$-equivariance; thus, it can predict phenomena on unseen analysis domains with both translation and rotation.\n\nRegarding the parameter domain (e.g., the Reynolds number), the focus should be directed to our new experiments on the advection-diffusion dataset (Appendix D). The model can predict test data with various parameters (velocity and diffusion coefficient). However, this was within the range of the parameters in the training dataset.\n\nGeneralizing the model to a parameter range outside the training dataset remains an open question because the learned data may not help predict a solution for such a parameter domain, and this would be the next direction of the research. However, it is beyond the scope of this paper, as we do not claim generalization for unseen parameters. 
However, the proposed model is still helpful because it can predict various states in various shapes, as it successfully predicts the test dataset for fluid phenomena. These results are supported by the novelties of the present work, which are the reliable treatment of boundary conditions and the neural nonlinear solver.", " > I find it difficult to trust comparisons that do not detail metrics such as a number of trainable parameters or FLOPs. As it currently stands, I am unable to judge the tradeoff I would be making by using PENN as opposed to MP-PDE, for example. Consider adding these metrics as their absence makes true comparison very difficult.\n\nWe have added tables presenting the speed, accuracy, and number of parameters (if any) as Tables 5, 6, and 7. Also, we have added a discussion related to the parameter count (lines 563-568). As seen in the table, the PENN models have a significantly smaller number of parameters than the MP-PDE models, with one to two digits without degrading the predictive performance. This is because our model effectively shares the parameters in the neural nonlinear solver, where the same network is used for each iteration.\n\n> Figure 4 only partially addresses this concern but it does not present any speed/accuracy tradeoff for PENN (like it does for OpenFOAM and MP-PDE) and any speed/accuracy/parameter-count tradeoff is not considered at all.\n\nWe have added an additional study by varying the number of parameters for the PENN and MP-PDE models (Figure 4 and Tables 5, 6, and 7). Through this comparison, we demonstrated that our method improves the speed-accuracy tradeoff for multiple configurations. We also found that decreasing the number of iterations in the neural nonlinear solver significantly affected the computation time compared to the number of parameters.\n\n> The discussion of PINNs in section 2.2.1 strikes me as unrelated to the current work, or at least the connection isn't very clear to me. I would advise the authors to clarify the limitation of PINNs they are discussing and make the connection or contrast with the current work explicit.\n\nWe have added a line to contrast our model against PINNs (lines 101 and 102). The advantage of our model is its ability to generalize shapes, translation, and rotation as it does not consider the absolute positions of the vertices as inputs. The limitations of PINNs are as follows (lines 94-100):\n\n* Generalization: PINNs need to take the absolute positions of the vertices to leverage automatic differentiation regarding space. If we input the absolute positions into NNs, to learn physical quantities as functions of space, generalization regarding shapes and boundary conditions will deteriorate because the learned functions may not work for problems with different shapes and boundary conditions.\n* Guarantee of physics: PINNs utilize physics information during training. However, the prediction of PINNs has less justification for physics, because typical PINNs have no inductive bias, inside the model, to guarantee physics. Our model has physics embedded, thus generating more reliable predictions.\n\n> The same, but to a lesser degree, for GNNs in section 2.2.2. Please clarify the difference to the current work.\n\nWe have added some sentences that distinguish our model from other GNNs (lines 112-114). The superiority of our method lies in the proposed neural nonlinear solver, which considers global interaction in an effective, efficient, and $\\mathrm{E}(n)$-equivariant way. 
In contrast, most GNNs use local connections with a fixed number of message passings and do not consider global interactions.\n\n> It seems to me that the main paper omits rather important details that are given in the first paragraph of section C.3 (lines 461-469 in the supplementary PDF). I realize that the page limit makes it difficult to include all the information, but consider finding a place for some or all of this information in the main paper.\n\nWe moved most of the contents concerned to the main paper, and we hope that it is now easier to comprehend the overview of the actual machine learning models we used.", " We thank the reviewer for the constructive feedback and thoughtful questions.\n\n> The title of the work appears to be slightly misleading in the sense that E(n)-equivariance stems from the work of Horie et al. (2021) and not from the current work. I would suggest choosing a more descriptive title, such as \"Physics-Embedded Neural Networks for Nonlinear Dynamics with Mixed Boundary Conditions\".\n\nThank you for this suggestion. As advised, we omitted equivariance from the title and included the words \"Mixed Boundary Conditions.\" We kept the phrase \"Graph Neural PDE Solvers\" because we think the word \"PDE\" is commonly used and understood; thus, using \"PDE\" may be helpful for readers to search for a good PDE solver.\n\nAs pointed out, the main part of the equivariance comes from the work of Horie et al. (2021). Nevertheless, one of our contributions is to demonstrate that all the components we added (e.g., NeumannIsoGCN, neural nonlinear solver) are fully compatible with $\\mathrm{E}(n)$-equivariance. Furthermore, it is easy to break equivariance because if a portion of the model is not equivariant, the entire model will not be equivariant.\n\n> I cannot find any information on the actual network architecture that is used. How is the domain encoded into the network? How flexible can the domain be? How about supporting heterogeneous coefficients? It seems like the authors refer the reader to previous works, but I would expect this info to be present in the current paper or at least in the supplementary material.\n\nWe have added detailed explanations and figures of the actual network architectures (Figures 5, 9, 10, 11, 13, 14, and 15). We constructed the model using components that accept arbitrary input lengths (e.g., pointwise MLPs, deep sets, NeumannIsoGCNs) (Appendix C.4, lines 544-546). Therefore, our model is flexible in accepting arbitrary meshes as inputs. Although we have not demonstrated heterogeneous coefficients in this study, the model can deal with this condition by feeding the heterogeneous features as inputs, as we used $e^{-0.5d}$ in the incompressible flow (Appendix C.3, line 513).\n\n> Also, from a brief look I couldn't find the architecture in the authors' code (but maybe I'm wrong). 
In any case - I do not see how one can replicate the results in this work.\n\nThe model architectures are stored in the YAML files in the following directories in the supplementary material:\n\n* penn_neurips2022_supplemental_20220803/data/grad: The gradient dataset\n* penn_neurips2022_supplemental_20220803/data/fluid: The incompressible flow dataset\n* penn_neurips2022_supplemental_20220803/data/ad directories: The advection-diffusion dataset (added in the latest update)\n\nWe see that this was unclear to readers; thus, we have added details on where the model architecture is described to README.md.\n\n> I would expect more comparisons to other NNs and classical solvers, particularly FEM/FVM and AMG-based solvers. The authors claim that they have tested their model against classical solvers, but I found no evidence of this in the main paper or supplementary PDF.\n\nWe believe that the MP-PDE is the best NN model to compare to, owing to the following reasons:\n\n* It can deal with various boundary conditions in a general manner.\n* It can deal with various shapes by leveraging GNNs' features.\n* It is contemporary and sufficiently powerful, as it was published at ICLR 2022.\n\nTo the best of our knowledge, we were not able to find any other NN models that satisfy all of the above criteria.\n\nWe used OpenFOAM, which adopts the FVM, with AMG-based solvers. In general, the FEM tends to exhibit instability (e.g., spurious oscillation) and takes time to solve fluid problems. In the literature, Molina-Aiz et al. [1] reported that the FEM requires about twice the computation time of the FVM. Regarding the linear solver, we have added a comparison with the following configurations (Table 6):\n\n* AMG solver for $p$ and the smooth solver for $\\boldsymbol{u}$ (our initial choice)\n* AMG solver for $p$ and $\\boldsymbol{u}$\n* The smooth solver for $p$ and $\\boldsymbol{u}$\n\nThe results confirm that our initial choice (AMG for $p$ and smooth for $\\boldsymbol{u}$) was optimal. Also, all the speed-accuracy data used to plot Figure 4 were added (Tables 5, 6, and 7).\n\n[1] Molina-Aiz, F. D., Hicham Fatnassi, Thierry Boulard, Jean-Claude Roy, and D. L. Valera. \"Comparison of finite element and finite volume methods for simulation of natural ventilation in greenhouses.\" Computers and Electronics in Agriculture 72, no. 2 (2010): 69-86.", " We thank the reviewer for the constructive feedback and thoughtful questions.\n\n> in eq) 17, does i refer to time steps, or gradient descent steps? If it refers to # of time steps, does the proposed approach generalize dynamic simulation of arbitrary timesteps?\n\n$i$ in Equation (17) refers to the step in the neural nonlinear solver (gradient descent steps), as we use an iterative method for the solver. We have clarified this in the manuscript (lines 192 and 193).\n\n> My primary concern is also one point that I don't quite understand if the i in eq. 17 refers to time steps. If so, how could it generalize to potentially longer time scales?\n\nAs discussed above, $i$ is unrelated to the absolute time step. Therefore, our model can predict the future state regardless of the absolute time of the input state. A newly added experiment shows that our model can learn and predict time series data by applying the same neural nonlinear solver for each time step (Appendix D). 
Because of the autoregressive architecture of the model, it can generalize to time series data with arbitrary length.", " > They mention that the proposed method is not suitable for solving inverse problems but don't elaborate much on that. \n\nWe have added the reason why our model has difficulty solving inverse problems to Section 5, line 286. It states that our model uses the information of the available PDE, making our approach reliable and efficient. However, a typical inverse problem does not have an explicit form of PDE, thus making it difficult to utilize our model.\n\n> Perhaps a a few simple experiments similar to those in Brandstetter et al would give more information on whether improvement in predictions is really due to the constraints enforced by the proposed architecture.\n\nWe have added a simple experiment using the advection-diffusion equation. The experiment on that dataset also showed that our proposed approach is more effective than the other ablation models. In addition, the results show that the PENN model can learn and predict time series data with various PDE parameters (flow velocity and diffusion coefficient).", " We thank the reviewer for the constructive feedback and thoughtful questions.\n\n> The connection to global pooling in computing the step size α of the Barzilai–Borwein method is also interesting and perhaps deserves some more discussions.\n\nWe have added a subsection on the Barzilai–Borwein method in Appendix A.3. In the method, $\\alpha$ attempts to facilitate convergence as well as possible for every vertex. Thus, global information is included in $\\alpha$.\n\n> The paper is sometimes hard to read. And not very clear. e.g., 3.2 Where it is explained that the weight should be kept small to 'respect' information in the neighborhood.\n\nWe have expanded upon the explanation regarding the value of $w_i$ (lines 168-171). With an extremely large $w_i$, other terms tend to be neglected, relatively, leading to information in the neighborhood being disregarded.\n\n> 3.2 Doesn't explain how the model can be generalized to vectors or higher rank tensors.\n\nAn explanation of how NeumannIsoGCN could be generalized to higher rank tensors is included in Appendix A.2, lines 441-445, where it can be generalized using a recursive definition as in Equation (29).\n\n> Moreover, in the paper it doesn't really explain how experiments are performed so I assume the reader is often very familiar with this kind of experiments to test neural pde solvers otherwise they need to explain it better.\n\nWe have refined the descriptions of the experiments, particularly in Appendix C.3, and the input and output features used in the experiments were clarified. It is now clear that our model is in line with the typical formulation of neural PDE solvers described in the line 88, which takes the state at $t$ as inputs and the output state at $t + \\Delta t$.\n\n> Especially how GNN gradient operators predict gradients.\n\nWe have added an explanation of some connections between NeumannIsoGCN (NIsoGCN) and spatial differential operators in Appendix A.2, lines 446-456. These models are similar to the GCN, the model's origin, which involves multiplication between adjacency matrices, input features, and a trainable weight matrix. The output could be an encoded representation of the derivative if the model is well-trained.\n\nAlso, we have added figures of the machine learning models. 
In particular, Figure 5 shows the architecture of the gradient dataset as that model only computes the gradient. In contrast, the models of the other experiments are somewhat complicated. For example, the NIsoGCN block, shown in Figure 5 (b), takes encoded features (scalar $\\psi$ and the normal directional derivative $\\hat{g}\\boldsymbol{n}$) as inputs and outputs the gradient in the encoded space. These encoded gradients have 16 vectors for each vertex. The MLP block next to the NIsoGCN block decodes the 16 vectors to 1 vector per vertex to obtain the final prediction of the gradient.\n\n> Can you elaborate more on the Dirichlet encoder-decoder? I do not understand whether it is as simple as it looks or perhaps there is something more. My understanding is that the encoder distinguish between boundary nodes and nodes that are not on the boundary and just apply a the pseudoinverse transformation on boundary nodes for decoding.\n\nWe have added Figure 9, which shows an overview of the model we used for the incompressible flow dataset. Each input feature was encoded separately; however, the networks are shared by the DirichletLayers. Once the nonlinear solver has completed, we apply the Dirichlet layers as in Equation (17), and then decode using the pseudoinverse decoders, which are applied to all the vertices in the mesh, not only on the boundary. Please note that the encoders and decoders are applied pointwise, as is done in the standard encode-process-decode architecture. We have added a description of this to Appendix C.4.\n\n> In this case I don't understand its role in performing better predictions other than just trivially enforcing boundary conditions at the end of the processing.\n\nBecause the pseudoinverse decoder is applied to all vertices, it facilitates the spatial continuity of the output, as mentioned in Appendix C.7 (lines 605-608). In addition, if there is no Dirichlet layer in the neural nonlinear solver loop, the hidden features tend to shift from what is expected, that is, the state satisfying the boundary condition. We added another model (model (B)) in the ablation study, which had no boundary condition assignment in the nonlinear solver and only had an assignment at the end of the network. Its performance was significantly worse than that of the proposed model (Tables 3 and 9), which corroborates our discussion.", " The paper presents a neural PDE solver based on adn encode-process-deode architecture that respects boundary conditions thanks to a novel GNN-based gradient operator. Other than the proposed version of an E(n)-equivariant GNN nonlinear solver they also propose a different encoding process for boundary condition treatments in the encoded space. Experiments comprise prediction of the gradient field from a given scalar field to verify the expressiveness of the proposed version of the GNN-based gradient operator; and the task of learning incompressible flow. Other than respecting boundary conditions by construction, results show important improvement with respect to the state of the art neural PDE solvers. The idea seems to be very simple and based on existing methods but is effective. The connection to global pooling in computing the step size $\\alpha$ of the Barzilai–Borwein method is also interesting and perhaps deserves some more discussions. The paper is sometimes hard to read. 
And not very clear.\ne.g.,\n3.2 Where it is explained that the weight should be kept small to 'respect' information in the neighborhood.\n3.2 Doesn't explain how the model can be generalized to vectors or higher rank tensors. (edited)\n\nMoreover, in the paper it doesn't really explain how experiments are performed so I assume the reader is often very familiar with this kind of experiments to test neural pde solvers otherwise they need toexplain it better. Especially how GNN gradient operators predict gradients. There are some more explanations in the Appendix but overall the description remains unclear. Can you elaborate more on the Dirichlet encoder-decoder? I do not understand whether it is as simple as it looks or perhaps there is something more. My understanding is that the encoder distinguish betweenboundary nodes and nodes that are not on the boundary and just apply a the pseudoinverse transformation on boundary nodes for decoding. In this case I don’t understand its role in performing better predictions other than just trivially enforcing boundary conditions at the end of the processing. Authors address potential societal impact of their work and mention the fact that the properties of the model limit its applicability domain. They mention that the proposed method is not suitable for solving inverse problems but don't elaborate much on that. The improvement with respect to the state of the art (here Brandstetter et al.) is significant. Perhaps a a few simple experiments similar to those in Brandstetter et al would give more information on whether improvement in predictions is really due to the constraints enforced by the proposed architecture.", " This work proposed to use E(n)-Equivariant Graph Neural Network to solve PDEs. Main differences compared to existing works include 1) using a boundary node encoder and pseudoinverse decoder to enforce boundary conditions and achieve better long-term accuracy. 2) embed the PDE inside the model to achieve better long-term accuracies, via the proposed NeumannIsoGCN ***novelty*** The formulation is new and the global encoding and embedded PDE inside the GNN is novel, to the best of my knowledge. \n\n ***quality and clarity*** The work is well written and well-motivated with good background introductions and well-organized experimental results. The accuracy speed trade-off plot is interesting.\n\n***significant*** The work demonstrates their work on solving an important task.\n - in eq) 17, does $i$ refer to time steps, or gradient descent steps? If it refers to # of time steps, does propose approach generalize dynamic simulation of arbitrary timesteps? My primary concern is also one point that I don't quite understand if the $i$ in eq. 17 refer to time steps. If so, how could it generalize to potentially longer time scales?", " The authors’ primary contribution is the development of neural network layers that enable the treatment of Dirichlet- and Neumann-type boundary conditions in encoded space (the “boundary encoder”, “Dirichlet layer”, “NeumannIsoGCN layer” and the “pseudoinverse decoder”). For Dirichlet-type BCs, they accomplish this by constructing a decoder design such that the entire model will learn to approximate the identity function at the Dirichlet portion of the boundary (the “Dirichlet layer” and “pseudoinverse decoder”). 
The authors deal with Neumann-type BCs by extending the IsoGCN layer from the prior art to include terms that convert the Neumann BC into a penalty/constraint embedded into the architecture of the layer (“NeumannIsoGCN”).\n\nA secondary contribution of the paper is the augmentation of a neural nonlinear solver where, by computing the “optimal step size” for a gradient descent step, they achieve a filter analogous to global pooling. This aids the neural model in learning highly nonlinear dynamics, such as exhibited by the incompressible Navier-Stokes equations.\n\nThese constructions are supported both by experiments and comparisons with the prior art as well as an ablation study to show the merit of each proposed component.\n Strengths\n\n* The paper is generally sound and the proposed model indeed outperforms the prior art model under comparison.\n\n* The model achieves zero error on the Dirichlet boundary, as claimed.\n\n* The ablation study can serve to show that each proposed component indeed contributes to the performance of the PENN model.\n\n* Also of note is Figure 1 which presents an overview of the main contributions in a clean, legible and aesthetically pleasing way.\n\nWeaknesses\n\n* The title of the work appears to be slightly misleading in the sense that E(n)-equivariance stems from the work of Horie et al (2021). and not from the current work. I would suggest choosing a more descriptive title, such as “Physics-Embedded Neural Networks for Nonlinear Dynamics with Mixed Boundary Conditions”.\n\n* I cannot find any information on the actual network architecture that is used. How is the domain encoded into the network? How flexible can the domain be? How about supporting heterogeneous coefficients? It seems like the authors refer the reader to previous works, but I would expect this info to be present at the current paper or at least in the supplementary material. Also, from a brief look I couldn't find the architecture in the authors' code (but maybe I'm wrong). In any case - I do not see how one can replicate the results in this work.\n\n* I would expect more comparisons to other NNs and classical solvers, particularly FEM/FVM and AMG-based solvers. The authors claim that they have tested their model against classical solvers, but I found no evidence of this in the main paper or supplementary PDF.\n\n* I find it difficult to trust comparisons that do not detail metrics such as a number of trainable parameters or FLOPs. As it currently stands, I am unable to judge the tradeoff I would be making by using PENN as opposed to MP-PDE, for example. Consider adding these metrics as their absence makes true comparison very difficult. Figure 4 only partially addresses this concern but it does not present any speed/accuracy tradeoff for PENN (like it does for OpenFOAM and MP-PDE) and any speed/accuracy/parameter-count tradeoff is not considered at all.\n 1) The discussion of PINNs in section 2.2.1 strikes me as unrelated to the current work, or at least the connection isn’t very clear to me. I would advise the authors to clarify the limitation of PINNs they are discussing and make the connection or contrast with the current work explicit.\n\n2) The same, but to a lesser degree, for GNNs in section 2.2.2. Please clarify the difference to the current work.\n\n3) It seems to me that the main paper omits rather important details that are given in the first paragraph of section C.3 (lines 461-469 in the supplementary PDF). 
I realize that the page limit makes it difficult to include all the information, but consider finding a place for some or all of this information in the main paper.\n\n4) How does the proposed method perform on domains that were not seen in the training set?\n No limitations.", " The authors proposed a novel design, building on top of IcoGCN to add two types of boundary conditions to the solution of the solver, Dirichlet conditions constraining the values of the functions at the boundaries, and Neumann conditions, constraining the gradients of the values at the boundaries. By leveraging the E(n) equivariant properties of the IcoGCN backbone design, the authors demonstrated improved performance compared to SOTA baselines such as MP-PDE, in particular with respect to translation and rotation of the domain. Compared to the original IcoGCN, the authors demonstrated that the proposed method is able to strictly satisfy Dirichlet and Neumann conditions, leading to improved accuracies of the solutions. Strengths:\n* Embedding the physical constraints into the design of the model itself by combining the PDE solution process with the model is an elegant and more generalizable approach to enforcing physical constraints, compared to simply using a loss penalty as an auxiliary loss only during training time.\n* By incorporating the IcoGCN backbone, the authors are able to demonstrate translational and rotational equivariance of the final model, which is a very strong and desirable property missing in many works in the physics informed machine learning literature.\n* The experimental evaluations are compelling. Not only did the authors compare with two reasonable baselines to demonstrate improved enforcement of boundary conditions and rotation/translation invariance, they also performed a runtime analysis between speed and accuracy tradeoffs, even considering the OpenFoam as a baseline. This is not too commonly evaluated among physics informed ML literature and I am happy to see it included in this work.\n* The paper is clear and well written. In particular, I enjoyed the writeup for the backgrounds section, which doesn't assume too much prior knowledge on the subject matter and is easy to follow.\n\nWeaknesses:\n* Though the authors branded the novelty of this work on the E(n)-Equivariant properties of the model, it is not a unique contribution from this work since it is based off of the IcoGCN work which this work used as a backbone. Stressing this property (even in the title) feels like an oversell to me.\n* Though the overall accuracy / runtime tradeoff for the proposed model is compelling (Fig. 4), I would like to see some thoughtful redesign of the model to allow flexibly adjusting accuracy / runtime also for this learned model. One suggestion is to use the results from this model to initialize a coupled PDE solver, so that with a good guess leveraging data prior, the model can achieve the same guaranteed accuracy at a faster speed.\n One main thing I am trying to understand more deeply is how the proposed algorithm combines the first-principle physics with data priors. One way is to predict the results only using data prior, perhaps adding physics equations as an auxiliary loss (e.g., PINNs), which requires lots of training data, but leads to poor generalization. Another extreme is to only solve from the first principles, given as PDEs, using a PDE solver (as is the case in OpenFOAM), which requires no training data, but has very good generalization. 
In this work, the model does seem to require training on a given dataset (albeit a small dataset of only 203 examples), yet solves for a PDE using the Neural nonlinear solver in Sec 3.3. Please help me understand how the learned data prior is incorporated in this process? \n\nNit: Eqn (19) in the appendix has an extra parenthesis. For the experimental evaluations, it would be interesting to see how the model generalizes to a different range of physical parameters, such as Reynolds number. Such evaluations seem to be lacking in the experiments." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 3 ]
[ "MGz3GYDoc0ga", "nips_2022_B3TOg-YCtzo", "Sl252AbE7tk", "FIIVAGvhBWn", "V7MlegFgvMS", "MJJCczQQeS", "E1uV2dOlHZ", "AHIS1AZtCtV", "jEYVG5qatn6", "nips_2022_B3TOg-YCtzo", "nips_2022_B3TOg-YCtzo", "nips_2022_B3TOg-YCtzo", "nips_2022_B3TOg-YCtzo" ]
nips_2022_qHs3qeaQjgl
On Scalable Testing of Samplers
In this paper we study the problem of testing of constrained samplers over high-dimensional distributions with $(\varepsilon,\eta,\delta)$ guarantees. Samplers are increasingly used in a wide range of safety-critical ML applications, and hence the testing problem has gained importance. For $n$-dimensional distributions, the existing state-of-the-art algorithm, $\mathsf{Barbarik2}$, has a worst case query complexity of exponential in $n$ and hence is not ideal for use in practice. Our primary contribution is an exponentially faster algorithm, $\mathsf{Barbarik3}$, that has a query complexity linear in $n$ and hence can easily scale to larger instances. We demonstrate our claim by implementing our algorithm and then comparing it against $\mathsf{Barbarik2}$. Our experiments on the samplers $\mathsf{wUnigen3}$ and $\mathsf{wSTS}$, find that $\mathsf{Barbarik3}$ requires $10\times$ fewer samples for $\mathsf{wUnigen3}$ and $450\times$ fewer samples for $\mathsf{wSTS}$ as compared to $\mathsf{Barbarik2}$.
Accept
This submission studies (a somewhat non-standard version of) tolerant closeness testing of distributions over the n-dimensional hypercube. Instead of only iid samples, it is assumed that the tester is able to efficiently evaluate the probability mass at any point in the domain and to sample from the distribution conditioned on any subset of size two of the domain. The main result is an algorithm with query complexity scaling near-linearly in the dimension. Using only iid samples, one would need exponential dependence on dimension. The algorithm is evaluated on synthetic and real-world datasets. It is experimentally shown that their algorithm outperforms a previous baseline, which in the worst case has complexity scaling exponentially in the dimension. Overall, this is an interesting work that appears to meet the bar for acceptance.
val
[ "P6Gg109u8e", "7Eq06oK7Aqp", "QsKoioC6jU", "BQZsYGnm5qc", "dzX_xj7ZtVA", "dZ9cXwTadWX", "TR_9c_WBF2j", "qbGY4gT-fOj", "YIXtAFE6OG" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \nSampling from real-life distributions is computationally intractable in general; hence samplers providing guarantees are slow in practice. Guaranteed sampling techniques, such as FPRASes and compilation-based techniques(WAPS), can offer DUAL access; hence, in our experiments, we use WAPS as a DUAL oracle to $P$.\n\nOn the other hand, we are interested in checking whether a given sampler generates distribution close to the target distribution, and as such, we would like to handle a very general class of samplers. Such samplers rely on heuristics to achieve efficiency in practice, such as mutation-based and importance sampling-based methods. For such techniques, it is much easier to engineer the weaker PCOND access than the stronger DUAL, as it would require a way to determine the point probability of a returned sample, which might differ for every technique. In contrast, the PCOND access is sampler agnostic. Indeed, the sampler we test in our experiments, wSTS, cannot provide DUAL access. \n\nSuppose a user wants to test whether their sampler $G$ really does sample from the target distribution $A$. For the purpose of testing, they would use a guaranteed sampler to sample from $A$ and then use Pacoco to certify if $G$ is close to $A$. The guaranteed sampler cannot be used in practice owing to the much slower sample generation but can be used to test the faster non-guaranteed sampler. \n\nWe hope the above clarifies the reviewer’s concerns and that the reviewer can adjust the score accordingly. We will add discussion to this effect in the final version of the paper. \n\n", " Thanks for clarifying, it was indeed a misunderstanding on my part! That said, given that we're already assuming DUAL access to one of the distributions, is there a practical reason why one shouldn't assume DUAL access to the other distribution? In any case, I'm happy to raise my score to a 5 for now and possibly a 6 if the authors can provide some more insight into when it's reasonable to assume the different kinds of oracle access.", " This addresses my questions adequately. Thank you! Please take extra care to revise the manuscript and explain the access to the sampling oracles as precisely as in your responses. If other reviewers don't spot a contradiction in the whole picture, I will adjust my score.", " We thank the reviewer for their detailed review and suggestions.\n \n(Weaknesses and Q3) We do think there has been a misunderstanding: we will highlight the core difference between the access model of Corollary 2 [Canonne and Rubinfeld '14] and our model:\n\tIn CR’14, both $P$ and $Q$ offer DUAL access. In contrast, Pacoco has access to distribution $Q$ via the PCOND+SAMP oracles only, while it can access $P$ via the DUAL oracle. We note that Pacoco operates with the same oracle accesses as Barbarik2. The fact that PCOND is a weaker oracle than DUAL contributes to the higher sample requirement.\t\nTo verify the claim that we do not give DUAL access to $Q$, we would like to point out that in the pseudocode, ${Q}(j)$ (the point probability of element in $j$ in the distribution ${Q}$), is never queried. \nIn our revision, we will make this fact clearer and, as suggested, make the oracle accesses explicit in Theorem 1. \n \n \n(Q1, Q2) To simulate DUAL access, we compile the Boolean functions into the weighted dDNNF representation. dDNNFs allow polytime DUAL access. Our implementation follows directly from Barbarik2. In the experimental section, we mention the use of the tool $\\mathsf{WAPS}$ for access to $P$. 
$\\mathsf{WAPS}$ is a tool to deal with weighted dDNNFs. We will make this clearer in our revision.\n\nPCOND access is simulated via the use of chain formulas. Given two satisfying assignments to Boolean formula, a chain formula construction allows sampling from the distribution conditioned on the two assignments. Chain formulas preserve the relative probabilities of the two assignments. Our work uses the construction as is from [Meel-Pote-Chakraborty’20]. They provide a detailed discussion on this in Section 2.2 of their paper. For clarity, we will add the details to the appendix.\n \n(Minor Comment – “P. 4 Line 125: isn't the DUAL oracle being used for both $P$ and $Q$, not just for $P$?”) On Line 125, we describe the construction of $B_P$ and $B_Q$. To construct $B_P$ and $B_Q$, only $P$ is accessed via the DUAL oracle. Hence we can say that DUAL access is not used on $Q$.\n", " Response\n \nWe thank the reviewer for their detailed review of the experiments and theoretical contribution.\n \n In the revision, we will add runtime details.\n \n(Q1) Barbarik2 and Pacoco have identical access to the distributions, which is as follows: \nDistribution P can be accessed with the DUAL oracle only.\nDistribution Q can be accessed with the PCOND and SAMP oracles only.\nThe lower bound comes from the paper of Narayanan[30], where it appears in Theorem 1.6. Phrased in the jargon of our paper, the lower bound states that distinguishing between $d_{TV}(P,Q) > \\eta $ and $d_{\\infty}(P,Q) = 0 $ requires $\\tilde{\\Omega}(\\sqrt{n/\\log(n)}/\\eta^2)$ samples. Note that the lower bound is shown on a special case ($\\varepsilon = 0$) of our problem. Hence the lower bound applies to our problem as well.\n \nIn [30], the lower bound is shown for the case where distribution P provides full access, i.e., the algorithm can make arbitrary queries to P. This is a stronger access model than DUAL. The lower bound is for a stronger access model, hence it extends to our problem as well.\n\nFor clarity, in our revision, we will include a table placing our results among the existing upper and lower bounds. \n\n(Q2) Our test consists of two subtests. The OutBucket test is run first and manages to reject wSTS in most of the cases. In a few instances, OutBucket accepts, and the InBucket test is run. The InBucket test generally requires much more samples. The few instances where InBucket runs contribute to the sharp increase in sample complexity. Here we would like to note that in the plot, the observed sample requirement has been sorted in increasing order along the x-axis, so the few cases where InBucket is called, show up to the right. ", " \nWe thank the reviewer for their helpful suggestions. In the revision, we will add\n(1) an explanatory note regarding the implementation of the oracles\n(2) details regarding the runtime of Pacoco\n", " The problem of testing the closeness of distributions asks to distinguish two cases for input distributions $P,Q$: the case that $P,Q$ are $\\epsilon$-close in total variation distance and the case that they're $\\eta$-far (for $\\epsilon \\leq \\eta$). One setting that has received attention is the setting where the support of the input distributions is (a subset of) the $n$-dimensional hypercube. The most natural setting for a testing algorithm is to only allow black-box sample access to $P$ and $Q$. However, it is known that this requires the sample complexity to be exponential in $n$. 
In this submission, the authors present an algorithm, Pacoco, that has sample complexity $\\tilde{O}(\\sqrt{n} \\log n / (\\eta - 11.6 \\epsilon) + n / \\eta^2)$. The algorithm uses two types of conditional sampling queries (COND and DUAL): sample access with conditioning on a subset of the support, and querying the probability of elements of the support. The authors compare their algorithm to one of the most recent works, Barbarik2, which also uses conditional sampling (in a possibly weaker model, see questions below) which has exponential sampling complexity. The experiments show for artificially obtained product distributions and many real-world benchmarks that the asymptotic running time outweighs the constants in the running time for $\\epsilon = 0.05$, $\\eta = 0.9$ and error probability $\\delta = 0.2$. Strengths:\n\n* The algorithm has nearly linear sampling complexity in $n$. Although the conditional sampling oracle model it uses seems strong, the sampling models were proposed in previous work and are not artificially tailored to the algorithm.\n* In summary, the experiments support the claims on the sample complexity and provide some evidence that the constants are not too large.\n\nWeaknesses:\n\n* It seems that the various upper and lower bounds compared in this paper might be somewhat incompatible.\n* Only a single set of constants $\\epsilon, \\eta, \\delta$ is chosen for the experiments, and these values are not motivated.\n* A running time comparison is lacking. My rating mainly mirrors that the model assumptions for the various upper and lower bounds from previous work and Pacoco that are discussed in this paper are very unclear to me. Could you elaborate on the exact conditional sampling model / requirements that are assumed by Barbarik2, Pacoco and the lower bounds mentioned? E.g.: Does Barbarik2 assume DUAL oracle access? Is the lower bound derived from [30] assuming DUAL oracle access? More generally, could you group the upper and lower bounds that are mentioned so that all bounds in one group assume exactly the same model?\n\nWhy is there a sudden, steep ascent of #samples for Pacoco on wSTS for #instances > 36? -", " The paper is concerned with probabilistically validating if samples are close to high-dimensional distributions. The model, experiments, implementation, and empirical comparisons are all in the model of Chakraborty-Meel (AAAI19,NeurIPS20). The framework rejects if the TV distance is too large, and accepts if the multiplicative distance is sufficiently small. This paper focuses on distributions on n-dimensional hypercubes. \n\nIt leverages several oracles called COND, PCOND, and DUAL which allow the sampler to access about the true distribution other than point-wise probabilities. These have been used elsewhere in the community to show improved speed-ups, but the paper does not specifically discuss how one could in practice implement these primitives efficiently, although it seems they have. \n\nFor this problem setting, the previous approach required a number of samples exponential the dimension, while the new algorithm requires samples linear in the dimension. This is accompanied by formal guarantees and proofs. The empirical results show the implementation's number of samples for a wide array of problems, and the new approach works significantly better. 
\n\n Strengths:\n - formal proof guarantees, with exponential speed-up\n - empirical evidence of speed-up when implemented\n\nWeaknesses:\n -What seems missing however, is a measurement of how well the methods actually perform for a fixed amount of time, or the wall-clock runtime. I understand the number of samples is a key constraint, but the two methods compared may require other auxiliary data structures or apply other oracles which may take more or less time. Moreover, it could be the theoretical bounds on the previous method are not tight, and it actually performs very accurately with many fewer samples. \n That is, who is testing the testers? \nAn evaluation of this of some form would have increased my score for the paper. N/A No concerns here. ", " The present work essentially studies tolerant closeness testing of distributions over {1,-1}^n under the assumption that one not only has sample access but also the ability to 1) evaluate the probability mass at any point in the domain (\"DUAL access\"), and 2) the ability to sample from the distribution conditioned on any subset of size two of the domain (\"PCOND access\").\n\nFollowing Meel-Pote-Chakraborty '20, this paper considers a slightly nonstandard notion of tolerant testing where one should whp accept when the two distributions p, q are \"multiplicatively close\" in the sense that their pdfs are within a factor of $1 \\pm \\epsilon$ pointwise, and reject when they are $\\eta$-far in TV, for $\\epsilon$ at most a small multiple of $\\eta$.\n\nThey give an algorithm with query complexity scaling like $n/\\eta^2 + \\sqrt{n} / \\eta^4$, up to polylog factors. Notably, this avoids the exponential scaling that one would get from traditional closeness testers that only get iid sample access. They also evaluate their algorithm on synthetic and real-world benchmarks and show that their algorithm outperforms a previous baseline, \"Barbarik2\", which in the worst case has query complexity scaling exponentially in $n$. My main concern is that even with just DUAL and sample access, Canonne and Rubinfeld '14 already gave a simple and optimal tester for this problem. They consider the standard setting of tolerant closeness testing, where we want to distinguish whether $p$ and $q$ are $\\epsilon_1$-close in TV or $\\epsilon_2$-far in TV. By Corollary 2 in that work (see also Theorem 4.2.10 from Canonne's survey), they achieve the optimal query complexity of $1 / (\\epsilon_2 - \\epsilon_1)^2$ in this setting,. The point is that it is easy to come up with an unbiased estimator of the TV between p and q using DUAL and sample access (see (2) in that work for the special case of uniformity testing, which can be adapted easily to closeness testing).\n\nIn terms of the parameters in this submission, we would take $\\epsilon_2 = \\eta$ and $\\epsilon_1 = \\epsilon/2$. Then if $p,q$ are $\\epsilon$-multiplicatively-close, then they are $\\epsilon_1 = \\epsilon/2$-TV-close, in which case the Canonne-Rubinfeld tester would accept, and if $p,q$ are $\\eta$-far, then the tester would reject. Notably, their query complexity is $n$-independent, whereas the submission still has some linear dependence on $n$, and furthermore they do not need a PCOND sampler.\n\nThis might just be a big misunderstanding on my part, and if so I'd be happy to see clarification on this point in the rebuttal. 
In terms of positives, I like the practical motivations with which the submission frames the question of tolerant closeness testing, and the proposed algorithm is a clear improvement over Barbarik2 both theoretically and empirically. Questions:\n- On P. 3 the paper mentions weighted d-DNNFs as yielding one example where DUAL oracle access is possible and says that these are used in the experimental evaluations, but I don't see any subsequent mention of these.\n- Relatedly, in the experiments, how did you simulate PCOND and DUAL access?\n- Strictly speaking, the guarantees of Barbarik2 and Pacoco are incomparable because the former only assumes conditional sampling access and not DUAL access, right?\n\nMinor comments:\n- P. 1 Line 33: \"can be take\" -> \"can take\"\n- P. 1: While it's formally defined on P. 3, it would be helpful to be more specific on P. 1 about what is meant by getting access to the target distribution, because the linear dependence on n is clearly too good to be true if one only has sample access.\n- P. 4 Line 125: isn't the DUAL oracle being used for both P and Q, not just for P?\n- P. 4 Line 133: Line9 -> Line 9\n- It would be helpful to state explicitly in Theorem 1 that you are given DUAL and COND access to both distributions. The authors have adequately addressed the limitations, and I don't see any potential negative societal impact from this work." ]
[ -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "7Eq06oK7Aqp", "BQZsYGnm5qc", "dzX_xj7ZtVA", "YIXtAFE6OG", "TR_9c_WBF2j", "qbGY4gT-fOj", "nips_2022_qHs3qeaQjgl", "nips_2022_qHs3qeaQjgl", "nips_2022_qHs3qeaQjgl" ]
nips_2022_Dqcoao24G8s
A Best-of-Both-Worlds Algorithm for Bandits with Delayed Feedback
We present a modified tuning of the algorithm of Zimmert and Seldin [2020] for adversarial multiarmed bandits with delayed feedback, which in addition to the minimax optimal adversarial regret guarantee shown by Zimmert and Seldin [2020] simultaneously achieves a near-optimal regret guarantee in the stochastic setting with fixed delays. Specifically, the adversarial regret guarantee is $\mathcal{O}(\sqrt{TK} + \sqrt{dT\log K})$, where $T$ is the time horizon, $K$ is the number of arms, and $d$ is the fixed delay, whereas the stochastic regret guarantee is $\mathcal{O}\left(\sum_{i \neq i^*}(\frac{1}{\Delta_i} \log(T) + \frac{d}{\Delta_{i}}) + d K^{1/3}\log K\right)$, where $\Delta_i$ are the suboptimality gaps. We also present an extension of the algorithm to the case of arbitrary delays, which is based on an oracle knowledge of the maximal delay $d_{max}$ and achieves $\mathcal{O}(\sqrt{TK} + \sqrt{D\log K} + d_{max}K^{1/3} \log K)$ regret in the adversarial regime, where $D$ is the total delay, and $\mathcal{O}\left(\sum_{i \neq i^*}(\frac{1}{\Delta_i} \log(T) + \frac{\sigma_{max}}{\Delta_{i}}) + d_{max}K^{1/3}\log K\right)$ regret in the stochastic regime, where $\sigma_{max}$ is the maximal number of outstanding observations. Finally, we present a lower bound that matches regret upper bound achieved by the skipping technique of Zimmert and Seldin [2020] in the adversarial setting.
Accept
The paper makes a solid technical contribution in the online learning literature, providing the first best-of-both worlds algorithm for online learning with delayed feedback. Despite building heavily on existing algorithmic ideas, the paper involves some critical technical novelties that enable their results.
train
[ "P9R6EISV3Ma", "2T80BdIHnG4", "nDaDf2_cqW", "Ji0M3GKI9Us", "40D6jRAyjOP", "DYotiTsIirL", "4A75f3SHlu8" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks the authors for their response.\n\nI have read through other reviewers’ comments and as I said earlier in my initial comment, I think the paper makes novel contribution into analysing the regret bound of a modified version of a known algorithm in stochastic setting. I also acknowlege that the lower bound analysis is novel, yet I still believe it disconnects with the other part of the paper as I explained in my initial response. It would be very nice if in the future, the authors can manage to use skiping technique to improve the algorithm’ analysis so that this lower bound is more relevant.\n\nOverall, the paper is sought and I recommend a borderline accept score. I keep my initial score as I believe my concerns are still valid after the rebuttal.", " We thank the reviewer for their time and interesting questions.\n\n> Are there any known lower bounds for stochastic settings with delayed feedback?\n\nRegarding lower bounds for the stochastic setting: For uniform delays (constant delay $d$ for all rounds), a trivial lower bound is $\\Omega(\\sum_{i\\neq i^*}\\frac{\\log T}{\\Delta_i} + d \\frac{\\sum{i \\neq i*} \\Delta_i}{K})$. It follows from the fact that the first $d$ rounds are played ``blindly'' without any information available and in expectation the agent cannot do better than playing random actions.\nUpper bounds for algorithms tailored to the stochastic regime obtain $\\mathcal{O}(\\sum_{i\\neq i^*}\\frac{\\log T}{\\Delta_i} + \\Delta_{\\max}d )$ [Joulani and Gyorgy, 2013], which shows that the lower bound is tight within a minor factor in the time-independent lower order term. We are unaware of instance-dependent stochastic lower bounds for varying delays. As already mentioned, this would be an interesting question for future research.\n\n> Can the proposed algorithm handle stochastic regimes with adversarial corruption or adversarial regimes with self-bounding constraint given in [Zimmert and Seldin, 2021]? Since the analysis is based on this technique, it appears to be easily extendable. If not, I would like to know why.\n\nRegarding adversarial regimes with self-bounding constraints (including corrupted regimes as a special case): Indeed, our result and analysis can be easily extended. We rely on the same self-bounding technique as Zimmert and Seldin (2021) and following their proof immediately yields $\\mathcal{O}(B^{stoch}_T+\\sqrt{B^{stoch}_T C})$, where $B^{stoch}_T$ is the regret upper bound in the stochastic regime proven in our paper and $C$ the total corruption budget. We will add a formal statement and a proof to the paper.\n ", " We thank the reviewer for their time and interesting open questions.\n\n> Is it able to achieve Lemma 4 with episodic MDPs?\n\nLemma 4 can be adapted to other forms of regularizations, including the Hybrid Regularizer used by Jin and Luo [2020] for the MDP problem. We have not checked the precise details, but we expect that adaptation could be achieved by modifying the learning rates, as we did in our paper.\n \n> Is it possible to improve the $d_{max} k^{1/3} \\log K$ and $d k^{1/3} \\log K$ terms in the regret bounds for the stochastic setting? What's the main difficulty in removing such terms from the regret bounds? Is it possible to use SAPO or EXP3++ approaches to avoid these terms?\n\nThis is a very interesting question. 
\nOn a technical level, these terms stem from the need to control the drift of the player's distribution induced by unseen feedback.\nWe use a worst-case bound on the drift, which might be overly conservative in the stochastic regime. Perhaps a more refined approach could do better.\n \nWe note that at the moment the best stochastic regret bounds for SAPO and EXP3++ without delay scale with $\\log^2(T)$, whereas our regret bound scales with $\\log(T)$. So, even if it would work (which we doubt), it would be an improvement of a lower order term at the cost of the dominating term.\n\n> Is it possible to achieve similar results with arbitrary delays with respect to the total delay $D$?\n\n$D$ is not a relevant quantity for characterizing the regret, neither in the adversarial setting (see our lower bound in Section 6), nor in the stochastic setting (in the case of uniform delays, the stochastic lower bound depends on $d$ rather than $D = dT$; for more details see our response to Reviewer Rykq regarding their question about existing lower bounds).\n\n> Any corruption result?\n\nOur result and analysis can be easily extended to the corrupted regime. We rely on the same self-bounding technique as Zimmert and Seldin (2021) and following their proof immediately yields $\\mathcal{O}(B^{stoch}_T+\\sqrt{B^{stoch}_T C})$, where $B^{stoch}_T$ is the regret upper bound in the stochastic regime proven in our paper and $C$ the total corruption budget. We will add a formal statement and a proof to the paper.", " We thank the reviewer for their time and feedback.\n\nWe would like to say a few words regarding the significance of our contribution.\n\n1. We would like to emphasize that, to the best of our knowledge, this is the first best-of-both-worlds result for bandits with delayed feedback and that it resolves an open question by Zimmert and Seldin [2020].\n\n2. We would like to emphasize that control of the drift of the playing distribution in Lemma 4 is novel and highly non-trivial, as also recognized by other reviewers.\n\n3. While the connection between the refined adversarial lower bound in Section 6 and the rest of the paper is indeed a bit loose at the moment, the lower bound is significant for several reasons. First, it establishes optimality of the skipping technique of Zimmert and Seldin for the adversarial regime with arbitrary delays. Second, we expect that skipping may eventually be used to eliminate the need for prior knowledge of the maximal delay $d_{max}$, although for now the control of the drift is already challenging enough and so far we were unable to get rid of this assumption. But if it succeeds at some point, the connection between the lower bound and the other results will be stronger. And finally, the trade-off for optimal skipping in the stochastic regime seems to be different from the trade-off for optimal skipping in the adversarial regime. Therefore, we conjecture that there is no skipping scheme that would be optimal for the adversarial and stochastic regimes simultaneously, although we do not have a best-of-both-worlds lower bound for arbitrary delays yet. But this is another direction for future research that will likely strengthen the connection between our lower bound and the rest of the paper. \n\n> minors: There are a few terms mentioned in the paper that I cannot find the definition of (e.g., uniform delays, minimax regret).\n\nMinimax regret is the minimum over all worst-case (maximum) outcomes of regret (depending on the adversary randomization). 
Uniform delays refers to having a fixed delay $d$ in all rounds. We will add these definitions to the paper.\n\n> Can the authors comment on the existing lower bound regret in the stochastic setting if one exists?\n\n For uniform delays (constant delay $d$ for all rounds), a trivial lower bound is $\\Omega(\\sum_{i\\neq i^*}\\frac{\\log T}{\\Delta_i} + d \\frac{\\sum_{i \\neq i^*} \\Delta_i}{K})$. It follows from the fact that the first $d$ rounds are played ``blindly'' without any information available and in expectation the agent cannot do better than playing random actions.\nUpper bounds for algorithms tailored to the stochastic regime obtain $\\mathcal{O}(\\sum_{i\\neq i^*}\\frac{\\log T}{\\Delta_i} + \\Delta_{\\max}d )$ [Joulani and Gyorgy, 2013], which shows that the lower bound is tight within a minor factor in the time-independent lower order term. We are unaware of instance-dependent stochastic lower bounds for varying delays. As already mentioned, this would be an interesting question for future research.\n", " This paper studies the multi-armed bandit problem with delayed feedback. Most of the traditional algorithms focus on either stochastic or oblivious adversary settings; thus, in the case where the setting is unknown, these algorithms cannot achieve the optimal pseudo-regret bound. In order to overcome this problem, this paper provided a slightly modified version of Zimmert and Seldin [2020]'s algorithm (adding a constant term $\\eta_0$ and $\\gamma_0$ to the learning rate). By doing that, it can achieve near-optimal regret in both stochastic and oblivious adversary settings. I like the overall idea of the paper to extend the best-of-both-worlds algorithm into a delayed feedback setting. However, I have a few concerns over the contribution of this paper.\n\nAs the algorithm is very similar to Zimmert and Seldin [2020]'s algorithm, the first regret bound analysis against an oblivious adversary is also very similar to the analysis in Zimmert and Seldin [2020]'s paper. Furthermore, due to the modification, the newly derived regret suffers another term, $d_{max} K^{1/3} \\log(K)$, compared to the regret of the original algorithm. \n\nThe second regret bound, against the stochastic setting, is interesting and novel as it provides a new feature for Zimmert and Seldin [2020]'s algorithm (it also requires a new technique of controlling the drift of the playing distribution over arbitrary time intervals). \n\nThe third contribution, about the adversarial regret lower bound, although novel, is in my opinion not well connected to the current paper since Algorithm 1 does not consider skipping-based refined regret or achieve any regret bound that matches the new lower bound. \n\nTherefore, the main contribution of the paper is to provide a regret bound for a modification of Zimmert and Seldin [2020]'s algorithm against a stochastic setting with a small sacrifice in the regret against an oblivious adversary. Even though I think this is new and novel, I am unsure whether it is significant enough for publication in NeurIPS. \n\nminors: There are a few terms mentioned in the paper that I cannot find the definition of (e.g., uniform delays, minimax regret).\n\n\n Can the authors comment on the existing lower bound regret in the stochastic setting if one exists? There is no potential negative societal impact.", " This paper presents an algorithm which achieves the best-of-both-worlds guarantee with delayed feedback between the adversarial world and stochastic world. 
More specifically, the authors consider two delay settings (fix delay and arbitrary uniform delay) of multi-armed bandit problem, and propose two algorithms that achieve $\\widetilde{O}(log K + K^{1/3} )$ (the ideal stochastic bound should be $\\widetilde{O}(log K)$) regret within stochastic setting where the loss are sampled i.i.d from a fixed distribution, while ensuring $\\widetilde{O}(\\sqrt{K})$ in the worst case. The algorithms applies the FTRL framework with specific hybrid regularizers and careful learning rate scheduling, which is similar to those from previous works. However, with the novel analysis technique (Lemma 4), the authors show that the proposed algorithms actually achieve the best of both worlds. Unlike the original multi-arm bandit problem which receives the feedback immediately after playing an arm, the delayed feedback setting is much more complicated as the learner has to maintain sufficient robustness with missing information, which may lead to drastic different strategy. Moreover, it is even harder to achieve best of both worlds results in this setting, which requires the learner to carefully balance the exploration and exploitation. \n\nTo the best of my knowledge, this is the first best of both world results of multi-armed bandit problem with the delayed feedbacks. The analysis is quite different from the approach (and \"cheating-regret\"-\"drift\" decomposition) from $Gy\\ddot{o}rgy$ and Joulani (2021) and the other approach of Zimmert and Seldin (2020), with the help of the powerful Lemma 4 which controls the drift of the playing distribution by the time-varying hybrid regularizer to handle the arbitrary delays (arbitrary uniform or fixed). Besides, the proof of Lemma 4 is very interesting and may be applied in many related problems. \n\nThis paper does not have any specific weakness point. Overall, the writing is clean and well-organized. 1. Is it able to achieve Lemma 4 with episodic MDPs? \n\n2. Is it possible to improve the $d_{max} K^{1/3} \\log K$ and $d K^{1/3} \\log K$ terms in the regret bounds for the stochastic setting? What's the main difficulty in removing such terms from the regret bounds? Is it possible to use SAPO or EXP3++ approaches to avoid these terms? \n\n3. Is it possible to achieve similar results with arbitrary delays with respect to the total delay $D = \\sum_{t=1}^{T} d_t$? \n\n4. Any corruption result? None. ", " This study considers the multi-armed bandit problem with delayed feedback.\nFor this problem, the authors propose a best-of-both-worlds algorithm that achieves minimax optimal regret for adversarial regimes as well as logarithmic regret for stochastic regimes.\nThis result is achieved by modifying the update rules for learning rates of an existing algorithm proposed by [Zimmert and Seldin, 2020].\nThe proposed approach works even for problems with arbitrary (round-dependent) delays. 
Strengths:\n\n- The paper makes solid contributions to important problems.\n- The paper is well structured and easy to read.\n- The proposed approach appears simple and practical.\n\nWeaknesses:\n\n- Novelty in algorithms and analysis techniques is somewhat limited.\n\nComments:\n\nDelayed feedback and best-of-both-worlds algorithms are both topics of practical importance.\nI consider this paper to be of high importance because it provides a solid contribution to these two topics.\n\nThe proposed algorithm and analysis techniques are based on existing ones and have limited novelty.\nHowever, there are several nontrivial steps in the analysis that require delicate attention, which the authors address in a sophisticated manner. - Are there any known lower bounds for stochastic settings with delayed feedback?\n\n- Can the proposed algorithm handle stochastic regimes with adversarial corruption or adversarial regimes with self-bounding constraint given in [Zimmert and Seldin, 2021]?\nSince the analysis is based on this technique, it appears to be easily extendable.\nIf not, I would like to know why. The limitations are adequately addressed." ]
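To make the FTRL scheme described in the reviews above concrete, below is a minimal sketch of a Tsallis-entropy-based FTRL bandit learner under a fixed feedback delay $d$. This is an illustration of the general recipe only: the paper's actual algorithm additionally uses a hybrid regularizer with a negative-entropy component, carefully tuned learning-rate constants, and handles arbitrary delays; all of that is omitted here, and the learning-rate schedule below is a simplification of ours.

```python
# Sketch only (not the paper's algorithm): 1/2-Tsallis-entropy FTRL for a
# K-armed bandit where each loss observation arrives d rounds late.
import numpy as np

def tsallis_inf_distribution(L, eta, iters=50):
    """Solve p_i = 4 / (eta * (L_i - x))^2 with sum_i p_i = 1 by Newton's method."""
    x = L.min() - 2.0 / eta          # start below min(L) so every term is positive
    for _ in range(iters):
        w = 4.0 / (eta * (L - x)) ** 2
        f = w.sum() - 1.0            # normalization residual
        fprime = (8.0 / (eta ** 2 * (L - x) ** 3)).sum()
        x -= f / fprime              # f is increasing and convex: monotone convergence
    p = 4.0 / (eta * (L - x)) ** 2
    return p / p.sum()               # renormalize away residual Newton error

def run(T=10000, K=5, d=20, seed=0):
    rng = np.random.default_rng(seed)
    means = rng.uniform(0.2, 0.8, size=K)   # Bernoulli loss means (stochastic regime)
    L_hat = np.zeros(K)                     # importance-weighted loss estimates
    pending = []                            # queued feedback: (arrival_round, arm, prob, loss)
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)              # simplified learning-rate schedule
        p = tsallis_inf_distribution(L_hat, eta)
        arm = rng.choice(K, p=p)
        loss = float(rng.random() < means[arm])
        pending.append((t + d, arm, p[arm], loss))
        while pending and pending[0][0] <= t:   # incorporate feedback whose delay elapsed
            _, a, pa, l = pending.pop(0)
            L_hat[a] += l / pa                  # importance-weighted estimator
    return L_hat
```

The point of the sketch is the structural difficulty the reviews highlight: between playing an arm and seeing its loss, the playing distribution keeps moving, which is exactly the drift that Lemma 4 in the paper controls.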
[ -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, 3, 4, 3 ]
[ "40D6jRAyjOP", "4A75f3SHlu8", "DYotiTsIirL", "40D6jRAyjOP", "nips_2022_Dqcoao24G8s", "nips_2022_Dqcoao24G8s", "nips_2022_Dqcoao24G8s" ]
nips_2022_45p8yDYVr5
Lower Bounds on Randomly Preconditioned Lasso via Robust Sparse Designs
Sparse linear regression with ill-conditioned Gaussian random covariates is widely believed to exhibit a statistical/computational gap, but there is surprisingly little formal evidence for this belief. Recent work has shown that, for certain covariance matrices, the broad class of Preconditioned Lasso programs provably cannot succeed on polylogarithmically sparse signals with a sublinear number of samples. However, this lower bound only holds against deterministic preconditioners, and in many contexts randomization is crucial to the success of preconditioners. We prove a stronger lower bound that rules out randomized preconditioners. For an appropriate covariance matrix, we construct a single signal distribution on which any invertibly-preconditioned Lasso program fails with high probability, unless it receives a linear number of samples. Surprisingly, at the heart of our lower bound is a new robustness result in compressed sensing. In particular, we study recovering a sparse signal when a few measurements can be erased adversarially. To our knowledge, this natural question has not been studied before for sparse measurements. We surprisingly show that standard sparse Bernoulli measurements are almost-optimally robust to adversarial erasures: if $b$ measurements are erased, then all but $O(b)$ of the coordinates of the signal are identifiable.
Accept
This paper studies the problem of sparse regression with ill-conditioned Gaussian covariates. Despite the simplicity of this problem formulation and the extensive studies of sparse linear regression, the potential existence of a statistical-computational gap for this problem has not been well understood. Taking a step towards understanding this problem, the authors provide theoretically rigorous evidence about the limitation of randomly preconditioned Lasso for this problem. The paper contains solid impossibility results, and hence I recommend acceptance. Note that one reviewer has suggested ways to improve the structure and readability of the paper, which I hope the authors can address in the final paper; the paper would also benefit from having more substantial experiments.
train
[ "wGOyZ9XsuBm", "pKArDpzBz8o", "y7fak8aAuBA", "wa8KSovewoq", "10wM7JUPABl", "uoQ4Yyr6o_f", "WRcuY0wAoki", "r90XMvDqx26", "yKguTVwWO8K" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I want to thank the authors for their detailed answers to my questions. I stick to my original evaluation.", "We thank the reviewer for their time. To address their questions:\n\n1. Indeed, ill-conditioned random-design sparse linear regression has no known reduction from planted clique. Of course, this is one of the primary motivations for our work. As to *why* this problem has no known reduction from planted clique, we can only point to a few intertwined partial answers. \n\n First, as alluded to in the paper, planted clique reductions generally start with a reformulation as a simple testing problem between two very structured distributions which we believe is hard, e.g. test between $N(0,I)^{\otimes m}$ and $N(0, I + \theta uu^T)^{\otimes m}$ for sparse PCA. For SLR, there is no known analogue of this which is conjectured to be hard. This seems like the most fundamental obstacle. \n\n Second, problems which reduce from planted clique generally seem to share some common structure: the hardness seems driven by a \"signal-to-noise ratio'' (like $\theta$ in the sparse PCA example), which when varied transitions the problem from computationally tractable to computationally intractable to statistically impossible. It's not clear where to find this ratio in ill-conditioned SLR.\n \n Note: there are many other problems with conjectured computational-statistical gaps which have no known reduction from planted clique. For example, the hardness of learning halfspaces agnostically is known in terms of SQ lower bounds and a reduction from a certain random constraint satisfaction problem (work of Daniely), but not from planted clique. See also parity with noise, learning halfspaces in the Massart noise model, nongaussian component analysis, k-community stochastic block model, mixture of gaussians, ... \n \n2. Good question; our general expectation is that since known guarantees for (standard) Orthogonal Matching Pursuit roughly match known guarantees for (standard) Lasso, e.g. it works under incoherence/RIP/RE/etc., a preconditioner for Orthogonal Matching Pursuit should be \"good'' roughly when it's \"good'' for Lasso. However, this is in no way rigorous, and it's possible that there is a way to precondition Orthogonal Matching Pursuit which could outperform preconditioned Lasso. This is one of the important directions of future study.\n \n (Technical note: one case where OMP has stronger guarantees is under the Das and Kempe condition of weak submodularity, which is a bit weaker than the Lasso conditions. But this theory does not come with exact recovery guarantees, only approximate.)\n \n3. Unfortunately, while the performance of the Lasso on correlated/ill-conditioned data has been studied for a long time, we are not aware of a nice historical reference which explicitly posed this question/conjecture. We could call the statistical/computational gap for ill-conditioned Gaussian SLR a \"folklore conjecture''. For a recent reference that discusses this possible gap, see e.g. \n \n [Kelner et al., 2021] \"On the Power of Preconditioning in Sparse Linear Regression\" (e.g. 
their Discussion section)\n \n There are also several papers which raise the possibility of a statistical/computational gap for the (very closely related/almost equivalent) problem of learning Gaussian Graphical Models, where again there is no formal evidence for the gap:\n \n [Brennan et al., 2021] \"Statistical Query Algorithms and Low-Degree Tests Are Almost Equivalent'' (see discussion on page 37)\n \n [Kelner et al., 2020] \"Learning Some Popular Gaussian Graphical Models without Condition Number Bounds\" (see e.g. end of Section 1.1)\n \n [Misra et al., 2020] \"Information Theoretic Optimal Learning of Gaussian Graphical Models\" (a paper on *inefficiently* learning ill-conditioned GGMs; they raise the computational question in the Conclusions section)\n \n There are numerous papers on a (much smaller, roughly constant-factor) gap for isotropic Gaussian SLR (e.g. [Bandeira et al., 2022] \"The Franz-Parisi Criterion\"). But that problem seems unrelated to ours, where the conjectured sample complexity gap is nearly exponential.\n \n There is also a closely related problem of learning sparse halfspaces --- this was posed by Feldman in a 2014 COLT Open Problem (\"Open Problem: The Statistical Query Complexity of Learning Sparse Halfspaces'').\n \n4. That is correct, Basis Pursuit would be more accurate terminology. Note that it is the limit of the Lasso program as the regularization parameter goes to zero.", " We thank the reviewer for their time. Addressing the reviewer's comments:\n\n1. On traits of data distributions that nullify possible improvement of randomized preconditioners: at a high level, a deterministic preconditioner works when it can sparsely precondition the whole space; a randomized preconditioner can get away with preconditioning *most* of the space. See [Kelner et al., 2021] \"On the Power of Preconditioning for Sparse Linear Regression'' (Appendix D) for an example along these lines where randomized preconditioning seems helpful. \n \n2. In the formal statement of Theorem 1.4 (Theorem C.14) we use the terminology ``$(b,b',\\eta,\\tau)$-erasure-robust\" which we formally defined in the preliminaries (line 634).\n \n3. Thanks for catching this omission; we will add a mention that SLR stands for Sparse Linear Regression.\n\nAddressing the reviewer's questions:\n\n* By ''information-theoretic'' we simply mean to emphasize that the estimator we give is not computationally efficient. This is the standard terminology in the computational-statistical gaps literature.\n \n* When we say dense preconditioners \"morally'' do not work, this is what we mean: if the signal is $e_i$ and column $i$ of the preconditioner $S^T$ is dense, then the preconditioned signal $S^T e_i$ is dense and because of this we do not expect our program based on $\\ell_1$-penalization to recover the signal successfully. This intuition can be made rigorous when the preconditioner is square and invertible, in which case we are literally performing basis pursuit after changing basis by $S^T$: by a dimension counting argument, if $m$ samples are given, then the output of basis pursuit will be $m$-sparse with probability $1$, so a very dense signal will not be recovered. For rectangular preconditioners, we need to be more careful to rule out dense matrices and our techniques are more involved.\n\n[*Re. Limitations*]: the reviewer writes \"The compressed sensing result hinges on a non-practical recovery algorithm... 
It is also not fully clear how this aspect of the CS result impacts the preconditioner result.'' We want to emphasize that the lower bound for preconditioned Lasso is unconditional; having an efficient algorithm for the erasure recovery problem is a very interesting open problem, but such a result would have zero implications for the lower bound. See the discussion at the end of Section 1.3 for more discussion of how information-theoretic erasure-robustness shows up as a technical ingredient in the lower bound proof. ", " We thank the reviewer for their time. Addressing the reviewer's comments:\n\n1. Indeed, our main contribution is summarized in Section 1, on lines 66--68. The theorem statements in the introduction are written tersely (compared to the full statements in the supplementary) to avoid clutter, but they are entirely rigorous. We can certainly attempt to further clarify the statements in the final version.\n \n2. Our main contribution is a proof that a whole class of algorithms cannot solve a particular statistical task. What numerical simulations would the reviewer have us run? \n \n To see a numerical example where preconditioned Lasso works but standard Lasso does not work, see [Kelner et al., 2021] ''On the Power of Preconditioning in Sparse Linear Regression'' (Figure 1). But given that we are proving an impossibility result, it's not clear what empirical evidence would be apposite.\n\nAddressing the reviewer's questions:\n\n1. See Line 40: a problem has a statistical/computational gap if $m_\\text{est} \\ll m_\\text{alg}$, where $m_\\text{est}$ is the number of samples needed to solve the problem information-theoretically, and $m_\\text{alg}$ is the number of samples needed to solve the problem in polynomial time. The ''lower bound'' we prove is a lower bound on the number of samples needed by a particular class of algorithms (the preconditioned Lasso algorithms).\n \n2. See Line 33, where we reference (Tibshirani, 1996: Regression shrinkage and selection via the lasso). The classical interpretation of Lasso is the following: ideally, we want to perform linear regression with $\\ell_0$ regularization. But this is computationally intractable (for worst-case data), so we relax the $\\ell_0$ penalty to a convex $\\ell_1$ penalty. This yields the Lasso.\n \n3. See response to comment 2.", " We thank the reviewer for their time. To address their comments:\n\n* The ''informal statements'' in the introduction are written tersely to avoid clutter (i.e. using polylog instead of specifying the exact power), but they are entirely rigorous. However, we can certainly attempt to further clarify the statements in the final version.\n\n* Our lower bound is indeed matched by an upper bound (up to constants): our lower bounds show that (for our covariance matrix and signal distribution) no preconditioned Lasso algorithm can succeed with less than $n/7$ samples. On the other hand, with $n$ samples, the regression problem can always be solved by Gaussian elimination (and this is computationally efficient).\n\n(Note: in case the reviewer is asking about the adversarial compressed sensing result, as we stated in the text, making our estimator computationally efficient is an interesting open problem.)\n", " In this work, lower bounds on randomly preconditioned Lasso are provided. The authors construct a covariance matrix and a sparse signal distribution under which any randomly-preconditioned Lasso program with invertible preconditioners fails. 
The key technique is a new robustness result in compressed sensing, and the authors study recovering a sparse signal when a few measurements can be erased adversarially. Strengths: This paper is well-written. Both the main theoretical results (lower bounds on randomly preconditioned Lasso) and the key proof technique (erasure-robust sparse designs) are interesting to me. Although I have not checked the proofs in detail, the theoretical results seem to be reliable based on the technical overview.\n\nWeaknesses: The authors should write down the full versions (instead of informal statements) of the main theorems in the main text. I am wondering whether a matching upper bound can be achieved by computationally efficient algorithms. Not applicable.
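To make the basis pursuit program discussed in the rebuttal above concrete, here is a minimal sketch on synthetic Gaussian data (our illustration, not the authors' code). The standard LP reformulation writes $w = u - v$ with $u, v \ge 0$; consistent with the dimension-counting remark in the rebuttal, the LP solver returns a basic solution with at most $m$ nonzero coordinates.

```python
# Basis pursuit: minimize ||w||_1 subject to X w = y, as a linear program.
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(X, y):
    m, n = X.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v) = ||w||_1
    A_eq = np.hstack([X, -X])                # constraint: X (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                         # ambient dimension, samples, sparsity
X = rng.standard_normal((m, n))              # well-conditioned Gaussian design
w_true = np.zeros(n)
w_true[rng.choice(n, k, replace=False)] = 1.0
w_hat = basis_pursuit(X, X @ w_true)
print(np.linalg.norm(w_hat - w_true))        # near zero: exact recovery succeeds here
```

The paper's lower bound concerns what happens when this kind of program is run after an (invertible) change of basis on an ill-conditioned design; the sketch above only shows the benign, well-conditioned baseline behavior.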
The analysis is tied to a result in compressed sensing that shows tolerance to adversarial erasure of randomized measurements that still allows partial recovery of the signal support, by exploiting the fact that binary random matrices are likely to provide adjacency matrices for expander graphs.\n The description in Section 2.1-2.2 is instructive.\n\nIt would be illuminating to have a discussion of the traits of the data distributions that nullify the possible improvement of randomized preconditioners. Perhaps this can inform whether randomized preconditioners may provide an advantage for the distribution at hand.\n\nAlthough Theorems 1.2 and 1.3 easily track their counterparts in the supplement, it is less clear to see the relationship between Theorem 1.4 and its counterpart.\n\nThe acronym SLR should be defined. Corollary 1.5 should be more explicit about what it means to identify \"information theoretically\" - the estimator in line 214 looks for the sparsest signal that meets a mismatch up to delta per observed measurement: how is this information-theoretic? \n\nWhat does \"morally\" refer to in line 233? Is this discussion of preconditioners specific to random ones? Plenty of dense preconditioners can provide sparsity. The compressed sensing result hinges on a non-practical recovery algorithm, hindering its applicability. It is also not fully clear how this aspect of the CS result impacts the preconditioner result - perhaps there could be a different recovery method that improves over the results here.", " This paper considers a sparse linear regression problem, where the covariate matrix is very-ill conditioned. This paper is motivated by the fact that there might be a computational-statistical tradeoff for this problem. That is, no polynomial-time algorithm exists which solves this problem in polynomial-time with an information theoretically (near-)optimal sampling rate. \n\nThis paper constructs a covariate matrix and a corresponding signal distribution such that any randomized $S$ preconditioned LASSO fails with high probability, if $S$ is a square matrix. In case that $S$ is a rectangular matrix contains a result of similar flavour. (However, in the latter scenario the paper only shows that any randomized $S$-preconditioned LASSO fails with probability $1/2$.) This gives first evidence that there might be a computational-statistical tradeoff in this problem.\n\nMoreover, as a byproduct in the proof of the main result, this paper establishes a result about sparse linear regression with adversarial erasures. In contrast to other problems where these tradeoffs are suspected (k-Clique, sparse PCA,...) and reductions to the k-clique problem are known, sparse linear regression with ill-conditioned covariates is surprisingly poorly understood. This paper makes an important step towards better understanding this problem and identifying a potential computational-statistical tradeoff. Moreover, this paper is extremely well-written and it is a pleasure to read.\n\nTo conclude, I think this is a very strong paper which should published.\n\nTypos:\nl. 294: leas->leads 1. Many (all?!) other problems, which exhibit a computational-statistical tradeoff, like RIP certification or sparse PCA, can be reduced to the planted clique conjecture (at least in a certain sense). This seems not to be the case for this problem. It would be great if the authors could elaborate further on this.\n\n2. LASSO is not the only algorithm one could apply to this problem. 
It would be great if the authors could comment on why they expect that properly modified variants of other algorithms like orthogonal matching pursuit also will not work in the low-sample regime.\n\n3. In the abstract the authors write: \"Sparse linear regression with ill-conditioned Gaussian random covariates is widely believed to exhibit a statistical/computational gap, but there is surprisingly little formal evidence for this belief.\" Are there any references where this possibility is discussed?\n\n4. The paper uses the term LASSO to describe the optimisation problem (1). Would it not be more accurate to use the terminology \"basis pursuit\" instead? Yes" ]
[ -1, -1, -1, -1, -1, 6, 5, 7, 8 ]
[ -1, -1, -1, -1, -1, 3, 2, 3, 4 ]
[ "pKArDpzBz8o", "yKguTVwWO8K", "r90XMvDqx26", "WRcuY0wAoki", "uoQ4Yyr6o_f", "nips_2022_45p8yDYVr5", "nips_2022_45p8yDYVr5", "nips_2022_45p8yDYVr5", "nips_2022_45p8yDYVr5" ]
nips_2022_c39zYHHgQmy
CLIPDraw: Exploring Text-to-Drawing Synthesis through Language-Image Encoders
CLIPDraw is an algorithm that synthesizes novel drawings from natural language input. It does not require any additional training; rather, a pre-trained CLIP language-image encoder is used as a metric for maximizing similarity between the given description and a generated drawing. Crucially, CLIPDraw operates over vector strokes rather than pixel images, which biases drawings towards simpler human-recognizable shapes. Results compare CLIPDraw with other synthesis-through-optimization methods, as well as highlight various interesting behaviors of CLIPDraw.
Accept
This is a very interesting paper. While there are methods for generating text without training using CLIP (e.g., https://arxiv.org/abs/2205.02655), this paper introduces a method for generating stroke-based images based on the similarity between the text and the image. The performance of the method is quite impressive and the reviews are all positive. I therefore recommend acceptance of this paper.
train
[ "2QkycgxPica", "FMFlYkxpixW", "FV84LBINYZI", "ANOPEABuMkv" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you all for the valuable reviews. Since the feedback was largely positive, we will not be making any large changes to the work.\nHowever, some comments will be addressed in the revision:\n- Wall clock time is ~1 minute on a Colab GPU\n- The motivation for focusing on human-interpretable strokes (in contrast to photorealistic images) lies in the simplicity and flexibility of the objective. Photorealistic objectives work well, however they have been the subject of many works, and there is a benefit in studying an image basis which focuses on higher-level visual representation rather than pixel-level style and details.\n- We now cite Parameterized Brushstrokes and CLIPasso\n- Ablations on the image augmentation method are included in the Appendix", " This paper presents a synthesis-through-optimization approach that generates drawings based on textual descriptions. The approach leverages differentiable rendering to optimize RGBA Bézier curves to maximize the CLIP similarity between generated images and the text input. The proposed approach initializes a predefined number of curves at random positions and modifies the curve parameters (color, transparency, thickness, etc.) via gradient descent.\n\nThe authors demonstrated that the proposed method can generate visually interesting images by showing qualitative examples, and demonstrated the importance of using data augmentation during synthesis optimization. In addition, the authors showed that practitioners can gain more control over this method by using negative controls (additional input text that describes the visual aspect that should not appear in the generated images). \n\nI find the proposed method to be a simple framework that can be used to visualize the knowledge captured in CLIP. Given its simplicity and effectiveness, I am leaning toward accept.\n\nHowever, given that more powerful (pixel-based) image generation systems are getting developed and released, I encourage the authors to discuss the future prospects of this work. Will the proposed technique still be useful in three years? For this, I have more detailed comments in the following sections. Strengths:\n1. the proposed method is novel \n2. the proposed method is simple and easy to implement\n3. the proposed method has already generated impact in this direction of work\n\nWeaknesses:\n1. Given that pixel-based image generation methods are becoming increasingly powerful, I am not sure if the proposed method would still stand out.\n2. most of the evaluation is done by example visualizations, which makes it hard to assess the general quality of the proposed method Question:\n1. what is the wall clock time of the proposed method for generating one image? \n\nComments on Writing:\n\n1. `Rather than photorealistic images, CLIPDraw aims to synthesize simple drawings that nevertheless match the prompt.` Why are we interested in this task?\n\n2. `Thus, CLIPDraw optimizes a set of vector strokes rather than pixel images, a constraint that biases drawings towards simple human-recognizable shapes.` Arguably, methods that generate pixel images are also human interpretable. I don’t quite see this shortcoming of recent works in this regard. I do not see any negative societal impact of this work", "
Specifically, the proposed model follows the synthesis-through-optimization paradigm and utilizes the powerful pre-trained CLIP language-image encoder as a metric for maximizing similarity between the given description and a generated drawing. It also adopts the differentiable renderer to operates over vector strokes rather than pixel images. In addition, the rendered image are augmented to force drawings to remain recognizable when viewed through various distortions. The images synthesized by CLIPDraw looks creative and have human-recognizable semantics. The pre-release work has inspired some follow-up methods in the image synthesis community. Strengths:\n\n+ The task and the idea are interesting. \n+ The experiments verify the creativity, semantics and controllability of the proposed method in image synthesis results.\n+ The work may enlighten many follow-up studies.\n\nWeaknesses:\n\n+ We may call the proposed model a simple yet effective model, but its effect largely depends on the existing two models: differentiable renderer (Li et al., Differentiable vector graphics rasterization for editing and learning, ACM TOG, 2020) and CLIP (Radford et al., Learning transferable visual models from natural language supervision, ICML, 2021), which makes me concern about the technical contribution of the proposed method.\n+ In addition, some works also adopt techniques similar to the proposed model, which are not mentioned in this paper, such as: Parameterized Brushstrokes (Kotovenko et al., Rethinking Style Transfer: From Pixels to Parameterized Brushstrokes, CVPR, 2021) and CLIPasso (Vinker et al., CLIPasso: Semantically-Aware Object Sketching, arXiv preprint arXiv:2202.05822, 2022). The tasks of these methods may be different, but they are also used the differentiable renderer and the CLIP model, and follows the synthesis-through-optimization paradigm. Therefore, the authors should further discuss the differences and the originality of the proposed method compared with other methods. See weaknesses above. The authors have adequately addressed the limitations and potential negative societal impact of their work.", " The paper introduces a text-based image synthesis method by leveraging the CLIP model. The proposed method, CLIPDraw, directly optimizes a set of vector strokes without training a separate model. More specifically, control points of randomly initialized Bézier curves, thickness, color, and alpha values are fitted to match the text prompt. The paper presents a set of qualitative analyses that focus on providing insights about CLIPDraw as well as a comparison with various optimization-based approaches. It is a well-written paper. The proposed approach is easy to understand. The idea of optimizing parametric curves rather than the pixels is interesting. Most importantly, it works and enables the synthesis of stylistic images without training a separate generator network. CLIPDraw achieves promising results compared to the other synthesis-through-optimization baselines. The augmentation technique also seems to be highly effective.\n\nCLIPDraw applies perspective and resized crop augmentations that have an emphasis on the shape, which are suitable for parametric curves. If the pixel baseline (i.e., Pixel Optimization in Section 4) applies the same, I would also suggest trying a color-based augmentation. 
The baseline seems to work well with the shape but lacks the texture.\n\nAblation of the number of augmentations and the range of augmentation parameters could be useful for the reader.\n\nThis is already mentioned as a limitation in the paper. But I would like to express my interest in adding a discussion about potential solutions. The proposed method always generates images in a watercolor painting style due to the parametric curve inputs. It requires a large number of strokes to make the content recognizable. However, then it gets cluttered and without sharp details. A discussion on how to increase the quality could be insightful.\n 1. Have the authors tried optimizing the position of the strokes? This could yield more accurate drawings with fewer strokes and hence less clutter. It could be applied together with the stroke-by-stroke approach (Fig. 11) as the stroke position might be an important degree of freedom. Yes, a detailed and helpful discussion was provided." ]
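For reference alongside the reviews above, here is a condensed sketch of the CLIPDraw-style augment-and-optimize loop. `render_strokes` is a placeholder we do not implement (it stands in for a differentiable vector-stroke rasterizer such as the one the paper uses), and the stroke parameter shape is arbitrary; the CLIP and torchvision calls are the public APIs, and CLIP's input normalization is omitted for brevity.

```python
import torch
import torchvision.transforms as T
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)
for p in model.parameters():            # CLIP stays frozen; only strokes are optimized
    p.requires_grad_(False)

with torch.no_grad():
    text_feat = model.encode_text(clip.tokenize(["a drawing of a cat"]).to(device))

augment = T.Compose([
    T.RandomPerspective(distortion_scale=0.5, p=1.0),
    T.RandomResizedCrop(224, scale=(0.7, 1.0)),
])

# Placeholder stroke parameters (control points, width, RGBA); shape is illustrative.
params = torch.randn(256, 10, device=device, requires_grad=True)
opt = torch.optim.Adam([params], lr=0.1)

for step in range(500):
    img = render_strokes(params)        # hypothetical differentiable rasterizer -> [1, 3, 224, 224]
    views = torch.cat([augment(img) for _ in range(8)])  # robustness via random distortions
    img_feat = model.encode_image(views)
    loss = -torch.cosine_similarity(img_feat, text_feat).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Averaging the similarity over several random distortions is what the reviews credit for keeping drawings recognizable under perspective and crop changes; without it, the optimizer tends to exploit a single fixed view.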
[ -1, 7, 6, 7 ]
[ -1, 5, 4, 3 ]
[ "nips_2022_c39zYHHgQmy", "nips_2022_c39zYHHgQmy", "nips_2022_c39zYHHgQmy", "nips_2022_c39zYHHgQmy" ]
nips_2022_FWMQYjFso-a
Pre-Trained Language Models for Interactive Decision-Making
Language model (LM) pre-training is useful in many language processing tasks. But can pre-trained LMs be further leveraged for more general machine learning problems? We propose an approach for using LMs to scaffold learning and generalization in general sequential decision-making problems. In this approach, goals and observations are represented as a sequence of embeddings, and a policy network initialized with a pre-trained LM predicts the next action. We demonstrate that this framework enables effective combinatorial generalization across different environments and supervisory modalities. We begin by assuming access to a set of expert demonstrations, and show that initializing policies with LMs and fine-tuning them via behavior cloning improves task completion rates by 43.6% in the VirtualHome environment. Next, we integrate an active data gathering procedure in which agents iteratively interact with the environment, relabel past "failed" experiences with new goals, and update their policies in a self-supervised loop. Active data gathering further improves combinatorial generalization, outperforming the best baseline by 25.1%. Finally, we explain these results by investigating three possible factors underlying the effectiveness of the LM-based policy. We find that sequential input representations (vs. fixed-dimensional feature vectors) and LM-based weight initialization are both important for generalization. Surprisingly, however, the format of the policy inputs encoding (e.g. as a natural language string vs. an arbitrary sequential encoding) has little influence. Together, these results suggest that language modeling induces representations that are useful for modeling not just language, but also goals and plans; these representations can aid learning and generalization even outside of language processing.
Accept
This paper adapts the "pretrain-then-finetune" framework to policy learning using large language models and demonstrates its effectiveness. The authors also develop an active expert data gathering approach for settings where no expert data is available. All reviewers find the empirical findings in the paper interesting and the work technically solid. This paper may spur more work in using pretrained language models in RL settings. I recommend acceptance.
train
[ "MdyCubzwkkA", "Q70XdolY7rx5", "lbOIWbQhAlY", "804cNQN3Bm3", "LDN-PwYWNT", "2NOYGiOrZFz", "H4yvufWWa_f", "XO_csgoccBE", "quNXA68q4rf", "bbkEQg8WQhZ", "DdNtH_2SWce", "L3h79bdrBSG" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the authors' feedback and the updated version of the paper. I appreciate that the authors added additional experimental results on using bidirectional encoders from BART and show that it works even better. My main concern regarding reproducibility is also resolved by the provided code. However, I still do not believe the description in Appendix D.2 provides enough training details. The batch size, number of training steps, weight decay of AdamW, and learning rate scheduler are all missing. Although practitioners can look them up from the code, I would still recommend including sufficient details in the paper. \nOverall, I would like to keep my original rating and recommend accepting this paper.", " Authors have addressed all my comments and they do have the analysis I was looking for. ", " Thank you for providing the additional experiments and clarifications. The response addressed my concerns mentioned in the Weaknesses part. I also confirmed the answers to my questions with the updated content of the paper. \n\nRegarding the Societal Impact part, maybe my previous comment was not clear. My point is that the outcome of this paper may not bring negative social impacts. But I am still happy to see that the authors will give further discussion in the future version. \n\nHence, I am glad to increase my rating to 7.\n", " Dear Reviewer, thank you for your valuable comments and feedback. We appreciate that you think our paper is novel, simple, and quite effective. We have addressed your questions below. Please let us know if you have any additional questions -- we are happy to clarify or provide additional experiments.\n\n------------------------------------\n\n**Q1: One concern is about the reproducibility of the paper. The authors do not provide the detailed architecture and hyperparameters of the method. Although there is a short description in Appendix D.2. I don't think it is sufficient for reproduction.**\n\n**A:** Thanks a lot for your suggestion. In the paper we submitted, we provided the detailed model architecture in Appendix D.1, Figure 6, and Figure 7. The training details and hyperparameters are provided in Appendix D.2. \n\nIn this rebuttal, we further added our code for training LID-Text in the supplementary materials.\n\n------------------------------------------\n\n**Q2: It would be better if the authors can provide standard deviations in Tables 1 and 4. With a small number of demos, I believe that the standard deviation would be large.**\n\n**A:** Thanks for your suggestion. We have added the standard deviations in Table 1 and Table 4 in the updated version.\n\n------------------------------------------\n\n**Q3: Violating the formatting instruction. Notably, the authors include the appendix at the end of the main paper submission instead of putting it in the supplementary material, which violates the NeurIPS format.**\n\n**A:** Dear reviewer, adding the appendix at the end of the main paper does not violate the NeurIPS format. \n\nOn the NeurIPS webpage (https://neurips.cc/Conferences/2022/PaperInformation/StyleFiles), they have explicitly mentioned that “additional pages containing only the checklist, references, and appendices are allowed.”\n\nAlso on the NeurIPS FAQ webpage (https://neurips.cc/Conferences/2022/PaperInformation/NeurIPS-FAQ), they have answered this question “Yes. 
You can include appendices with the main submission file, or you can include them as a separate file in the supplementary materials.”\n\nThe provided NeurIPS template also contains the appendix at the end (page 6 in https://media.neurips.cc/Conferences/NeurIPS2022/Styles/neurips_2022.pdf). \n\n-------------------------------------\n\n**Q4: Are all the tokens in brackets (such as [grab] and <apple>) treated as individual special tokens and added to the tokenizer or they are just treated as normal takes?**\n\n**A:** They are treated as normal tokens. [grab] <apple> is first converted to “grab apple” and then sent to the tokenizer.\n\n---------------------------------------\n\n**Q5: I don't see the reason why the authors only use autoregressive language model. The next action prediction task doesn't seem to require an autoregressive model since it's not a sequence prediction task. I wonder how bidirectional Transformers such as BERT and RoBERTa perform.**\n\n**A:** We agree that the autoregressive language model is not the only way to do action prediction. GPT-2 is one of the most representative language models, and thus we opted to use GPT-2 in our method. However, other language models such as BART [15], BERT, and RoBERTa can also be used in our framework.\n\nWe have added an experiment that replaced the GPT-2 with BART, which has a Bidirectional Encoder similar to BERT. See Fig 1 in [15] for the comparisons of BART, BERT, and GPT. In our experiment, we used the pre-trained BART model (BART-BASE) from the HuggingFace library (https://huggingface.co/docs/transformers/model_doc/bart). \n\nWe compared the results of using GPT-2 and BART in the LID-ADG framework. The results of using BART on the three test settings, In-Distribution, Novel Scenes, and Novel Tasks, are 49.0 (std. 3.7), 38.0 (std. 6.1), and 33.7 (std. 2.6), respectively. The result of using GPT-2 (results reported in Table 2) on these three test settings are 46.7 (std. 2.7), 32.2 (std. 3.3), and 25.5 (std. 4.1), respectively. The bidirectional transformer (BART) works well on solving decision-making tasks as well.\n\n[15] BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension\n", " **Q4: Pre-trained language models limit the length of input text. For instance, the maximum input length of GPT-2 [1] is 1024 [2]. Will the length of encoded text in your experiments exceed this number? If not, can you figure out a solution for handling long-text input? Because the environment and policy description could be too long, we may not neglect this situation.**\n\nA: This is a great question. In our experiments, the length of the encoded text is smaller than the default input length (1024) of GPT-2. However, in general, we agree that the model should be able to handle inputs of arbitrary length. \n\nThere exists a large body of existing work on supporting long-length inputs in transformers. Some works shorten sequences that exceed the max input length by summarizing contextual information [5]. We used a similar approach to represent each object node in the observation. Instead of using a long sentence to describe the object’s name, state, and spatial relations, we use a single feature vector to describe each node (see Appendix D and Fig 7). \n\nOther works split a long sequence into chunks and use a task-specific model to aggregate the outputs of the chunks [6,7]. 
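For readers following the architecture discussion in this thread, here is a minimal sketch of the general recipe the answers describe: serialize goal, history, and observation to text, encode with a pre-trained GPT-2, and score actions with a small head. The action set, serialization format, and the linear head (untrained in this sketch; it would be fine-tuned jointly) are our illustrative assumptions; the HuggingFace calls are the public API, and this is not the authors' released code.

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

ACTIONS = ["grab apple", "open fridge", "putin apple fridge"]  # hypothetical action set

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2Model.from_pretrained("gpt2")
head = torch.nn.Linear(lm.config.n_embd, len(ACTIONS))  # task head, trained from scratch

def policy_logits(goal, history, obs):
    # assumed serialization format; the paper studies several encoding schemes
    text = f"goal: {goal} history: {history} observation: {obs}"
    ids = tok(text, return_tensors="pt").input_ids
    h = lm(ids).last_hidden_state          # [1, seq_len, n_embd]
    return head(h[:, -1])                  # score actions from the final token state

logits = policy_logits("put apple inside fridge", "open fridge", "apple on table, fridge open")
print(ACTIONS[logits.argmax(-1).item()])   # arbitrary until the head is fine-tuned
```

The pre-trained weights carry over unchanged; only the small input and output modules are task-specific, which is what lets the same recipe swap GPT-2 for BART (or BERT-style encoders) as tested in the answer above.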
\n\nSince the input sequence length of Transformers is limited in part by the quadratic time and memory complexity of attention, many works have further developed more scalable parameterizations of attention that readily extend to longer sequences. Sparse attention is used in Big Bird [8] and sparse transformers [9], and a low-rank factorization is used in Linformer [10]. Longformer [11] combines local windowed attention with sparse global attention. Performers [12] and Linear Transformers [13] develop kernel-based approaches.\n\nHandling long-text input continues to be a rich and active research topic, and such techniques are complementary to the framework and methods we proposed and may be directly applied. We hope to explore these strategies in policy learning applications in future work.\n\n[5] Recursively Summarizing Books with Human Feedback\n\n[6] Hierarchical Transformers for Long Document Classification\n\n[7] Recurrent Chunking Mechanisms for Long-Text Machine Reading Comprehension\n\n[8] Big Bird: Transformers for Longer Sequences\n\n[9] Generating long sequences with sparse transformers.\n\n[10] Linformer: Self-attention with linear complexity\n\n[11] Longformer: The Long-Document Transformer\n\n[12] Rethinking Attention With Performers\n\n[13] Transformers are RNNs: Fast autoregressive transformers with linear attention.\n\n--------------------------------------\n\n**Q5: In the exploration stage (Section 4.2.2), what is the exact sampling method for the goal and initial state?**\n\n**A:** As shown in Appendix Algorithm 2, we first generate a set of initial states in VirtualHome using the code released by [14]. For each initial state, we are able to get a set of feasible tasks that can be accomplished in this environment. For example, in an initial state, if the apple is on the kitchen table, a feasible task goal could be “put the apple inside the fridge.” In contrast, “put the banana inside the fridge” is not a feasible task if there is no banana in the initial state.\n\nWe collected 9,893 initial states and randomly sampled an initial state and its feasible goal every time when we reset the environment. After each data collection interaction, we obtain a set of new goals using the goal relabel function. We save the goal and its corresponding initial state in the replay buffers and use the same strategy to sample the goal and initial state in the next interaction. We have added these details in the updated paper in Appendix F.2.\n\n[14] Watch-And-Help: A Challenge for Social Perception and Human-AI Collaboration\n\n--------------------------------\n\n**Q6: The BabyAI 1.1 baselines [3] report the mean ± std in their papers. However, the evaluation of the BabyAI in this paper (Table 1 and 4) did not use different random runs.**\n\n**A:** Thanks for your suggestion. We have added the std in Table 1 and Table 4 in the updated version. Each method is tested on 5 random seeds. \n\n----------------------------------\n\n**Q7: Societal Impact: No potential negative societal impact.**\n\n**A:** Some discussion of societal impact is included in Section 9 L402-L404: “A potential disadvantage of the proposed approach is that biases of the pre-trained LMs may influence its behavior, and further study of LID-based models’ bias is required before they may be deployed in sensitive downstream applications.”\n\nThe potential negative societal impact caused by the biases of the pre-trained LMs may influence their behavior in sensitive downstream applications. 
In our experiments, we mitigate this by ensuring that the data used to fine-tune the LMs is free of sensitive content, i.e. the vocabulary of our dataset and the feasible goal space and action space of the VirtualHome environment only describe everyday house chores. We will expand this discussion in any final version of the paper.\n", " Dear Reviewer, thank you for your valuable comments and feedback. We appreciate that you think our paper is novel, technically sound, and has significant contributions. We have addressed your questions below. Please let us know if you have any additional questions -- we are happy to clarify or provide additional experiments.\n\n----------------------------------------\n\n**Q1: Scalability: The proposed encoding method is templated-based (Line 155-156). Although the input encoding scheme (Section 7.1) may be a trivial problem, the encoding scheme may still affect the performance. Searching for the optimal encoding scheme is an expensive process, which may bring a high cost of hand-crafted engineering. Besides, the data gathering method also relies on hand-designed templates (Line 220).**\n\n**A:** Thanks a lot for your suggestion. We agree that the encoding scheme may affect the performance. However, the influence of using different templates is small after fine-tuning the model on enough data, as shown in Table 4. To further demonstrate the scalability of the proposed method, we add an experiment where the model (LID-ADG) is trained on templated English but is used to solve natural language tasks written by humans during testing (the Real-Human-Goal setting).\n\nWe collected 16,114 language goals by combining the collected human language descriptions and objects from VirtualHome. These language descriptions are different from the templated English used during training. We found that our model generalizes from the rigid English templates used in training and can understand a diverse set of naturally written English goals at test time.\n\nWe tested our model 5 times using different random seeds. At each time, we randomly select 1000 examples. The performance of LID-ADG on this Real-Human-Goal setting is 41.2%. Its performance in the In-Distribution setting is 46.7%, as shown in Table 2. The difficulty of tasks generated by humans is close to that of tasks in the In-Distribution setting, but the language descriptions made by humans are much more diverse. The results demonstrate that our method trained on templated English has the scalability to solve more general human tasks. (The human experiments were approved by the institutional review board (IRB). Please see Appendix I for more details about this human experiment.)\n\n-------------------------------------------\n\n**Q2: Presentation: The related work of PLM is adequately cited. But the authors should also introduce the background of policy learning so that the significance of this work can be highlighted.**\n\n**A:** Thanks a lot for your suggestion. We have added the policy learning literature in the related work section in the update paper. \n\n-------------------------------------------\n\n**Q3: Clarity: Most parts of this paper are well written. However, there are some typos in the paper.**\n\n**A:** Thanks a lot for your suggestion. We have corrected all the mentioned typos in the updated paper.\n", " **Q3: Can you provide more intuition/experiments on how different layers of GPT-2 in terms of self-attention are lighting up when a task is being performed?**\n\n**A:** Thanks a lot for your suggestion. 
We have such an experiment in Appendix H, “Visualization of Attention Weights”. In the inference time, when we are decoding the actions, we save the self-attention weights with respect to different layers and different heads. Then, we use BertViz library (https://github.com/jessevig/bertviz) to visualize normalized attention weights as in Figures 11-12. The left side is the query side. The boldness of the lines is proportional to the attention weight.\n\nIn Figure 11, We show the attention weights of a layer named “Head 3 Layer 2” in dealing with two different tasks. We find that “Head 3 Layer 2” is able to capture objects in the goal predicates, such as “wineglass” and “cutleryfork” in the left figure and “pancake” and “chicken” in the right figure.\n\nIn Figure 12, we illustrate the attention weights of another two layers named “Head 1 Layer 2” (left) and “Head 4 Layer 738 11” (right). Given the goal predicates, history, and the current observation, the policy predicts the next action as “grab milk”. We find that “Head 1 Layer 2” is able to capture objects in the goal predicates, such as “milk”, “pancake”, and “chicken” while “Head 4 Layer 11” focuses on the interacted object in the predicted action, such as “milk”.\n\nWe noticed that the attention weights from different self-attention layers are significantly different—some self-attention layers assign high attention weight to objects in the goal predicates while some layers focus on the interacted object. Some layers do not have interpretable meanings.\n\nAs described in our reply to Q1, we believe it is important to understand the internal structure of Transformers. However, this is still an active open area of study. We would like to explore more in this direction in our future work. \n\n-------------------------------------------\n\n**Q4: Authors don't discuss the limitations enough of the model in the environment of VirtualHome and BabyAI. I'd like to see a discussion of failure modes of the model.**\n\n**A:** Thanks a lot for your suggestion. In the paper we submitted, the limitation of our model is described in L401-404. Some failure cases are shown in Figure 5 and Appendix B. \n\nAs discussed at L401-404, one drawback of active data gathering is that it relies on hand-designed rules for task relabeling. More generally, a potential disadvantage of the proposed approach is that biases of the pre-trained LMs may influence the model behavior. Further study of LID-based models’ bias is required before they may be deployed in sensitive downstream applications. \n\nAs shown in Figure 5 in Appendix B, we observed two main types of failure: grounding error and policy error. For failures caused by grounding error, the agent interacts with a wrong object that is not related to the given goal, e.g., the agent puts cutlets instead of the salmon inside the fridge. For failures caused by policy error, the agent cannot find the target objects or does not interact with them. \n\n", " Dear Reviewer, thank you for your valuable comments and feedback. We appreciate that you think our paper is technically solid and the proposed framework is important for RL. We have addressed your questions below. 
Please let us know if you have any additional questions -- we are happy to clarify or provide additional experiments.\n\n-------------------------------------\n\n**Q1: There can be more discussion from a model-structure perspective on how the internal structure of the transformer is creating these embeddings for policy networks.**\n\n**A:** It is a great suggestion, and we agree that understanding the Transformer architecture is an important research topic. Analyzing how the internal structure of Transformers influences their results is still an active open area of study and is a great direction of study for a future paper. \n\nTo provide a preliminary analysis of such structure, in our paper Appendix H, we visualize the attention weights from the self-attention layers of GPT-2. We empirically found that some self-attention layers inside the Transformer assign high attention weights to objects in the goal predicates while some layers focus on the interacted object. \n\nSome recent papers [1,2,3,4] characterize the Transformer architecture from different perspectives. In [1], the authors show that “induction heads play a major role in general in-context learning.” In [4], the authors locate and edit facts stored in language models. Despite these interesting findings, developing a complete understanding of the internal structure of Transformers remains a difficult problem in need of further investigation. \n\n[1] https://transformer-circuits.pub/2022/in-context-learning-and-induction-heads/index.html\n\n[2] https://transformer-circuits.pub/2021/framework/index.html\n\n[3] https://aclanthology.org/2021.emnlp-main.446.pdf\n\n[4] Locating and Editing Factual Associations in GPT\n\n-----------------------------------\n\n**Q2: In Table 2, VirtualHome seems to be a very tough environment for any policy that isn't LID-ADG. I'd like to see comparison against other methods which have significance performance on this: even though they may not be using ADG, it would be nice to see how much can ADG push a method against a better performing model which might have more expert data.**\n\n**A:** The VirtualHome tasks are challenging for RL methods because of the large action spaces, sparse rewards, and long-horizon planning. RL methods perform poorly out of the box, as shown in Table 2, but can be improved by providing additional data / favorable weight initialization, as shown in Table 3.\n\nIn Table 3, the offline RL method, Decision Transformer (DT), is trained on data collected by our well-trained model (LID-ADG). Such data can be treated as a type of expert data. When given enough expert data, the offline RL method can work well in solving decision-making tasks in VirtualHome.\n\nIn Table 3, the online RL method, PPO, can also solve VirtualHome tasks when its policy is initialized from our well trained model (LID-ADG). This demonstrates that by using a good initialized policy, PPO can solve more challenging tasks. \n\nThese experiments show the comparisons of our methods and the stronger baselines that can achieve significant performance in VirtualHome. However, these baselines require extra information during training, such as expert data or weight initialization.\n", " We thank the reviewers for their helpful comments and feedback. All the reviewers find our work novel and technically solid. They all agree with the significance of using pre-trained language models (LMs) as a general framework for decision-making tasks. 
They also believe the active data gathering and the analysis of the encoding scheme in LMs are important. We responded to each of the reviewers’ questions below. We also updated the draft based on the reviewers’ comments (changes are highlighted) and attached the code for training LID-Text in the supplementary material.\n", " This paper presents a framework to use language models like transformers to be used for decision making in interactive environments. The authors also present the features of this approach and present experiments to verify the same. The key contributions are identifying that language models for decision making in RL-type settings improves combinatorial generalization, an active data gathering approach to account for less than ideal pre-collected expert data and how sequentiality is very important in all these encoding techniques for planning in the environments discussed in the paper. Strengths: The paper presents the hypotheses very well and also lays out a rigorous enough framework to test the hypotheses. It lays out a framework to use a general large language model to be used in place of existing non-transformer based policy networks. The authors touch on the importance (or lack thereof) of the encoding scheme, how to convert structured pieces of instructions into language and testing the framework for achieving the goal state. Finally the piece about active data gathering is very significant and can be expanded in general to a lot of other RL methods to compensate for lack of expert data in RL environments\n\nWeaknesses: Even though there are experiments in section 7 trying to give an intuitive feel for the reason this method is working, there can be more discussion from a model-structure perspective on how the internal structure of the transformer is creating these embeddings for policy networks. Q1: In Table 2, VirtualHome seems to be a very tough environment for any policy that isn't LID-ADG. I'd like to see comparison against other methods which have significance performance on this: even though they may not be using ADG, it would be nice to see how much can ADG push a method against a better performing model which might have more expert data.\n\nQ2: Can you provide more intuition/experiments on how different layers of GPT-2 in terms of self-attention are lighting up when a task is being performed? This question is aimed to poke at the structure of the transformer and how different layers are being used for interactive decision making\n Authors don't discuss the limitations enough of the model in the environment of VirtualHome and BabyAI. I'd like to see a discussion of failure modes of the model and how that traces back to the encoded embeddings.", " This paper proposes to utilize the pre-trained language model (PLM) for solving interactive decision-making problem, named LID. Specifically, the policy network consists of a PLM and tailored task modules. The policy information (goal, history, information) will be described as the natural text and encoded by GPT-2 PLM. During the fine-tuning process, the PLM and task-specific modules will be jointly optimized (the authors also study the different training paradigms). To alleviate the scarcity problem of expert data, the authors also propose an active data gathering method to make the agent learn in a self-supervised way. The BabyAI platform and VirtualHome platform are used to evaluate the performance of the LID framework. 
Several empirical experiments and the corresponding analysis show the rationality and generalization ability of the proposed LID framework. \n\nThe contributions of this paper are two-fold: 1) proposing a PLM-based framework for interactive decision-making tasks, i.e., LID in this paper; 2) proposing a self-supervised data gathering method, which provides a stable RL training process. The authors also analyze the source of the generalization ability of the LID framework, which can provide a clear direction for future research.\n **Strengths**:\n1. ***Novelty***: It is not surprising that the \"pre-training LM then fine-tuning\" paradigm is widely used in natural language processing tasks. Although the authors also follow this paradigm, it is interesting to see that the sequence modeling ability of a PLM can be used in policy learning.\n2. ***Significance***: The proposed framework is well supported by empirical studies and evaluated on two commonly used benchmarks. This new framework powered by PLMs can be beneficial to the research community of policy learning.\n3. ***Quality***: This paper is technically sound, and the authors provide enough technical details in the Appendix, including network architecture, encoding methods, etc.\n\n**Weaknesses**:\n1. ***Scalability***: The proposed encoding method is template-based (Lines 155-156). Although the input encoding scheme (Section 7.1) may seem a trivial problem, the encoding scheme may still affect the performance. Searching for the optimal encoding scheme is an expensive process, which may bring a high cost of hand-crafted engineering. Besides, the data gathering method also relies on hand-designed templates (Line 220).\n2. ***Presentation***: The related work on PLMs is adequately cited. But the authors should also introduce the background of policy learning so that the significance of this work can be highlighted.\n3. ***Performance***: Compared to work that uses traditional networks like DQN, the integration of a PLM may affect the inference speed. \n4. ***Clarity***: Most parts of this paper are well written. However, there are some typos in the paper:\n - Line 53: pretrained LMs -> pre-trained LMs\n - Line 104: language -> language. (missing full stop mark)\n - Some papers should be cited in a proper way: Line 108: [23], Line 109: [36], Line 285: [15], Line 287: [15]. For example, in Line 108, \"[23] show that\" needs to be rewritten as \"Frozen Pretrained Transformer (FPT) [23] show that\". \n\n[Rebuttal Updates] The authors provided additional experiments addressing my concern about scalability. The authors also revised the typos and added the related works.\n\n 1. Pre-trained language models limit the length of input text. For instance, the maximum input length of GPT-2 [1] is 1024 [2]. Will the length of encoded text in your experiments exceed this number? If not, can you figure out a solution for handling long-text input? Because the environment and policy description could be too long, we cannot neglect this situation.\n2. In the exploration stage (Section 4.2.2), what is the exact sampling method for the goal and initial state?\n3. The BabyAI 1.1 baselines [3] report the mean ± std in their papers. However, the evaluation of BabyAI in this paper (Tables 1 and 4) did not use different random runs. I did not find any justification in Section 5.2. Could you please offer an explanation? \n\n[Rebuttal Updates] I confirmed the authors' answers.\n\nReference:\n\n[1] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., & Sutskever, I. 
(2019). [Language models are unsupervised multitask learners.](https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf).\n\n[2] https://huggingface.co/gpt2/blob/main/config.json\n\n[3] Hui, D. Y. T., Chevalier-Boisvert, M., Bahdanau, D., & Bengio, Y. (2020). [BabyAI 1.1.](https://arxiv.org/abs/2007.12770) - Societal Impact: No potential negative societal impact. The authors provide a new perspective to aid policy learning with a pre-trained language model. \n- Limitation: 1) Building text descriptions for each task still requires human labor. We do not know what textual format is optimal for policy learning. It varies from task to task, model to model. On the other hand, as I stated in Question 1, the long-text input could restrict the scalability of this framework. 2) The proposed methods also need humans to design some templates/rules, as the authors mentioned in the conclusion part. \n", " This paper introduces LID, a framework that encodes the goal, history, and observation as tokens and fine-tunes a pre-trained GPT-2 on the next action prediction task. The authors show empirically that such a pre-training mechanism significantly improves performance on both VirtualHome and BabyAI tasks. The proposed method is especially useful when only a limited number of demos are available. Their LID-ADG is able to make meaningful predictions on VirtualHome without any expert data (unlike baselines, which get about 0% accuracy).\n ### Strengths\n- The proposed method is novel, simple, and quite effective.\n- The authors conduct a deep analysis in Section 7 to provide deeper understanding of various design choices in LID.\n- The paper is easy to read.\n\n### Weaknesses\n- One concern is about the reproducibility of the paper. The authors do not provide the detailed architecture and hyperparameters of the method. Although there is a short description in Appendix D.2, I don't think it is sufficient for reproduction.\n- It would be better if the authors can provide standard deviations in Tables 1 and 4. With a small number of demos, I believe that the standard deviation would be large.\n\n### Summary\nOverall, the proposed method is sound and effective, so I recommend accepting it. However, the tasks studied in this paper are not in my research area, so I am not confident in my judgment.\n\n### Violating the formatting instruction\nNotably, the authors include the appendix at the end of the main paper submission instead of putting it in the supplementary material, which violates the NeurIPS format. I wonder if it is acceptable. - Are all the tokens in brackets (such as [grab] and <apple>) treated as individual special tokens and added to the tokenizer, or are they just treated as normal tokens?\n- I don't see the reason why the authors only use an autoregressive language model. The next action prediction task doesn't seem to require an autoregressive model since it's not a sequence prediction task. I wonder how bidirectional Transformers such as BERT and RoBERTa perform.\n Yes, the authors discuss them in Section 9." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "804cNQN3Bm3", "H4yvufWWa_f", "LDN-PwYWNT", "L3h79bdrBSG", "2NOYGiOrZFz", "DdNtH_2SWce", "XO_csgoccBE", "bbkEQg8WQhZ", "nips_2022_FWMQYjFso-a", "nips_2022_FWMQYjFso-a", "nips_2022_FWMQYjFso-a", "nips_2022_FWMQYjFso-a" ]
nips_2022_TJUNtiZiTKE
Diffusion-based Molecule Generation with Informative Prior Bridges
AI-based molecule generation provides a promising approach to a large area of biomedical sciences and engineering, such as antibody design, hydrolase engineering, or vaccine development. Because the molecules are governed by physical laws, a key challenge is to incorporate prior information into the training procedure to generate high-quality and realistic molecules. We propose a simple and novel approach to steer the training of diffusion-based generative models with physical and statistical prior information. This is achieved by constructing physically informed diffusion bridges, stochastic processes that are guaranteed to yield a given observation at the fixed terminal time. We develop a Lyapunov-function-based method to construct and determine bridges, and propose a number of informative prior bridges for both high-quality molecule generation and uniformity-promoted 3D point cloud generation. With comprehensive experiments, we show that our method provides a powerful approach to the 3D generation task, yielding molecule structures with better quality and stability scores and more uniformly distributed point clouds of high quality.
Accept
All reviewers agreed that this work has many positive aspects, such as the originality of the idea, technical soundness, and practical relevance. In the initial reviews, some concerns about the experimental evaluation were raised. In particular, one reviewer mentioned potential problems regarding the uniqueness of generated molecules. This issue, however, could be addressed reasonably well in the rebuttal. I do share the generally positive perception of this paper. Therefore, I recommend accepting the paper.
train
[ "JSQrVv5P6b_", "XlhP0RzHS3M", "KcCtuv62OS", "2ZEFOewRPm7", "FLG3aACL72Z", "QmyEhbGZCZ", "eN7lEljmTRS", "Ehgtu5mAFJv", "c9QB_2gGZu", "QBnQxxG-A4f", "kJKqmE6werF" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for raising the score and giving positive feedback on our work. We submit a revised version and trying our best to cover as much as clarities as possible in blue in this version. Since the page limit is still nine pages at the current stage, we are running out of space to cover all the clarification above. We will fully cover them if this paper can get accepted with one additional page. \n\nFor your question, The Lyapunov function method is a technique to determine and verify whether the bridge condition will hold when we add a force term on the Brownian bridge for injecting prior knowledge.\n\n", " I appreciate your thorough response and explanation.\n\nSince all of my prior issues have been resolved, I would like to increase my rating from 5 to 6.", " I thank the authors for their responses to my questions. I will keep my score the same since I have a relatively low confidence on my assessment compared to the other reviewers. However, I would acknowledge that I find the paper addresses a useful problem and in my opinion if the overall molecular stability (which is at 0% currently and has been slated for future work) can be improved, then it can have a strong impact in drug discovery domain. So I would maintain my weak support for the acceptance of the paper.", " Thank you for the detailed response and clarification.\n\nMy previous concerns regarding the loss function have been fully addressed and I would like to raise my score from 6 to 7. \nThe paper provides a novel framework for injecting prior information into the diffusion process, which is flexible for diverse domains and prior information. The authors have shown two successful applications with improved results and less sampling time, which also seem promising for other different tasks. Thereby, I would like to raise my score further if the authors provide a revised paper with the mentioned clarifications.\n\nOne last question:\nFrom what I have understood, the proposed Lyapunov function method (in section 3.2) provides a criterion for a general form of bridge processes. Is the Lyapunov function method related to the injection of prior information? If so, It would be great if the authors provide some intuition about the relation between the Lyapunov function and the injection of prior knowledge, especially in the form of physical energy.", " **Question 1.** Is Bridge with Priors designed for a more complex process such as Ornstein–Uhlenbeck process\n\n**A:** An $x$-bridge is any process $Z$ that guarantees to achieve $Z_1 = x$ at $t=1$. There are different ways to construct processes that satisfy such conditions. A typical approach to constructing such processes is to take an arbitrary process, denoted by $\\mathbb M$, such as an Ornstein-Uhlenbeck process, and derive its conditional process $\\mathbb M(\\cdot |Z_1 = x)$ when its endpoint is pinned at $x$; existing methods, such DDPM, SMLD, and the method in Peluchetti can be viewed in this way. \nIn brief, the conditioned OU processes can be used as a $x$-bridge. \n\nHowever, the conditioning method requires mathematical derivation and is restricted to simple processes with a closed form. The main contribution of this work is to show that, by developing a more general Lyapunov criterion, we can construct much more flexible bridges that incorporate complex physical prior information. \n\nWe will try our best to clarify these points in the revision. 
We hope the reviewer can understand that it is a challenging task given the limited space and the application-oriented scope of the paper. \n\n\n**Question 2.** How to determine the learnable alpha?\n\n**A:** $\\alpha$ is trained together with $\\theta$ to minimize the loss in the typical way.\n\n\n**Question 3.** What is the meaning of \"to ensure that U is minimized at time t = 1\" in line 147?\n\n**A:** Here $U$ is a Lyapunov function that we use to certify $Z_1 = x$. Hence, we want $U(\\cdot, t=1)$ to be minimized at $x$ by construction.\n\n**Question 4.** Is the condition in line 157 correct?\n\n**A:** It is correct, and it is consistent with Corollary A.5; Proposition A.4 is used as a lemma. We will clarify this. \n\n**Question 5.** Why is the molecular stability of GEOM-DRUG 0.0 for all models?\n\n**A:** Stability is a difficult problem for all existing methods, which is an open question that we hope to address in future works. Essentially, the stability check tests whether the molecules satisfy the union of a set of constraints (e.g., the distances of a bond are in a certain region), and it reports failure once one constraint is not met. In future works, we will investigate how to incorporate the hard stability constraints as priors in generative models. \n\n\n**Question 6.** Is the training time of Bridge with Priors larger than previous diffusion models & compare with Schrödinger Bridge\n\n**A:** We conduct all our experiments with the same number of training epochs as the baselines. In addition, as we reply to Reviewer nq85 in Question 1, computing the energy term only yields a minor extra time cost (around $3\\%$, $1.18s$ vs. $1.22s$). Hence, in practice, the inference and training times of our bridge with priors are almost the same as those of previous diffusion models. \n\nThe Schrödinger Bridge (SB) is an alternative approach to diffusion generative models. However, the training process of SB is more complicated and expensive since it requires solving an entropy-regularized optimal transport problem, while our method only specifies an arbitrary bridge process. Importantly, we leverage the flexibility of bridge processes to incorporate prior information, but it is unclear how to do this in SB. We will add a discussion regarding this issue. \n", " Thanks for giving us careful comments and suggestions. We provide pointwise responses to your concerns and questions about our paper below:\n\n**Weakness 1.** Lines 102-111 are too concise with necessary details omitted \n\n\n**A:** We tried to give a concise description of the main idea of bridge models here, which is difficult due to the advanced stochastic calculus tools involved. We will add a more thorough introduction to the appendix in the final version. \nEssentially, given a set of bridges $\\mathbb{Q}^x$, each guaranteed to have $Z_1 = x$, we want to train the neural process to approximate the mixture of $\\mathbb{Q}^x$ when $x$ is drawn from the data distribution. In this way, the $Z_1$ generated from the neural process would also follow the data distribution. \n\n**Weakness 2.** the term $b_t(Z_t|Z_1)$ in the loss function of Eq.(2) should be the drift term of the mixture $\\mathbb{Q}^{\\Pi^*}$. Is this term analytically accessible? \n\n**A:** It is correct that, theoretically, we should have $s_t^\\theta(Z_t)$ to match $b_t^{\\Pi^*}(Z_t)$, the \ndrift of $\\mathbb{Q}^{\\Pi^*}$. 
But we can show that $b_t^{\\Pi^*}(Z_t) = \\mathbb{E}_{Z_1 \\sim \\Pi^*}[b_t(Z_t|Z_1)]$, and hence (ignoring the variance term for simplicity): \n\n$$\n\\mathbb{E}\\left[ \\| s_t^{\\theta}(Z_t) - b_t(Z_t|Z_1) \\|_2^2 \\right] = \\mathbb{E}\\left[ \\| s_t^{\\theta}(Z_t) - b_t^{\\Pi^*}(Z_t) \\|_2^2 \\right] + const. \n$$\n\nTherefore, it is equivalent to match $s_t^\\theta(Z_t)$ with $b_t(Z_t|Z_1)$. \nThe identity above is due to \n$$\n\\mathbb{E}[\\|X^{\\theta}-Y\\|^2] = \n\\mathbb{E}[\\|X^{\\theta}-\\mathbb{E}[Y]\\|^2] + \\mathrm{Var}(Y),\n$$\nwhere the variance term $\\mathrm{Var}(Y)$ does not depend on $\\theta$. \n\n\n**Weakness 3.** The computation of the loss function of equation (2) needs clarification\n\n**A:** You are correct regarding the evaluation of Eq (2), \nexcept that $x_0$ can be drawn from **any initial distribution** (e.g., standard Gaussian), as long as $\\mathbb{Q}^x$ is a bridge process that converges to $x$ regardless of the initialization. We will clarify this part. \n\n\n**Weakness 4.** The condition of line 157 does not seem to be trivially satisfied for most practical functions\n\n**A:** $\\mathbb{E}[\\lVert f(Z_t)\\rVert_2^2] < +\\infty$ (with respect to any distribution of $Z_t$) is trivially satisfied if $f$ is bounded, i.e., $\\sup_z \\lVert f(z) \\rVert<+\\infty$. \nIt can also be easily satisfied for unbounded $f$ if $Z_t$ has bounded moments, e.g., when $\\lVert f(x)\\rVert\\leq C\\lVert x\\rVert^\\alpha$ for some $C<+\\infty$ and $\\alpha\\in \\mathbb R$ (true for ReLU networks with $\\alpha = 1$) and $\\mathbb{E}[\\lVert Z_t\\rVert^\\alpha] <+\\infty$.\n\n\n**Weakness 5.** Is the energy function minimized when t approaches 1?\n\n**A:** The physical energy term is **NOT** minimized at $t=1$, because we would have $Z_1 = x$ guaranteed when following the $x$-bridge $\\mathbb{Q}^x$ (instead, $Z_t = x$ minimizes the Lyapunov function, which is the sum of the energy term and a singular term $\\frac{\\lVert Z_t-x\\rVert^2}{\\beta_1-\\beta_t}$). \nThe goal of incorporating the physical energy term is to regularize the trajectory of $Z_t$ before it hits $t=1$, \nso that the neural generative process that learns from $\\mathbb{Q}^x$ also has more \"physical-looking\" trajectories. \nIt is an empirical finding that more physically regularized processes yield better learning performance. \n\n\n**Weakness 6.** More experimental results to support the authors' claim of \"less sampling time\"\n\n**A:** 1) The advantage of the faster sampling speed of our method is also significant, as shown in Table 4: \nboth the MMD and COV scores of our method with 10 steps match or even beat those of the Diffusion or Bridge baseline models with 100 steps (e.g., our 10-step Chair COV-CD is even better than the 100-step Diffusion baseline, and our 10-step Chair MMD-EMD is better than the 100-step Bridge baseline). \n2) We additionally list the few-step results on GEOM-DRUG. We notice that 200-step Bridge + Force achieves results comparable to 1000-step E-GDM.\n3) We will add an ablation study on fewer sampling steps and sampling speed (see our response to Reviewer nq85 for the current sampling speed comparison) in our next revision.\n\n| Method / Atom Stable | 200 Steps | 1000 Steps |\n|:-|:-:|:-:|\n| Bridge + Force | 0.812 | 0.824 |\n| E-GDM | 0.798 | 0.813 |\n", " Thanks for giving us careful comments and suggestions. 
We provide pointwise responses to your concerns and questions about our paper below:\n\n**Weakness 1.** The overall molecule stability seems to be 0%\n\n**A:** Stability is a difficult problem for all existing methods, which is an open question that we hope to address in future works. Essentially, the stability check tests whether the molecules satisfy the union of a set of constraints (e.g., the distances of a bond are in a certain region), \nand it reports failure once one constraint is not met. We also mention this as a limitation and future direction in lines 327-328.\n\nMoreover, our prior is imposed on the trajectory that generates the molecules in the diffusion process, and hence its impact on the constraint satisfaction of each bond could be small. We think that we would need to explicitly incorporate the hard constraints used in the stability evaluation to improve the stability. \n\n\n\n\n**Weakness 2.** Computational complexity of the proposed method\n\n**A:** The per-step computational cost of our method is almost the same as that of standard diffusion models during inference. This is because our method and the baselines train the same neural diffusion models except for the extra energy term (a simple function that involves no neural networks and is fixed during training). Hence, our method yields a lower total inference time, giving comparable results with fewer inference steps. \n", " We thank the reviewer for your time and comments. We address all your concerns below:\n\n**Weakness 1.** Some typos\n\n**A:** Thanks for pointing out the typos. We will carefully proofread the draft in the revision. \n\n**Weakness 2.** Table 1 omits uniqueness comparisons\n\n**A:** Here, we show the uniqueness metric for Table 1, i.e., the percentage of valid and unique molecules among 12000 generated molecules. We see that our method outperforms E-GDM and EN-Flow. We copy the EN-Flow results from [17] because we did not run an EN-Flow model ourselves.\n| Method | Valid + Unique |\n|:-|:-:| \n| EN-Flow | 0.349 |\n| E-GDM | 0.902 |\n| Bridge | 0.902 |\n| Bridge + Force | 0.907 |\n\n\n**Question 1.** Taking fewer time steps sometimes could mislead & if a wall clock comparison can be presented.\n\n**A:** Because our method and the baselines train the same neural diffusion models except for the extra energy term (which is a simple function that involves no neural networks and is fixed during training), the wall-clock time is very close for our method and the baselines. Hence, the number of steps can be viewed as a good surrogate for wall-clock time. \n\nThe table below shows the time spent by our method and E-GDM on each step (difference <3%). The slight increase in time for our method is due to the need to load the prior function. We also list the per-batch training time difference when we use the AMBER prior. We use 9% more time than the baseline because of the calculation of the prior force.\n We will add this table to the paper in our next revision.\n\n| Method | One-step Inference Time (Second) | One-batch Training Time (Second) |\n|:-|:-:|:-:|\n| E-GDM | 1.18 | 98.1 |\n| Bridge | 1.22 | 107.3 |\n\n\n\n**About the Limitation:**\nSince the main goal of this work is to propose a novel methodology, we chose molecule conformation and point cloud generation as two examples of areas where our method can contribute to improvement. It is part of our plan to apply the method to solve the more challenging drug design problems requiring collaboration with domain experts. 
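For concreteness, the Valid + Unique metric reported above can be computed along the following lines with RDKit (an illustrative sketch only; the function and variable names are ours, and this is not the exact evaluation code used in the paper):

```python
from rdkit import Chem

def valid_unique_fraction(smiles_list):
    """Fraction of generated molecules that are both valid and unique.

    Validity: RDKit can parse the SMILES string.
    Uniqueness: deduplicate the valid molecules on canonical SMILES.
    """
    canonical = []
    for s in smiles_list:
        mol = Chem.MolFromSmiles(s)  # returns None for invalid SMILES
        if mol is not None:
            canonical.append(Chem.MolToSmiles(mol))  # canonical form
    return len(set(canonical)) / len(smiles_list)
```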
\n", " This paper proposes a new diffusion method incorporating physical information by carefully designing the prior for the downstream task. The suggested diffusion approach incorporates physical prior into bridges, in contrast to other diffusion methods that learned diffusions as a combination of forward-time diffusion bridges. The method was evaluated on molecules and 3D point cloud generation tasks, results of which show advantages in terms of generating quality, efficiency, and energy calculation convenience. They demonstrate the superiority of the method in terms of both quantity and quality.\n ### Strength\n* The paper is generally clear, well structured, and easy to follow.\n* The theory and accompanying research are explained clearly.\n* They properly motivate the need for the method and describe the proposed model thoroughly.\n* Injecting task-dependent priors into a diffusion method is non-trivial and novel\n* They reasonably select the benchmarks emphasizing the strength of the suggested model.\n\n\n### Weakness\n* Typos\n * Line 42: repeated citation [14] \n * Line 109: ofdenoised -> of denoised\n * Line 196: bound(x) -> bond(x)\n * Line 240: [26] seems to be an incorrect citation because it's nothing to do with a molecule application.\n* Table 1 omits uniqueness comparisons that are important to determine the superiority of the model. Including uniqueness comparisons is important because there may be models that generate molecules with low uniqueness and high novelty only. (I'd be happy to increase my score if this issue is resolved)\n * Taking fewer time steps sometimes could mislead because actual wall clock time spent for each time step could be different. Wondering if a wall clock comparison can be presented.\n * The scientific significance of merely producing molecular conformation is limited. The usefulness of this work would be higher if this model could be applied to real-world drug discovery applications, as the authors emphasized.", " This paper proposes a framework for incorporating physics driven prior bridges in diffusion models for improved molecule and point cloud generation. In order to do so, a Lyapunov function based method is developed to construct and determine bridges. The work seems to improve on existing methods for molecule and point cloud generation. ***Strengths***\n1) The proposed contribution of Lyapunov function to construct prior bridges is novel in my opinion and is a sensible approach for incorporating informative priors.\n\n2) The proposed method seems to improve molecule generation and point cloud generation benchmarks.\n\n***Weaknesses***\n\n1) For GeomDrug datasets, although the atom stability seems to improve, the overall molecule stability seems to be 0%, i.e., none of the predicted molecules are correct. Although this is the case with other existing models as well, it is surprising to see that the proposed method using informative physics informed priors does not improve this aspect of generation. Will inferring the edges during the diffusion process (instead of doing so in a post processing manner with RDKit) improve this aspect?\n\n2) I did not find any mention of computational complexity of the proposed method. It would be nice to see how feasible is the proposed approach from a practical perspective. In drug discovery, often a deserted outcome is the de novo generation of molecules at a reasonable speed. 
Hence, knowing about the computational requirements and the average time taken to generate realistic molecules will be good information to have. See questions in the weaknesses section above. The limitations are mentioned in the weaknesses section above.", " This paper introduces a novel diffusion-based generative model that leverages prior information to learn a diffusion process satisfying the bridge condition. Specifically, the paper proposes a Brownian bridge with an extra drift term that incorporates the prior information, and aims to learn the diffusion process by fitting the trajectories drawn from the mixture of these bridges. The paper utilizes Lyapunov functions to guarantee the bridge condition, and further introduces energy functions to guide the training process for each downstream task. The proposed method, Bridge with Priors, is able to generate samples with better quality compared to the baselines in both molecule generation and point cloud generation tasks. **Strength**\n\n1. The paper proposes a novel approach of leveraging prior information in the form of diffusion bridges to learn the diffusion process, which is clearly different from injecting inductive bias into the model architectures. Moreover, instead of using the time-reversal technique on which most of the previous works are based, the paper learns the generation process as a mixture of diffusion bridges, as introduced in [Peluchetti, 2022], which is also a new direction for diffusion models.\n\n1. The paper has a clear motivation of exploiting problem-dependent prior information to generate high-quality samples. Especially, for point cloud generation, the paper addresses the problem of unevenly distributed points and provides a solution by using the Riesz energy function. \n\n1. Although the paper is based on mathematically heavy theory, the paper is well-written and understandable with clear notations and formulations, except for the part explaining the mixture of diffusion bridges and the loss function (lines 102-112). \n\n1. The paper shows promising results for both molecule generation and point cloud generation tasks.\n\n1. I believe the experiments are well-conducted with clear data/training/evaluation setups and compared with the state-of-the-art baselines (EDM). Especially, comparing previous diffusion models, the Bridge without Prior, and the Bridge with Prior clearly shows the advantage of exploiting prior information for the generation. \n\n**Weaknesses**\n\n1. Lines 102-111 are too concise with necessary details omitted. Without reading [Peluchetti, 2022], it would have been impossible to understand the concept of \"mixture of diffusion bridges\" denoted as $\\mathbb{Q}^{\\Pi^{\\ast}}$. \n\n1. If I understood correctly, the term $b_t(Z_t|Z_1)$ in the loss function of Eq.(2) should be the drift term of the mixture $\\mathbb{Q}^{\\Pi^{\\ast}}$, not the drift term of a specific $x$-bridge $\\mathbb{Q}^{x}$. Is this term analytically accessible? \n\n1. The computation of the loss function of equation (2) needs clarification. I assume the expectation in Eq.(2) was computed by a Monte Carlo estimate with a sample $(t,z_1,z_0,z_t)$ where $z_1\\sim \\text{Data}$, $x_0\\sim\\Pi_{0|1}(dx_0|x_1)$ and $z_t\\sim p_{t|0,1}(z_t|z_0,z_1)$. Although [Peluchetti, 2022] shows that $p_{t|0,1}(z_t|z_0,z_1)$ has an explicit form, how is $x_0\\sim\\Pi_{0|1}(dx_0|x_1)$ sampled? \n\n1. 
The condition of line 157: $\\mathbb{E}_{\\mathbb{Q}^{x,bb}}\\|f_t(Z_t)\\|^2 < \\infty$ does not seem to be trivially satisfied for most practical functions, although the provided intuition in lines 158-161 is convincing. It would be clearer if the authors provided a proof that the condition is satisfied for $f$ induced from the proposed energy functions.\n\n1. The reason for incorporating the energy function (prior information) into the Brownian bridge by $f_t(\\cdot)=-\\nabla E(\\cdot)$ is not clear. Is the energy function minimized when $t$ approaches 1?\n\n1. I would like to see more experimental results supporting the authors' claim of \"less sampling time\", either for GEOM-DRUG or the point cloud generation tasks. I can see that in Table 2, Bridge+Force with 500 steps outperforms EGM for 1000 steps in the QM9 dataset, which is impressive, but as the molecules of QM9 are small, it would be more convincing if the claim is proven for larger datasets. It would be great if there is an ablation study section in the paper to emphasize the advantage of less sampling time.\n\n(*) I would like to raise my score if the presentation is improved and the addressed concerns/questions are clarified.\n\n**Minor correction**\n- The explanation for the abbreviation SMLD is missing. \n- The SDE of $\\mathbb{P}^{\\theta}$ and Eq.(1) use t-subscript notation for drift and diffusion coefficients, while Eq.(4) does not use such t-subscript. I recommend unifying the notation to prevent confusion.\n- No bold in row Airplane, column 100 Steps-COV-EMD in Table 4.\n- Condition #3 of Theorem A.1. in the supplementary file is missing.\n- In my opinion, Algorithm 1 and Figure 1 do not give much information. It would be great to add more details about the \"mixture of diffusion bridges\" and the explicit form of $b_t(Z_t|Z_1)$.\n\n---\n**References**\n- Peluchetti, Non-denoising forward-time diffusions, 2022 1. Is Bridge with Priors designed for a more complex process such as the Ornstein–Uhlenbeck process? Or is it applicable only to the Brownian bridge, which is possible due to the singular characteristic of the drift term $\\sigma^2_t\\frac{x-Z_t}{\\beta_1-\\beta_t}$?\n\n1. How to determine the learnable parameter $\\alpha$ as in $s^{\\theta}_t(z) = \\alpha f_t(z)+\\tilde{s}^{\\theta}_t(z)$?\n\n1. What is the meaning of \"to ensure that $U$ is minimized at time $t=1$\" in line 147?\n\n1. Is the condition in line 157: $\\mathbb{E}_{\\mathbb{Q}^{x,bb}} [||f_t(Z_t)||^2] <\\infty$ correct? \nIn the supplementary file, the condition of Proposition A.4. is $\\mathbb{E} [\\int^1_0||f_t(Z_t)||^2 dt] < \\infty$.\n\n1. Why is the molecular stability of GEOM-DRUG 0.0 for all models? Is it because the molecules are large?\n\n1. Is the training time of Bridge with Priors larger than that of previous diffusion models? Additionally, although the comparison with the Schrödinger Bridge methods [De Bortoli et al., 2021, Chen et al., 2022] is out of the scope of this paper, I believe that the comparison would be interesting as Bridge with Priors may have less training/sampling time and competitive or better performance.\n\n---\n**References**\n- De Bortoli et al., Diffusion Schrödinger Bridge with Applications to Score-Based Generative Modeling, NeurIPS 2021, \n- Chen et al., Likelihood Training of Schrödinger Bridge using Forward-Backward SDEs Theory, ICLR 2022\n Training the model takes a long time, which is a common limitation of diffusion models. The paper also discusses future work in section 6, which seems to be a promising research direction." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 2, 4 ]
[ "2ZEFOewRPm7", "Ehgtu5mAFJv", "eN7lEljmTRS", "FLG3aACL72Z", "QmyEhbGZCZ", "kJKqmE6werF", "QBnQxxG-A4f", "c9QB_2gGZu", "nips_2022_TJUNtiZiTKE", "nips_2022_TJUNtiZiTKE", "nips_2022_TJUNtiZiTKE" ]
nips_2022_fKXiO9sLubb
Learning from Stochastically Revealed Preference
We study the learning problem of revealed preference in a stochastic setting: a learner observes the utility-maximizing actions of a set of agents whose utility follows some unknown distribution, and the learner aims to infer the distribution through the observations of actions. The problem can be viewed as a single-constraint special case of the inverse linear optimization problem. Existing works all assume that all the agents share one common utility which can easily be violated under practical contexts. In this paper, we consider two settings for the underlying utility distribution: a Gaussian setting where the customer utility follows the von Mises-Fisher distribution, and a $\delta$-corruption setting where the customer utility distribution concentrates on one fixed vector with high probability and is arbitrarily corrupted otherwise. We devise Bayesian approaches for parameter estimation and develop theoretical guarantees for the recovery of the true parameter. We illustrate the algorithm performance through numerical experiments.
Accept
An interesting approach to stochastically revealed preferences
train
[ "MP0cNt_hjk3", "R_cXLkrwZa", "OrFtwcyN3gk", "mwzlSv-IOqY", "hfsDGjvl33K", "qcoHP_RhuhL", "ad5l_HZZD4p", "ESoJ82hv0B5", "Uuv4jvOlYNY", "8-9LCwThzW" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the clarification, and I will keep my rating for the paper.", " Thanks for the clarifications, most of my questions are addressed.\n\nI think it would be good to include the discussion on \"which distance to select\" the corresponding considerations in such selection; and a quantitative discussion on the complexity (i.e., which step of the algorithm is the most computationally costly and it depends on certain variables such as dimension, or the acceptance rate by the MC method in some way) if possible in the revision/appendix.\n\nOverall, I am maintaining my original rating.", " We thank the reviewer for the comments. In the following, we address the comments one by one. We look forward to hearing your further feedback. \n\nFor “a set of agents”, we are sorry for the previous confusion. “By sharing a common utility function”, we mean that the existing literature on preference learning and the general inverse optimization problem all consider the case that the objective/utility function is parametrized by an unknown but fixed deterministic vector $u=(u_1,…,u_n)$. Then, each agent/customer is represented by a sample tuple $(x_t^*,a_t,b_t)$ and all the agents share the same utility vector $u=(u_1,…,u_n)$. The objective is to identify this vector of $u$ from the available samples. Mathematically, under the linear program formulation, the problem of identifying $u$ is equivalent to finding a feasible solution to a system of linear inequalities. In comparison, we relax this assumption and allow some heterogeneity of the agents’ utilities. Specifically, each agent can have a different utility vector $u_t=(u_{t1},…,u_{tn})$, and the vector $u_t$ is sampled from some unknown distribution $P_u$. The goal then becomes to learn this distribution (instead of the previous deterministic vector) from samples. \n\nFor the motivation of considering the Gaussian setting and the $\\delta$-corruption setting, we made this choice mainly because of their analytical tractability. Modeling-wise, we also believe these two are the most natural candidates if one wants to relax the deterministic assumption. We refer to our response to Reviewer tzej for more detailed discussions on other alternatives and the connection between these two settings.\n\n“In Theorem 1, why is it that b no longer has a subscript t“. \nWe are sorry for the typo, and there should be a $t$ for the subscript. \n\n“The dependence on the choice of distance”\nYes. The bound depends on the choice of distance. Even for two equivalent metrics, the constant coefficients in the corresponding bounds can be different. Here, we remark that similar bounds in the same order with respect to the sample size also hold for the total variation distance, the Hellinger distance, and all other weaker metrics. \nThe main required property of a metric to show our bound is the distinguishability of distributions in the corresponding metric space in terms of test functions. To be specific, we require a similar property in Lemma 9 to hold. Lemma 9 says that there exists a set of test functions distinguishing a given distribution and the true distribution at an exponential rate with respect to the sample size. Our bound depends on the rate of distinguishability.\n\n“Which distance to choose”\nIf multiple distances are possible, the choice depends on our goal. 
For example, if we simply hope to obtain a sharp bound, we should consider a metric with better distinguishability, as discussed in the previous part; if we want to additionally recover the true parameter, we also need to consider the relationship between the metric on the distribution class and the metric on the parameter space.\n\n“The Limitation”\nHere, we will take the example of Algorithm 1 to address the question; similar reasoning is also applicable to Algorithm 2. The main reason why the algorithm performance is poor in the high-dimensional setting is that we choose small numbers of iterations and sample sizes to allow the algorithm to finish within a limited time. If we run ten times more iterations with five times more samples, our new numerical result can increase to 75% for Experiment (ii) when n=25 in the Gaussian case. Thus, for question (ii), a better implementation can reduce the running time and allow us to run our algorithm with a larger number of iterations and a larger sample size within the same time, which will give a better result. \nOne method to speed up our current implementation is to apply parallel computing in Step 5 to estimate the acceptance rate for the Monte Carlo method. That is why parallel computing can help improve the performance of Algorithm 1. For question (i), since the relationship between the implementation and the performance is discussed above, let us point out the most computationally expensive steps: Step 4, where the algorithm generates random numbers from the von Mises-Fisher distribution, and Step 5, where the algorithm estimates the acceptance rate by the Monte Carlo method.\n\nWe look forward to hearing your further feedback and will get back to you promptly in the following week.\n", " We thank the reviewer for the comments and for pointing out the typos. We will correct them in a later version of the paper. The high-dimensionality issue is also raised by another reviewer. We make the following remarks:\n\nFirst, for the Gaussian setting, we believe the difficulty in handling the high-dimensional setting is intrinsic and might be inevitable for algorithm design. We can show that even for a low-dimensional setting (n=2), if one uses the conventional maximum likelihood approach to estimate the parameter, the likelihood function is non-convex, and there exist locally optimal values that have an arbitrarily large gap compared to the global optimum. In this light, our Bayesian approach serves as a characterization of the posterior distribution: although one can’t say much about the landscape of the likelihood function, the posterior distribution will concentrate around the true parameter. \n\nSecond, for the $\\delta$-corruption setting, the high-dimensional issue can be resolved if we change our goal. Throughout the paper, our goal has been to identify the distribution of the utility vector. Suppose we instead consider an online setting with the following objective:\n$\\sum_{t=1}^T u^*_t x^*_t - u^*_t x_t,$\nwhere $x_t$ is the decision we make at time $t$ subject to the constraint $a_t x_t\\le b_t, x_t\\in [0,1]$, and $x_t^*$ is the optimal solution of the linear program $\\max u_t^* x_t, \\text{ s.t. } a_t x_t\\le b_t, x_t\\in [0,1]$. This objective characterizes a setting where the goal is to predict the customer’s choice and measure the prediction loss by the utility gap. 
Then the objective has a convexity structure, and the online gradient descent (GD) algorithm can be used, following the approach in (Bärmann et al., 2018, An Online-Learning Approach to Inverse Optimization). While the original paper of (Bärmann et al., 2018) studies a fixed utility setting, we can obtain a regret bound of $O(\\sqrt{T}+\\delta T)$ by analyzing the noisy version of the online GD algorithm. \n\nThirdly, the problem can also be mitigated when the underlying problem has more structure, such as in Section D of the appendix, where we discuss the setting where the parameter $\\kappa$ is known for the Gaussian setting. Also, when there exists one product that has a known deterministic utility, the problem may also be solved by learning all the other utility distributions one by one. This corresponds to the setting where this product with a known deterministic utility that represents the “no-purchase” option for the customer. We leave this type of problem as future research on what kind of additional structure enables efficient learning of the utility distribution, especially in a high-dimensional setting.\n\nWe look forward to hearing your further feedback and will get back timely in the following week.\n", " We thank the reviewer for the comments on the high dimensionality. We make the following remarks on the issue and will update the results in the later version of our paper.\n\nFirst, for the Gaussian setting, we believe the difficulty in handling the high dimensional setting is intrinsic and might be inevitable for algorithm design. We can show that even for a low dimensional setting (n=2), if one uses the conventional maximum likelihood approach to estimate the parameter, the likelihood function is non-convex, and there exist locally optimal values that have an arbitrarily large gap compared to the global optimal. In this light, our Bayesian approach serves as a characterization for the posterior distribution: although one can’t say much about the landscape of the likelihood function, the posterior distribution will concentrate around the true parameter. \n\nSecond, for the $\\delta$-corruption setting, the high-dimensional issue can be resolved if we change our goal. Throughout the paper, our goal has been to identify the distribution of the utility vector. However, if we instead consider an online setting with the following objective:\n$\\sum_{t=1}^T u^*_t x^*_t - u^*_t x_t,$\nwhere $x_t$ is the decision we made at time $t$ subject to the constraint $a_tx_t\\le b_t, x_t\\in [0,1]$. $x_t^*$ is the optimal solution of the linear program $\\max u_t^*x_t, s.t. a_tx_t\\le b_t, x_t\\in [0,1]$. This objective characterizes a setting where the goal is to predict the customer’s choice and measure the prediction loss by the utility gap. Then the objective has a convexity structure, and the online gradient descent (GD) algorithm can be used, following the approach in (Bärmann et al., 2018, An Online-Learning Approach to Inverse Optimization). While the original paper of (Bärmann et al., 2018) studies a fixed utility setting, we can obtain a regret bound of $O(\\sqrt{T}+\\delta T)$ by analyzing the noisy version of the online GD algorithm. \n\nThirdly, the problem can also be mitigated when the underlying problem has more structure, such as in Section D of the appendix, where we discuss the setting where the parameter $\\kappa$ is known for the Gaussian setting. 
Also, when there exists one product that has a known deterministic utility, the problem may also be solved by learning all the other utility distributions one by one. This corresponds to the setting where the product with a known deterministic utility represents the “no-purchase” option for the customer. We leave this type of problem as future research on what kind of additional structure enables efficient learning of the utility distribution, especially in a high-dimensional setting.\n\nWe look forward to hearing your further feedback and will get back to you promptly in the following week.\n", " We thank the reviewer for the comments on the high dimensionality. We make the following remarks on the issue and will update the results in a later version of our paper.\n\nFirst, for the Gaussian setting, we believe the difficulty in handling the high-dimensional setting is intrinsic and might be inevitable for algorithm design. We can show that even for a low-dimensional setting (n=2), if one uses the conventional maximum likelihood approach to estimate the parameter, the likelihood function is non-convex, and there exist locally optimal values that have an arbitrarily large gap compared to the global optimum. In this light, our Bayesian approach serves as a characterization of the posterior distribution: although one can’t say much about the landscape of the likelihood function, the posterior distribution will concentrate around the true parameter. \n\nSecond, for the $\\delta$-corruption setting, the high-dimensional issue can be resolved if we change our goal. Throughout the paper, our goal has been to identify the distribution of the utility vector. Suppose we instead consider an online setting with the following objective:\n$\\sum_{t=1}^T u^*_t x^*_t - u^*_t x_t,$\nwhere $x_t$ is the decision we make at time $t$ subject to the constraint $a_t x_t\\le b_t, x_t\\in [0,1]$, and $x_t^*$ is the optimal solution of the linear program $\\max u_t^* x_t, \\text{ s.t. } a_t x_t\\le b_t, x_t\\in [0,1]$. This objective characterizes a setting where the goal is to predict the customer’s choice and measure the prediction loss by the utility gap. Then the objective has a convexity structure, and the online gradient descent (GD) algorithm can be used, following the approach in (Bärmann et al., 2018, An Online-Learning Approach to Inverse Optimization). While the original paper of (Bärmann et al., 2018) studies a fixed utility setting, we can obtain a regret bound of $O(\\sqrt{T}+\\delta T)$ by analyzing the noisy version of the online GD algorithm. \n\nThirdly, the problem can also be mitigated when the underlying problem has more structure, such as in Section D of the appendix, where we discuss the setting where the parameter $\\kappa$ is known for the Gaussian setting. Also, when there exists one product that has a known deterministic utility, the problem may also be solved by learning all the other utility distributions one by one. This corresponds to the setting where the product with a known deterministic utility represents the “no-purchase” option for the customer. We leave this type of problem as future research on what kind of additional structure enables efficient learning of the utility distribution, especially in a high-dimensional setting.
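To make the online scheme above concrete, the following is a minimal sketch of an online gradient descent loop in the spirit of (Bärmann et al., 2018); the function and variable names, the step size, and the projection step are our own illustrative choices rather than the exact procedure behind the $O(\\sqrt{T}+\\delta T)$ bound:

```python
import numpy as np
from scipy.optimize import linprog

def online_utility_learning(data, n, eta=0.1):
    """Online gradient descent on a utility estimate u_hat.

    data: iterable of (a_t, b_t, x_star_t), where a_t is the constraint
    vector, b_t the budget, and x_star_t the agent's observed
    utility-maximizing action.
    """
    u_hat = np.ones(n) / n
    for a_t, b_t, x_star_t in data:
        # Our prediction: maximize u_hat @ x subject to a_t @ x <= b_t,
        # 0 <= x <= 1 (linprog minimizes, hence the sign flip on c).
        res = linprog(c=-u_hat, A_ub=a_t.reshape(1, -1), b_ub=[b_t],
                      bounds=[(0.0, 1.0)] * n)
        x_t = res.x
        # Subgradient step on the suboptimality loss u @ (x_t - x_star_t).
        u_hat = u_hat + eta * (x_star_t - x_t)
        # Keep the iterate in a bounded, nonnegative region.
        u_hat = np.clip(u_hat, 0.0, None)
        norm = np.linalg.norm(u_hat)
        if norm > 1.0:
            u_hat = u_hat / norm
    return u_hat
```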
\n\nWe look forward to hearing your further feedback and will get back to you promptly in the following week.\n", " This paper considers the stochastic setting where utility comes from some unknown distribution and the goal is to learn the distribution from the utility-maximizing actions of agents. They consider two distributions: Gaussian and delta-corruption. In both settings, they derive Bayesian approaches to learn the distribution and provide some theoretical guarantees. They also provide approximation techniques and present some numerical experiments. Strengths:\nThey motivate the guarantees based on the posterior distribution using sound explanation and examples.\n\nWeaknesses:\nThe experiments show that their approximation algorithms are quite weak and accuracy suffers in high dimensions. If we follow the authors' argument that the primary reason is the curse of dimensionality, then this work only helps small-dimensional settings. I think the paper can benefit from a broader discussion on tractability and inaccuracy in high-dimensional settings. The authors address the limitations.", " This paper studies the problem of revealed preference under a stochastic setting. The utility of the agents follows an unknown distribution and we estimate the distribution from only the observation of the utility-maximizing actions of the agents. In this paper, the authors consider two underlying distributions: a Gaussian setting and a $\\delta$-corruption setting, and provide Bayesian approaches for the problems with theoretical guarantees. *Strengths:*\n\n- The paper studies the revealed preference problem under a stochastic setting, where existing analyses assume that the agents share the same utility function.\n- The authors propose Bayesian approaches for two utility function distributions, and the method may potentially be extended to other settings beyond the Gaussian and $\\delta$-corruption settings.\n- The authors also provide theoretical guarantees for the Bayesian models.\n\n*Weaknesses:*\n\n- The method requires sampling from a high-dimensional space, which incurs the curse of dimensionality that negatively impacts the performance of the algorithms. - A typo in L261, where an extra \"0\" appears at the end of the sentence.\n- Figure 1 plus Figure 2 appear to be too wide. The authors seem to have addressed the limitations of their work.", " This paper studies the problem of learning utilities from purchasing behavior, or, learning from revealed preference. While multiple papers have studied the sample complexity of backing out a single set of feasible item utilities that explain purchasing data drawn from a distribution, the authors study stochastic revealed preferences: the item utilities themselves are drawn from a distribution, and the dataset consists of purchases with respect to iid draws of the item utilities. The authors consider two main settings: in the first the utilities are drawn from a von Mises-Fisher distribution; in the second there is a deterministic utility vector that explains the purchase w.h.p., but the purchase (with complementary probability) might be explained by a utility vector drawn from an arbitrary distribution (representing a probability-delta corruption). They provide sampling and optimization techniques for both settings, with guarantees, and verify convergence of the learned utility parameters experimentally via actual implementations of the techniques. I think this is an interesting contribution to the area of learning from choice data. 
The paper is nicely written and I was able to follow it with no issues. The techniques are interesting: the stochastic nature of the problem makes it more difficult than simply \"backing out\" a feasible utility vector. To this end I found the techniques used to overcome this challenge to be novel and interesting. I also appreciated the fact that the authors directly gave algorithms for arriving at the distribution P_u that could be experimentally verified, rather than stopping at sample complexity bounds.\n\nOne potential weakness is that the paper considers only two somewhat disparate settings, and there are not any connections (as far as I could tell) developed between the techniques used for the two settings. It does seem like this setting is challenging and more general remarks would be tougher (e.g. there’s probably no general distribution-independent statement that can be made), but I wonder if there is a more general abstract learnability result in terms of parameters of P_u. Is there any relation to the literature on learning stochastic choice? See, e.g., the manuscript “Prediction and Stochastic Choice” on SSRN and references therein. That literature deals mostly with binary preference relations so maybe there’s not much of a relation, but it could be worth looking into (I’m not suggesting that these works be cited). N/A", " This paper studies the problem of learning an unknown utility function via the actions (of an agent) in the stochastic setting, specifically a Gaussian setting and a $\\delta$-corruption setting. Under both settings, methods with theoretical guarantees are proposed to learn the unknown utility function parameterized by a vector. Two respective algorithms are described and implemented for numerical evaluation to verify the theoretical guarantees (i.e., convergence). This paper rigorously studies the problem of learning some unknown utility function under the stochastic setting (two specific settings). The problem setting is clearly laid out and the proposed approaches seem sound (to the best of my ability). In particular, Theorems 1 and 2 provide the necessary theoretical guarantees for the approaches, respectively, which are later empirically verified. Regarding weaknesses, the setting of \"a set of agents\" can be made clearer, and the motivation for considering the Gaussian setting and $\\delta$-corruption setting together in one paper can be made clearer, too.\n\nStrengths\n- The overall theoretical rigor is good: in problem formulation, discussing the approaches, and providing the necessary theoretical guarantees.\n- The presentation is clear (i.e., writing and organization of content).\n\nWeaknesses\n- The setting of \"a set of agents\" is not so clear. A clear contrast (to existing works) this paper claims is \"all the existing algorithms and analyses under this topic rely on the assumption that all the agents share one common utility function (or a common utility parameter vector) and thus can fail in the stochastic setting.\"\n - Are there multiple agents (with different utility functions) at the same time for the learner to learn? 
Why does stochasticity give rise to multiple utility functions (as opposed to one common utility function), since there is only one true $\\mathcal{P}_{\\mu}$ (for the Gaussian case) or $\\boldsymbol{u}^*$ (for the $\\delta$-corruption case)?\n - Following this, the motivation of this setting can be made clearer with references, in addition to the described use-case in lines 28-30.\n- The motivation for specifically considering both the Gaussian case and the $\\delta$-corruption case can be made clearer. Is it a standard approach to consider these two options (if so, are there references for it), or is there a significant motivation for considering these two cases together in one paper?\n\n - Please see the first Weakness about \"a set of agents\".\n- In Theorem 1, and in lines 130-132, why is it that $b$ no longer has a subscript $t$? What does $b$ (without the subscript $t$) refer to?\n- In line 137, \"the Wasserstein distance in the theorem is not critical and it can be replaced with other distances such as the total variation distance and the Hellinger distance.\" \n - Does the bound in the theorem depend on the specific choice of distance in terms of _expression_ and _properties_ (e.g., does it need to be a proper metric)?\n - If multiple distances are possible, which should be used? In other words, what are some considerations one should make when picking a suitable distance? The authors have identified in Sec. 5 that the approaches seem to scale poorly w.r.t. the dimension of the problem and argued that this can be mitigated by a better and more efficient implementation. It would be good to briefly discuss this (perhaps in the Appendix if there is no space): i) to pinpoint the component in the Algorithms that leads to this issue; ii) some intuitions for a better implementation (e.g., why would parallel computing improve estimation accuracy, as I thought parallel computing is meant to distribute the computational load)." ]
[ -1, -1, -1, -1, -1, -1, 6, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 2, 4, 2 ]
[ "mwzlSv-IOqY", "OrFtwcyN3gk", "8-9LCwThzW", "ESoJ82hv0B5", "ad5l_HZZD4p", "Uuv4jvOlYNY", "nips_2022_fKXiO9sLubb", "nips_2022_fKXiO9sLubb", "nips_2022_fKXiO9sLubb", "nips_2022_fKXiO9sLubb" ]
nips_2022_mowt1WNhTC7
When does dough become a bagel? Analyzing the remaining mistakes on ImageNet
Image classification accuracy on the ImageNet dataset has been a barometer for progress in computer vision over the last decade. Several recent papers have questioned the degree to which the benchmark remains useful to the community, yet innovations continue to contribute gains to performance, with today's largest models achieving 90%+ top-1 accuracy. To help contextualize progress on ImageNet and provide a more meaningful evaluation for today's state-of-the-art models, we manually review and categorize every remaining mistake that a few top models make in order to provide insight into the long-tail of errors on one of the most benchmarked datasets in computer vision. We focus on the multi-label subset evaluation of ImageNet, where today's best models achieve upwards of 97% top-1 accuracy. Our analysis reveals that nearly half of the supposed mistakes are not mistakes at all, and we uncover new valid multi-labels, demonstrating that, without careful review, we are significantly underestimating the performance of these models. On the other hand, we also find that today's best models still make a significant number of mistakes (40%) that are obviously wrong to human reviewers. To calibrate future progress on ImageNet, we provide an updated multi-label evaluation set, and we curate ImageNet-Major: a 68-example "major error" slice of the obvious mistakes made by today's top models -- a slice where models should achieve near perfection, but today are far from doing so.
Accept
All reviewers are positive about this paper, leaning toward accept. The AC does not find sufficient grounds to overrule the consensus.
val
[ "mFDdn20E2WC", "lm2tSpWvgO", "zfdN50wlKD2", "lgcLsZ4aWA0", "p5nAIDdI-F1", "omzg4Japvu6", "pk9wpFta4kq", "1bK2OXEaQBO", "8SDNxkRPLJL", "2_TKKVrGq76" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for the detailed response!\n\nI had concerns that\n1. It's unclear how to use ImageNet-M to evaluate models (qualitatively and quantitatively) and decide the next steps based on the evaluation (debugging, model selection, model comparison, ...)\n2. I wasn't entirely sure if ImageNet-M is sufficiently representative.\n\nThe authors have argued that (if I'm understanding correctly)\n1. ImageNet-M provides additional analysis on top of top-1 and MLA scores: how severe the errors actually are. The analysis can be run in a qualitative as well as quantitative fashion.\n2. ImageNet-M is perhaps not \"exhaustive\", but model predictions tend to be similar (Section 3 of https://arxiv.org/pdf/2012.15483.pdf). One could also think of ImageNet-M as a necessary condition for a model to make no obvious errors.\n\nI think the response defends the essential value of ImageNet-M, though the contribution is still not very strong. \n\n## Why the contribution is still not strong.\n\nIt is not entirely convincing that there will be cases where the ImageNet-M scores will show drastically different behaviours than top-1 or multi-label accuracies. The examples discussed in the authors' response are hypothetical. Moreover, solving ImageNet-M does not ensure the elimination of all possible obvious errors and it may be possible that one could overfit to the particular samples in ImageNet-M.\n\nI find the authors' way of interpreting ImageNet-M results ambiguous. When model A is getting 30/68 correct and model B is getting 40/68 correct, while all the other accuracies (top-1 and MLA) are identical, we can argue that model B is better than model A. In that sense, the benchmark has a quantitative role - it ranks models and enables model selection. But at the same time, the authors suggest looking into the 68 samples to examine the models' qualitative behaviours. Will this activity bring further insights? - Such as model A is in fact better than model B in the example above because the remaining 28 mistakes by model B are much worse in quality than the 38 mistakes by model A. If this is the case, does this mean the simple ranking of models A and B based on the number of mistakes in ImageNet-M is not reliable? Alternatively, maybe ImageNet-M is intended purely as a dataset for qualitative analysis. In that case, we cannot compare models A and B quantitatively along with accuracies like top-1 and MLA. I find it hard to reconcile this double usage of ImageNet-M. I hope the final version of the paper contains more discussion about the actual use cases of the benchmark that involves both quantitative and qualitative treatment and explain how to interpret the results when the conclusions contradict each other. \n\n## Corrected score: 5\n\nReflecting the above, I would recommend borderline acceptance (score 5) for the paper. ", " The response addressed my concerns. I have raised my rating to \"6: Weak Accept.\"\n\nBut I still request the authors consider adding a flowchart as an overview to better illustrate how the initial 676 mistakes are narrowed down to the final 68 major mistakes.", " ```\nOne possible way to verify the representativeness of the 68 ImageNet-M images would be to do the following experiment: You prepare a sequence of K models M1, M2, M3, M4, M5, M6, ....., MK. Then, plot the size of ImageNet-M against the number of models used for generating ImageNet-M increases from 1 to K. 
I wish to see if\n * the size of ImageNet-M converges to a non-zero number (ideally close to 68) as you include more models; or\n * the size of ImageNet-M effectively converges to zero as each model gets included.\n```\nHere are the numbers:\n\nFor ViT3B – we start with ~154 major mistakes.\n\nVit3b_wrong & coca_wrong: 51\nVit3b_wrong & soups_wrong: 54\nVit3b_wrong & insta_wrong: 117\n\nVit3b_wrong & soups_wrong & coca_wrong: 26\nVit3b_wrong & insta_wrong & coca_wrong: 36\nVit3b_wrong & insta_wrong & soups_wrong: 41\n\nall_wrong: 17\n\nWe choose the slightly more relaxed ‘3 of 4 get them wrong’ criterion to get the 68 examples, rather than the ‘all of them wrong’ criterion that would yield only 17 examples.\n\nWhile we understand the concern about ‘representativeness’, we want to emphasize that the specific 68 examples themselves are not special in isolation. They are merely a set that we’ve analyzed carefully where we believe 100% is achievable. When people ask about whether ImageNet is done, people are in essence asking when models have solved the solvable examples. ImageNet-M is a non-exhaustive but careful attempt at measuring this.\n\nMoreover, for practical reasons, we tried to strike a balance between a slice that is too small to be general, and too large to be manually reviewed by the average researcher. We expect most new SOTA models this year to get between 30-40 of them correct, so only 30-40 would need to be reviewed for potentially novel mistakes / categorization.\n\nWe're happy to include this analysis in the Appendix to better contextualize ImageNet-M's creation.\n\n`It would be great if the paper discusses how one could run such a qualitative analysis on the 68 samples - e.g. what kind of insights they could get and how they can improve the engineering based on that.`\n\nThis is a great question. We are happy to expand upon this in our paper. In general, we think researchers should use ImageNet-M for debugging and understanding the mistakes their models are making, and perhaps using it to identify common error patterns they may be able to address through targeted remediations. If a researcher evaluates their model on ImageNet-M and looks at their mistakes, they can:\n\n1. Check that the mistakes are actually mistakes, and if not, potentially update ImageNet-M.\n2. See if the mistakes their model is making are actually major; for those that are, they can identify what error category they think the mistake belongs in, and apply some appropriate remediation. If they are minor, that may be worth noting in a resulting paper for their method. A model that does not improve upon accuracy, but makes fewer (or no) “major” mistakes, if it can be communicated well, is also useful to the community (and practitioners!). \n3. As a researcher iterates on their model, they can keep evaluating on ImageNet-M to understand how their changes changed the distribution of mistakes their model is making.\n\nWe hope that encouraging this style of analysis will lead to interesting follow-on papers that look at specific remediations to remaining errors, or even provide a mistake remediation handbook, which would be impossible without a way to first understand and categorize failure modes.\n\n\n", " We thank the reviewer for their positive as well as constructive feedback on our work. 
In particular, we are happy you found our paper to be “interesting news for many who follow and contribute models on ImageNet” and “quite interesting to read…a well-written paper.”\n\nWe respond to concerns about the utility of ImageNet-M in the joint response, and to individual questions and concerns inline below.\n\n`I wonder if there is anything special about those 68 ImageNet-M samples.`\n\nThe special thing about the 68 images in ImageNet-M is that they were all “major errors” made by a majority of four very different SOTA models from recent years. They were the subset of errors remaining on ImageNet that we were confident as a team a human would get correct. The remaining errors were more borderline or minor, in many cases errors you may not bother fixing unless your application had very exacting requirements. \n\n```\nAnother way to formulate my worry is as follows: what can you conclude when model A gives 60/68 correct and model B gives 50/68 correct?\n * Can you say model A is addressing the \"remaining challenges\" of ImageNet better than model B?\n * Are you sure model A is not making more mistakes outside of ImageNet-M than model B?\n```\nThis is a great question, and in fact highlights the need to:\n\n 1. Evaluate ImageNet-M holistically with the other quantitative benchmarks like top-1 and MLA, rather than in isolation.\n 2. Analyze the *quality* of the mistakes made on this set.\n\nFor example, let’s say that:\n* For model A:\n * The 8 incorrect predictions are entirely nonsensical or pre-existing major mistakes.\n * But it gets slightly higher top-1 and MLA (say 0.2% higher on each than model B).\n\n* For model B:\n * The 18 incorrect predictions are all novel predictions that would be deemed borderline.\n * It gets slightly lower top-1 and MLA.\n\nOne could argue that model B is actually the more useful model, because it never makes a major mistake on ImageNet-M, and on the broader top-1 and MLA metrics it is performing nearly on par. Yes, it is conceivable that model B makes many major mistakes on an entirely novel set of MLA mistakes in that 0.2% gap, but given that 3 of 4 very different models all made mistakes on this common 68 set, we believe that is less likely. For external evidence supporting this, see Section 3 of https://arxiv.org/pdf/2012.15483.pdf on dominance probabilities, which shows that slightly more accurate models are unlikely to significantly ‘swap’ which examples they get wrong.\n\n```\nIsn't it sufficient to compare the multi-label accuracy (MLA) rather than ImageNet-M?\n * What additional benefit does ImageNet-M bring on top of MLA?\n * Will there be convincing use cases where ImageNet-M ranking would differ from MLA ranking?\n```\nMLA doesn’t capture the severity of the mistake, which is a critical thing you care about in the real world. ImageNet-M is a common set of MLA mistakes, and importantly they are ones that are less ambiguous and that we are more confident in. We believe eventually a model will be able to get 100% on ImageNet-M, but don’t expect a model to be able to do that with MLA on the whole validation set.\n\nA model that improves upon MLA but does worse on ImageNet-M may just be getting better at solving ambiguous or borderline cases, and not improving on examples we expect a model really should be able to get right. 
This looks a lot more like overfitting to a dataset.\n\nFinally, you can manually inspect ImageNet-M results easily due to its small size, and it is easier for the community to update.\n\n`Re: convincing use case.`\n\nTo elaborate on the example in the shared response: let’s say you have two models that both get around 91% on ImageNet top-1, and 98.5% on ImageNet-MLA. One gets 55 of the 68 right, and the other gets 30 of the 68 right. However, the 55/68-correct model still makes 13 predictions that are major mistakes (e.g., they are the same mistakes other models have made), whereas the second model’s 38 incorrect predictions end up being novel predictions that would be categorized as borderline mistakes. One could argue that the second model is actually more useful than the first, because it makes no major mistakes, and its novel predictions are in general closer to the right answer. In applications like autonomous vehicles, if a model predicts a scooter for a bicycle, the set of actions taken is likely to be more similar / safe than if the scooter was mistaken for a shopping cart.\n\nIndeed, the point of ImageNet-M is not the quantitative result, but the qualitative one, which requires analyzing the quality of the mistake. If the set were too big, researchers would find it hard to analyze, and if the set were too small, it might not be representative. We’re not sure whether there’s a perfect size; we found 68 to be a reasonable number for someone to spend time on, without being trivially small (like < 20).", " We thank the reviewer for their positive and constructive feedback on our work. We respond to concerns about the utility of ImageNet-M in the joint response, and to individual questions and concerns inline below.\n\n`My major concern is whether ImageNet-M can serve as an evaluation set for future models to benchmark. Due to its small size (i.e., only 68 images), the error bars (due to model selection or random seeds) could be very high.`\n\nWe appreciate the concern around error bars, which our Clopper-Pearson intervals show on the figure in Section 5.1. As mentioned in our general response to reviewers, we want to emphasize that ImageNet-M is meant as less of a quantitative “model quality” score, and more of a “model capability measurement”. \n\nImageNet-M is the remaining subset of mistakes these top performing models make that we A.) believe models should eventually be able to get 100% accuracy on, and B.) that we found existing SOTA models to make “major” mistakes on, mistakes we were sure a human would not make. A model that does really well on ImageNet-M, and slightly poorer on ImageNet Top1 might be interesting because it might be more “human like” in its behavior. \n\nFinally, as we state in our general response to reviewers, we view its small size as a feature, not a bug. We wanted the dataset to be small enough that a single researcher could reasonably look at all their model’s mistakes on this subset, and understand how their model improved, or identify what errors their model is still making, and the severity of them. \n\n`Some related works are not discussed. In terms of mistake analysis on ImageNet (L109-125), Salient ImageNet (Singla and Feizi 2022) annotated core and spurious features on ImageNet; Domino (Eyuboglu et al. 2022) designs a method to automatically detect systematic errors based on clustering. Regarding spurious correlation (L207-211) for the context, ImageNet-9 (Xiao et al. 
2021) studies the influence of background on object recognition on ImageNet.`\n\nThank you for these! We’re working on reading and understanding them so we can properly cite them in a final draft of this paper. \n\n`Besides, could the authors clarify the distribution of these 68 images in terms of the mistake category (i.e., how many images are in the “Fine-grained” category and other categories)? Is ImageNet-M the 68 images listed in Appendix E?`\n\nThis is a great suggestion, and we’re happy to add this to the paper and add all 68 images in ImageNet-M to the appendix with model predictions.\n", " We deeply thank the reviewers for their time and thoughtful feedback on our work. We are glad to hear that Reviewer c29F found our analysis “thorough and useful”, Reviewer VG4e found our human evaluations “extensive” and addressing an “important problem” with “interesting findings…and good insights”, and Reviewer NAFQ found our paper “well-written” and interesting and thought-provoking to read. \n\nWe also hear your concerns regarding the utility of our proposed dataset ImageNet-M. We have clarified the purpose of this dataset (how we hope the community will use and benefit from it) in our response below, and will revise it accordingly in our paper. We also address reviewer-specific questions in our individual response to each reviewer. Please feel free to follow up with any additional questions.\n\n**ImageNet-M**\n\nReviewer VG4e and Reviewer NAFQ shared similar concerns about the size and representativeness of ImageNet-M.\n\nImageNet-M is the remaining subset of mistakes these top performing models make that we A.) are confident a model should eventually be able to reach 100% accuracy on, and B.) found existing SOTA models to make “major” mistakes on, mistakes we were quite sure a human would not make.\n\nAs the Clopper-Pearson intervals show, ImageNet-M is best suited as a qualitative analysis slice of ImageNet multi-label evaluation, and should be reported and analyzed as context for both top-1 and MLA quantitative numbers. We will work to emphasize this more in the paper.\n\nOur goal here is not to add yet another context-less quantitative number for researchers to put in their paper, but instead provide a small slice of examples (where we have confidence in the labels) that is reasonable for a single researcher to manually review to find out *how their model improved*, instead of *how much their model improved*. We hope that authors of future papers use ImageNet-M as a barometer of the quality of the mistakes their model makes, and as a tool to understand and fix model failures.\n\nImageNet-M is unique among benchmarks and evaluation splits of ImageNet in that we believe it is possible to get 100% on it, and research that aims at solving the remaining long tail of a benchmark (e.g. active learning) will benefit greatly from ImageNet-M. It provides researchers a trustworthy way to ask “How well does our technique help comprehensively solve major errors made even by today’s SOTA models?”.\n\nFinally, when considering ImageNet-M as a qualitative tool for analyzing model predictions, we find its small size to be a feature rather than a bug. We wanted the dataset to be small enough that a single researcher could reasonably look at all their model’s mistakes on this subset. 
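To make the statistical caveat about only 68 examples concrete, here is a minimal sketch of the exact Clopper-Pearson interval mentioned above; the helper function and the example counts are our illustration (hypothetical, not the authors' code or reported figures):

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact two-sided Clopper-Pearson CI for k successes out of n trials."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# With only n = 68 examples, the intervals stay wide:
print(clopper_pearson(50, 68))  # roughly (0.61, 0.83)
print(clopper_pearson(60, 68))  # roughly (0.78, 0.95)
```

The width of these intervals is one reason the slice reads better as a qualitative tool than as a standalone ranking metric.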
For example, as one of the reviewers questioned, if model A gets 50/68 right, a careful analysis of the remaining 18 mistakes might show that the errors are all borderline, whereas the hypothetical 60/68 model might still make 8 very egregious mistakes. That may be an important difference, especially if the two models are roughly comparable on ImageNet Top-1 and MLA. This way, a qualitative ImageNet-M analysis adds nuance to existing quantitative numbers.\n", " We thank the reviewer for their positive and constructive feedback on our work. We respond to individual questions and concerns inline below.\n\n`Have authors done similar analysis on another dataset to see how much the observations generalize? Even if it is a closed (proprietary) dataset?`\n\nWe decided to focus on ImageNet because it’s the de-facto benchmark in vision. Additionally, because of the extensive time costs of expert review, we considered other datasets out of scope for this project. The fact that accuracy levels are so high on ImageNet makes manual review like this feasible.\n\nThat said, our ImageNetV2 results show that the general issues hold across a dataset collected independently of ImageNet, which suggests these behaviors are likely to occur whenever the set of classes is overlapping enough to be multi-label. It would not be surprising if these results held on other datasets with similar label set properties.\n\n`Can we use this analysis to build better datasets or give guidelines to human reviewers or use it to quickly correct some other major datasets used by the community?`\n\nWe absolutely believe that the community can use our process to build better evaluation datasets. One of the major lessons of this work is that evaluation datasets need to be continually updated throughout their lifecycle as models get better.\n\nIn Section 4.2 we state: “These proportions suggest that large models are frequently uncovering new correct multi-labels, suggesting that mistake analysis and label correction needs to be part of the lifecycle and of benchmark development of long-tailed errors to properly assess performance as a benchmark saturates.”\n\nAdditionally, as we state in the paper, benchmark designers should be more careful in designing their tasks such that humans are plausibly able to label things correctly when given sufficient time, and that they should think not just of the initial state of the benchmark, but what it means to “solve it”. Much hand-wringing in the community around ImageNet lately is precisely because the benchmarks aren't clear about what to do as models approach perfection.\n\n`How do the authors think of this work translating to video classification where the accuracies are still on the lower side and the models are not as accurate?`\n\nWhile video benchmarks seem less close to saturation of performance, we’ve seen similar types of issues with video classification, in that a scene might have multiple plausible labels depending on the dataset, and yet we’re still evaluating without multi-labels typically. We expect similar types of issues to arise as performance on these benchmarks approaches saturation too.\n\n`Also, going beyond classification into detection and segmentation would also have been a strong direction to think about.`\n\nThis is a great direction for future work. 
One thing worth noting is that some of the issues we see in classification actually go away in localization tasks like detection and segmentation, because objects or things are separately labeled with their respective pixels. This removes some of the multi-label ambiguity plaguing ImageNet, but we expect some issues like fine-grained classification to still exist given a sufficiently fine-grained label list, and for things like spurious correlations to still exist.\n", " The paper dives deeper into the mistakes on the ImageNet dataset by the latest models. To help contextualize progress on ImageNet and provide a more meaningful evaluation for today’s state-of-the-art models, the authors manually review and categorize the mistakes that a few top models make and provide insights into the long tail of errors. Since ImageNet images contain multiple labels, the paper focuses on the multi-label subset evaluation of ImageNet, where today’s best models achieve upwards of 97% top-1 accuracy. The main contribution of the paper is what the analysis reveals: nearly half of the supposed mistakes are not mistakes at all (significantly underestimating the performance of these models). At the same time, the models still make a significant number of mistakes (40%) that are obviously wrong to human reviewers. To calibrate future progress on ImageNet, the authors also provide an updated multi-label evaluation set.\n 1. Thorough and useful analysis of mistakes made by current models on ImageNet\n2. Significant clarity on the mistakes, both qualitative (thanks to the supplementary material) and quantitative\n3. A refined multi-label dataset for folks to compare against in the future\n4. Categorizing the mistakes into four types: fine-grained errors, fine-grained with out-of-vocabulary, spurious correlations, and non-prototypical labels.\n5. Useful conclusions, like 40% of errors are not errors, the remaining errors fall into the four above categories, and deduplication and nearest-neighbor filtering from the validation set. Have authors done similar analysis on another dataset to see how much the observations generalize? Even if it is a closed (proprietary) dataset? \n\nCan we use this analysis to build better datasets or give guidelines to human reviewers or use it to quickly correct some other major datasets used by the community?\n\nHow do the authors think of this work translating to video classification where the accuracies are still on the lower side and the models are not as accurate? 1. Doing this evaluation on datasets that are larger, like ImageNet22K (even if subsampled), would throw more light on the limitations of models. Right now, a lot of the conclusions and observations are about the dataset rather than a particular set of models, etc. \n\n2. Also, going beyond classification into detection and segmentation would also have been a strong direction to think about.\n\n", " The paper conducts extensive manual reviews on the mistakes made by a few top models. The mistakes are further annotated with severity and category. Two major findings are (1) about half of the mistakes are actually correct; (2) about 40% of the mistakes are major mistakes—predictions that humans find obviously wrong. The authors propose ImageNet-Major (ImageNet-M), an evaluation split based on 68 “major mistakes” images, to better benchmark models' progress on remaining mistakes. 
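(As an aside, the authors' responses above describe assembling this slice by intersecting the per-model error sets under a "wrong for at least 3 of 4 models" rule. A minimal sketch of that selection, with made-up boolean error masks standing in for the real model predictions, might look like the following.)

```python
import numpy as np

# Made-up boolean "major error" masks over the evaluation set, one per model.
# The names mirror the authors' response (vit3b, coca, soups, insta), but the
# arrays are random stand-ins, not the real model predictions.
rng = np.random.default_rng(0)
n_examples = 20_000
wrong = {name: rng.random(n_examples) < 0.005
         for name in ("vit3b", "coca", "soups", "insta")}

n_models_wrong = np.sum(list(wrong.values()), axis=0)  # 0..4 per example
relaxed = np.flatnonzero(n_models_wrong >= 3)  # "3 of 4 wrong" criterion
strict = np.flatnonzero(n_models_wrong == 4)   # "all 4 wrong" criterion
print(len(relaxed), len(strict))
```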
Strengths\n\n* The paper studies an important problem to understand the remaining mistakes on ImageNet better.\n* The human evaluations and the experiments are extensive.\n* The manual review provides interesting findings (e.g., the model’s performance is underestimated due to label errors) and good insights (e.g., the categories of the mistakes).\n\n\nWeaknesses\n\n* My major concern is whether ImageNet-M can serve as an evaluation set for future models to benchmark. Due to its small size (i.e., only 68 images), the error bars (due to model selection or random seeds) could be very high.\n* Some related works are not discussed. In terms of mistake analysis on ImageNet (L109-125), Salient ImageNet (Singla and Feizi 2022) annotated core and spurious features on ImageNet; Domino (Eyuboglu et al. 2022) designs a method to automatically detect systematic errors based on clustering. Regarding spurious correlation (L207-211) for the context, ImageNet-9 (Xiao et al. 2021) studies the influence of background on object recognition on ImageNet.\n\nSingla, Sahil, and Soheil Feizi. 2022. “Salient ImageNet: How to Discover Spurious Features in Deep Learning?” In International Conference on Learning Representations. https://openreview.net/forum?id=XVPqLyNxSyh\n\nEyuboglu, Sabri, Maya Varma, Khaled Kamal Saab, Jean-Benoit Delbrouck, Christopher Lee-Messer, Jared Dunnmon, James Zou, and Christopher Re. 2022. “Domino: Discovering Systematic Errors with Cross-Modal Embeddings.” In International Conference on Learning Representations. https://openreview.net/forum?id=FPCMqjI0jXN.\n\nXiao, Kai Yuanqing, Logan Engstrom, Andrew Ilyas, and Aleksander Madry. 2021. “Noise or Signal: The Role of Image Backgrounds in Object Recognition.” In International Conference on Learning Representations. https://openreview.net/forum?id=gl3D-xY7wLq.\n Questions\n* I request the authors to answer whether the “high error bar” problem (see weakness) would exist on ImageNet-M. If so, whether or not it can prevent ImageNet-M from serving as an evaluation dataset.\n* Besides, could the authors clarify the distribution of these 68 images in terms of the mistake category (i.e., how many images are in the “Fine-grained” category and other categories)? Is ImageNet-M the 68 images listed in Appendix E?\n\nMinor suggestion\n* Although it is good to present many details about the reviewing process, I feel it becomes very easy to get lost while reading the paper. The paper will be better presented if the authors add a figure (e.g., a flowchart) as an overview to summarize how the initial 676 mistakes are narrowed down to 378 mistakes and the final 68 major mistakes.\n Yes, the authors adequately addressed them in Sec. 5.2.", " The paper analyses the remaining mistakes by state-of-the-art image classifiers on ImageNet. Several models with >95% on the multi-label accuracy (MLA) metric are considered and their mistakes are analysed. The paper discusses the remaining challenges in ImageNet and proposes a benchmark ImageNet-M that will quantify how well future models address those challenges. ## Strengths\n\nWhen evaluating a model, we often report only a number (e.g. classification accuracy) or a set of numbers (ImageNet variants). Then they miss out on the rich information about *what kind of mistakes* they make and *how severe* they are. 
Given that models are getting closer to 100% accuracy (in MLA), it is important to delve into the remaining errors of ImageNet to distinguish whether\n- remaining errors are mostly due to labelling errors or beyond-the-common-sense difficulty; or \n- they are reasonable errors that could be conquered.\n\nThe paper does exactly that. The conclusion is that there are both impossible and doable cases in the remaining errors. I believe this is interesting news for many who follow and contribute models on ImageNet.\n\nThe paper is quite interesting to read. The authors are often quite frank about the procedure and possible limitations there. This paper lets you think. I think it's a well-written paper.\n\n## Weaknesses\n\nImageNet-M is a subset of the ImageNet validation set that combines the obvious (i.e. humans wouldn't really get confused - line 325) errors made by four state-of-the-art models. The authors suggest this subset as another benchmark dataset for evaluating future close-to-perfect models. \n\nI'm worried about the representativeness of ImageNet-M for the remaining challenges. Indeed, (roughly speaking) they are the intersection of the mistakes made by four state-of-the-art image classifiers. However, as the authors have observed as well, even the best models today make a different set of mistakes. I wonder if there is anything special about those 68 ImageNet-M samples. I'm worried that they are rather unlucky ones for the four particular classifiers.\n\nAnother way to formulate my worry is as follows: what can you conclude when model A gives 60/68 correct and model B gives 50/68 correct? \n- Can you say model A is addressing the \"remaining challenges\" of ImageNet better than model B?\n- Are you sure model A is not making more mistakes outside of ImageNet-M than model B?\n- Isn't it sufficient to compare the multi-label accuracy (MLA) rather than ImageNet-M?\n - What additional benefit does ImageNet-M bring on top of MLA?\n - Will there be convincing use cases where ImageNet-M ranking would differ from MLA ranking?\n\nOne possible way to verify the representativeness of the 68 ImageNet-M images would be to do the following experiment: You prepare a sequence of K models M1, M2, M3, M4, M5, M6, ....., MK. Then, plot the size of ImageNet-M as the number of models used for generating it increases from 1 to K. I wish to see if \n- the size of ImageNet-M converges to a non-zero number (ideally close to 68) as you include more models; or\n- the size of ImageNet-M effectively converges to zero as each model gets included.\n\nIf the former holds, then I would agree that ImageNet-M is indeed representative. If the latter holds, then ImageNet-M is rather a transient artifact that results from considering the particular 4 models. I believe the trend from M1 up to M1,..,M4 (the four models for generating 68 images) would already tell us a lot. Could you share them?\n\nThe paper also seems to hint at the possibility of manually looking through the 68 samples to get more information about the model performance and mechanism than what you would get with numeric metrics (line 346). It would be great if the paper discusses how one could run such a qualitative analysis on the 68 samples - e.g. what kind of insights they could get and how they can improve the engineering based on that.\n\n## Conclusion\n\nThe analysis of remaining errors for current models is interesting and useful (see \"strengths\"). 
However, I'm not too confident about ImageNet-M as a valid evaluation set for the remaining challenges for future models (see \"weaknesses\"). Unless the issues with ImageNet-M are successfully resolved in the rebuttal, I'd consider the paper's contribution to be insufficient for publication.\n\n## After the rebuttal\n\nI believe the authors have demonstrated the minimal value of the ImageNet-M benchmark in their response. I would therefore change my score to 5 - borderline acceptance. I think it's more natural to include the questions in the \"strengths and weaknesses\" section above. I think it's more natural to include the limitations in the \"strengths and weaknesses\" section above." ]
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "zfdN50wlKD2", "p5nAIDdI-F1", "lgcLsZ4aWA0", "2_TKKVrGq76", "8SDNxkRPLJL", "nips_2022_mowt1WNhTC7", "1bK2OXEaQBO", "nips_2022_mowt1WNhTC7", "nips_2022_mowt1WNhTC7", "nips_2022_mowt1WNhTC7" ]
nips_2022_PzI4ow094E
Scalable Sensitivity and Uncertainty Analyses for Causal-Effect Estimates of Continuous-Valued Interventions
Estimating the effects of continuous-valued interventions from observational data is a critically important task for climate science, healthcare, and economics. Recent work focuses on designing neural network architectures and regularization functions to allow for scalable estimation of average and individual-level dose-response curves from high-dimensional, large-sample data. Such methodologies assume ignorability (observation of all confounding variables) and positivity (observation of all treatment levels for every covariate value describing a set of units), assumptions problematic in the continuous treatment regime. Scalable sensitivity and uncertainty analyses to understand the ignorance induced in causal estimates when these assumptions are relaxed are less studied. Here, we develop a continuous treatment-effect marginal sensitivity model (CMSM) and derive bounds that agree with the observed data and a researcher-defined level of hidden confounding. We introduce a scalable algorithm and uncertainty-aware deep models to derive and estimate these bounds for high-dimensional, large-sample observational data. We work in concert with climate scientists interested in the climatological impacts of human emissions on cloud properties using satellite observations from the past 15 years. This problem is known to be complicated by many unobserved confounders.
Accept
This paper extends the marginal sensitivity model to continuous treatments. Given the developments in the discrete treatment setting, none of the parts of the paper are surprising. Further, there are several simultaneous related works that carry out a generalization to continuous treatments. That being said, the work is sound and a polished contribution.
test
[ "EheJ_AzPleY", "VgLJ9k5p63Q", "ck50dmzHVyl", "9r2ifDgTB7O", "lbegvuLTfF-", "gRy8x-uVgoG", "IEssAyWPFeg", "vDD4-Aj61o6k", "9Dc8fod7zvU", "YBPA7kRCk7O", "NVYL-k_Qcjs", "Kxu6ruNeYBr", "BK05pRAKeZH", "TnLkNI1Mi6", "nV9D_dUnfY", "SM6ijFqeT0r", "15QWNOFV9ea", "ufB7-qd57G" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for taking the time to review our paper. We hope our response below has addressed your concerns. If you have any further suggestions, we would be happy to discuss them with you prior to your review confirmation.", " Thank you again for your feedback and corrections.", " Thank you again for your insightful review. Your suggestion to expand upon the interpretation and discussion of the climate results will clearly improve the paper and we will certainly do so if an extra page is granted.", " Thank you again for the review and we appreciate the score increase as it ought to help increase the visibility of our work. Thank you for your clarification regarding the experiment. We will aim to include this in the camera ready should this paper be accepted.", " Thank you very much for the detailed response. Again, it was a fun paper to read.\n\nFor the experiment, what I meant was something like the comparison done in [1] (Page 8, first column last paragraph, dashed yellow lines in the experiments was what I was referring to). In any case it was a minor point which I asked out of curiosity, and not really expected. I am already satisfied with all the evaluations presented in the paper (hence my high score). After reading all the reviews and the authors' responses to them, I am happy to increase my score for the paper.\n\n[1] Marmarelis, M. G., Steeg, G. V., & Galstyan, A. (2022). Bounding the Effects of Continuous Treatments for Hidden Confounders. arXiv preprint arXiv:2204.11206.\n", " Thanks to the authors for their detailed responses to my questions. I remain very positive about this work and am maintaining my score.\n\nRegarding the responses about the real data experiment---I won't pretend to have a solid understanding of the scientific context for the climate application. On my initial read of Section 5.2 it felt like the main conclusion was that \"as the strength of confounders increases (Λ > 1.0), the range of uncertainty in the treatment outcome increases\", which is clearly how the method is intended to work, and there was less of a story regarding bringing clarity to the climate application itself. However, upon another read and in light of some of the rebuttal answers, it seems that the authors do in fact interpret $\\Lambda$ in the context of the application. Overall, I think this section might benefit from some more explanation given more space in a camera ready version of the paper.", " Thank you for the clarifications. I have no further comments.", " A sincere thank you to all reviewers for being so generous with their time and attention in reviewing our work. We are greatly encouraged that you have each given scores indicating that our work is worthy of acceptance to NeurIPS. There do not seem to be any major concerns with our theoretical or empirical results. We have updated our manuscript according to the suggestions you have made and plan to extend the literature review in the appendix.\n\nWe are also encouraged by your reaction to our empirical evaluation using climate data. In order to verify recent improvements in climate modeling, we must first improve how we validate and establish trends from our observational record. Unfortunately, confounding effects such as the swelling of aerosol in humid environments, makes it difficult to confidently establish baseline trends that we can compare to model output. 
The methodology and resulting model presented here allow climate modelers and observationalists to establish bounds of possible effects while accounting for these confounding influences. This not only allows us to understand which models can recreate the observed trends, but how well they recreate the trends within distinct environmental regimes.\n", " Thank you for your detailed questions and comments. It is encouraging that you find the topic important and timely. We address your individual comments below.\n\nTL;DR — *We discuss recent related works. We propose to add a detailed literature review to the appendix to include discrete treatment methods. We point out the efforts made to go beyond the machine learning literature. We address your technical questions. We fix the errors you have found.*\n\n### \"This appears a crowded topic, as there seem to be at least two other recently arxiv'ed papers on the same topic (Chernozhukov et al., 2021; Marmarelis et al., 2022)\"\n\nWe would agree that this is a timely topic and do not believe that the publication of any of these papers should preclude publication of another. Our motivation for developing this method was that we were initially using discrete treatment methods to analyze the climate data. The overwhelming initial feedback from the climate science community was to treat AOD as a continuous variable. While there were methodologies to quantify statistical and causal uncertainty for discrete treatments, there appeared to be a gap in the continuous treatment regime. We became aware of (Chernozhukov et al., 2021) shortly after beginning to develop our methodology. Because our methods take different approaches to the sensitivity analysis, we pushed forward on developing our own. We became aware of (Marmarelis et al., 2022) after submission of this work to an earlier conference. They take the MSM approach, but derive a different MSM. It will make for interesting future work to compare and contrast these methods. However, since these are unique and independently developed approaches to a timely problem, we do not see their co-occurrence alone as a reason for any to be rejected.\n\n### \"Also, I'm wondering if there aren't additional papers published in different fields (other than machine learning), where similar results could have been discussed earlier. On a quick googling, I found at least the following, which, based on title and abstract, might have a similar goal: \"Bias formulas for sensitivity analysis of unmeasured confounding for general outcomes, treatments, and confounders\" by VanderWeele and Arah, Epidemiology, 2011. Could you clarify the difference of the present work to this, and also check if you can find other similar papers published in, e.g., economics, epidemiology, or (bio)statistics?\"\n\nThank you for sharing this work. It is a parametric approach in the same vein as CHH16, DHCH16, MSDH16, Ost19, CH20a, and CH20b. However, it looks at discrete rather than continuous treatment levels. We tried earnestly to do an extensive survey outside the machine learning literature. CHH16 is from the Journal of Research on Educational Effectiveness, DHCH16 is from Statistics in Medicine, MSDH16 is from Political Analysis, Ost19 is from the Journal of Business & Economic Statistics, and CH20a is from the Journal of the Royal Statistical Society. We chose to prioritize methods that work with continuous treatment values in the related works due to space constraints. 
Given your comments — and the comments by Reviewer UYQB and Reviewer eRS3 — it is clear that a more detailed treatment of the related works would be welcome. We would be happy to include a more thorough report in the appendix.\n\n### \"Proposition 1 has an assumption that P(Y_t|T=t,X=x) is equivalent to a Lebesgue measure. I read this such that the distribution is uniform, which seems very restrictive. Could you clarify this, please?\"\n\nEquivalence here is in the measure-theoretic sense. Namely, we assume that $P(\\mathrm{Y}_\\mathrm{t}|\\mathrm{T}=\\mathrm{t},\\mathrm{X}=\\mathrm{x})$ is absolutely continuous with respect to the Lebesgue measure, *and* that the Lebesgue measure is absolutely continuous with respect to $P(\\mathrm{Y}_\\mathrm{t}|\\mathrm{T}=\\mathrm{t},\\mathrm{X}=\\mathrm{x})$. This does not imply that $\\mathrm{Y}_\\mathrm{t}|\\mathrm{T}=\\mathrm{t},\\mathrm{X}=\\mathrm{x}$ is uniformly distributed. Rather, it says that the zero measure sets of the Lebesgue measure and $P(\\mathrm{Y}_\\mathrm{t}|\\mathrm{T}=\\mathrm{t},\\mathrm{X}=\\mathrm{x})$ coincide. This assumption certainly holds when $\\mathrm{Y}_\\mathrm{t}|\\mathrm{T}=\\mathrm{t},\\mathrm{X}=\\mathrm{x}$ is distributed with support over the whole real line, for example, with Cauchy, Gaussian, or Laplace distributed random variables. The assumption is also consistent with our use of mixture density networks to model $p(\\mathrm{y} | \\mathbf{x}, \\mathrm{t})$. Admittedly, equivalence is an overloaded term. We have updated the manuscript to make this more explicit (lines 93-94).", " ### \"I don't follow eq (7). It's probably similar to KMZ19, but it would be good to make the present article self-standing. For example, what is the w(.) function and what is its role here intuitively? It's defined after eq (9) for one argument, but it seems to be used in the text interchangable with one or two arguments.\"\n\nIt is best to start at the bound in equation (6) to understand the role of $w(\\cdot)$:\n\n$\\frac{1}{\\Lambda p(\\mathrm{t} \\mid \\mathbf{x})} \\leq \\frac{1}{p(\\mathrm{t} \\mid \\mathrm{y}_{\\mathrm{t}}, \\mathbf{x})} \\leq \\frac{\\Lambda}{ p(\\mathrm{t} \\mid \\mathbf{x})}$.\n\nThe role of $w(\\cdot)$ is to express the hypothesized inverse complete propensity density, $\\frac{1}{p(\\mathrm{t} \\mid \\mathrm{y}, \\mathbf{x})}$, as a linear interpolation between the lower bound, $\\frac{1}{\\Lambda p(\\mathrm{t} \\mid \\mathbf{x})}$, and the upper bound, $\\frac{\\Lambda}{ p(\\mathrm{t} \\mid \\mathbf{x})}$. We first define $w(\\cdot)$ after equation (7) in general as a function of $\\mathbf{x}$ and $\\mathrm{y}$ with range $[0, 1]$, such that:\n\n$\\frac{1}{p(\\mathrm{t} \\mid \\mathrm{y}, \\mathbf{x})} = \\frac{1}{\\Lambda p(\\mathrm{t} \\mid \\mathbf{x})} + w(\\mathrm{y}, \\mathbf{x}) \\left( \\frac{\\Lambda}{p(\\mathrm{t} \\mid \\mathbf{x})}-\\frac{1}{\\Lambda p(\\mathrm{t} \\mid \\mathbf{x})}\\right)$.\n\nTo get a sense of this interpolation, it is easy to see that when $w(\\mathrm{y}, \\mathbf{x}) = 0$, the second r.h.s. term disappears leaving just the lower bound, $\\frac{1}{\\Lambda p(\\mathrm{t} \\mid \\mathbf{x})}$, and when $w(\\mathrm{y}, \\mathbf{x}) = 1$, the lower bounds cancel out leaving just the upper bound, $\\frac{\\Lambda}{p(\\mathrm{t} \\mid \\mathbf{x})}$. 
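As a quick numerical check of this endpoint behavior, here is a minimal Python sketch of the interpolation in equation (7); the values of $\Lambda$ and $p(\mathrm{t} \mid \mathbf{x})$ are arbitrary illustrative choices, not estimates from the paper:

```python
# Numerical check of the interpolation in Eq. (7); lam and p_nominal are
# arbitrary illustrative values, not quantities estimated in the paper.
lam = 2.0        # sensitivity parameter Lambda >= 1
p_nominal = 0.5  # nominal propensity density p(t|x)

def inverse_complete_propensity(w):
    """Hypothesized 1 / p(t|y,x) as a linear function of w in [0, 1]."""
    lower = 1.0 / (lam * p_nominal)  # attained at w = 0
    upper = lam / p_nominal          # attained at w = 1
    return lower + w * (upper - lower)

for w in (0.0, 0.5, 1.0):
    print(w, inverse_complete_propensity(w))
# 0.0 -> 1.0 (= 1 / (Lambda * p(t|x)))
# 0.5 -> 2.5 (an intermediate value)
# 1.0 -> 4.0 (= Lambda / p(t|x))
```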
When $w(\\mathrm{y}, \\mathbf{x}) \\in (0, 1)$, the function can take on any value between the two extrema.\n\nWe define $w(\\cdot)$ in the most general case because we started looking at a gradient descent approach to solve the bounds on the CAPO and APO functions, which we detail in Appendix F. The gradient descent approach optimizes a parameterized version of $w(\\mathrm{y}, \\mathbf{x})$. The gradient descent approach may be an interesting avenue for future work, but it did not yield any clear advantage over the grid search approach in our initial analyses. While we relegated the gradient descent approach to the appendix, it may be best to keep the definition general here to inspire other approaches.\n\nIn equations (8) and (9), we drop the $\\mathbf{x}$ and express $w(\\cdot)$ as just a function of $\\mathrm{y}$ for two reasons. First, we pick a specific form for $w(\\cdot)$ — the Heaviside step function — for conciseness in the lead-up to Theorem 1. Theorem 1 uses Lemma 2, showing that the solution space for $w$ is limited to step functions. Second, given this choice and the fact that the infima and suprema in (8) and (9) are determined independently for each $\\mathbf{x}$, the $\\mathbf{x}$ notation becomes redundant. We are happy to add a note in the main text and perhaps add this discussion to the appendix.\n\n\n### \"Similarly to the previous comment, I'm not sure what the y with the bar below/above in Section 3.3 is.\"\n\nThe parameters returned by the grid search algorithm (Algorithm 1) for the step function, $H$, are given by $\\underline{\\mathrm{y}}$ and $\\overline{\\mathrm{y}}$, where $\\underline{\\mathrm{y}}$ is the parameter associated with the lower bound estimate, and $\\overline{\\mathrm{y}}$ is the parameter for the upper bound estimate.\n\n### \"The figures in the Experiments section are not sharp when printed out. Otherwise the presentation is clear.\" \"Very minor: l. 210: D has subscript j while indexing uses k.\" \"l. 230: By maximizing the log-likelihood, not \\\"minimizing\\\".\"\n\nThank you for pointing these out to us. We will use vector graphics in the camera-ready should this paper be accepted for publication. We have updated the manuscript to address the errors.", " Thank you very much for your feedback. We are glad you found the beginning of the paper clear and intuitive. We recognize that we can improve the overall readability for you. We address your questions below.\n\nTL;DR — *We provide context for the synthetic experiment. We request clarity on how we can improve the readability of Section 3. We explain how ignorability fits in with the neural-network architecture.*\n\n### \"The generation of the synthetic data seems to be in a particular form, it's not clear to me why this exact formulation was chosen. Could the authors explain a little more on this?\"\n\nThe synthetic data design illustrates several different aspects. We need a ground truth density for the complete propensity to calculate reference $\\Lambda$ values for each $\\mathrm{x}$ value. These are unique for each $\\mathrm{x}$, so we show the bounds growing for increasing $\\Lambda$ to show how we cover the ground truth CAPO at the ground truth $\\Lambda$ value. The choice of ground truth complete propensity density induces regions of high and low overlap, and we see the statistical uncertainty grow in those regions. 
We have the CAPO functions follow a varying non-linear form to make it more challenging for the estimator and reflect real-world possibilities.\n\n### \"The latter sections (section 3) are slightly difficult to follow. It would be nice to have more intuitive explanation of the derivations throughout the section.\"\n\nThank you for this comment. Were there specific areas that you had in mind? Were the remarks helpful? We would be happy to include more in sections 3.2-3.5 if we had more space.\n\n### \"For the experiment, it's not exactly clear to me how the assumption of ignorability fits into the neural network architecture.\"\n\nThe ignorability assumption does not factor into the neural network architecture per se. A neural network can be used to estimate the statistical value $\\mathbb{E}[\\mathrm{Y} \\mid \\mathbf{x}, \\mathrm{t}]$. Under ignorability (and other assumptions), the statistical estimand above is equivalent to the causal effect, $\\mathbb{E}[\\mathrm{Y}_\\mathrm{t} \\mid \\mathbf{x}]$. Sensitivity analysis allows us to quantify the interval of $\\mathbb{E}[\\mathrm{Y} \\mid \\mathbf{x}, \\mathrm{t}]$ values that are compatible with the data and a user-specified relaxation of the hidden confounding assumption. Our sensitivity analysis depends not only on the mean value estimate $\\mathbb{E}[\\mathrm{Y} \\mid \\mathbf{x}, \\mathrm{t}]$, but also on the density of $\\mathrm{Y}$ given $\\mathbf{X}=\\mathbf{x}$ and $\\mathrm{T}=\\mathrm{t}$, $p(\\mathrm{y} | \\mathbf{x}, \\mathrm{t})$. Where a standard regression neural-network just outputs the mean value estimate, we use a mixture density network to model $p(\\mathrm{y} | \\mathbf{x}, \\mathrm{t})$. Modeling the density allows us to sample $\\mathrm{Y}$ values and perform our sensitivity analysis. $\\Lambda$ fits in at this stage, after model training at inference time.", " Thank you very much for your detailed feedback. We are sincerely flattered by your positive comments. We address your questions and concerns below.\n\nTL;DR — *We offer a minimal definition of a marginal sensitivity model. We propose to provide a broader literature review in the appendix. We provide additional insights into understanding the $\\Lambda$ parameter using the climate data. We elaborate on the connections between statistical uncertainty and violations of positivity.*\n\n### \"Within the main paper, the authors assume a basic familiarity with marginal sensitivity models as known method/object (Tan, 2006) (e.g., \"We propose CMSM as a new MSM\" Ln 77). While they provide more information in the supplementary material, perhaps there is some minimal definition of a \"marginal sensitivity model\" that they can put in the main paper to provide a baseline context that they can build upon.\"\n\nThe term \"marginal sensitivity model\" is introduced by ZSB19. For ZSB19, a sensitivity model is a user hypothesized complete propensity score function. Analogously, a continuous treatment sensitivity model in our setting would be a user hypothesized propensity density. The inverse of the hypothesized propensity density is:\n\n$\\frac{1}{p(\\mathrm{t} \\mid \\mathrm{y}, \\mathbf{x})} = \\frac{1}{\\Lambda p(\\mathrm{t} \\mid \\mathbf{x})} + w(\\mathrm{y}, \\mathbf{x}) \\left( \\frac{\\Lambda}{p(\\mathrm{t} \\mid \\mathbf{x})}-\\frac{1}{\\Lambda p(\\mathrm{t} \\mid \\mathbf{x})}\\right).$\n\nFor ZSB19, they define their marginal sensitivity model as the collection of user hypothesized complete propensity score functions that satisfy the bound on the odds ratio. 
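For reference, a minimal way to state that binary-treatment MSM (our paraphrase of the ZSB19 construction, writing $e(\mathbf{x}) = P(\mathrm{T} = 1 \mid \mathbf{X} = \mathbf{x})$ for the nominal propensity score and $e(\mathbf{x}, \mathrm{y})$ for the complete propensity score) is:

$\mathcal{E}(\Lambda) = \{ e(\mathbf{x}, \mathrm{y}) : \frac{1}{\Lambda} \leq \mathrm{OR}(e(\mathbf{x}, \mathrm{y}), e(\mathbf{x})) \leq \Lambda \}$, where $\mathrm{OR}(p, q) = \frac{p / (1 - p)}{q / (1 - q)}$ denotes the odds ratio.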
Analogously, we could define the CMSM as \n\n$\\mathcal{P}(\\Lambda) = \\{ p(\\mathrm{t} \\mid \\mathrm{y}, \\mathbf{x}): \\frac{1}{\\Lambda} \\leq \\frac{p(\\mathrm{t} \\mid \\mathbf{x})}{p(\\mathrm{t} \\mid \\mathrm{y}, \\mathbf{x})} \\leq \\Lambda, \\forall \\mathbf{x} \\in \\mathcal{X}, \\forall \\mathrm{y} \\in \\mathbb{R} \\}$.\n\nThe brackets are not compiling here, but we have added the definition to the manuscript (lines 102-105).\n\nYour comment does bring up an interesting question: why marginal? Again, ZSB19 introduces the terminology. They attribute the MSM to Tan06, but the term is not used by Tan06. Further, they do not elaborate on why marginal. We speculate that this could either be because the models are rooted in the generally unidentifiable marginal distribution of potential outcomes $P(Y_t \\mid X=x)$, or because you marginalize over the treatment with respect to the hypothesized inverse propensity score. Perhaps both. We will follow up with ZSB once it is clear that it will not jeopardize anonymity.\n\n### \"To paint a fuller picture of the range of possible approaches to sensitivity analysis in the related work, the authors might consider adding a mention of works such as Imbens (2003) and Veitch & Zaveri (2020), which explicitly model the correlation due to unobserved confounding (for a binary treatment). Veitch & Zaveri get around some criticism of existing methods that the authors discuss in Ln 246 by allowing for flexible, non-parametric models of the unobserved confounding relationship.\"\n\nThank you for sharing these works. We propose to add a thorough discussion in the appendix to include this related literature. Reviewers eRS3 and zMDP have also suggested literature on the discrete treatment regime, so it is clear that this change will strengthen the paper. \n\n### \"The main possible difficulty with the proposed method that came to mind is how to select values of the sensitivity parameter. As opposed to the odds ratio (which, as the authors note, is generally interpretable to practitioners), the density ratio is somewhat difficult to make judgements about directly. The authors provide an alternative characterization in terms of the \"proportion of unexplained range in $\\mathrm{Y}$\", but even this is (to my knowledge) not a commonly considered statistic. Can this be related in any way to, e.g., the $1 - R^2$ (i.e., the fraction of variance unexplained)?\"\n\nGood question. It seems non-trivial to make the direct connection between $\\Lambda$ and the fraction of unexplained variance. We think this is better left as a future contribution if it turns out to be possible. Indeed, we propose the proportion of unexplained range as an intermediate heuristic reflecting the fraction of unexplained variance we would attribute to hidden confounding under an assumed $\\Lambda$. We are also exploring methods using quantiles of the conditional distribution of the outcome.", " ### \"Also, can one compute the proportion of unexplained range for a reference covariate(s) (e.g., patient age) and say something along the lines of \"unobserved confounding would need to account for at least as much of the range in $\\mathrm{Y}$ as patient age to reach a sensitivity value of $\\Lambda$\"? Would such comparisons be valuable for judging the plausibility of different $\\Lambda$ values?\"\n\nIn [this figure](https://i.imgur.com/QFuqLk3.png) we compare the same region with different covariates to identify an appropriate $\\Lambda$. 
We fit one model on data from the Pacific (blue) and one model from the Pacific omitting $\\omega_{500}$ from the covariates (orange). The shaded bounds in blue are the ignorance region for $\\Lambda \\to 1$ for the Pacific. We then find the $\\Lambda$ that results in an ignorance interval around the Pacific omitting $\\omega_{500}$ that covers the Pacific model prediction. From this, we can infer how the parameter $\\Lambda$ relates to the inclusion of covariates in the model. We show that we need to set $\\Lambda = 1.01$ to account for the fact that we are omitting $\\omega_{500}$ from our list of covariates. Is this what you had in mind?\n\n### \"Along these lines, I don't think the authors discussed what reasonable values of $\\Lambda$ might be in their cloud experiment. I think the exercise of reasoning about $\\Lambda$ in the context of this example would be very helpful for readers and potential users of the method. In the climate experiment, what guidance is there for how to pick relevant $\\Lambda$ values to consider? Could the authors add some kind of qualitative discussion/judgement here?\"\n\nIn [this figure](https://i.imgur.com/H4X6ivE.png), we compare two regions with similar meteorology but different magnitudes of confounding influences to identify an appropriate $\\Lambda$. The possible differing confounding factors include aerosol type, aerosol hygroscopicity, aerosol size, and others. We find the $\\Lambda$ that results in an ignorance interval around the Pacific model prediction that covers the Atlantic model prediction. From this, we can understand how well our current climate models (dashed: green, red, purple) recreate these trends and whether their behavior is within the bounds, which account for different confounding influences. Models outside of the shaded bounds, or regions of the model's predictions outside the shaded bounds, are likely not correctly emulating aerosol-cloud interactions, leading to substantial errors in their estimates of the effects.\n\n### \"How does the proposed method address possible positivity violations? The authors allude to this in the introduction but (it seems to me) it is never explicitly spelled out. It seems that the uncertainty quantification via bootstrapping is meant to capture uncertainty in, e.g., regions with poor overlap? I think some explicit statement about this (say, in Section 3.4) would help tie up any loose ends regarding the authors' claims.\"\n\nThe statistical uncertainty quantified via bootstrapping ought to be high in regions of poor overlap. This hypothesis is put more explicitly by JMSG20 and JMGS21, who use different methods to quantify statistical uncertainty. We have added an explicit statement as suggested (lines 209-210).\n\nWe offer [this figure](https://i.imgur.com/Xl1ExHp.png) showing an extended treatment axis for the results shown in Figure 4 as evidence. In the paper, we plot the 97th percentile of AOD (treatment) values in Figure 4, which lie roughly on (0.03, 0.4). The remaining 3% of observed treatment values lie roughly on (0.4, 3.0). Positivity is challenged here. As expected, the statistical (epistemic) uncertainty is very high where the overlap is weak.", " Thank you very much for your detailed feedback. We are pleased that you found the paper well written and that you appreciate our efforts to evaluate the method in a real-world context. We address your questions and concerns below. 
", " Thank you very much for your detailed feedback. We are pleased you found the paper well written and appreciate our efforts to evaluate the method in a real-world context. We address your questions and concerns below. \n\nTL;DR — *We refer you to the appendix for alternatives to the grid-search algorithm. We comment on the surprising efficacy and speed of the grid search. We have updated the main paper to include the continuous treatment works and will provide a broader literature review in the appendix. We need clarification on the experiment you would expect.*\n\n### \"The limitations of the method should be discussed to give a clearer picture of the method. One limitation could be the need for a grid search.\"\n\nWe have tried to be transparent about the limitations of our method by clearly stating all assumptions, commenting on the challenges with interpreting $\Lambda$, and highlighting the current gaps in theory that delay the arrival of causal conclusions from observational climate data. We appreciate that there may always be further limitations not yet considered.\n\nRegarding the need for a grid search, we refer you to appendices D.0.1 and F, where we give line-search and gradient-descent alternatives. Initial unreported results have shown that both options increase algorithmic complexity without improving the compute time or bound fidelity. The \"grid search is all you need\" phenomenon is probably due to modern GPU architectures: the search is parallelizable, and the bounds for a batch of data can be computed in 3 ms using an NVIDIA 1080 Ti and a consumer CPU. Analyzing different algorithms could be interesting for future work but is beyond the scope of this paper, since the grid search is surprisingly effective and a solution to the problem.\n\n### \"Considering that the paper deals with bounding causal effects, the following papers could be mentioned in the background work.\"\n\nThank you for sharing these works. The instrumental variable approaches of KKS20, HWZW, and PZWKSK22 are very relevant and escaped our review. We have updated the related works section to include these (line 259). Reviewers UYQB and zMDP have also shared additional works in the discrete treatment setting, and we propose to add these to a detailed lineage in the appendix due to space constraints in the main paper.\n\n### \"Are there any baselines that the authors could have compared their methods to? For instance, even something like shoehorning a continuous MSM into the binary setting (for a single-dimensional continuous X, of course)?\"\n\nWe have to ask for clarification here. By \"single-dimensional continuous X,\" we assume you are looking for a synthetic experiment. Then, would we use the density ratio (probability ratio) instead of the odds ratio for a binary treatment setup?\n\n### \"While the amount of compute used is reasonably small, I couldn't find any information on how long it took to get the bounds. Do you have some estimate of this?\"\n\nThe following estimates are for a consumer Intel CPU, 16GB of RAM, and an NVIDIA GeForce GTX 1080 Ti GPU. For both the neural network architecture and the transformer architecture, a forward pass and bound estimation on a batch of data are completed on the order of milliseconds.
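As an aside, for readers curious what this grid search looks like operationally: under box constraints on the weights, the bound-attaining weight function is a step in the outcome, so it suffices to sweep the step location over samples from the estimated conditional outcome density. The following is a minimal, unoptimized sketch under those assumptions; it is not our exact implementation, which batches and parallelizes the sweep on the GPU.

```python
import numpy as np

def cmsm_interval(y_samples, lam):
    """Grid-search sketch of the CMSM ignorance interval for E[Y | t, x].

    y_samples: draws from an estimated conditional outcome density.
    lam: sensitivity parameter Lambda >= 1. Weights are constrained to
    [1/lam, lam]; the extremal weighting is a step function in y, so we
    sweep its location over the sorted samples (this is the "grid").
    """
    y = np.sort(np.asarray(y_samples, dtype=float))
    n = len(y)
    lo_w, hi_w = 1.0 / lam, lam
    upper, lower = -np.inf, np.inf
    for k in range(n + 1):
        # Upper bound: down-weight the k smallest outcomes, up-weight the rest.
        w_up = np.concatenate([np.full(k, lo_w), np.full(n - k, hi_w)])
        # Lower bound: the mirror image of the weighting above.
        w_dn = np.concatenate([np.full(k, hi_w), np.full(n - k, lo_w)])
        upper = max(upper, float(w_up @ y / w_up.sum()))
        lower = min(lower, float(w_dn @ y / w_dn.sum()))
    return lower, upper
```

The loop is O(n^2) as written; cumulative sums reduce it to O(n log n) after the sort, and the per-step self-normalized means are exactly what a GPU evaluates in parallel.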
In the CMSM, a density ratio is used to quantify the belief of how much ignorability is violated, in contrast to the odds ratio used in MSM.\n\nFollowing this the authors five a semi-parametric estimator for the bounds and an algorithm to compute them. The claims are then validated though synthetic and real world experiments. **Strengths**. \n* The paper is well written and easy to follow in my opinion. The flow is natural and it is clear how each part fits into the bigger picture while reading the paper.\n* The paper evaluates their method on a real world datasets in collaboration with the domain experts. The ideal application of causal inference is always said to be in alliance with domain experts, and using that as one of the ways to evaluate a proposed method speaks in favour of the method and the paper.\n* The experiments seem to justify the claims made in the paper.\n* The appendix provides a thorough description of the background on MSM and how the proposed method relates to it.\n\n**Weaknesses**. \n* The limitations of the method should be discussed to give a clearer picture of the method. One limitation is could be the need for a grid search.\n\n**Other related work**. Considering that the paper deals with bounding causal effects, the following papers could be mentioned in the background work. While they are not directly doing sensitivity analysis per se, I think they are relevant to getting a context of the related work since they also tackle the problem of bounding treatment effects in the presence of confounding.\n\n*Continuous setting*: \nKilbertus, N., Kusner, M.J. and Silva, R., 2020. A class of algorithms for general instrumental variable models. Advances in Neural Information Processing Systems, 33, pp.20108-20119.\n\nHu, Y., Wu, Y., Zhang, L. and Wu, X., 2021, May. A generative adversarial framework for bounding confounded causal effects. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 13, pp. 12104-12112).\n\nPadh, K., Zeitler, J., Watson, D., Kusner, M., Silva, R. and Kilbertus, N., 2022. Stochastic Causal Programming for Bounding Treatment Effects. arXiv preprint arXiv:2202.10806.\n\n*Discrete setting*: \nDuarte, Guilherme, Noam Finkelstein, Dean Knox, Jonathan Mummolo, and Ilya Shpitser. 2021a. “An Automated Approach to Causal Inference in Discrete Settings.” arXiv [stat.ME]. arXiv. http://arxiv.org/abs/2109.13471.\n\nDuarte, G., Finkelstein, N., Knox, D., Mummolo, J. and Shpitser, I., 2021. An automated approach to causal inference in discrete settings. arXiv preprint arXiv:2109.13471.\n\nZhang, Junzhe, Jin Tian, and Elias Bareinboim. 2021. “Partial Counterfactual Identification from Observational and Experimental Data.” arXiv [cs.AI]. arXiv. https://www.causalai.net/r78.pdf.\n * Are there any baselines that the authors could have compare their methods to? For instance even something line shoehorning a continuous MSM into the binary setting (for a single dimensional continuous X of course)?\n* While the amount of compute used is reasonably small, I couldn't find any information on how long it took to get the bounds? Do you some estimate of this?\n The potential negative societal impact is discussed but the limitations could be described better.", " In this paper, the authors develop a sensitivity analysis method for estimating causal effects with continuous-valued interventions. This is important because when performing causal inference from observational data, common assumptions like no unobserved confounding rarely hold. 
Building on a sensitivity analysis formulation for binary treatments, the authors develop a continuous marginal sensitivity model (CMSM) that accounts for possible unobserved confounding by bounding the density ratio between the observational and \"complete\" treatment propensity distribution. This sensitivity model allows the authors to compute upper and lower bounds on the conditional average potential outcome (which, when marginalized over covariates, yields the average potential outcome). Thus, for a given value of the sensitivity parameter, and for any value of the treatment and covariates, the method produces an interval for the conditional average potential outcome that can be subsequently used to estimate the range of causal effects compatible with the data. In synthetic experiments the authors validate various aspects of their approach. Finally, they apply their method to a simplified causal question in climate science, showing that a range of hypotheses are compatible with the data under varying degrees of possible unobserved confounding. I really enjoyed this paper. I think the problem is important, the paper is well-written and well-executed, and the developed methodology seems both elegant and practical. As the authors note, I think sensitivity analysis is very important for bringing the value of causal inference from observational data to policy making in practical scenarios, and I think the current paper represents a useful method for doing so.\n\nI mainly have a few clarifying questions & minor suggestions:\n* The main possible difficulty with the proposed method that came to mind is how to select values of the sensitivity parameter. As opposed to the odds ratio (which, as the authors note, is generally interpretable to practitioners), the density ratio is somewhat difficult to make judgements about directly. The authors provide an alternative characterization in terms of the \"proportion of unexplained range in $Y$\", but even this is (to my knowledge) not a commonly considered statistic. Can this be related in any way to, e.g., the $R^2$ (i.e., the fraction of *variance* unexplained)? Also, can one compute the proportion of unexplained range for a reference covariate(s) (e.g., patient age) and say something along the lines of \"unobserved confounding would need to account for at least as much of the range in $Y$ as patient age to reach a sensitivity value of $\Lambda$\"? Would such comparisons be valuable for judging the plausibility of different $\Lambda$ values?\n* Along these lines, I don't think the authors discussed what reasonable values of $\Lambda$ might be in their cloud experiment. I think the exercise of reasoning about $\Lambda$ in the context of this example would be very helpful for readers and potential users of the method.\n\n\n**Suggestions**\n* Within the main paper, the authors assume a basic familiarity with marginal sensitivity models as a known method/object (Tan, 2006) (e.g., \"We propose CMSM as a new MSM\" Ln 77). 
While they provide more information in the supplementary material, perhaps there is some minimal definition of a \"marginal sensitivity model\" that they can put in the main paper to provide a baseline context that they can build upon.\n* To paint a fuller picture of the range of possible approaches to sensitivity analysis in the related work, the authors might consider adding a mention of works such as Imbens (2003) and Veitch & Zaveri (2020), which explicitly model the correlation due to unobserved confounding (for a binary treatment). Veitch & Zaveri gets around some criticism of existing methods that the authors discuss in Ln 246 by allowing for flexible, non-parametric models of the unobserved confounding relationship.\n\nImbens, G. W. (2003). Sensitivity to exogeneity assumptions in program evaluation. American Economic Review, 93(2), 126-132.\n\nVeitch, V., & Zaveri, A. (2020). Sense and sensitivity analysis: Simple post-hoc analysis of bias due to unobserved confounding. Advances in Neural Information Processing Systems, 33, 10999-11009. * How does the proposed method address possible positivity violations? The authors allude to this in the introduction but (it seems to me) it is never explicitly spelled out. It seems that the uncertainty quantification via bootstrapping is meant to capture uncertainty in, e.g., regions with poor overlap? I think some explicit statement about this (say, in Section 3.4) would help tie up any loose ends regarding the authors' claims.\n\n* In the climate experiment, what guidance is there for how to pick relevant $\Lambda$ values to consider? Could the authors add some kind of qualitative discussion/judgement here? Yes, the authors adequately address limitations.", " The ignorability assumption is essential to the ATE; however, it is often not satisfied in the real world. This means that there will be ignorance in the causal estimates when this assumption is relaxed. The paper addresses this issue by proposing a scalable neural network based model for analysing sensitivity and uncertainty when the ignorability assumption does not hold for continuous data. Pros,\n\n- The paper addresses an important issue of sensitivity analysis and uncertainty to measure the ignorance in the causal estimate when the ignorability assumption is violated. The paper also proposes a scalable neural network based approach, which can be applicable to a wider set of data and problems.\n\n- The beginning of the paper is very well written and easy to follow. The settings, assumptions and derivations are clear and concise.\n\nCons,\n\n- The generation of the synthetic data seems to be in a particular form; it's not clear to me why this exact formulation was chosen. Could the authors explain a little more on this?\n\n- The latter sections (section 3) are slightly difficult to follow. It would be nice to have more intuitive explanations of the derivations throughout the section.\n\n- For the experiment, it's not exactly clear to me how the assumption of ignorability fits into the neural network architecture. \n\n- It's not clear to me why the exact formulation of the SCM of the synthetic data was chosen. It would be nice if the authors could elaborate on this.\n\n- Section 3 is a bit difficult to follow; it would be nice to have more intuitive explanations of the derivations. 
The authors were transparent about the limitations of their work, such as $\Lambda$ not being easily identifiable.", " The paper derives bounds for causal treatment effect estimates when the intervention is continuous and there exists a user-specified amount of unobserved confounding. The main innovation is to extend previous bounds which were derived for discrete treatments to the case of continuous treatments. Non-parametric models are assumed for the treatment conditional on the covariates, and for the outcome conditional on the treatment and covariates. Strengths: \n\nThe article has an important and timely topic. The derivations for the bounds seem rigorous, and are validated empirically in the experiments. The analytical results are derived for the asymptotic case; for the finite case, the article proposes bootstrapping to get bounds for the bounds. Related works are covered well (though see questions below). The presentation is clear. The real-world application in climate change is important and well described.\n\nWeaknesses: \n\nThis appears a crowded topic, as there seem to be at least two other recently arXiv'ed papers on the same topic (Chernozhukov et al., 2021; Marmarelis et al., 2022), which may differ in some details (I haven't read those papers closely). The idea for the topic seems to have been discussed in the previous year's \"Causal Inference & Machine Learning: Why now?\" NeurIPS Workshop, which potentially has spawned the interest (my speculation). Anyway, this seems to slightly decrease the novelty of the present article. I am not sure if any of these related papers is more deserving to be published first, but I found the present article a rather complete and good package, and tentatively can support its acceptance, unless other reviewers identify major issues. I don't find any other major problems.\n\nThere were some small problems with the presentation; in particular, not all notation used in the formulas is defined/explained clearly (see below). The figures in the Experiments section are not sharp when printed out. Otherwise the presentation is clear. Given the multiple somewhat overlapping works currently under review, I would suggest expanding the \"Sensitivity and Uncertainty Analyses for Continuous Treatment Effects\" section in \"Related Works\" to include more details about the differences of the methods.\n\nAlso, I'm wondering if there aren't additional papers published in different fields (other than machine learning) where similar results could have been discussed earlier. On a quick googling, I found at least the following, which, based on title and abstract, might have a similar goal: \"Bias formulas for sensitivity analysis of unmeasured confounding for general outcomes, treatments, and confounders\" by VanderWeele and Arah, Epidemiology, 2011. Could you clarify the difference between the present work and this, and also check if you can find other similar papers published in, e.g., economics, epidemiology, or (bio)statistics?\n\nProposition 1 has an assumption that P(Y_t|T=t,X=x) is equivalent to a Lebesgue measure. I read this as saying that the distribution is uniform, which seems very restrictive. Could you clarify this, please?\n\nI don't follow eq (7). It's probably similar to KMZ19, but it would be good to make the present article self-standing. For example, what is the w(.) function and what is its role here intuitively? 
It's defined after eq (9) for one argument, but it seems to be used in the text interchangeably with one or two arguments.\n\nSimilarly to the previous comment, I'm not sure what the y with the bar below/above in Section 3.3 is.\n\n\nVery minor:\nl. 210: D has subscript j while indexing uses k.\n\nl. 230: By maximizing the log-likelihood, not \"minimizing\".\n Yes.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 8, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "15QWNOFV9ea", "IEssAyWPFeg", "gRy8x-uVgoG", "lbegvuLTfF-", "TnLkNI1Mi6", "BK05pRAKeZH", "YBPA7kRCk7O", "nips_2022_PzI4ow094E", "ufB7-qd57G", "ufB7-qd57G", "15QWNOFV9ea", "SM6ijFqeT0r", "SM6ijFqeT0r", "nV9D_dUnfY", "nips_2022_PzI4ow094E", "nips_2022_PzI4ow094E", "nips_2022_PzI4ow094E", "nips_2022_PzI4ow094E" ]
nips_2022_QedyATtQ1H
On the convergence of policy gradient methods to Nash equilibria in general stochastic games
Learning in stochastic games is a notoriously difficult problem because, in addition to each other's strategic decisions, the players must also contend with the fact that the game itself evolves over time, possibly in a very complicated manner. Because of this, the convergence properties of popular learning algorithms — like policy gradient and its variants — are poorly understood, except in specific classes of games (such as potential or two-player, zero-sum games). In view of this, we examine the long-run behavior of policy gradient methods with respect to Nash equilibrium policies that are second-order stationary (SOS) in a sense similar to the type of sufficiency conditions used in optimization. Our first result is that SOS policies are locally attracting with high probability, and we show that policy gradient trajectories with gradient estimates provided by the REINFORCE algorithm achieve an $\mathcal{O}(1/\sqrt{n})$ distance-squared convergence rate if the method's step-size is chosen appropriately. Subsequently, specializing to the class of deterministic Nash policies, we show that this rate can be improved dramatically and, in fact, policy gradient methods converge within a finite number of iterations in that case.
Accept
This paper analyzes the convergence of policy gradient algorithms in "generic" stochastic games. The authors provide local convergence guarantees for projected gradient descent with the REINFORCE gradient estimator. Reviewers were generally positive about this paper, though I think it needs to be much better contextualized in the literature on gradient-based learning in games (of which this is a special case). Indeed, while interesting in the context of MARL, the results are not very surprising given that they seem very similar to other local analyses of (stochastic) gradient-play in games (see e.g., [1]). Furthermore, the equivalence of equilibria follows from well-known manipulations of the single-agent RL loss function like those performed in [2], genericity arguments for Nash equilibria [3], as well as work on variational inequality approaches to learning in games [4]. The final version of the paper should really comment on these previous results. Nevertheless, due to the positive reviews and the relevance to MARL, I recommend this paper for acceptance. Overall, discussion was rather limited, but this could be because the reviewers didn't have any serious concerns from the start and discussion was straightforward. I wish tq3D had contributed a little more, as it would have been nice to arrive at a consensus. [1] Chasnov, Ratliff, Mazumdar, Burden; Convergence Analysis of Gradient-Based Learning in Continuous Games [2] Zhang, Ren, Li; Gradient play in stochastic games: stationary points, convergence, and sample complexity [3] Ratliff, Burden, Sastry; Characterization and computation of local Nash equilibria in continuous games [4] Mertikopoulos and Zhou; Learning in games with continuous action sets and unknown payoff functions
test
[ "Vq5a2AVtXAh", "L7WytOIyyyV", "xsD6iX_z_I", "WqGLheGukyw", "5neWKZ6Dd7N", "bOP7aInLmzqW", "TE-nkKpnCt7", "m_P5T8O1Q3C", "2qOqTc4lzsy", "CNSxhQVeBeD", "CpNZNNEsXbxJ", "GSK0CJ9wJ1i2", "kHs6nOnLq8z", "Cc8xgJi5eCx", "smqr1-UOlx8", "k69HUOgZk_o", "eVLDI79jYHu", "e2MeCNkI7ra" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your follow-up comments and your positive re-assessment! We reply to your two remarks point-by-point below:\n\n1. ***On the assumptions of Jin et al.*** \n\n The only Nash equilibrium convergence result of Jin et al. concerns two-player zero-sum stochastic games; by contrast, our paper treats general stochastic games, so the assumptions of Jin et al. are stronger in terms of structure on this point. Other than that, the results of Jin et al. for general stochastic games concern (time-averaged) convergence to *coarse correlated equilibria* (CCE), which is a much weaker solution concept than Nash equilibrium. [For example, as was shown by Viossat and Zapechelnyuk (JET, 2013), there exist CCE that assign positive probability only to strictly dominated strategies, and thus fail even the most basic postulates of rationalizability.] Still, even in that case, Jin et al. also assume an initialization that gives positive weight to all states/actions, so their assumptions are similar to our own concerning the mismatch coefficient.\n \n Overall, given that the analysis and results of Jin et al. are fundamentally incomparable to our own (in terms of both the type of convergence and the solution concepts involved), it does not seem possible to perform a finer ablation study between each paper's assumptions and results, so we did not undertake one. \n \n \n1. ***On the issue of imperfect information.*** \n\n Thanks for the clarification that you had **extensive form games** in mind! Indeed, in the case of stochastic games *in extensive form*, there is an important distinction between perfect and imperfect information (Chess and Go for the former, versus Poker for the latter). Since we focus throughout on stochastic games *in normal form*, this distinction is not germane to our study, but we will make sure to include a remark and the relevant literature pointers that you brought up to clarify this in a subsequent revision. \n \nThank you again for your constructive input and engagement – and please let us know if you have any further questions!", " Dear Authors\n\nThanks for the detailed and precise reply which clarifies for me the main technical differences with Mertikopoulos and Zhou. This confront me in the score I gave and on the paper's quality. I hope the best for your paper.", " Thanks for the detailed rebuttal. I'm globally satisfied with the author's answers and will modify my score accordingly.\nSome comments on two specific points:\n\n-1 I agree that obtaining the last iterate is more convenient than only average convergence, especially since it is usually harder to compute the average policy in practice. But It also seems that you require more assumptions than in the mentioned paper, no?\n\n-4 I was rather thinking about imperfect information games for which there exists also a large literature:\n\n *J. V. Romanovsky. Reduction of a game with complete memory to a matricial game. 1962\n\n *Bernhard von Stengel. Efficient Computation of Behavior Strategies. 1996\n\n *Daphne Koller, Nimrod Megiddo, and Bernhard Von Stengel. Efficient Computation of Equilibria\nfor Extensive Two-Person Games. 1996\n\n *Marc Lanctot, Kevin Waugh, Martin Zinkevich, and Michael Bowling. Monte Carlo Sampling for\nRegret Minimization in Extensive Games. In Advances in Neural Information Processing Systems,\n2009\n\n *Samid Hoda, Andrew Gilpin, Javier Peña, and Tuomas Sandholm. Smoothing Techniques for Computing Nash Equilibria of Sequential Games. 
2010\n\n *Christian Kroer, Gabriele Farina, and Tuomas Sandholm. Solving Large Sequential Games with the Excessive Gap Technique. 2018.\n\n *Gabriele Farina, Christian Kroer, and Tuomas Sandholm. Stochastic Regret Minimization in Extensive-Form Games. 2020\n\n *....\n\nIndeed, it is not clear if it is possible with your setting to model, e.g., a simple card game where one player does not know the opponent's hand. In this case, which I think covers a substantial part of the games in practice, the players do not have access to the state of the game after each round.", " Thanks for the follow-up! Mertikopoulos and Zhou [40] (referenced as [37] in our original submission) introduced the notion of variational stability for continuous *concave* games - that is, games with continuous action sets and individually concave payoff functions. They did not consider second-order stationary points and/or the rate of convergence to such points, so the only point of contact with [40] would be Theorem 1 (asymptotic convergence to stable equilibria). \n\nStill, even on this point, the analysis of [40] is drastically different for the following reasons: \n\n1. The authors of [40] consider a version of Nesterov's (2009) \"dual averaging\" method which can be well-approximated by a system of ordinary differential equations (ODEs) in continuous time. This ODE allowed the authors of [40] to derive their local convergence result in the context of continuous concave games by means of stochastic approximation arguments in the spirit of Kushner and Yin (1997) and Benaïm (1999). By contrast, because of the projection step, the policy gradient algorithm **cannot be expressed** as the stochastic approximation of an ODE, so the technical analysis of [40] breaks down at the very first step. \n\n1. In addition to the above, [40] only considers learning with unbiased stochastic gradients with finite variance (akin to Model 2 of our paper). However, the REINFORCE algorithm (Model 3) **does not adhere to these assumptions**: in particular, the log-trick estimator (11) is either unbiased with infinite variance, or it has finite variance but nonzero bias (if an explicit mixing step is included). These features of REINFORCE led us to the introduction of an additional smooth exploration mechanism which cannot be handled by the analysis of [40], even if the first obstacle mentioned above was somehow overcome. \n\nInstead, our analysis focuses directly on the discrete-time algorithm (i.e., without going through a continuous-time proxy), and we leverage a series of convergence results for almost supermartingales to control the evolution of the iterates of the process (namely the Robbins-Siegmund theorem). All this makes the analysis and techniques of [40] radically different from our own - and, as we stated above, [40] does not consider second-order stationary points and/or rates of convergence (which, in turn, require different toolboxes altogether).\n\nPlease let us know if you have any further questions - and thanks again for your detailed input and positive evaluation!", " I would like to thank the authors for answering all my questions. A last question: can you please compare your results with the ones in Mertikopoulos and Zhou (Math Prog, 2019), at least at the technical level? Thanks.", " Thank you for your very encouraging comments and your positive evaluation. We reply to your main questions point-by-point below, and we have colored all relevant revisions in our paper in $\color{purple}{\textrm{purple}}$.\n\n1. 
**On the locality of our convergence results.** \nIndeed, our convergence results are local, but this cannot be avoided. In general, equilibrium policies are not unique, and gradient-based dynamics may also admit non-equilibrium attractors, such as limit cycles and the like. As a result, in the presence of multiple equilibria/attractors, the best one can hope for is a local equilibrium convergence result, conditioned on the basin of attraction of said equilibrium.\n\n These issues can only be overcome in games with a sufficiently strong global structure – such as potential or min-max games – *but not otherwise*. We have revised the relevant part of our paper to make this clear.\n\n\n1. **On non-strict second-order stationary policies.** \nSecond-order stationary policies are characterized by a negative-definite Jacobian of the individual gradient field $v(\pi)$. Admittedly, because of the difficulty involved in calculating - and manipulating - the value functions of a stochastic game in closed form, we acknowledge that we don't have an explicit numerical example of a stochastic game with a non-strict SOS policy. However, given that negative-definiteness of the Jacobian is a hyperbolicity assumption, we don't see a fundamental obstruction to the existence of such policies. In addition, we should also point out that SOS policies are standard in the case of non-tabular stochastic games; see e.g., the recent paper of Zhang et al. (SICON, 2020), which provides a wide range of examples of non-strict SOS policies in non-tabular problems. Our theory also applies to this setting, but such an extension lies beyond the scope of this work, so we did not undertake it.\n\n\n1. **On the relation to best-iterate convergence results.** \nThe template of \"best-iterate\" results can be summarized as follows: if an algorithm is run for $T$ iterations, then at least one of the generated iterates will have near-stationary individual payoff gradients (with the exact distance from stationarity determined by the associated best-iterate convergence rate).\n A major difficulty in this setting is that, in the stochastic case, it is impossible to determine which of these $T$ iterates is near-stationary, and this is because the players only have access to *stochastic* gradients (not their mean values). In our context, however, we do not need to: if we consider a game for which a best-iterate result is available (for example, as you suggest, potential stochastic games or the like), this guarantees that at least one iterate will be sufficiently close to a stationary policy. Given this \"asymptotic closeness\" result, our paper's analysis guarantees that stable policies will capture the policy gradient process with high probability, thus turning the \"best-iterate\" analysis into a \"last-iterate\" analysis.\n \n That said, in the case of stochastic potential games, it would be simpler to analyze the policy gradient method directly as a constrained stochastic gradient algorithm with possibly biased gradients (and a variance that grows to infinity), rather than going through a \"best-iterate\" type of result. We conjecture that such an analysis is indeed possible - for both the standard policy gradient method and its \"lazy\" variant - but it would take us too far afield, so we defer it to the future.\n\n1. **On more general classes of games where our results apply.** \nA highly promising application domain for our techniques would be the class of non-tabular stochastic games where SOS policies have been studied extensively - cf. 
the SICON paper by Zhang et al. (2020) that we cited above. More generally, our results essentially apply to all continuous games with a \"gradient dominance\" property guaranteeing that stationary policies are also Nash; this extension lies beyond the scope of the current paper, but, again, it is a very fruitful direction for future research.\n\n\n1. **On extensions to ergodic stochastic games.** \nThis is a very interesting - and challenging - research question. A key difference here is that the REINFORCE algorithm is no longer meaningful in the ergodic setting, so such an extension would require new tools and techniques (at least as far as the gradient estimation process is concerned). This would also necessitate a different, non-episodic algorithm structure - however, if this difficulty is lifted, we believe that it should be possible to extend our analysis to such models; we mention this as an important and intriguing question for future research.\n\nThanks again for your insightful questions and the positive evaluation of our work. We hope and trust that our replies have alleviated any remaining concerns, and we look forward to an open-minded discussion if you have any further questions.", " Because of the openreview character limit, our replies were broken up into 2 posts, labeled as \"Replies 1/2\" and so on. Unfortunately, openreview comments appear in reverse chronological order, so we had to edit our posts in order for our replies to appear in a more natural order on openreview.", " Thank you for your time and your input. We reply to your main questions point-by-point below, and we have colored all relevant revisions in our paper in $\color{purple}{\textrm{purple}}$. All numbering used below corresponds to the updated version of our paper unless explicitly mentioned otherwise:\n\n\n1. **On the random stopping-time model.** \nIn his seminal work, Shapley defined the problem for two-player zero-sum stochastic games using a random stopping time $T$. Motivated by this work, we used the random-stopping attribute in our model.\nFurthermore, when the 'terminating' probability $\zeta_{s,\alpha}$ is equal for every state-action pair, and equal to $\zeta_0$, the value function of the players in the random-stopping model is the same as in the infinite-horizon case with discount factor equal to $1-\zeta_0$. \nFor policy gradient methods, an additional requirement to be satisfied for both models (random stopping / infinite horizon with discount factors) is finding an unbiased estimator for the gradient of the value function. While for the random-stopping case the proof is direct (see Lemma 4 in Appendix F), for the infinite-horizon case multiple different approaches have been established. Although many sophisticated ones have been proposed (for the single-agent case, see [1] and references therein), the easiest gateway is actually a Monte Carlo simulation via a random stopping time, which leads back to the first formulation. \n\n A further reason that supports our formulation goes through the Von Neumann–Morgenstern utility axiomatization. More precisely, from a game-theoretic perspective, the players in a game should have a well-defined way to evaluate their utility function in each round. While in MARL with an infinite horizon this is in principle impossible, the random stopping-time formulation describes concretely how the players can probabilistically evaluate their value. 
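As a minimal illustration of this evaluation procedure, here is a sketch of a Monte Carlo value estimate under the random-stopping model; the environment and policy interfaces are hypothetical placeholders, not part of our codebase, and the discount-factor comment reflects the equivalence discussed above under the convention that $\zeta$ is the per-step stopping probability.

```python
import numpy as np

def rollout_value(env, policy, zeta, rng=None):
    """Unbiased Monte Carlo value estimate under random stopping.

    After each stage the game terminates with probability zeta, so the
    plain (undiscounted) sum of collected rewards equals, in expectation,
    the infinite-horizon value with discount factor 1 - zeta.
    """
    rng = rng or np.random.default_rng()
    state = env.reset()
    total_reward = 0.0
    while True:
        action = policy(state)              # sample a_t ~ pi(. | s_t)
        state, reward = env.step(action)    # hypothetical interface
        total_reward += reward
        if rng.random() < zeta:             # stop after this stage w.p. zeta
            return total_reward
```

Averaging `rollout_value` over many episodes recovers the value function, which is exactly the quantity the players are assumed to be able to evaluate.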
\n\n For the interested reader, in the camera-ready version we will add a short paragraph to explain the equivalences between the various models.\n\n [1] *Global Convergence of Policy Gradient Methods to (Almost) Locally Optimal Policies*, Kaiqing Zhang, Alec Koppel, Hao Zhu, Tamer Başar.\n\n1. **On Lemma 2 and the gradient dominance property.** \nWe would like to respectfully point out that we ***included a proof of Lemma 2*** in \"*Appendix G: Solution concepts*\". We would also like to point out that we only said that stochastic games satisfy a ***version*** of the PL inequality: this property only holds for each agent's individual value function keeping all other players' policy variables fixed. Of course, this is a much weaker version of the PL inequality, but it suffices to show that first-order stationarity implies Nash (cf. Lemma 3).\n\n To make things clearer, we have moved the proof of Lemma 2 to \"*Appendix E: Structural properties of policy gradient methods*\", and we included a clear pointer to the appendix in our revision, as well as a remark to clarify that the gradient dominance property is only a related version of the PL inequality, not the actual PL condition itself.\n\n\n\n1. **On projecting to $\Pi$.** \nThanks for bringing this up. Indeed, projecting to an arbitrary non-convex subset can be at least $\mathsf{NP}$-hard. However, ***$\Pi$ is not an arbitrary set:*** it is a Cartesian product of canonical simplices, so the projection to $\Pi$ immediately boils down to a projection to each factor simplex $\Delta(\mathcal{A}_i)$. This projection can be performed efficiently in $\mathcal{O}(A_i \log A_i)$ operations by sorting the components of the vector to be projected and, subsequently, doing a \"water-filling\" pass (a sketch is included after this post). This is a widely known procedure that dates back at least to Brucker in the 1980's, and an explicit description can be found in https://arxiv.org/pdf/1309.1541.pdf (see also the standard optimization textbook of Boyd and Vandenberghe, 2004, Exercise 4.1).\n\n If the committee finds this useful, we will include a version of this discussion, as well as the relevant literature pointers.\n\n---\n**[Please see next post for the continuation of our replies]**
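To make the water-filling step from the projection reply above concrete, here is a minimal NumPy sketch of the sort-and-threshold projection onto a single simplex factor, following the procedure in the arXiv reference cited there; the function names are ours, for illustration only.

```python
import numpy as np

def project_to_simplex(y):
    """Euclidean projection of y onto the probability simplex.

    One O(A log A) sort, then a cumulative-sum pass finds the water
    level tau such that max(y - tau, 0) sums to one.
    """
    u = np.sort(y)[::-1]                    # components in decreasing order
    cssv = np.cumsum(u) - 1.0               # cumulative sums, shifted by the unit mass
    idx = np.arange(1, len(y) + 1)
    rho = idx[u - cssv / idx > 0][-1]       # last index above the water level
    tau = cssv[rho - 1] / rho
    return np.maximum(y - tau, 0.0)

def project_policy(pi_block):
    """Row-wise projection of a (states x actions) block; a full policy
    profile is handled by projecting each player's block independently."""
    return np.apply_along_axis(project_to_simplex, 1, pi_block)
```

Each player's per-state distribution is projected independently, which is exactly why the product structure of $\Pi$ makes the projection step cheap.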
", " ---\n**[This post is a continuation of \"Replies 1/2\"]**\n\n1. **On the existence of stable Nash policies.** \nThe question of existence of a stable policy is akin to the existence of evolutionarily stable policies in population games, cf. the classical textbook of Sandholm (2010). As in that case, many interesting classes of stochastic games possess such policies, among them potential stochastic games, games with strict equilibria and a cooperative transition structure, etc. For example, in single-state repeated games with payoffs uniformly distributed in $[0,1]$, strict (and hence stable) Nash policies exist with probability that converges to $1-1/e \approx 64\%$ in the large $N$ limit, so this covers a very large class of games (Dresher, 1970).\n\n As for verifying the SOS condition, it is easy to see that $\pi^\ast$ is SOS when the Jacobian $Jv(\pi^\ast)$ of $v$ at $\pi^\ast$ is positive-definite (cf. the proof of Proposition 1 in Appendix G), and this can be verified within $\mathcal{O}(NSA^{\log_2 7})$ arithmetic operations – and, again, all potential games, and single-state and cooperative games with strict equilibria, admit such a policy.\n\n To make sure there are no doubts for the reader, we have included a version of this discussion in our revision, as well as the relevant literature pointers.\n\n\n1. **On Eq. (1).** \nYes, this was a typo, thanks for the catch!\n\n1. **On the definition of the stopping time $T(\tau)$.**\nDue to space restrictions, we avoided the formal definition of the stopping time. This random-stopping model is equivalent to (i) having a 'terminal' state $s_f$ with zero value that is reachable from every state-action pair $(s,\alpha)$ with probability $\zeta_{s,\alpha}$, and (ii) once state $s_f$ is reached, the game terminates. Hence, we can define it formally as $T(\tau):= \inf\{t \in \mathbb{N}: s_t = s_f\}$. Since $\zeta_{s,\alpha} \geq \zeta >0$ for all $(s,\alpha) \in S \times \mathcal{A}$, we readily obtain that $\mathbb{E}[T(\tau)] < \infty$, and, subsequently, $T(\tau) < \infty$ on almost every trajectory. The reason is that $\mathbb{E}[T(\tau)]$ is upper-bounded by the expected value of a *geometrically* distributed random variable with parameter $\zeta$, which is finite.\n\n\n1. **On the term \"realized values\".** \nYes, we meant \"instantaneous rewards\", thanks for the catch!\n\n\nThanks again for your detailed reading and your remarks! We hope and trust that our replies have alleviated your concerns regarding the merits of our submission, and we look forward to an open-minded discussion if any such concerns remain.", " Because of the openreview character limit, our replies were broken up into 3 posts, labeled as \"Replies 1/3\" and so on. Unfortunately, openreview comments appear in reverse chronological order, so we had to edit our posts in order for our replies to appear in a more natural order on openreview.", " Thank you for your detailed reading and constructive comments. We reply to your main questions point-by-point below, and we have colored all relevant revisions in our paper in $\color{purple}{\textrm{purple}}$. All numbering used below corresponds to the updated version of our paper unless explicitly mentioned otherwise:\n\n\n1. **On the paper of Jin et al. (2021).** \nThanks for bringing up this paper. To put things in perspective, Jin et al. propose an algorithm (called V-learning) which is based on an adversarial bandit wrapper, and which updates the policy $\pi_n$ of the $n$-th episode based on the observed rewards so far. The paper's main contributions may then be summarized as follows:\n 1. In *two-player zero-sum* stochastic games, the time-averaged state $\bar\pi_n = (1/n) \sum_{k=1}^n \pi_k$ converges to a Nash policy at a rate of $\mathcal{O}(1/\sqrt{n})$ in terms of the game's primal-dual gap (cf. Theorem 5 of the paper, and note that the output policy $\hat\pi$ is obtained by sampling uniformly over $k=1,\dotsc,n$, so the theorem's guarantee concerns $\bar\pi_n$). We stress here that this result relies crucially on the zero-sum nature of the game, and *it does not apply to general stochastic games*.\n 1. 
In general stochastic games, the authors show that the empirical frequency of play (that is, the average number of action profiles played by a given policy) converges to the set of *coarse correlated equilibria* (a substantial relaxation of the notion of Nash equilibrium) at a rate of $\mathcal{O}(1/\sqrt{n})$ in terms of regret values (cf. Theorem 4).\n\n By contrast, our paper focuses on **the actual sequence of play** (i.e., $\pi_n$ instead of $\bar\pi_n$) and the rate of convergence to **Nash policies** (not coarse correlated equilibria) in terms of the **distance** to such a policy (not the gap function or the regret). Needless to say, the convergence of time-averages is a much weaker convergence guarantee than the convergence of the actual trajectory of play to a Nash equilibrium: for example, if we want to minimize the loss function $f(x)=x^2$ over $[-1,1]$, the sequence $\pi_n = (-1)^n$ converges in the mean to $0$, even though each individual iterate of the sequence yields the worst possible loss. In this regard, the $\mathcal{O}(1/\sqrt{n})$ convergence rates of Jin et al. are incomparable to our own, as they concern a weaker type of convergence (time-averaged instead of the actual sequence), to a coarser solution concept (correlated equilibria instead of Nash equilibria), and with a different merit function. To make all this clear, we have included this discussion in the revised version of our paper.\n\n\n1. **On the evolution of the game.** \nYes, we meant that the stage game evolves over time (based on the transition from one state to another).\n\n\n1. **On the notation $a_i$ versus $\alpha_i$.** \nThe choice of notation is, of course, subjective. Nash himself used $\alpha_i$ to denote the actions of a game, while Shapley used $a_i$ for the payoffs of the game (not the actions) in his foundational paper on stochastic games. To avoid any errors during this (very short) revision phase, we maintained our original notation on this point.\n\n\n1. **On observing the state of the game.** \nTo the best of our knowledge, the literature on stochastic games (including the paper by Jin et al. that you cited) is almost exclusively based on this natural assumption: after all, the very notion of a policy is defined as a *map from states to actions*, cf. the original paper of Shapley, the recent review monograph by Solan in PNAS, the standard textbooks by Filar and Vrieze, as well as the series of recent NeurIPS/ICML papers by Agarwal et al., Daskalakis et al. and many others. In practice, the \"state\" of a physical RL system (e.g., an aerial drone) involves the state of the environment (i.e., observable weather conditions and the like), so it is indeed observed before taking an action (e.g., deciding at which altitude to fly).\n\n\n1. **On the mismatch coefficient.** \nThe positivity postulate for the mismatch coefficient holds whenever the initial state distribution is fully mixed (i.e., $\rho$ assigns nonzero probability to each state in the game), an assumption which is fairly reasonable for most RL systems deployed in practice (e.g., game-playing and self-driving vehicles where all states can be observed initially). For this reason, this requirement is also standard in the literature on episodic stochastic games, see e.g., the references [a,b] below as well as the cited works [1,5,15] in the revised version of our paper and references therein.\n \n [a] Runyu Zhang, Z. 
Ren, Na Li, *Gradient play in stochastic games: stationary points, convergence, and sample complexity* \n [b] Runyu (Cathy) Zhang, J. Mei, Bo Dai, Dale Schuurmans, Na Li, *On the Effect of Log-Barrier Regularization in Decentralized Softmax Gradient Play in Multiagent Systems*\n\n---\n**[Please see next post for the continuation of our replies]**", " ---\n[This post is a continuation of \"Replies 1/3\"]\n\n1. **On the scalar product.** \nThe notation $\langle\cdot,\cdot\rangle$ actually refers to the canonical pairing between a primal and a dual vector, i.e., $\langle v,\pi \rangle = \sum_{j} v_j \pi_j$, where the summation over $j$ is taken over an index set of appropriate dimension, depending on the input to $\langle\cdot,\cdot\rangle$. This dimension is $S\times A_i$ for $\pi_i$ and the corresponding sum over all players $i=1,\dotsc,N$ for the policy profile $\pi = (\pi_1,\dotsc,\pi_N)$.\n\n\n1. **On the meaning of \"close\" in Definition 2.** \nFormally, we mean here that there exists a neighborhood $\mathcal{U}$ of $\pi^\ast$ in $\Pi$ such that the stated inequality holds for all $\pi\in\mathcal{U}$.\n\n\n1. **On meager sets.** \nIn the sense of Baire's category theorem, a set is \"meager\" when it is a countable union of nowhere dense sets - and hence, negligible from a topological standpoint.\n\n\n1. **On the notion of \"sufficiently close\" in Proposition 1.** \nAgain, we mean here that there exists a neighborhood $\mathcal{U}$ of $\pi^\ast$ in $\Pi$ such that the stated inequality holds for all $\pi\in\mathcal{U}$. The norm can be taken to be the standard $L^2$ norm (though the choice of norm does not really matter in this context).\n\n\n1. **On the meaning of \"construe\".** \nWe actually did mean \"construe\" in the sense of \"interpret\".\n\n\n1. **On the reverse P.** \nThis is the typographic pilcrow sign, which we used to mark the end of each example and make it more visible to the reader. We can remove it if the committee finds it distracting.\n\n\n1. **On training over datasets.** \nWe simply referred here to games where the policy can be encoded as a neural network - as in the case of deep reinforcement learning. We removed the remark to avoid confusion. As for ways to estimate a stochastic gradient, Model 3 provides a model-agnostic way to do so; in concrete applications (e.g., in drone-flying), the gradient can be estimated efficiently using deep RL methods as above.\n\n\n1. **On Eq. (14).** \nYes, we meant uniform sampling for all $s$.\n\n\n1. **On the value of $G$ in Eq. (15).** \nNotice that $b_{i,n} = v_i(\hat\pi_{n}) - v_i(\pi_{n})$; from Lemma D.7 we know that $\|v_i(\hat\pi_{n}) - v_i(\pi_{n})\|\leq \frac{3|\mathcal{A}|}{\zeta^3}\sum_j\|\hat\pi_{j,n}-\pi_{j,n}\|$. Moreover, by the definition of $\pi_{i,n}$ and $\hat\pi_{i,n}$, we obtain that $|\pi_{i,n}(\alpha \mid s)-\hat\pi_{i,n}(\alpha \mid s)| \leq \varepsilon_n$ for all $s \in S$ and $\alpha \in A_i$, and therefore, $\|\pi_{i,n} - \hat\pi_{i,n}\| \leq \sqrt{SA_i}\,\varepsilon_n$. Combining the aforementioned quantities, we get the value of $G=\frac{3N|\mathcal{A}|^{3/2}\sqrt{|\mathcal{S}|}}{\zeta^3}$ in Eq. (15).\n\n\n\n1. **On games that admit a stable Nash policy.** \nNo, not all games admit a stable Nash policy - the question is akin to the existence of evolutionarily stable policies in population games. 
However, many interesting classes of stochastic games do, among them potential stochastic games, games with strict equilibria and cooperative transitions, etc. In particular, in single-state repeated games with payoffs uniformly distributed in $[0,1]$, strict (and hence stable) Nash policies exist with probability at least $1-1/e \approx 64\%$, so this covers a very large class of games (Dresher, 1970).\n\n\n1. **On the overload of $r$.** \nExcellent catch, we have changed the exponent $r$ to $\ell_{\epsilon}$.\n\n\n\n1. **On the size of $\mathcal{U}$.** \nThe size of $\mathcal{U}$ scales with the minimum payoff difference between equilibrium and non-equilibrium actions per state, so it is only \"small\" in games with very small payoff differences between actions in a given state. The precise size involves solving a nonlinear inequality which does not admit a closed-form description in general.\n\n\n---\n**[Please see next post for the continuation of our replies]**", " ---\n**[This post is a continuation of \"Replies 2/3\"]**\n\n1. **On the constants in the $\mathcal{O}(\cdot)$ guarantees of Theorem 2.** \nGetting an explicit estimate for the constant in the $\mathcal{O}(\cdot)$ guarantee of Theorem 2 is quite involved but, up to logarithmic and subleading factors, Chung's lemma [14,48] can be used to show that:\n - For $q<2\mu\gamma$ and $p=1$, the constant is $\frac{C_1+C_2}{(2\mu\gamma -q)(1-\delta)}$, where $C_1$ and $C_2$ are $\sup_n\gamma_n B_n$ and $\sup_n\gamma_n^2\sigma_n^2$, respectively. We stress that these constants depend on the choice of parameters of the algorithm (i.e., $\gamma$, $\varepsilon$, etc.) and the estimator used. This is the case used in the corollaries.\n - For $q\geq 2\mu\gamma$ and $p=1$, the constant is $\frac{(C_1+C_2)(1+ \max\{(2\mu\gamma)^2,4\mu\gamma\})A}{1-\delta}$, where $A$ is equal to $\frac{1}{q-2\mu\gamma}$ in the case of a strict inequality, while the case $q=2\mu\gamma$ contributes a logarithmic factor which we have omitted.\n - For $p<1$, the constants are almost identical, up to some dependence on $p$.\n\n\n1. **On the case $p=1$.** \nIn L286 of our original submission (the line numbers in the revision have changed because of the added material), we stated that $q=p/2$ (not $q=2p$), so $q=1/2$ if $p=1$. This equality results from the proof of Theorem 2: specifically, we define $q = \min\{\ell_b, p-2\ell_\sigma\}$; in the case of Model 3, $\ell_b = p/2$ and $\ell_\sigma = p/4$, which results in $q=p/2$.\n\n\n1. **On the dependence of $n_0$ on the parameters of the game.** \nThe convergence time $n_0$ scales proportionally to the number of states and strategies in the game, and inversely proportionally to the minimum payoff difference $c$ between an equilibrium strategy and a non-equilibrium strategy for any given state and to the algorithm's step-size (i.e., smaller step-sizes lead to larger values of $n_0$). Specifically, as can be seen from the last part of the proof of Theorem 3, $n_0$ can be upper-bounded by the stopping time\n$n_0 \leq n^\ast = \inf\{n\geq1 : H_p(n) \geq \frac{4MAS}{c\gamma}\}$,\nwhere $M$ is a measure of the initial distance from equilibrium and $H_p(n) = \sum_{k=1}^{n} k^{-p}$ is the $n$-th generalized harmonic number of order $p$ (recall that the algorithm is run with a step-size of the form $\gamma_n = \gamma / n^p$). 
For $p<1$, $H_p(n)$ scales as $\Theta(n^{1-p})$, so, up to a universal constant, we have\n$n_0 = \mathcal{O}\left(\left(\frac{MAS}{c\gamma}\right)^{1/(1-p)}\right)$.\nThe dependence on the various parameters of the algorithm and the game cannot be lifted in the context of Corollary 2, so the finite-time convergence regime is always stronger in this regard.\n\n\n1. **On the overload of $R$.** \nGood catch, thanks! We changed $R$ to $W$.\n\n\n1. **On the constants $r$ and $a$ in the proof of Theorem 1.** \nNo, $r$ referred here to the definition of $\mathcal{B}$, in L623 of the original submission. We have changed the notation of $r$ to $\varrho$ to avoid any overload.\n\n\n1. **On the limit of $C(\gamma,m)$ in the proof of Proposition B.1.** \nSince $\gamma_n = \gamma / (n+m)^p$, the terms involving $\gamma_n$ in (B.15) both tend to $0$ as $\gamma\to0$ and $m\to\infty$.\n\n\n1. **On telescoping (B.2).** \nYes, for (B.16) we only need the fact that $\pi_k$ lies in $\mathcal{B}$; this was a typo.\n\n\n1. **On the indicator of $E_n$ in (B.24).** \nThis was a typo, the indicator is not actually needed – thanks for catching this!\n\n\n1. **On (B.32).** \nThanks for catching the missing $+$ sign. The different sums are bounded because of (8) and the step-size conditions $p+\ell_b > 1$ and $p-\ell_\sigma > 1/2$ of Theorem 1.\n\n\nWe thank you again for your detailed reading and your constructive input! We hope and trust that our replies have alleviated your concerns regarding the merits of our submission, and we look forward to an open-minded discussion if any such concerns remain.", " Thank you for your encouraging comments and positive evaluation! We reply to your main questions point-by-point below, and we have colored all relevant revisions in our paper in $\color{purple}{\textrm{purple}}$. All numbering used below corresponds to the updated version of our paper unless explicitly mentioned otherwise:\n\n1. **On the possible failure cases for Theorem 1.** \nThere are two types of unavoidable \"convergence failures\" that make Theorem 1 essentially tight.\n + *Locality of attractors.* In general, equilibrium policies are not unique – and, indeed, as the reviewer suggests, gradient-based dynamics may also admit non-equilibrium attractors, such as limit cycles and the like (chaotic behavior is less relevant in the stochastic case, as chaos is an inherently deterministic notion). As a result, in the presence of multiple equilibria/attractors, the best one can hope for is a local equilibrium convergence result, conditioned on the basin of attraction of said equilibrium.\n + *Probabilistic convergence.* The second obstruction has to do with the noise that enters the learning process (e.g., in the estimation of policy gradients via the REINFORCE algorithm). In this case, no matter how close one starts to an equilibrium policy, there is always a finite, non-zero probability that an unlucky realization of the noise can drive the process away from its basin, possibly never to return. \n\n These issues can only be overcome in games with a sufficiently strong global structure – such as potential or min-max games – *but not otherwise*. We have revised the relevant part of our paper to make this clear.\n\n\n1. **On rates of convergence for non-SOS stable policies.** \nThe reviewer is correct that the extra driving term in (C.9) is what provides the convergence benefit over (B.35). 
In general, it is possible to derive a rate of convergence as long as the game satisfies a local inequality of the form $\langle v(\pi), \pi-\pi^\ast\rangle \geq (1/\rho)\|\pi - \pi^\ast\|^{\rho}$ for some suitable $\rho>0$; however, this level of generality didn't seem warranted, as such examples would not be generic. Beyond this, since the gradient profile near a solution can be arbitrarily flat (think of optimizing a quartic function like $x^4$ or some even higher power), it does not seem possible to obtain a rate of convergence in terms of distance to equilibrium unless there is some metric (sub)regularity condition linking the growth rate of the gradient around a solution to the distance. However, such an analysis would be beyond the scope of the current paper, so we did not undertake it.\n\n\n1. **On finite-time convergence results for different projectors.** \nThis is a very interesting question. Indeed, the projection step could be changed to some other mirror-like mapping – like a softmax/logit step, as suggested by the reviewer. However, a full-support method (like the one resulting from an exponentiated policy gradient scheme) would mean that the players' policy is fully mixed *for all* iterations of the algorithm, so it wouldn't be possible to achieve convergence to a deterministic policy in a *finite* number of iterations. Nevertheless, extending our analysis to mirror-type policy gradient schemes is a very fruitful direction for future research; we now discuss this as an open question in the conclusions section.\n\n\n1. **On numerical experiments.** \nThe numerical properties of the policy gradient algorithm have been studied quite extensively in the literature - see for example the cited papers by Leonardos et al. or Zhang et al. Given that the focus of our paper is the *analysis* of an existing, extensively tested algorithm (as opposed to proposing a new one), we felt that a raw verification of our theorems via numerical simulations would not offer further insights into the properties and behavior of policy gradient methods. Nonetheless, we would of course be happy to include some simulations in a further revision if the program committee concurs that they are needed.\n\n\nWe hope that the above addresses your questions and remarks, and we will be happy to clarify any remaining points during the discussion phase.\nThanks again for your encouraging and constructive input!", " This paper studied the convergence of policy gradient (PG) methods with direct parameterization (Algorithm 2) to Nash equilibria in general stochastic games. The PG methods considered here apply to full gradient information, unbiased stochastic gradients with bounded variance, and the REINFORCE estimator (Models 1-3). Explicit exploration (\"epsilon-greedy\") is also used to bound the variance of the PG estimators.\n\nTheorem 1 showed that a stable Nash policy is locally attracting for PG with high probability.\n\nTheorem 2 then showed that a second-order stationary (SOS) Nash policy enjoys the stronger result of a last-iterate convergence rate.\n\nTheorem 3 showed that if a Nash policy is deterministic, then a lazy policy gradient (LPG) method enjoys finite-time convergence to exact Nash policies. Strengths:\n\n1. The setting of general stochastic games is general and representative for multiagent learning.\n\n2. The PG methods studied here apply to multiple settings (full gradient, stochastic).\n\n3. The results and techniques look novel and interesting to me. 
I like the idea of discussing different results conditioned on whether a Nash policy is stable, SOS, or deterministic. \n\n4. The presentation is organized and clear.\n\nWeaknesses:\n\n1. It would be better if some simulation results could be provided to verify some of the results (high-probability convergence and rate of convergence). 1. Below Theorem 1 it was stated that the high-probability convergence results cannot be improved further. Could you comment on what could be the possible failure cases (e.g., limiting cycles or chaotic behaviours)?\n\n2. It seems to me SOS Nash policies enjoy convergence rates because Eq. (C.9) has larger progress than Eq. (B.35). I am wondering, for stable Nash policies, is it possible to establish a rate of convergence? If not, what would be the difficulty, or could a counterexample be constructed to show the rate can be arbitrarily slow?\n\n3. The finite-time convergence results in Theorem 3 are interesting. I am wondering: if there is a parameterization that maintains the policy as a valid probability distribution (e.g., standard softmax policies), what is the implication of this lazy projection in that scenario, and can it still provide useful insights? This work is purely theoretical, and it has no potential negative societal impact as far as I can see.", " The authors consider multi-player stochastic games. They study the convergence property of policy-gradient-type algorithms toward Nash equilibrium policies that are second-order stationary, similar to the type of KKT sufficiency conditions. They prove that policy gradient algorithms where the gradient is estimated with the REINFORCE algorithm, if started close to a second-order stationary equilibrium, converge to this equilibrium, in terms of squared distance, with high probability at the rate $O(1/\\sqrt{n})$, where $n$ is the number of steps. If the Nash equilibrium is deterministic, for a lazy version of the policy gradient algorithm, the rate of convergence improves to a constant number of steps. Contributions:\n- flexible algorithmic template for the analysis of policy gradient methods, novelty: low, relevance: low.\n- $O(1/\\sqrt{n})$ rate of convergence of policy gradient methods with REINFORCE gradient estimates toward second-order stationary Nash equilibria, novelty: medium, relevance: medium.\n- Finite-time convergence for a lazy version of the policy gradient methods toward a deterministic Nash equilibrium, novelty: medium, relevance: medium.\n\n\nThe paper is well-written even if some notations could be clarified, see specific comments. It could be interesting to ground the considered setting with a concrete example, see specific comments. Similarly, the claims are obtained through a cascade of hypotheses that are not exhaustively discussed and justified. The proofs seem correct, at least in the part I read. The related work could be improved. For example, the authors may also want to compare the obtained rate with the rate obtained by Jin et al. (V-Learning—A Simple, Efficient, Decentralized Algorithm for Multiagent RL, 2021) and references therein, since they seem to consider the same setting. It would also be interesting to discuss the results to provide lower bounds on the rate of convergence toward a Nash equilibrium. 
Specific comments/questions:\n\n-L30: By "game itself evolves" do you mean the strategy of the other player or the rules of the game?\n\n-L80: The notation $a_i$ for the action seems more natural than $\\alpha_i$.\n\n-L92: Do you have a concrete example of a game that you can model with this framework? In particular, a game where after each step all players fully observe the state of the game.\n\n-L127: Can you elaborate on this assumption, which at first sight seems very strong, in particular for deterministic policies? And do you have a non-trivial example where this assumption is true?\n\n-L138: Can you specify the scalar product you consider? I assume that you see $\\pi$ as a vector of size $S\\times A$, right? \n\n-L153: What do you mean by close?\n\n-L163: Can you detail this point?\n\n-L167: Can you clarify "sufficiently close" and what norm do you consider?\n\n-L198: constructed\n\n-L210-215: 'reverse P'\n\n-L212: Can you clarify what you mean by 'game involves training over datasets' and how you estimate the gradient?\n\n-L230, (14): for all s?\n\n-L235: Can you provide the dependence of $G$ on $A$ and $S$?\n\n-L244: Do all stochastic games admit a stable Nash policy?\n\n-L250: r is already used for the rewards.\n\n-L278: Can you provide any intuition on what the typical size of $\\mathcal{U}$ is? Indeed, if this set is too small there is no hope of being able to start in it.\n\n\n-L285: Can you provide an order for the constant hidden by the O in terms of the parameters of the problem? \n-L286: This part is not very clear: if $p=1$, how do we know that $q = 1/2$, since $2p = q$?\n\n-L289: Can you provide lower bounds in order to state that it is effectively the rate of convergence of PG?\n\n-L292: Can you explain how 'to adapt the parameters of the learning process according to the complexity and limitations of the environment' in the light of Theorem 2?\n\n-L332: Can you specify how $n_0$ depends on the parameters of the problem? Because if this constant $n_0$ is very large in comparison to the one hidden in $O(1/n)$, the relevant regime will still be the one of Corollary 2.\n\n-L630: R is already used for the cumulative rewards.\n\n-L634: $a$ is not defined; is it any value such that $a+\\sqrt{a} < r$, with $r$ the rate of $\\epsilon$-greedy? If so, note that it is only defined in Corollary 1, stated after Theorem 1.\n\n-L656: Can you detail why the two limits are zero?\n\n-L667: For (B.16) you do not telescope the bound (B.2) but just use that $\\pi_k$ is in $\\mathcal{B}$?\n\n-L678: Can you explain why you need the indicator $1_{\\mathcal{E}_n}$ to get (B.24)?\n\n-L700: A + is missing in (B.32). And you should also briefly detail why the different sums are bounded for the choice of the parameters. See Strengths And Weaknesses.", " This paper studies the convergence of the policy gradient method for a class of MDP problems. The equilibrium point investigated is the second-order stationary point, which is a popular notion in the optimization literature.\n Pros: The topic studied in this paper is no doubt interesting and at the same time challenging. Instead of adopting the first-order condition (KKT) used in most of the game papers in the literature, this paper borrows the idea of a second-order stationary point from the optimization community and establishes the corresponding framework to understand the well-definedness and learning behavior in the stochastic game regime. 
The authors establish asymptotic convergence (in high probability) for a class of MDP games.\n\nCons: However, I have some concerns regarding (1) the generality of the set-up (especially on the stopping time), (2) the verifiability of the assumptions, and (3) key steps with missing proofs.\n\nDetailed questions that require clarifications are listed below. (1) Most real-world applications do not have a random stopping time $T$. The authors should better motivate and justify this setting. In addition, it is well known in the stochastic control literature that a control problem with a random stopping time sampled from an exponential distribution with rate $\\lambda$ corresponds to an infinite-time horizon problem with discount factor $\\lambda$. The authors should make the linkage between these two classes of problems if one exists.\n\n(2) Lemma 2 is listed without proof. This is an important building block that later results rely on. I doubt whether the PL condition could hold (as stated in Lemma 2) without additional assumptions. It is well known that the PL condition does not hold for the general two-player linear-quadratic zero-sum game case and some carefully constructed conditions are required [1]. General-sum games [2] and MDP games [3] are much harder problems and I am surprised to see that the PL condition holds almost for free in Lemma 2.\n\n(3) The convexity of the projection set is never discussed. When $\\Pi$ does not enjoy good properties, finding the projected point can be NP-hard.\n\n(4) In Theorem 1, the authors assume that $\\pi^*$ is a stable Nash policy. However, the authors never verified under what verifiable conditions the original problem has a Nash equilibrium policy and under what conditions it is unique. A similar issue also appears in Theorem 2. The authors never discussed the sufficient conditions for $\\pi^*$ to be SOS. This leads to the danger that the main results (Theorems 1 and 2) may never be applicable.\n\nMinor comments:\n\n(a) The $a_t$ in Eqn (1) should be replaced by $\\alpha_t$ based on the definition. \n\n(b) The stopping time $T$ is not rigorously defined mathematically (i.e., its adaptedness).\n\n(c) Please be precise about what you mean by ``realized values'' in line 217. I understand you refer to the instantaneous rewards but it is a bit vague in the statement.\n\n[1] Zhang, Kaiqing, Zhuoran Yang, and Tamer Basar. "Policy optimization provably converges to Nash equilibria in zero-sum linear quadratic games." Advances in Neural Information Processing Systems 32 (2019).\n\n[2] Mazumdar, Eric, et al. "Policy-gradient algorithms have no guarantees of convergence in linear-quadratic games." arXiv preprint arXiv:1907.03712 (2019).\n\n[3] Xie, Qiaomin, et al. "Learning zero-sum simultaneous-move Markov games using function approximation and correlated equilibrium." Conference on Learning Theory. PMLR, 2020.\n=========\n==========\nI have read the reviews from other reviewers and the responses from the authors. The authors have addressed my concerns and I raised my rating to 6. 
Yes.", " This paper studies the local stability of stationary Nash equilibria under the projected gradient (PG) algorithm in episodic finite (states and actions) stochastic games, under various assumptions on the players' observations — full, intermediate (stochastic gradient), or low (value-based). From the fact that stationary equilibria of finite stochastic games satisfy a version of the Polyak-Łojasiewicz condition (a consequence of semi-algebraicity), stationary Nash equilibria satisfy a gradient dominance property which easily implies that equilibria can be expressed as solutions of a variational inequality (VI). The rewriting of the equilibrium condition as a VI allows one to classify the stability of Nash equilibria, from stable to second-order stationary (as in Mertikopoulos, Zhou Math Prog 2019). They then use the variational inequality conditions to establish their main results, namely 1) if an equilibrium is Nash stable, then if one starts close to it, PG converges to it with high probability; 2) when the equilibrium is second-order stable, the convergence time can be estimated (roughly $1/n$ or $1/n^{1/2}$). This rate can be improved to finite-time convergence for pure stationary equilibria. Strengths: this is an excellent paper. It shows the local stability of stationary Nash equilibria under the projected gradient under several observational assumptions, whenever that Nash equilibrium has some additional stability properties. The fact that it uses only the variational inequality property of the Nash equilibrium implies that the results extend to a wider class of games, beyond the one studied in the paper (for example concave continuous games as studied in Mertikopoulos, Zhou 2019). The main difference with the MZ paper is that MZ studies Online Mirror Descent while this paper studies projected gradient descent. \n\n\nSome weaknesses: only local convergence results, only on a subclass of stochastic games (episodic stochastic games). Also, it seems to me that the existence of stable and second-order stationary equilibria in finite stochastic games is rather hard to obtain (outside strict pure equilibria or zero-sum games). - Can this result be coupled with the ones (in zero-sum and potential stochastic games) where best-iterate convergence results have been proved? Namely, can we deduce from your analysis a global convergence result in some zero-sum and potential SGs?\n\n- Can you give some non-trivial examples of finite stochastic games with a stationary equilibrium which is second-order stable (or stable) without being strict or the game zero-sum?\n - Can you please discuss the classes of games, beyond the class of stochastic games, where your results apply?\n\n- Can your results extend to ergodic stochastic games — where there is no ending, and learning is obtained during play?" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 5 ]
[ "xsD6iX_z_I", "WqGLheGukyw", "kHs6nOnLq8z", "5neWKZ6Dd7N", "bOP7aInLmzqW", "e2MeCNkI7ra", "eVLDI79jYHu", "eVLDI79jYHu", "eVLDI79jYHu", "k69HUOgZk_o", "k69HUOgZk_o", "k69HUOgZk_o", "k69HUOgZk_o", "smqr1-UOlx8", "nips_2022_QedyATtQ1H", "nips_2022_QedyATtQ1H", "nips_2022_QedyATtQ1H", "nips_2022_QedyATtQ1H" ]
nips_2022_cZ41U927n8m
Semi-Supervised Learning with Decision Trees: Graph Laplacian Tree Alternating Optimization
Semi-supervised learning seeks to learn a machine learning model when only a small amount of the available data is labeled. The most widespread approach uses a graph prior, which encourages similar instances to have similar predictions. This has been very successful with models ranging from kernel machines to neural networks, but has remained inapplicable to decision trees, for which the optimization problem is much harder. We solve this based on a reformulation of the problem which requires iteratively solving two simpler problems: a supervised tree learning problem, which can be solved by the Tree Alternating Optimization algorithm; and a label smoothing problem, which can be solved through a sparse linear system. The algorithm is scalable and highly effective even with very few labeled instances, and makes it possible to learn accurate, interpretable models based on decision trees in such situations.
Accept
This paper extends graph-based semi-supervised learning to decision tree classifiers, where the optimization gets much more challenging. The proposed solution reformulates the problem with a new auxiliary variable, which leads naturally to an iterative solution alternating between 1) supervised learning on trees, and 2) label smoothing via a sparse linear system. High accuracy and favorable interpretability of the method are demonstrated in numerical experiments. All the reviewers, including myself, find the paper a solid contribution to the methodology and analysis. There are a few concerns such as computational complexity, and the rebuttal has done a good job addressing it (and other concerns). These additional results and insights can be included in the final version of the paper.
train
[ "qhXHRqvCh2m", "WGmJrmBhUI", "l6W-xTttnog", "ztPtB70Wo-gu", "_kmGVtKz5T", "65gkmFHlxNQ", "NU4wmworjUZ", "Ba63RflQfTq", "AFdX5nL_K1", "VTcYwyG-pXE", "JWNWEFTYSit", "SG2RJ6199SM", "Zt6D9XSpTum", "l9WTFGknNYF", "MAGrmfww_C7" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' detailed explanations, and my main concern has been addressed. For that reason I decide to raise my score to 6. ", " I appreciated the author detailed response. Most questions have been resolved, and author should add EBBS for comparison in the revised version. I raised my score to 6. ", " I agree generally with author remarks about comparisons with deep learning approaches. The point of the paper is on training decision trees with graph-based SSL objective. Future works should explore these comparisons, but I agree that this is beyond the scope of the current study.\n\nAuthors should add the discussion on GPU implementations raised in our comments. Some implementations may yield improvements to the overall runtimes. Also, comments from rebuttal about parallelization are interesting and could be added to revised submission. These combined suggest fruitful directions of future research for proposed approach. I also agree with the authors that scaling with respect to sample size is an appropriate metric to guage performance in large-scale SSL problems. Comments about limitations of trees on non-tabular data should be added for more complete picture of limitations of the proposed approach. I agree with the authors about the complexity discussion, and that this should be expanded in the texts as per reviewer recommendations. Also recommend moving comparison to SSCT from supplement to main paper (space permitting).\n\nI agree with Reviewer sdWW that the discussion about theoretical guarantees could be slightly. Stronger guarantees are likely difficult to obtain, as the authors point out, and the derivations and experimental results presented are a good first step. I recommend expanding this discussion per sdWW’s comments and with author response.\n\nPer reviewer dfEo’s recommendations, some of the comparisons from [1] and [2] could have been added, although I’m not sure that they would reveal much. Agree with authors about the GNN references comparison not being directly applicable. Reference [1] appeared within the “not required to compare” timeframe. Also agree with rebuttal about bi-level optimization. Overall, these references are closely related, but do not cover the same topic. Authors should include the comparisons to [1] and [2] from their rebuttal in the revision.\n\nAfter reading the other reviewers' comments and author responses, I update my score to 6-7. ", " 1. By \"probabilistic error term\", I meant exploring the Probably Approximately Correct (PAC) learnable conditions, but maybe that's a stretch and can be a potential future direction of exploration. \n \nI appreciate the authors detailed responses and applaud them for their hard work. In my opinion, due to the lack of theoretical guarantees, this work needs more rigorous experimental evaluations to substantiate the claims and demonstrate a clear benefit of using LapTAO over the other tree based methods like XGB and others. I will stand by my original evaluation and ratings. \n", " > 3. About experimental results... other baselines don't consider graph structure\n\n- We did include two baselines that consider a graph structure: LapSVM and SSCT (see suppl. mat. Figure 3). SSCT uses a \"clustering\" score at each tree split which is based on a neighborhood graph. We will move the comparison with SSCT to the main paper if there is space. Regarding LapSVM, the performance is comparable or better (especially when \\% of labeled data is small). 
This is remarkable given that we achieve this using a single oblique tree whereas LapSVM uses a kernel SVM. Finally, see our additional comparison with EBBS (1 tree) above.\n\n- Also, as a generic comment, incorporating graph structure in tree learning is not an easy task, and the remaining baselines do not use this information. Otherwise, this problem would have been solved a long time ago. Therefore, other baselines had to use various heuristics (e.g. self-training) to enable SSL for trees. In this paper, we believe we are the first to show that it is possible to define a training objective function for a decision tree that incorporates a graph prior and optimize it (approximately). Since our focus is on learning a single tree within the SSL framework, we've included all tree-based SSL baselines.\n\n&nbsp;\n\n> Can author provide more details about Graph Laplacian matrix construction? The paper only mentioned Gaussian affinities with k-nearest neighbors, it clusters all available feature vectors?\n\nWe use the same approach as for constructing affinities for t-SNE with a fixed perplexity $K$ (Hinton and Roweis, Stochastic Neighbor Embedding, NIPS 2003). Basically, it is similar to regular Gaussian affinities, but the bandwidth $\\sigma$ is set individually for each training point such that it has a distribution over neighbors with perplexity $K$. We describe it further in the suppl. mat. (section 3.2). Such affinities are widely used in non-linear dimensionality reduction and we find they work better in our experiments. More generally, the graph construction really depends on the problem and practitioners may want to explore other possibilities (e.g. string kernels might work better for NLP-related problems).", " > \"1. The framework is not novel to me, [1] also introduces a bi-level optimization method, incorporating label smoothing into decision tree. The author may conduct more literature reviews and comparison with existing methods although [1] and [2] are from Graph Neural Networks (GNNs) community.\"\n\nThanks for providing those two references. We were unaware of them, as they are from the Graph Neural Networks literature and upon first inspection appear completely unrelated to our goal (SSL with decision trees). Also, [1] appeared in the beginning of May, which is less than 1 month prior to the NeurIPS deadline (the reviewer guidelines state \"Authors are not expected to compare to work that appeared only a month or two before the deadline\"). That said, we are happy to comment on this work, as there is indeed a connection. (Work [1] builds on [2], but [2] itself has no relation to our tree SSL focus.)\n\nFirstly, note that \"bilevel optimization\" refers to an optimization problem having optimization problems in its constraints. This is not our case. Our overall problem (1) is a regular optimization problem, albeit one involving a non-differentiable/non-convex function $T$ (the decision tree). We reformulate this as a constrained problem in (2)-(3), but this is still a regular, not bilevel, optimization problem. Finally, we apply the augmented Lagrangian and alternating optimization to the constrained problem, resulting in our actual algorithm.\n\nNow regarding your reference [1]. As mentioned, this is a GNNs work which involves a graph prior in the objective and an algorithm (EBBS) specifically designed for gradient boosted decision trees (GBDT). By carefully inspecting [1], we do realize that if we limit EBBS to a single tree, then their problem reduces to fitting a tree to the smoothed labels. 
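To make this reduction concrete, the single-tree variant can be sketched in a few lines of Python. This is our illustrative reconstruction, not the implementation of [1]: the helper names are ours, we use a quadratic label-smoothing objective, and we substitute sklearn's CART for the tree learner.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve
from sklearn.tree import DecisionTreeRegressor

def ebbs_single_tree(X, y, labeled_mask, Lap, lam=1.0):
    """Smooth the labels over the graph, then fit one tree to them."""
    # Label smoothing: minimize sum over labeled i of (z_i - y_i)^2 + lam * z' Lap z,
    # whose minimizer solves the sparse linear system (D + lam*Lap) z = D y.
    D = sp.diags(labeled_mask.astype(float))
    y_filled = np.where(labeled_mask, y, 0.0)
    z = spsolve((D + lam * Lap).tocsc(), D @ y_filled)
    # A single supervised fit to the smoothed labels; no further alternation.
    return DecisionTreeRegressor(max_depth=8).fit(X, z)
```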
This reduction is similar to the beginning of the penalty path in our algorithm (our initialization step), where we do fit our oblique tree (using TAO) to smoothed labels. Please refer to section 3 and pseudocode 1 in the suppl. mat. Note, however, that the basic idea of first smoothing the labels throughout the unlabeled points (\"label propagation\") and then fitting a mapping to them (\"induction\") has been well known since the seminal graph-Laplacian SSL approaches, such as references [24-26] in our paper.\n\nImportantly, our algorithm then alternates between label smoothing and tree fitting, while increasing the penalty parameter $\\mu$, as a way to optimize our overall problem (1) jointly over the labels and the tree. Let us now show that this works better. Although [1] did not provide an implementation, we did our best to implement the version of EBBS for a single tree using their provided algorithm and experimental setup. Here are our results for cpu_act and MNIST, which are clearly better for LapTAO:\n\n**CPU_ACT:**\n|Method \\ % of lbl data | 1% | 3% | 5% | 8% | 10% | 20% |\n|----------------------------|---------|----------|---------|--------|---------|----------|\n|LapTAO | 255.13 | 65.75 | 12.03 | 10.36 | 9.19 | 8.32 |\n|EBBS (1 tree) | 261.05 | 92.76 | 16.52 | 12.78 | 12.41 | 9.87 |\n\n&nbsp;\n\n**MNIST:**\n|Method \\ % of lbl data | 1% | 3% | 5% | 8% | 10% | 20% |\n|----------------------------|---------|--------|--------|-------|--------|---------|\n|LapTAO | 9.61 | 6.93 | 6.27 | 6.12 | 5.97 | 5.45 |\n|EBBS (1 tree) | 10.57 | 7.49 | 7.05 | 6.39 | 6.15 | 5.91 |\n\n&nbsp;\n\nAbout running LapTAO on the datasets from [1,2]: we agree that those are interesting potential applications and exploring LapTAO there could be worth trying. However, in terms of dataset characteristics (size, feature dimensions, number of edges), we believe our benchmarks are comparable and even much bigger; and we also cover both classification and regression problems. For instance, the largest dataset in [1,2] has 54k instances (and most of them use a small number of features), whereas we ran our algorithm on datasets of up to 1M points.\n\n&nbsp;\n\n> 2. Regarding training runtime\n\nPlease see our detailed [response (to all reviewers) above on computational complexity, scalability and reported runtimes](https://openreview.net/forum?id=cZ41U927n8m&noteId=JWNWEFTYSit).\n", " > 1. Results on larger datasets...\n\nPlease refer to our detailed [response (to all reviewers) above on computational complexity and newly reported runtimes](https://openreview.net/forum?id=cZ41U927n8m&noteId=JWNWEFTYSit). In general, there is no need for mini-batch updates; the graph Laplacian is sparse and computed explicitly, although perhaps using approximate nearest neighbors, in common with standard SSL work using graph Laplacians. See our extended explanation of the computational complexity for the further speedups possible. As an example, our algorithm scales well to 1M data points (SUSY) on a regular PC. As for large-scale image datasets, please see our comment on limitations below.\n\n&nbsp;\n\n> 2. Comparison with deep learning based approaches\n\nThanks for pointing out this possible future research direction. However, this paper is focused on training decision trees with a graph-based SSL objective, which has never been possible before as far as we know. Therefore, (almost) all baselines are tree-based methods.\n\n&nbsp;\n\n> Statistical guarantees...\n\nFig. 1 is purely for illustrative purposes and to motivate our work. 
Indeed, for a certain choice of labeled points, the results of the naive baseline and LapTAO can be somewhat similar, which is not surprising given the simplicity of the dataset. The two-moons dataset is another classical example to illustrate SSL. For a certain choice of labeled points, all SSL algorithms will show similar results compared to naive baselines.\n\n&nbsp;\n\n> Theoretical guarantees...\n\nWe are not sure what you mean by \"probabilistic error term\". We can guarantee a monotonic decrease in both steps (label step and tree step) and hence a monotonic decrease of the augmented Lagrangian for fixed multipliers at each iteration. Beyond that, given the NP-hardness of training a tree and the nonconvexity of the overall SSL objective, stronger theoretical guarantees are a difficult but interesting topic of future research.\n\n&nbsp;\n\n> I request the authors to show results which highlight the limitations of their model. It will be great to get an idea of the size and types of data for which one can opt their method over other SOTA.\n\n1. Please refer to our detailed [response (to all reviewers) above on computational complexity and newly reported runtimes](https://openreview.net/forum?id=cZ41U927n8m&noteId=JWNWEFTYSit). One limitation is that our algorithm is slower than the naive baselines. This is the price to pay for a better model. That said, the algorithm is scalable (linear in the sample size) and can be parallelized in certain parts.\n2. Tree-based methods are known to work very well with tabular data, but less well than neural nets with image data. Therefore, we did not run our algorithm with large-scale image datasets. In such situations, one could still use features from a pretrained deep neural network and then run LapTAO. As of now, we suggest applying LapTAO mostly to tabular data, where tree-based frameworks are dominant (e.g. XGBoost, Random Forests, etc.).", " \n> Scalability, computational complexity and exact runtimes...\n\nPlease see our detailed [response (to all reviewers) above on computational complexity and reported runtimes](https://openreview.net/forum?id=cZ41U927n8m&noteId=JWNWEFTYSit). Importantly, as noted in our explanation there, the \"tree-step\" does **not** require solving the logistic regression on the whole training set over all branch nodes in each TAO iteration. We only run $\\Delta$ (tree depth) logistic regressions on the whole training set in total in each TAO iteration.\n\n&nbsp;\n\n> \"The selection of methods in the experiment part is not representative and seems very random...\"\n\nWe've included all tree-based SSL baselines (SSCT, self-training, fully supervised tree) that we are aware of. Furthermore, we've extended self-training to oblique trees and to the regression setting to make an apples-to-apples comparison. Unfortunately, SSCT does not support oblique trees, and extending it requires significant modification to the implementation. Moreover, SSCT has scalability issues even with axis-aligned trees, which will get even worse with oblique splits. Even more, we went further and compared against other methods (non tree-based) that also apply manifold regularization (e.g. LapSVM). The performance is comparable or better (especially when the proportion of labeled data is small). 
We explain the motivation and reasons behind the choice of baselines in section 4.1.\n\n&nbsp;\n\n> \"Similarly, the authors can implement experiments on the extension of LapTAO to other models...\"\n\nExtending LapTAO to other models (e.g. neural nets, GBDT) is straightforward but beyond the scope of this paper. Our focus was on making the optimization of a graph-based SSL objective possible, for the first time, with decision trees.", " > The main algorithm LapTAO is essentially applying ADMM with coordinate descent to the original problem (1).\n\nThanks for pointing this out. However, we are not sure why the reviewer listed this as a weakness: we solve a long-standing problem (graph SSL with trees) in a convenient way using modern optimization techniques. Also, our proposed approach is not quite ADMM. The problem in (1) is not directly in a form where we can apply ADMM. We believe that reformulating the problem by introducing new variables and consequently reducing it to a linear system and tree fitting is non-trivial. Moreover, there are other important contributions in our method: an exact solution for the leaves given fixed decision nodes (section 3.3), deriving the solution for the beginning of the $\\mu$ path (i.e., initialization), accelerating the \"label step\" (section 1.2 in the suppl. mat.), etc.\n\n&nbsp;\n\n> The choice of using TAO method is not justified...\n\nIt is true, and an advantage of our approach, that (in principle) we can use any other tree learning algorithm in the tree step. But there are important reasons for using TAO:\n1. TAO supports axis-aligned and oblique trees.\n2. TAO is able to find much better optima of the tree objective function in a scalable way. Critical for this is the fact that TAO guarantees a monotonic decrease of the objective at each iteration (see reference [4] in the main paper).\n3. TAO can handle sparsity regularization terms for oblique trees, such as an L1 penalty. This generates highly accurate but relatively shallow trees that use few nonzero weights, which leads to high interpretability.\n4. TAO supports warm start, i.e., the ability to improve over a given tree (from the previous iteration), which is important to save runtime (by providing a good initialization) and also to avoid oscillations (switching erratically to different local optima). The ability to use warm start is a rather unique property among tree-learning algorithms. Indeed, the traditional, greedy recursive partitioning algorithms such as CART or C4.5 do not support warm start; instead, they induce a new tree from scratch at each iteration. This is problematic because it leads to significant instability and noisy behavior across iterations. (It is well known that a little change in the training data often leads to completely different tree structures and parameters in these algorithms; see Peter Turney: \"Technical Note: Bias and the Quantification of Stability\". Machine Learning(20):23-33, 1995.) Another type of tree-learning algorithm is based on mixed integer optimization (brute-force search, typically via a branch-and-bound algorithm). These algorithms do not support warm start either, but more importantly, their worst-case runtime is exponential and they do not scale beyond tiny trees (depth up to 4) and tiny datasets -- the latter making them wholly unsuitable for SSL, obviously. They also have significant restrictions, usually requiring binary (not continuous) features and axis-aligned trees only.\n5. 
And last but not least: it works well in practice, as we convincingly show in the experiments section.\n\nTo confirm the importance of using TAO with sparse oblique trees, we experimented with other algorithms in the \"tree-step\" of our algorithm. For instance, consider the results below for CPU_ACT:\n\n|Method \\ % of lbl data | 1% | 3% | 5% | 8% | 10% | 20% |\n|------------------------------|----------|---------|--------|---------|-------|-----------|\n|LapTAO | 255.13 | 65.75 | 12.03 | 10.36 | 9.19 | 8.32 |\n|LapCART | 263.50 | 83.79 | 18.14 | 13.93 | 13.04 | 11.69 |\n|CART_SELF | 293.45 | 228.63 | 21.89 | 21.04 | 20.45 | 14.92 |\n\nHere, LapCART indicates our proposed algorithm but with the \"tree-step\" replaced by CART. For reference, we also report the performance of our originally proposed LapTAO and the CART_SELF baseline. The results clearly indicate the superiority of LapTAO over the other baselines. They also show that, if we do insist on using CART, our algorithm improves over the CART self-training baseline.\n\n&nbsp;\n\n> The main LapTAO algorithm has no theoretical analysis.\n\nThe reviewer should realize the difficulty of the computational problem: training a decision tree on its own is NP-hard, and the graph-Laplacian SSL learning problem is heavily nonconvex and nondifferentiable due to the model being a tree. That said, we can guarantee a monotonic decrease in both steps (label step and tree step) and hence a monotonic decrease of the augmented Lagrangian for fixed multipliers at each iteration. Given the competitive empirical performance of our algorithm, stronger theoretical guarantees are an interesting topic of future research.\n", " > Line 330: in the description of Figure 4 it says \"$\\alpha=1$ and $\\alpha=0.1$\", should the second value be $\\alpha=10$? Same also on line 331, see the caption of Figure 4.\n\n> Supplemental material, line 13: Should it say \"$\\textbf{L}$ is positive\" instead of \"$\\textbf{J}$ is positive\"?\n\nThanks for noticing these typos! Yes, the second value should be $\\alpha=10$ and the first matrix in the suppl. mat. should be $\\textbf{L}$. We will correct that.\n\n&nbsp;\n\n> Why was only 1 iteration used in most of the experiments, as described in the caption of the pseudocode in Figure 1 of the supplemental material? Were other values tried?\n\nFor each fixed value of $\\mu$, (ideally) we should iterate (tree-step and label-step) until convergence, and we investigated this for the toy 2D dataset and on MNIST. Specifically, we tried >10 iterations per $\\mu$ and it only marginally improved the overall performance but significantly increased the runtime. Therefore, we kept only one iteration for the remaining set of experiments, which is common practice in other applications of the augmented Lagrangian.\n\n&nbsp;\n\n> Regarding tree visualizations and interpretability: a tree with $\\alpha=0$ and \"the discussion on model interpretability and sparsity could be improved...\"\n\nThanks! We will add more discussion (together with a more elaborate analysis of our obtained tree) in the next revision. Our main point in that section was to show that we can inspect and verify our final model trained using SSL (interpretable semi-supervised learning?). We can ask such questions as: was the unlabeled data able to improve our model? Does it lead to better feature utilization? Are the features used by the tree meaningful? etc. 
We believe this is extremely important for sensitive applications such as credit scoring, where regulations are strict (GDPR, AI act, etc.).\n\n&nbsp;\n\n> Regarding scalability and GPU\n\nPlease see our detailed [response (to all reviewers) above on computational complexity and reported runtimes](https://openreview.net/forum?id=cZ41U927n8m&noteId=JWNWEFTYSit). Regarding a GPU implementation, we are thinking about this avenue as a future research direction. Indeed, leveraging GPUs can significantly boost the runtime but, unfortunately, implementing trees on GPUs is non-trivial due to bad memory locality. That said, there is some research done in this direction (e.g. the GPU version of XGBoost). In our current implementation, the simplest thing to consider would be GPU acceleration of the logistic regression training for large-scale data.\n\n&nbsp;\n\n> In \"tree-step\", replace TAO oblique with CART axis-aligned\n\nWe indeed experimented with other algorithms in the \"tree-step\" of our algorithm. For instance, consider the results below for CPU_ACT:\n\n|Method \\ % of lbl data | 1% | 3% | 5% | 8% | 10% | 20% |\n|----------------------------|---------|----------|---------|--------|---------|----------|\n|LapTAO | 255.13 | 65.75 | 12.03 | 10.36 | 9.19 | 8.32 |\n|LapCART | 263.50 | 83.79 | 18.14 | 13.93 | 13.04 | 11.69 |\n|CART_SELF | 293.45 | 228.63 | 21.89 | 21.04 | 20.45 | 14.92 |\n\nHere, LapCART indicates our proposed algorithm but with the \"tree-step\" replaced by CART. For reference, we also report the performance of our originally proposed LapTAO and the CART_SELF baseline. The results clearly indicate the superiority of LapTAO over the other baselines. That said, it is still beneficial to apply our algorithm with CART, since it improves over the self-training baseline.\n\nAnother important thing is that CART does not support warm-starting at each iteration, because it grows a tree from scratch from the root rather than updating the current tree's parameters. This is problematic because CART (and related greedy recursive partitioning algorithms such as C4.5) are known to be very sensitive to the training set: a little change in the data typically leads to completely different tree structures and parameters. Indeed, we observed significant instability and noisy behavior across iterations with CART. This does not happen with TAO because it takes the previous iteration's tree as initialization.", " We thank all reviewers for their valuable comments and for the effort they have put into our paper. Below we provide responses to the generic comments raised by several reviewers:\n\n> **Computational complexity and scalability**\n\nAlthough the paper (page 5) had a paragraph about the computational complexity, we realize it was too terse and we'll expand it. The most important thing to notice is that the complexity is linear in the sample size $N$. In detail, at the top level, LapTAO runs a problem-dependent number of iterations (depending on the $\\mu$ schedule, typically less than 20). Each iteration has to solve two subproblems (approximately):\n\n- The label step: this is a large, sparse linear system of $N \\times N$, solved approximately with Conjugate Gradients, initialized by the previous iterate (warm-start); see also the code sketch after the runtime table below. Each CG iteration is $O(N k)$ where $k$ is the average number of neighbors in the graph, and the number of CG iterations is at most $N$ (in practice, convergence occurs much faster). The total runtime of the label step is less than 30 seconds in the largest experiment we conducted (1GB of data, 1M points). 
Convergence can be further improved via preconditioning (e.g. Jacobi). (We can also solve the linear system exactly in $O(N^2)$ by caching its SVD, as noted in the supplementary material, section 1, but this is only convenient if $N$ is a few thousand at most.)\n\n- The tree step: fitting an oblique tree with TAO to the $N$ training points. Each iteration of TAO updates each decision node and leaf node. For each leaf, we compute the average of its reduced set (training points reaching it), so this is $O(N D)$ over all the leaves (since their reduced sets total $N$ points and $D$ is the feature dimension). For each decision node, we train a logistic regression on its reduced set. Assuming logistic regression is linear in the sample size and dimensionality, this is also $O(N D)$ for all decision nodes at the same depth (since, as for the leaves, their reduced sets total $N$ points), although with a larger constant factor in the big-O notation than for the leaves. Hence, processing all the decision nodes in the tree is $O(\\Delta N D)$, or equivalently, running $\\Delta$ logistic regressions on the whole training set. See more details in [4]. Importantly (regarding a question by **reviewer bXo9**), note that the cost is not that of solving a logistic regression on the whole training set *over all decision nodes* (of which there are $2^{\\Delta}-1$); we only run $\\Delta$ logistic regressions. This is a critical advantage of TAO and is due to the fact that each node (decision or leaf node) only handles the points in its reduced set. In summary, the overall runtime of TAO is $O(\\Delta D N)$ per TAO iteration (we run 10 TAO iterations in our experiments).\n\nSince the tree step dominates the label step in terms of runtime, our algorithm is almost like sequentially training decision trees (as in boosting), which is what the self-training baseline does.\n\nThere are two additional speedups which we did not explore here. First, TAO itself can be parallelized depth-wise, i.e., the nodes at the same depth (whose reduced sets are disjoint) can be optimized in parallel. Our implementation and runtimes in the paper are purely sequential. Second (regarding a question by **reviewer e5V7**), using GPUs. This is possible with GPU-friendly implementations of logistic regression, and also because oblique trees involve scalar products (unlike axis-aligned trees). In general, GPU acceleration of oblique tree algorithms is an interesting topic worthy of research.\n\nFinally, this computational cost should also include computing the nearest-neighbour graph (and the graph Laplacian $L$). This is indeed a large cost. A naive implementation requires $O(D N^2)$ to calculate the distance vector for each point and determine the nearest neighbors. This can be parallelized over points. With large datasets, one can perform approximate nearest neighbors search (e.g. via Locality Sensitive Hashing). However, the graph construction is orthogonal to our work, and leveraging the graph Laplacian is by far the standard approach in the SSL literature.\n\n&nbsp;\n\n> **Exact training runtimes on benchmark datasets (section 4.2)**\n\n|Dataset\\Method | LapTAO (ours) | oblique-self | axis-self | SSCT |\n|----------------------|-------------|-----------------|----------|------------|\n|cpu_act | 1072s | 934s | 23s | 936s |\n|mnist | 11027s | 9572s | 514s | 15932s |\n|SUSY | 22421s | 17873s | 816s | >1d |\n\n**Note**: we ran our code on a regular PC, with little parallel processing and using an unoptimized Python implementation. 
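As a concrete illustration of the label step described above, here is a minimal, self-contained sketch of the CG solve with a Jacobi preconditioner. The graph, sizes, and right-hand side are illustrative placeholders (the true right-hand side also involves the tree predictions and the labeled-data terms):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

N, k = 10_000, 10                                       # sample size, avg. #neighbors
A = sp.random(N, N, density=k / N, format="csr", random_state=0)
A = A + A.T                                             # symmetric affinity matrix
Lap = sp.diags(np.asarray(A.sum(axis=1)).ravel()) - A   # graph Laplacian
mu = 1.0
M_sys = (Lap + mu * sp.identity(N)).tocsr()             # label-step system matrix
b = np.random.default_rng(0).standard_normal(N)         # stand-in right-hand side

d = M_sys.diagonal()                                    # Jacobi preconditioner
precond = LinearOperator((N, N), matvec=lambda v: v / d)
z, info = cg(M_sys, b, M=precond)                       # pass x0=z_prev to warm-start
```

(The runtimes in the note above come from a similarly unoptimized implementation of this step.)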
Therefore, the training runtime for LapTAO can be significantly improved.\n", " Many real-world machine learning applications encounter both labeled and unlabeled data. \nIn such problems there may be a scarcity of high-quality labeled data, while large amounts of unlabeled or partially labeled data are available. \nSemi-supervised learning (SSL) involves the task of training machine learning models on such datasets to obtain more accurate predictions than models trained on labeled data alone. \nGraph-based methods for regularization have been demonstrated to provide efficient methods for SSL problems, including the popular Laplacian regularized least squares and Laplacian support vector machines (LapSVM).\nHowever, extending graph-based regularization to the problem of training trees in SSL tasks has received comparatively less attention. \n\nThis paper proposes an algorithm for training decision trees in SSL problems. The proposed approach combines ideas from manifold regularization, sparse regularization, and tree optimization to obtain a sample-efficient algorithm for training sparse decision trees. \nThe algorithm alternates between a \"label-step'' and a \"tree-step'' until a stopping criterion such as a maximum number of iterations or convergence is satisfied. \nIn the label-step the augmented Lagrangian method is applied, giving an unconstrained optimization problem whose solution yields an updated approximation of the labels based on the current (fixed) tree.\nIn the tree-step, tree alternating optimization (TAO) is applied to update the parameters of the decision tree to fit the current estimates of the labels. \n\nThe authors apply the resulting LapTAO (for Laplacian TAO) algorithm to the training of oblique decision trees in a variety of tasks. The results show that oblique trees trained with LapTAO obtain smaller MSE on the test set and lower testing error compared to a variety of alternative approaches for training axis-aligned and oblique trees in SSL, as well as KNN on the labeled data alone. The authors also demonstrate that LapTAO yields similar performance to LapSVM on the Fashion-MNIST problem given greater than 3% of training data, while outperforming that method by a considerable margin when trained with less than 3% labeled data. Finally, the authors display a visualization of the sparse parameter vectors obtained by the algorithm in the Fashion-MNIST case to demonstrate the interpretability of the resulting oblique decision tree.\n The paper is well-written, and the proposed approach is interesting and novel. The derivations are simple and straightforward, and the presentation and length of the paper are good. In particular, Section 3 of the paper is easy to follow and the authors provide ample discussion on initialization, extension of LapTAO to classification and multioutput regression problems, and extension of the general proposed alternating approach to other models such as neural networks. The experiments also provide many comparisons to other approaches, demonstrating that the LapTAO algorithm achieves higher accuracy over other approaches for SSL, with a focus on tree-based methods in particular but also including LapSVM, on a variety of tasks. SSL is an important problem relevant to many application areas and decision trees are also an important direction of active research. 
\n\nHowever, the paper lacks somewhat in the depth of the experimental validation of the claims of improved performance over other approaches for SSL, and in the discussion of the computational complexity of the approach in large-scale machine learning tasks. The experiment results presented are for the MNIST, susy, cpuact, year\\_pred, and fashion-MNIST datasets. As the authors describe, the largest experiment was conducted on a CPU and took less than 24 hours. It would be interesting to see how the LapTAO method performs on some larger-scale benchmark prediction tasks where GPUs are necessary, in particular with regard to the computational complexity relative to other popular approaches. It would also be interesting to see further comparisons of the proposed regularization approach with different types of trees, for example using this formulation with axis-aligned trees and CART.\n\nSection 4.3 on model interpretability displays visualizations of the weights of the nodes of some oblique trees trained with LapTAO on the Fashion-MNIST dataset. While sometimes it appears clear from the displayed weights why the model takes one branch over another (e.g. in the case of the boot from node 1 of the $\\alpha=10$ case of Figure 4), other times it is not so clear. It would be interesting to see a comparison with models trained without the $\\ell_1$ term, i.e. $\\alpha = 0$. Overall, the discussion on model interpretability and sparsity could be improved. \n\n Line 330: in the description of Figure 4 it says \"$\\alpha = 1$ and $\\alpha=0.1$'', should the second value be $\\alpha=10$? Same also on line 331, see the caption of Figure 4.\n\nSupplemental material, line 13: Should it say \"$\\mathbf{L}$ is positive\" instead of \"$\\mathbf{J}$ is positive''?\n\nWhy was only 1 iteration used in most of the experiments, as described in the caption of the pseudocode in Figure 1 of the supplemental material? Were other values tried?\n\n The authors addressed the limitations and potential negative societal impact of the work. ", " This paper studies the semi-supervised decision tree learning problem. By introducing additional variables $z$, the original problem can be reformulated and solved iteratively as two simpler problems: a label smoothing problem, which can be solved through a sparse linear system, and a supervised tree learning problem, which was solved by the Tree Alternating Optimization algorithm in this paper. The numerical experiments demonstrate that the proposed method in this paper is able to achieve high testing accuracy and good interpretability. Strength: \n1. This paper studies the semi-supervised version of the decision tree learning problem, which has not been well-studied so far. \n2. The algorithm LapTAO introduced in this paper can be easily extended to classification and multioutput regression problems, and can even be extended to other models with little modification. \n\nWeakness:\n1. The main algorithm LapTAO is essentially applying ADMM with coordinate descent to the original problem (1). \n2. The choice of using the TAO method for the \"Tree-step\" does not quite make sense to me; this seems arbitrary and the reason is not fully justified. There are many other efficient algorithms for training optimal decision trees, with or without a regularization term. \n\n 1. Since the TAO method in the \"tree-step\" requires solving the logistic regression on the whole training set over all branch nodes for multiple iterations, for large datasets it can be very time-consuming. 
For that reason, I do not expect that the LapTAO algorithm can be run for many iterations within a particular time limit. My question is: have the authors tried to use other, more efficient algorithms (possibly greedy) to fit the tree? \n2. Computation time should also be reported in the experiment section. 1. The main LapTAO algorithm has no theoretical analysis. \n2. The selection of benchmark methods in the experiment part is not representative and seems very random. For example, the authors do not have to compare with KNN or any other axis-parallel decision tree; instead, the authors should focus on comparing the performance of LapTAO with the application of other semi-supervised learning algorithms to decision trees (e.g., SSCT in the supplementary materials). Similarly, the authors can implement experiments on the extension of LapTAO to other models, and compare with the successful semi-supervised learning algorithms for those models. ", " This paper introduces a bi-level optimization method for semi-supervised learning tasks, which iteratively solves a supervised tree learning problem and a label smoothing problem. The experimental results demonstrate that LapTAO can achieve accurate and interpretable models in label-scarce situations. ## Strengths:\nThe paper is well-written and easy to follow, and the reformulation part is well-organized. Some model interpretability analyses are provided, making the model more trustworthy.\n\n\n## Weakness:\n 1. The framework is not novel to me: [1] also introduces a bi-level optimization method, incorporating label smoothing into decision trees. The authors may conduct more literature review and comparison with existing methods, although [1] and [2] are from the Graph Neural Networks (GNNs) community. But most node classification problems in GNNs are semi-supervised learning problems. The paper may have broader impact if the authors consider more tasks or datasets from [1] and [2]. \n\n2. About model complexity or training time, the authors should provide more comparisons with other baselines (especially for the strong baseline **oblique–self**, whose performance is close to LapTAO in Figure 2). The authors roughly described the complexity and rarely mentioned the other baselines' training times. \n\n3. About the experimental results, Figure 2 may not be compelling to me, since only LapTAO introduces the graph Laplacian matrix and the other baselines don't consider graph structure. The gap between LapTAO and the other competitors is probably caused by introducing graph information. LapTAO vs. LapSVM is a fair comparison to me, since both methods start from the graph Laplacian matrix, and the gap between these two methods is narrowed compared with Figure 2. The authors may need to carefully conduct a fair baseline comparison or justify why there is a gap between LapTAO and the other baselines in Figure 2. \n\n[1]. https://openreview.net/forum?id=nHpzE7DqAnG\n\n[2]. https://openreview.net/pdf?id=ebS5NUfoMKL\n\n 1. Can the authors provide more details about the Graph Laplacian matrix construction? The paper only mentioned Gaussian affinities with k-nearest neighbors; does it cluster all available feature vectors? no potential negative societal impact ", " This work proposes an approach to do semi-supervised learning with decision trees. Their approach splits the objective function into a sparse linear system and a decision tree optimization part. They use alternating optimization to learn the parameters and minimize the objective function. The linear part is solved analytically. 
An approximate solution for the tree optimization is obtained using the previously developed TAO method. The authors demonstrate the effectiveness of their method for SSL on different regression/classification tasks and also analyze the interpretability of their model. Pros:\n1. A well-thought-out application of the Alternating Optimization technique. Since the TAO algorithm cannot directly optimize Eq. (1), the reformulation is a neat way of circumventing this problem. [Lines 158-159]\n2. The paper is well motivated. The approach is reasonable and the math works out. \n\nSome improvements:\n1. Results on larger datasets: It will be interesting to see the results on larger image datasets. I am curious how the affinity matrix will be designed. Will you take a subset of images which is defined by a batch size? In that case the affinity matrix will need to be recalculated and then the algorithm will be slower. Please share your thoughts and maybe some best practices that you recommend. \n2. Comparison with deep learning based approaches: I understand that interpretability is also one aspect to compare. Visualizing the CNN filters for instance in Fig. 4. Also, I would like to see a comparison with SOTA DL methods, which will give the readers an idea about the limitations.\n Questions & Suggestions\n1. In the Fig. 1 example, can you also provide any statistical guarantees? For instance, running with different numbers of parameters, different initializations, tree sizes etc. for LapTAO and other methods. I suspect that with the choice of labels for this 2D data, the classifier can easily choose any of the decision boundaries in the right 2 plots. I would definitely appreciate adding the statistical significance along with the error percentages. \n2. Theoretical guarantees: Any ideas on deriving the convergence rate for the alternating optimization? I understand that the tree optimization is NP-Hard but having a probabilistic error term would be very helpful to understand the limitations of this approach. \n\nThis is a good direction to explore further. I would like to know the authors' views on the above points and their approach to handling them. \n I request the authors to show results which highlight the limitations of their model. It will be great to get an idea of the size and types of data for which one can opt for their method over other SOTA. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 5 ]
[ "Ba63RflQfTq", "_kmGVtKz5T", "VTcYwyG-pXE", "NU4wmworjUZ", "65gkmFHlxNQ", "l9WTFGknNYF", "MAGrmfww_C7", "AFdX5nL_K1", "Zt6D9XSpTum", "SG2RJ6199SM", "nips_2022_cZ41U927n8m", "nips_2022_cZ41U927n8m", "nips_2022_cZ41U927n8m", "nips_2022_cZ41U927n8m", "nips_2022_cZ41U927n8m" ]
nips_2022_303XqIQ5c_d
You Only Live Once: Single-Life Reinforcement Learning
Reinforcement learning algorithms are typically designed to learn a performant policy that can repeatedly and autonomously complete a task, usually starting from scratch. However, in many real-world situations, the goal might not be to learn a policy that can do the task repeatedly, but simply to perform a new task successfully once in a single trial. For example, imagine a disaster relief robot tasked with retrieving an item from a fallen building, where it cannot get direct supervision from humans. It must retrieve this object within one test-time trial, and must do so while tackling unknown obstacles, though it may leverage knowledge it has of the building before the disaster. We formalize this problem setting, which we call single-life reinforcement learning (SLRL), where an agent must complete a task within a single episode without interventions, utilizing its prior experience while contending with some form of novelty. SLRL provides a natural setting to study the challenge of autonomously adapting to unfamiliar situations, and we find that algorithms designed for standard episodic reinforcement learning often struggle to recover from out-of-distribution states in this setting. Motivated by this observation, we propose an algorithm, Q-weighted adversarial learning (QWALE), which employs a distribution matching strategy that leverages the agent's prior experience as guidance in novel situations. Our experiments on several single-life continuous control problems indicate that methods based on our distribution matching formulation are 20-60% more successful because they can more quickly recover from novel states.
Accept
The paper introduces a new formulation for single-life reinforcement learning, which is interesting. Moreover, an algorithm is presented for solving this RL scenario. The paper was evaluated positively by all reviewers. The two borderline reviews' main concerns were: - missing theoretical evidence / motivation for the algorithm (Reviewer muVA): This concern has been mostly addressed by the authors. They motivate their choice of the weights, but how the weights are incorporated into the algorithm is clear to me at an intuitive level, though not so well backed by theory. - the algorithm was not illustrated for scenarios with changing goals (Reviewer jw3X): This concern was addressed by the rebuttal. Unfortunately, the two reviewers with the borderline scores did not respond to the rebuttal, but I think their concerns have been mostly addressed and they should have raised their scores. Hence, I recommend accepting the paper.
train
[ "c1h6L1oVe-C", "Zi70v7eiS-b", "WXYNtqV3I39", "N356D1MDDz", "PYy4GPycET", "ZsAsYYnh34xs", "efNo-qjh_d", "niD57hsm_Al", "aIfLQXrUG44", "br1j_6Jhtt", "6N3DYltWz0", "xldJeKRCJ2O", "ermtgKfcFLV", "G3k2E3nkTzq", "eKcMAjUBAMu", "j44HPvaKBo7", "KBDgrML8Zqr" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hi reviewer jw3X, \n\nWe wanted to check in again. Can you let us know if our revisions and response address your concerns? If not, we would be happy to provide further revisions for remaining concerns. Thank you!", " Hi Reviewer bBp1,\n\nWe wanted to check in again to see if our revisions and response address your concerns, and if so, whether your evaluation of our paper has changed. If not, we would be happy to provide further revisions. Thank you!\n\n", " Hi Reviewer muVA, \n\nWe wanted to check in again. Can you let us know if our revisions and response address your concerns, and if so, whether your evaluation of our paper has changed? If not, we would be happy to provide further revisions. Thank you!", " Thank you for the very thorough revision of the work and responses to the reviewers. I think the paper would be an interesting addition to the RL literature. I feel like my original score still reflects my opinion about the paper, therefore I've decided to keep it.", " Hi reviewer bMAN,\n\nThanks again for your thorough review! Please let us know if our response has addressed your concerns and if there's anything else you have questions about. ", " Hi reviewer muVA,\n\nWe have updated the paper with some additional theoretical justification for QWALE. GAIL finds a policy whose occupancy measure minimizes the Jensen-Shannon (JS) divergence to the given prior data. In contrast, particularly because our given prior data may include arbitrarily suboptimal state, we do not want to treat all prior data as equally desirable. Thus, we aim to minimize the JS divergence over the policy's occupancy measure and a target distribution that leads towards task completion. This gives QWALE, where the altered target distribution that the algorithm minimizes the JS divergence over leads to the weighting term added to the discriminator update. We describe this in more detail in Section 5 and Appendix A.1.\n\nWe wanted to follow up to see if the response and revisions address your concerns. We would be happy to provide further clarifications and revisions if you have any further questions. Thank you again for all of your detailed comments!", " Hi reviewer jw3X,\n\nPlease let us know if our response addresses your concerns. We would be happy to provide further revisions to address any remaining issues. Thank you again!", " Hi reviewer bBp1, \n\nWe wanted to follow up to see if the response and revisions address your concerns. Please let us know if you have any further questions. Thank you again!", " Thank you for your detailed comments. We are glad that you find the paper interesting. Please let us know if the responses below address all of your concerns!\n\n> No theory … For example, how does the weightings added to the discriminator update rule affect the theoretical analysis of GAIL [26]...No explanation was given for the specific choice of the weightings.\n\nWe have begun drafting some theoretical justification for QWALE and plan to revise Section 5 with it in the next week. Our initial analysis has led to improvements in the method; in particular, we find that it is better justified to not include the weights on the negative examples when training the discriminator, which has led to an improvement in performance, particularly in the Franka-Kitchen domain. Thank you for helping bring about these improvements. 
While it's possible that other weighting schemes could also be used, we hope our explanation will provide some justification to the reader for our particular design choice.\n\n> In line 618 of the Appendix you give a different reward function used to speed up training (different from the one in Algorithm 1). What exact reward function was used for training? Was the same reward function used for the GAIL baseline?\n\nWe used the auxiliary reward function given in Appendix A.1. for all QWALE and GAIL runs. We have fixed Algorithm 1 to reflect this. \n\n> In figure 5 (or the Appendix), please include the trajectories for SAC and GAIL (and compare them with QWALE)\n\nWe have moved the online state visitation plots for SAC from Figure 2 to Figure 5 for a clearer comparison to QWALE. We will include trajectories for GAIL in the Appendix in the final version. \n\n> How sensitive is QWALE to the hyper-parameters involved? A hyper-parameter sweep plot would be useful (since there are no theoretical contributions to strengthen the claims)\n\nOur implementation is built on top of the public implementation of SAC from the EARL benchmark and uses the default hyperparameter values. The only additional hyperparameter used in QWALE is a bias term $b$. In our implementation, we do not need to tune b because we just use the value of the most recent state, but we will provide a hyperparameter sweep of different constant b terms for the Tabletop environment in the Appendix in the final version. Additionally, we will provide more theoretical grounding to our method in our revised Section 5.\n\n> Missing reference in line 248: \"Appendix ??\" \n> qθ appears in algorithm 1 and line 243, is critical to the approach, but is never defined. Did you mean qϕ? \n> How is Donline updated in Alg 1? \n\nWe have made minor typo changes in Section 5 to address these three points. We thank you for your thorough reading of our work and constructive feedback. \n", " We thank all the reviewers for their thoughtful and thorough comments, and we appreciate the positive assessment of the work from all reviewers. We believe that the feedback received has been very helpful in improving the paper. Based on the comments from all reviewers, we have performed new experiments and revised some of our writing to improve the clarity and structure of the paper. \n\nIn summary, our key changes include:\n- Additional experiments added to the Appendix, including:\n - 3 additional baseline studies (as suggested by Reviewer bMAN): As expected by Reviewer bMAN, we find that SAC without online learning performs poorly and finetuning SAC with a behavioral cloning loss only helps if the dynamics are not changed.\n - An additional comparison running a reset-free algorithm, MEDAL, as well as an additional comparison with Progressive Neural Networks, a continual learning method, in the SLRL setting in the Tabletop domain. We include these comparisons to demonstrate how the SLRL setting differs from typical autonomous RL and continual learning settings. Both methods perform poorly in SLRL setting, which is unsurprising because SLRL has the goal of completing the desired task a single time as quickly as possible, which is different from the other two settings. Thus, typical reset-free and continual learning algorithms may not be effective in the single-life setting. \n- To improve clarity, we have moved the contents of what was formerly Section 5, including the state visitation plots for SAC finetuning, into the Experiments section (now Section 6). 
Hence, what was Section 6 is now Section 5 (description of QWALE). We removed the plots in what was formerly Figure 4 and moved the results into Table 1 due to space constraints.\n- We have added some additional justification for QWALE to Section 5 and Appendix A.1., and our analysis has led to improvements in the method. \n\nWe hope that these additional experimental results and revisions have addressed the concerns of the reviewers. If there are any remaining questions, please let us know!\n", " We thank you for your detailed suggestions. We include clarifications and answers to individual points below. Please let us know if these responses address all of your concerns!\n\n> \"The approach assumes that although there is a shift between source and target environments, the state space is essentially the same…\"\n\nWe have revised Section 7 to better clarify QWALE’s limitations with regards to assuming that the state space between the source and target MDPs stay the same. In our experimental setup, while the state space generally stays the same between source and target environments, their occupancies are not the same, e.g. it is inevitable that the agent will find itself in a state out of distribution from the prior data. For example, at test time in the Franka-Kitchen domain, both the microwave and cabinet are open, which was not encountered in the prior data. \n\n> \"The novelty of the method itself is limited. It is based on GAIL and adds a weighting based on the critic network, which is reminiscent of critic filtering in CRR ( https://arxiv.org/abs/2006.15134 )\"\n\nWe have added a reference to CRR in the revised Section 2. \n\n> I have suggestions for additional baselines…\n\nThank you for these experiment suggestions. We have run these additional baselines on all four domains and added their results to Table 2 in the Appendix and display them below. We find that the results are as you predicted, and we find that QWALE outperforms these additional baselines on all four domains. \n\n| | Method | Avg $\\pm$ Std error | Success / 10 | Median | | Method | Avg $\\pm$ Std error | Success / 10 | Median |\n|-------------------------------|---------------|----------------------|--------------|--------|-----------------------------|---------------|----------------------|--------------|--------|\n| Tabletop | SAC-no online | 181.5k $\\pm$ 17.9k | 2 | 200.0k | Cheetah | SAC-no online | 143.7k $\\pm$ 28.6k | 3 | 200.0k |\n| | SAC-scratch | 99.5k $\\pm$ 22.5k | 8 | 98.8k | | SAC-scratch | 200.0k $\\pm$ 0 | 0 | 200.0k |\n| | SAC-BC | 80.8k $\\pm$ 32.5k | 6 | 1.5k | | SAC-BC | 93.5k $\\pm$ 23.6k | 7 | 60.6k |\n| Pointmass | SAC-no online | 200.0k $\\pm$ 0 | 0 | 200.0k | Kitchen | SAC-no online | 200.0k $\\pm$ 0 | 0 | 200.0k |\n| | SAC-scratch | 120.8 $\\pm$ 18.5k | 8 | 112.9k | | SAC-scratch | 155.0k $\\pm$ 24.5k | 3 | 200.0k |\n| | SAC-BC | 200.0k $\\pm$ 0 | 0 | 200.0k | | SAC-BC | 101.1k $\\pm$ 28.1k | 6 | 71.7k |\n\n\n> I'd suggest providing the details about the algorithm clearly in the main text…\n\nWe have revised Section 5 to provide additional details for QWALE in the main text, including more information about the hyperparameter $b$ and normalization of Q values. \n", " We thank you for your thorough review. We include clarifications and answers to individual questions below. 
If these responses do not address all of your concerns, please let us know!\n\n> \" Unlike the motivating example of finding water on Mars where the water would not be found at the same place as a desert on Earth, the experiment setup does not discuss if the goal position would be changed, different from that seen in the prior dataset.\"\n\nOne of the main contributions of this work is to formalize the SLRL problem, and we aimed to do so in a way that encapsulates many real world situations. In our experimental setup, we do keep the rewards the same between the source and target domains, but this does not mean that the goal state distributions must stay the same. To illustrate an instantiation of SLRL where the goal position is changed, we will add an experimental domain with new goal positions in the final version. We hope that future work can address more severe distribution shifts than our current experimental setup.\n\n> \"SLRL is claimed to be a special case of continual or lifelong learning… How does SLRL method compare to any continual lifelong RL approaches?\"\n\nExisting continual and lifelong methods are typically not reset-free and are hence not well-suited for SLRL. To illustrate this, we ran a new experiment that evaluates Progressive Nets [1], a continual learning framework that transfers previously learned features, on our Tabletop domain. \n\n| | Method | Avg $\\pm$ Std error | Success / 10 | Median |\n|------------------------------|----------------------|---------------------|--------------|--------|\n| Tabletop | Progressive Networks | 159.0k $\\pm$ 21.2k | 3 | 200.0k |\n| | QWALE | 44.4k $\\pm$ 24.6k | 9 | 8.9k |\n\nWe find that unlike QWALE, progressive nets does not quickly recover from novel situations to complete the desired task. This experiment shows how the SLRL setting may necessitate algorithms different from typical continual RL approaches. We show the results in the Appendix.\n\n> \"Is a fundamental assumption in SLRL that the reward distribution and state which represent goal should have same distribution as the shown in prior trajectories?\"\n\nIn SLRL, the reward between the source and target MDPs is assumed to stay the same, but this does not mean that the goal state distributions must stay the same, since the goal state can be folded into the state space such that changes in the goal state distribution correspond to different initial state distributions. Eventually, we hope that this work may be extended to the case where the reward may be different in the online trial from the prior data. We have added a mention of this in the conclusion in Section 7. \n\n[1] Rusu, Andrei A., et al. \"Progressive neural networks.\" arXiv preprint arXiv:1606.04671 (2016).\n", " We thank you for your detailed feedback. We include clarifications and answers to individual concerns below. In light of these new clarifications, please let us know if you have any remaining concerns. We are happy to answer any further questions you may have.\n\n> \"I am not sure how clear it is the importance of SLRL problems. The examples described of a robot exploring another planet or a rescue robot are closely related to the concept of safe reinforcement learning or safe exploration [1], where some constraints have to be respected. Moreover, I would think that similar problems are addressed in classical robotics motion planning literature.\"\n\nIt's true that similar situations to the examples given may be addressed in safe reinforcement learning and in classical motion planning literature. 
However, safe reinforcement learning typically considers the episodic RL setting with access to resets, and motion planning also makes very different assumptions to our setting, such as giving the agent awareness of all known obstacles during test time. To not overcomplicate our problem setting formalization, we do not specifically account for the case where constraints must be respected, but these may be incorporated into the SLRL setting in future work. \n\n> \"The main limitation of this work is that it’s very similar to other approaches like the ICLR 2022 paper Autonomous Reinforcement Learning [2] which introduces a similar setting in which environment resets are rare or not available at all, and I am not sure how useful it is to the community.\"\n\nWe acknowledge that the SLRL setting is similar to the autonomous reinforcement learning in that both do not allow resetting of the environment. However, key to our setting is the goal of completing the task a single time rather than recovering an effective policy that can perform the task repeatedly. In addition, also key to our setting is figuring out how best to leverage prior data, which may differ from the target environment. As a result, our approach, QWALE, is quite different from typical approaches in ARL, which generally do not focus on completing the task as quickly as possible. \n\nTo illustrate that ARL methods do not translate well to the SLRL setting, we run MEDAL [1], a recent state-of-the-art ARL method, in the SLRL setting in the Tabletop environment and find that it performs significantly worse than QWALE, as shown in the following table. \n\n| | Method | Avg $\\pm$ Std error | Success / 10 | Median |\n|------------------------------|--------|----------------------|--------------|--------|\n| Tabletop | MEDAL | 176.2k $\\pm$ 15.9k | 3 | 200.0k |\n| | QWALE | 44.4k $\\pm$ 24.6k | 9 | 8.9k |\n\n\nWe also display these new results in the Appendix.\n\n\n> \"Figure 2 positioning is not good. It’s hard to understand what the point of Figure 2 is until you reach Section 7.\n\nWe thank you for pointing this out. We have moved this figure and the contents of the section to the Experiments section (now Section 6) in our revision to improve clarity. \n\n> \"Both PointMass and TableTop training/testing environment sound very similar to the test bed environments used in meta learning papers, where the dynamics or the goals change slightly in the environment. \" \n\nUnlike meta-learning settings, the experimental problems we consider include data from only one source domain rather than from multiple source domains or tasks. Thus, standard meta-RL algorithms are not directly applicable.\n\n> \"Figure 5 is never linked, I guess it should be linked somewhere around section 7.3?\"\n\nThank you for pointing this out. We have added a reference to the figure in what is now Section 6.3.\n\n[1] Archit Sharma, Rehaan Ahmad, and Chelsea Finn. \"A State-Distribution Matching Approach to Non-Episodic Reinforcement Learning.\" International Conference on Machine Learning (2022).\n", " This paper considers the non-episodic RL setting (no environment resets) where an agent is given experience data from a source task and needs to efficiently learn to maximize rewards. They give discussions on why current RL approaches like SAC and GAIL will struggle in this setting (mainly due to distributional shifts from a source task). 
They then propose QWALE, an \"adversarial imitation learning\" approach (similar to GAIL) that uses Q-values (pretrained) to weight a discriminator that is learned and used as auxiliary rewards during training. They show experimentally that previous works (including GAIL) indeed struggle in this setting and are outperformed by QWALE. This is a very interesting paper that proposes a novel algorithm (QWALE) to learn policies in non-episodic discounted settings. I believe this is very relevant to the RL community, and the contributions are significant relative to previous works. Mainly,\n- They propose the single-life RL setting, which is a novel, more realistic non-episodic setting where the agent is given data collected (and policies/value functions learned) from previous experience.\n- They provide intuitive explanations for why previous works can struggle in this setting.\n- They provide a number of experiments in multiple domains (including complex continuous domains) demonstrating how QWALE outperforms previous RL methods like GAIL in the non-episodic setting.\n\nHowever, I have a couple of important concerns on whether the claims made in the paper are well supported. If I understand correctly, the specific main contributions of this work are the new discriminator update rule (Eqn. 1) and the reward shaping approach (line 7 in Algorithm 1). However:\n- No theory is given about them. For example, how do the weightings added to the discriminator update rule affect the theoretical analysis of GAIL [26]? How does the proposed reward shaping affect optimality?\n- No explanation was given for the specific choice of the weightings ($\\exp(Q(s, a) - b)$ and $\\exp(-(Q(s, a) - b))$). Why is there an exponential in there, and what is the effect of b theoretically or experimentally?\n- b is claimed to incentivize the agent to move towards states with higher value than its current state. This was not supported either theoretically or experimentally.\n - Missing reference in line 248: \"Appendix ??\"\n- $q_\\theta$ appears in Algorithm 1 and line 243, is critical to the approach, but is never defined. Did you mean $q_\\phi$?\n- How is $D_{online}$ updated in Alg 1?\n- In line 618 of the Appendix you give a different reward function used to speed up training (different from the one in Algorithm 1). What exact reward function was used for training? Was the same reward function used for the GAIL baseline?\n- In Figure 5 (or the Appendix), please include the trajectories for SAC and GAIL (and compare them with QWALE).\n- How sensitive is QWALE to the hyper-parameters involved? A hyper-parameter sweep plot would be useful (since there are no theoretical contributions to strengthen the claims). The authors have adequately addressed most of the limitations (except where mentioned in my review above) and potential negative societal impact of this work.", " The paper introduces the setup of single-life reinforcement learning (SLRL), where the algorithm is given offline data from some environment and then is deployed in a related but slightly different environment; it needs to perform as well as possible in a single trial, leveraging online learning as well as offline data. The agent has access to prior data which comes from a similar but not the same environment, and this data might not be expert data. The authors propose QWALE, Q-Weighted Adversarial Learning, which extends Adversarial Imitation Learning methods to the general setting in which the data might not be optimal for the current environment, by weighting the experiences with a pretrained Q-function.
Strength\n- It introduces and describes single-life reinforcement learning as a specific set of tasks in which an agent has to learn how to recover from mistakes on its own without relying on episodic reset\n- Good comparisons to other methods\nWeaknesses\n- I am not sure how clear it is the importance of SLRL problems. The examples described of a robot exploring another planet or a rescue robot are closely related to the concept of safe reinforcement learning or safe exploration [1], where some constraints have to be respected. Moreover, I would think that similar problems are addressed in classical robotics motion planning literature.\n\n[1] Garcıa, Javier, and Fernando Fernández. \"A comprehensive survey on safe reinforcement learning.\" Journal of Machine Learning Research 16.1 (2015): 1437-1480. - Figure 2 positioning is not good. It’s hard to understand what the point of Figure 2 is until you reach Section 7.\n- Both PointMass and TableTop training/testing environment sound very similar to the test bed environments used in meta learning papers, where the dynamics or the goals change slightly in the environment. For this reason, I would expect that Meta Learning algorithms like MAML would perform well in environments like tabletop, have the authors tried something similar?\n- Figure 5 is never linked, I guess it should be linked somewhere around section 7.3?\n Overall, this paper introduces SLRL and shows how current algorithms are not capable of autonomously recovering in an environment in which they have to reach the goal but are not provided with resets from bad states. It evaluates current algorithms and proposes a new weighted adversarial learning approach to overcome this problem. QWALE is sound and the idea of weighting the examples based on the Q-function trained in the source environment makes sense.\nThe main limitation of this work is that it’s very similar to other approaches like the ICLR 2022 paper Autonomous Reinforcement Learning [2] which introduces a similar setting in which environment resets are rare or not available at all, and I am not sure how useful it is to the community.\n\n[2] Sharma, Archit, et al. \"Autonomous Reinforcement Learning: Formalism and Benchmarking.\" ICLR 2022.", " The paper motivates single life RL setup involves pre-training on prior experience and generalizing another setting with novel states and that it is different from episodic or reset-free RL. As the objective is to perform well in single life on novel setting, the paper proposes to bias the exploration at test time to the known distribution at training time. For this, GAIL with the discriminator weighted with Q values while training on prior data is used. QWALE trained policy generalizes to new initial position of mug in Tabletop organization, wind in pointmass, hurdles in half cheetah and new combination of task in Franka-Kitchen environment. The paper is well-written and structured. The paper discusses the The Q-values from prior dataset allow that generalization to novel unseen scenarios at test time. \n\nHowever, unlike the motivating example of finding water on Mars where the water would not be found at the same place as a desert on Earth, the experiment setup does not discuss if the goal position would be changed, different from that seen in the prior dataset. The goal remains the same in prior data and “novel” test setting, for example, in pointmass environment at (100, 0), in Franka kitchen with both microwave and cabinet closed, etc. 
- SLRL is claimed to be a special case of continual or lifelong learning. SLRL focuses to explore as shown in the prior data and avoids exploring unknowns in the novel setting so that it doesn’t go out of training distribution. How does SLRL method compares to any continual lifelong RL approaches? A comparison could demonstrate scenarios where focussing on the prior data for exploration clearly beneficial (or detrimental) to the performance of the agent.\n- Is a fundamental assumption in SLRL that the reward distribution and state which represent goal should have same distribution as the shown in prior trajectories? It seems that weighting on Q-values from prior data will induce this assumption and limit the possible scenarios in which the policy will generalize to new settings. The authors have discussed the potential limitations and future scope of SLRL. Some aspects of potential negative societal impact of the work should be included, like if SLRL is used in home or space-probing robot, what the scenarios will be that it could not handle.", " The paper introduces the setup of single-life reinforcement learning (SLRL), where the algorithm is given offline data from some environment and then is deployed in a related but slightly different environment; it needs to perform as good as possible in a single trial, leveraging online learning as well as offline data. This differs from both no-reset RL (because we care only about a single trial) and 0-shot generalization in RL (because we can learn along the way).\n\nThen the authors propose QWALE, an algorithm based on GAIL, to tackle the setup of SLRL. They provide an empirical evaluation in 3 different continuous control environments. They compare their approach to several baselines, such as vanilla GAIL, RND, and vanilla SAC. They demonstrate that typically success rate is improved and, more visibly, the speed of reaching the solution is higher.\n Strengths:\n\n[S1] The introduced setup of SLRL seems important, describing a quite natural real-world scenario.\n\n[S2] The paper is well written and easy to follow.\n\n[S3] The empirical evaluation is sound and demonstrates improvements brought by the presented approach (see the Questions section for the discussion about baselines though).\n\nWeaknesses:\n\n[W1] The approach assumes that although there is a shift between source and target environments, the state space is essentially the same. If e.g. visual observations were used and the environment shifted from Earth to Mars, then the discriminator would learn to discriminate based on visual cues, which is not what we want in this approach. I think this is fine, it should be clearly stated as a limitation though.\n\n[W2] The novelty of the method itself is limited. It is based on GAIL and adds a weighting based on the critic network, which is reminiscent of critic filtering in CRR ( https://arxiv.org/abs/2006.15134 ). However, it is used in a novel setup.\n [Q1] I have suggestions for additional baselines:\n- just run SAC in the source environment and evaluate in the target one (without online learning); this is just a sanity check to verify that the shift is challenging enough - the performance should be very poor\n- just train SAC on the target environment, completely from scratch\n- fine-tune SAC, and add a behavioral cloning loss of the offline data as a regularization; this is another way to “anchor” the solution in the prior data. 
I conjecture this should help if only start states are changed (and not the dynamics), and otherwise would probably not work well.\n\n[Q2] I’d suggest providing the details about the algorithm clearly in the main text, including the baseline term b described as “implementation detail” and details of normalization of Q values.\n\n[Q3] I’d suggest citing the CRR work (mentioned in W2, section Weaknesses).\n Please see W1 (section Weaknesses) and address this in the text." ]
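To make the QWALE machinery discussed in this review thread concrete, here is an illustrative sketch of the Q-weighted discriminator update: a GAIL-style binary classifier in which prior-data examples are reweighted by $\exp(Q(s, a) - b)$ and, following the authors' rebuttal, the weights are omitted on the negative (online) examples. This is a hypothetical PyTorch rendition for intuition only, not the authors' actual implementation; `disc`, `q_fn`, the batch format, and the normalization are assumed names and choices.

```python
# Illustrative sketch of a Q-weighted discriminator update (QWALE-style).
# Assumes a pretrained critic `q_fn` and a discriminator `disc` that map
# (state, action) batches to per-example logits; names are hypothetical.
import torch
import torch.nn.functional as F

def discriminator_loss(disc, q_fn, prior_batch, online_batch, b):
    s_p, a_p = prior_batch      # samples from prior data (labeled "real")
    s_o, a_o = online_batch     # samples from the single online trial ("fake")

    with torch.no_grad():
        # Upweight prior examples whose value exceeds the baseline b, so the
        # resulting discriminator reward pulls the agent toward high-Q states.
        w = torch.exp(q_fn(s_p, a_p) - b)
        w = w / w.mean()        # normalize weights for numerical stability

    logits_p = disc(s_p, a_p)
    logits_o = disc(s_o, a_o)

    loss_p = F.binary_cross_entropy_with_logits(
        logits_p, torch.ones_like(logits_p), weight=w)
    # Per the authors' rebuttal, no weighting is applied to negative examples.
    loss_o = F.binary_cross_entropy_with_logits(
        logits_o, torch.zeros_like(logits_o))
    return loss_p + loss_o
```

The policy would then be trained against a shaped reward derived from this discriminator, e.g. a GAIL-style term such as $-\log(1 - \sigma(D(s, a)))$; the exact auxiliary reward used by the authors is described in their Appendix A.1, so the form shown here should be read as a generic stand-in.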
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 3 ]
[ "j44HPvaKBo7", "eKcMAjUBAMu", "G3k2E3nkTzq", "6N3DYltWz0", "KBDgrML8Zqr", "G3k2E3nkTzq", "j44HPvaKBo7", "eKcMAjUBAMu", "G3k2E3nkTzq", "nips_2022_303XqIQ5c_d", "KBDgrML8Zqr", "j44HPvaKBo7", "eKcMAjUBAMu", "nips_2022_303XqIQ5c_d", "nips_2022_303XqIQ5c_d", "nips_2022_303XqIQ5c_d", "nips_2022_303XqIQ5c_d" ]
nips_2022_KblXjniQCHY
Neural Circuit Architectural Priors for Embodied Control
Artificial neural networks for motor control usually adopt generic architectures like fully connected MLPs. While general, these tabula rasa architectures rely on large amounts of experience to learn, are not easily transferable to new bodies, and have internal dynamics that are difficult to interpret. In nature, animals are born with highly structured connectivity in their nervous systems shaped by evolution; this innate circuitry acts synergistically with learning mechanisms to provide inductive biases that enable most animals to function well soon after birth and learn efficiently. Convolutional networks inspired by visual circuitry have encoded useful biases for vision. However, it is unknown the extent to which ANN architectures inspired by neural circuitry can yield useful biases for other AI domains. In this work, we ask what advantages biologically inspired ANN architecture can provide in the domain of motor control. Specifically, we translate C. elegans locomotion circuits into an ANN model controlling a simulated Swimmer agent. On a locomotion task, our architecture achieves good initial performance and asymptotic performance comparable with MLPs, while dramatically improving data efficiency and requiring orders of magnitude fewer parameters. Our architecture is interpretable and transfers to new body designs. An ablation analysis shows that constrained excitation/inhibition is crucial for learning, while weight initialization contributes to good initial performance. Our work demonstrates several advantages of biologically inspired ANN architecture and encourages future work in more complex embodied control.
Accept
This paper introduces the use of neural circuit architectural priors to build controllers for a physically simulated c-elegans-like swimmer implemented in MuJoCo as part of the DeepMind control suite. By leveraging the bio-inspired architectural priors, the controller starts with structured behavior (rather than highly erratic random movements as is commonly the starting point for embodied RL initial behavior). And the architectural prior supports continued learning from this starting point. The work is seen as original, interesting, and quite clear. The work is also nicely self-contained. That said, this paper has received mixed and borderline reviews (6, 4, 4, 6), and there were some concerns about scalability and utility to the AI community. This paper was discussed with the SAC, and we decided that despite some of these legitimate concerns, this paper should be accepted. This paper has clear goals and can help us rethink some of our approaches to architectures. Moreover the potential audience spans both neuroscience and AI. We (the AC and SAC) still highly encourage you to seriously consider comments from the reviewers. From both the positive-leaning and negative-leaning reviewers, there is respect for what was done as a work of modeling, but concerns about whether this constitutes only well-done computational modeling, or if it really amounts to anything that could be useful for AI more generally (and if it could scale to other bodies). You outlined some next steps, and how similar approaches could be used in other scenarios and with more complex bodies; we recommend that you include that discussion in this paper. We'd also strongly encourage you to avoid assertive claims about how neuroscience-inspired ideas can generally improve AI systems, and acknowledge limitations in this case study. While this case study is a provocative first step, the reviewers and AC tend to believe it will prove quite difficult to extend this strategy to more complex bodies. Overall, focusing on what was achieved in this paper, nice work.
train
[ "ImaErFFS6YH", "YUXi2C-uFCL", "1HIvb7CxFy2", "PcLmsQinVGi", "ot84N7AmhFX", "9MkKyMMvqmI", "3bLsGGywgz9", "eyU-kZF_umO", "uAI6X6hm1wo", "lNp1mAfUXhp", "qmyFXQbBUB4", "xx-clmzelKJ" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for trying out the requested experiment. I understand these are tight timelines, but I'm afraid I can't significantly vote for acceptance based on the assumption that very significant changes will be made between now and the camera-ready version. That said, I'll still increase my score (3-->4) to represent the optimism I feel about a potential future version of this work.", " I thank the authors for their response to my review. I am grateful to them for describing what they think the contributions of their paper are beyond the Swimmer task, however, if their goal is to `walk readers through a detailed construction process from neural circuits to ANN architecture`, I think I am inclined to agree with R2's takeaway message when they say that this work could have a greater impact if refactored as a contribution to neuroscience rather than AI. Alternatively, the work would be enhanced if the authors did more a systematic investigation to understand `why small MLPs don't learn (Fig 4B) and whether large MLPs learn small-network solutions embedded within them (Appendix B). We don't have the answers for these questions.` I also had hoped that the authors would answer my comment `Furthermore, it would nice to have been able to see an expanded x-axis for Figure 4A so as to know when NCAP performance first reached the avg reward of 800 (was NCAP achieving this reward from time=0 or time=?)`. As such, I will leave my score as is.", " **Navigate task: \"Why not show experiments with different reward functions so there's still room for learning? Going to a goal location is a particularly simple example: locomotion is necessary, but performance should still be initially low since the goal location isn't part of the behavioral prior. This could showcase your method's ability to learn without significant additional effort.\"**\n\nThanks, this is a great suggestion for how to showcase the effect of learning beyond initial performance. We recently tested this out in preliminary experiments (not reported in paper) where we train on such a \"navigate\" task (the standardized variant in DeepMind Control Suite). We used a lower-level NCAP-Swimmer motor module, controlled by a higher-level MLP task module. As you expected, performance started low then improved as the MLP task module learned. Significantly, algorithms like PPO and OpenAI-ES *weren't able to learn at all using a generic MLP* (i.e. a flat learning curve), presumably because the navigate task provides quasi-sparse rewards that makes it difficult to learn to swim while learning to go to a goal; this is in line with previously reported RL algorithm benchmarks [1] (see \"swimmer-swimmer6\" on pg. 11). In contrast, our architectural prior facilitated better exploration at the start and contributed to faster learning of this hard task.\n\nUnfortunately, given the limited time left in this rebuttal period, we're unable refactor the manuscript to present these results formally; however, we'd be keen to include them in the camera-ready version!\n\n[1] \"Tonic: A deep reinforcement learning library for fast prototyping and benchmarking\", Pardo 2021.\n", " > We respectfully disagree: While the difference that learning makes here is admittedly limited, we don't see this as a weakness per se — in fact, it shows that our \"prior\" is working correctly. In works featuring behavioral priors [1,2], a tabula-rasa architecture (e.g. 
MLP) is trained to imitate demonstrations, then learning is slowed or frozen, and a higher-level policy can control the acquired movement primitives. In works featuring trajectory priors [3], the movement primitives are \"hardcoded\" through equations of motion. In our work featuring architectural priors, movement primitives are initialized and constrained through ANN structure. Much of this structure is indeed \"hardcoded\" with limited plasticity for finetuning. This mirrors the mechanisms of biology: we discuss the worm's circuits in the paper, but quadruped and even human locomotion circuits exhibit a priori structure [4] (honed through evolution) with limited plasticity constrained by the circuit composition. The idea of porting such structure into AI motor control settings is the primary contribution of this work, and we do not think that the limited learning in worm circuits affects that. Engineering prior knowledge/structure into ANNs is important for reinforcement learning and robotics (R1), even if it runs counter to a pure learning approach.\n\nThis would be fine if the motivation for this work was to provide a hand-crafted behavior space for e.g. down-stream hierarchical control, but the paper and experiments aren't written from this perspective. Rather, the primary motivation appears to be that this behavior prior *allows for subsequent learning*. If your experiments only show a ~5% performance increase during training, then that casts significant doubt on the extent of subsequent learning that is possible.\n\nGoing back to one of my unanswered questions, why not show experiments with different reward functions so there's still room for learning? Going to a goal location is a particularly simple example: locomotion is necessary, but performance should still be initially low since the goal location isn't part of the behavioral prior. This could showcase your method's ability to learn without significant additional effort.", " Thank you for your feedback and support! It was valuable to hear your comments from the non-neuro, reinforcement learning point of view, and we are quite pleased that you found the work \"novel and inspiring.\" Our goal in this paper is exactly to build such bridges between neuro and AI, which is one of the reasons we highlight the C. elegans case study and provide extensive neuro background. We hope these responses satisfactorily answer your questions:\n\n**Context within neuro-AI (#1): \"Maybe the authors can comment on the connection between the proposed algorithm and some other neuro-inspired network structures?\"**\n\nSure, the works you referred to in [3, 4, 5] aim to approximate backpropagation in layered networks using biologically plausible learning rules, since the mechanisms needed to compute exact backpropagation in ANNs are considered implausible within the brain. Our Swimmer NCAP architecture doesn't share the same aims, and we trained using both (exact) backprop-based RL algorithms and a backprop-free ES algorithm.\n\n**Scaling to high-dimensional agents (#2): \"Could the authors provide some comments on whether or not the algorithm can be applied to other creatures and other tasks?\"**\n\nYes, we believe that biologically inspired network architecture will be useful for other bodies and movements too! 
Please kindly refer to our response to **R1** about how a Quadruped NCAP could take a similar approach as our Swimmer NCAP.\n\n**Oscillator units (#3): \"[1, 2] are some literature I think is related to the oscillator units discussed in the paper\"**\n\nThanks for pointing to these works. Yes, there have been many works that use oscillators to drive motor control dynamics, especially in classical robotics and trajectory-based priors. In a way, our model can be seen as a hybrid approach that combines neural networks with specialized components like oscillators, but we certainly can't claim novelty to oscillators! However, the core contribution of this work, beyond any specific component, lies in demonstrating how neural circuits can inspire AI motor control architectures.\n", " Thank you for your insights and depth of feedback! We appreciated your comments about the paper's quality and originality. We hope these responses satisfactorily address the raised questions, especially those about broader implications.\n\n**Broader implications (#3): \"What do you think the broader implications of your paper are, beyond the Swimmer task?\"**\n\nThe goal of this paper is to start a conversation about how the fields of neuromechanical modeling and AI motor control can inform each other. As you point out, the connectome of C. elegans has been available for years, but interesting questions related to how understanding neural circuits can inform AI systems have remained under-explored. The broader implications of this paper are fourfold:\n\n1) By demonstrating the advantages that biologically inspired network architecture can provide relative to MLPs (comparable asymptotic performance with significantly improved initial performance, data efficiency, parameter efficiency, interpretability, and transfer), we motivate future work exploring neural-circuit-inspired architectural priors in other AI motor control bodies. As **R1** notes, adding prior structure into ANNs is an important problem for continuous control / robotics, and our work provides a novel perspective on how this can be done. Please kindly refer to our response to **R1** about how a Quadruped NCAP could take a similar approach as our Swimmer NCAP. Indeed, we are pursuing such directions ourselves, but tackling additional bodies is necessarily outside the scope of this current paper. \n\n2) By adopting a formalism that combines the standard discrete-time ML framework with features from computational neuroscience, we demonstrate the value of biologically inspired synapse sign constraints (i.e. excitation vs. inhibition) and special cell types (i.e. intrinsic oscillators). It is not common within ML to incorporate such features within ANNs, but our results encourage further exploration.\n\n3) By systematically investigating the effects of MLP architecture size (Fig 4B) and MLP-NCAP interpolation (Appendix B), we raise interesting questions about MLP training dynamics. In particular, we authors have had spirited discussions about why small MLPs don't learn (Fig 4B) and whether large MLPs learn small-network solutions embedded within them (Appendix B). We don't have the answers for these questions.\n\n4) By walking readers through a detailed construction process from neural circuits to ANN architecture, we hope this paper to be a good pedagogical resource demonstrating how systems neuroscience can guide neural network modeling. We believe that C. 
elegans was an ideal case study to start with, due to the availability of its connectome, the simplicity of its locomotion circuits, and the similarity of its body to simple AI motor control benchmark bodies. Already **R4**, with a primary background in RL, has found this work \"novel and inspiring\"; other AI/ML researchers at NeurIPS may also find this work's perspective to be original and surprising.\n\n**Discussion section: \"The authors could have been more explicit about the limitations of their work\"**\n\nThank you for your suggestion, we agree. We already discussed the asymptotic performance results a bit (lines 265-271), but we have improved the discussion section to mention limitations more explicitly.\n\n**Parameter count: \"Are there not 6 parameters for NCAP?\"**\n\nYes, there are 6 parameters total in our model: 4 motor module (prop, osc, ipsi, contra) and 2 task module (speed, turn) — see Appendix B. In Table 5, we say 4 parameters because we are training only the motor module for the swim task (no turning or speed control necessary).\n\n**Observation space (#1, #2): \"What the state at time t corresponds to?\" / \"Speed is given as an input to NCAP?\"**\n\nYes, you are correct that input to the MLPs is o, q1, q2, …, qN. The NCAP model receives the exact same input. In particular, speed (s) and turn (r, l) control signals are not used when training only the motor module; they are currently only used in the interpretability section, which demonstrates a navigate task.\n\n**Action space: \"I wasn't clear on how you constrain the output of the network to be in [-1, 1] (line 214)\"**\n\nWe compute the joint accelerations by taking the difference between the muscle activations on each side of the body, which we clamped to be each within [0, 1]. We've updated the manuscript to clarify this (line 235).\n\n**Figure labels: \"The x tick labels on Figure 6 are not vertically aligned\"**\n\nWeird, something happened in the pdf conversion step. Thanks for pointing it out, we'll fix it in the final manuscript! ", " Thank you for your feedback. We appreciate your highlighting the paper's unique perspective, clear writing, and thorough ablations. Here are our responses to your questions and concerns:\n\n**Pure learning vs priors: \"It's unclear the extent to which learning matters here. Initial performance looks to be around 800 and final performance is perhaps 850. This is a weakness …\"**\n\nWe respectfully disagree: While the difference that learning makes here is admittedly limited, we don't see this as a weakness per se — in fact, it shows that our \"prior\" is working correctly. In works featuring behavioral priors [1,2], a tabula-rasa architecture (e.g. MLP) is trained to imitate demonstrations, then learning is slowed or frozen, and a higher-level policy can control the acquired movement primitives. In works featuring trajectory priors [3], the movement primitives are \"hardcoded\" through equations of motion. In our work featuring architectural priors, movement primitives are initialized and constrained through ANN structure. Much of this structure is indeed \"hardcoded\" with limited plasticity for finetuning. This mirrors the mechanisms of biology: we discuss the worm's circuits in the paper, but quadruped and even human locomotion circuits exhibit a priori structure [4] (honed through evolution) with limited plasticity constrained by the circuit composition. 
The idea of porting such structure into AI motor control settings is the primary contribution of this work, and we do not think that the limited learning in worm circuits affects that. Engineering prior knowledge/structure into ANNs is important for reinforcement learning and robotics (**R1**), even if it runs counter to a pure learning approach.\n\n**Scalability to other body/tasks: \"Would NCAP work for non-Swimmer environments?\"**\n\nThe Swimmer NCAP example presented here would be useful for worm-like bodies. However, the broader concept of a neural circuit architectural prior is, in principle, applicable to a wide class of bodies. This is because they reflect the actual working solutions used by biological organisms, which have long set the performance standards for AI motor control. Please kindly refer to our response to **R1** about how a Quadruped NCAP could take a similar approach as Swimmer NCAP.\n\n**Parameter counts: \"Why should we care about the parameter counts of small models independent of data efficiency?\"**\n\nThat's a good question. We agree that even the largest model considered here isn't bottlenecked by memory or compute. So as you suggest, there may no real practical reason to care (at least, for this simple case). But perhaps there's a more philosophical reason: Is it satisfying that ~73,000 parameters are needed to learn even this simple swimming task with MLPs? As we show in Figure 4B, MLP performance dramatically deteriorates as the number of parameters is reduced, yet even the smallest MLP tested has 74 parameters, which is an order of magnitude more than the 4 parameters in the Swimmer NCAP. These results suggest that Swimmer NCAP performs well not merely because it has fewer parameters than MLPs, but rather because it captures useful structure about the problem. The reward/params metric mainly serves to underscore this point: that a parameter in the NCAP architecture is in a sense \"worth more\" than one in an MLP. Why don't small MLPs learn via PPO/DDPG/ES for this simple task? What advantages can smarter ANN structure similarly have for more complex tasks? These are questions that future work may care to investigate.\n\n**Transfer results: \"Transfer is stated as not possible for MLPs, but ignores work adapting deep learning methods to do just that\"**\n\nWe should clarify what we meant: We were referring the kind of trivial transfer that exploits modularity of a network architecture. For example, a convolutional neural network, by exploiting its spatial locality, can trivially be applied to images larger or smaller than it was trained on (within limits). Similarly, the Swimmer NCAP, by exploiting its segmented structure, can trivially be applied to bodies longer or shorter than it was trained on (within limits). This type of transfer is not so trivial to do with an MLP, with weight matrices constrained by input/output dimensions. We are familiar with the works you refer to, both of which also decompose their policy in a modular way — through a computational graph [4] or network of message passing modules [5]; the resulting architectures are therefore not MLPs.\n\n[1] \"Neural probabilistic motor primitives for humanoid control\", Merel et al., 2019.\n\n[2] \"Learning agile robotic locomotion skills by imitating animals\", Bin Peng et al., 2020.\n\n[3] \"Controlling legs from locomotion — insights from robotics and neurobiology\", Buschmann et al. 
2015.\n\n[4] \"Weight agnostic neural networks\", Gaier & Ha, 2019.\n\n[5] \"One policy to control them all\", Huang et al., 2020.", " Thank you for your feedback and support! We hope these responses satisfactorily answer your questions:\n\n**Scalability to complex bodies: \"Would it be possible to give some guidelines on how to apply the given design principles to more complex problems such as legged locomotion?\"**\n\nYes, for example: Quadrupeds (e.g. dog, cat, horse, rat) are a common class of bodies in both biology and robotics. The neural circuits controlling gait (walk, bound, gallop) are contained within the spinal cord and act semi-autonomously — producing robust locomotion even when top-down connections from the brain are lesioned [1]. Significant work in the neuromechanical modeling community has focused on quadruped locomotion [2, 3]. These works could be translated to AI motor control bodies as we do in this paper, and a Quadruped NCAP architecture would make use of similar integrators, oscillators, and constrained excitatory/inhibitory connections as Swimmer NCAP. Indeed, quadruped locomotion is something we are pursuing, but it outside the scope of this present conference paper. We see this paper as a case study that uses a simple organism to demonstrate several advantages of ANN architecture inspired by systems neuroscience and to motivate future work bridging the neuromechanical modeling and AI motor control communities.\n\n**Safety: \"Adding structure also provides some bounds to safety during learning\"**\n\nThis is a great point. Simpler, sparser architectural structures may indeed be safer: their interpretability facilitates debugging and more constraints can be engineered. We've added this to the manuscript!\n\n[1] \"Controlling legs from locomotion — insights from robotics and neurobiology\", Buschmann et al. 2015.\n\n[2] \"Computational modeling of spinal circuits controlling limb coordination and gaits in quadrupeds\", Danner et al., 2017. \n\n[3] \"Organization of the mammalian locomotor CPG\", Rybak et al., 2015.", " The paper presents a method to embed priors for motor functions to artificial neural networks. The authors demonstrate a rather simple example of using architecture of C.elegans to control swimmer in mujoco control suite problem. The proposed method has similarities with previous methods such as CPGs, but it has clear differences as the authors explain. Using the architecture proposed by the authors, the controller for the 5 link swimmer is reduced to 4 parameters. The training results show pretty much instant learning curve with converged policies comparable to MLPs trained with Reinforcement Learning or Evolutionary Strategies. The main strength of the paper is to add prior knowledge and structure to artificial neural networks for motor controlling tasks. This is a very important point for reinforcement learning and robotics. It reduces the number of samples required for learning and it introduces interpretability to the network. \n\nAs one would expect, the results show that the same task can be learned with way less parameters in much smaller number of samples. Adding structure also provides some bounds to safety during learning, but the authors did not emphasize this point much.\n\nThe paper is very well written. The concepts and similar works are explained in details in a nice flow. 
It is very easy to follow the main idea while also understanding the details of the method.\n\nIn terms of weaknesses, as the authors acknowledge, they demonstrate the method on a rather simple problem (swimmer), and they do not give a clear guideline for extending it to more complex motor control problems such as legged locomotion. Would it be possible to give some guidelines on how to apply the given design principles to more complex problems such as legged locomotion (or even the 2D walker problem)? The main limitation is the lack of scalability of the method to more complex problems. It would be great to see the authors give different architectures for different problems instead of a single one (swimmer).", " The authors introduce NCAP, a neural network architecture inspired by computational models of actual nematode locomotion. Experiments are done on the \"swimmer\" environment, which bears some relationship to nematode locomotion. They show that when optimized for forward velocity (via RL or ES), NCAP has much better initial performance than conventional architectures, despite having far fewer numerical parameters. Ablations reveal that the sign constraints in particular are crucial for performance. Finally, the authors show that training transfers to bodies with a different number of joints. # Strengths\n\n1) This is a unique perspective. It's rare for work in systems neuroscience to be fully translated into models that can be optimized with methods designed for deep learning.\n\n2) The writing is clear throughout, though more insight into the motivation behind the unit types would be nice. E.g., why are oscillator units preferable to other ways of maintaining internal state (such as LSTMs)?\n\n3) Ablations are thorough, though again more insight into these results would be appreciated.\n\n# Weaknesses\n\n1) It's unclear the extent to which learning matters here. Initial performance looks to be around 800 and final performance is perhaps 850. This is a weakness, because simply hardcoding a solution to a simple locomotion problem is not a significant AI result. I'm not convinced that this'll be useful for anything other than the exact task/body setup you use here.\n\n2) The experimental results are very limited in scope. All of the AI papers mentioned in related works involve techniques being applied across a range of embodiments and tasks. And even within the constraints of the swimmer environment, none of these related works are compared against.\n\n3) Misc. Figure 4 shouldn't use a log x axis (or at least have both versions visible somewhere). Transfer is stated as not possible for MLPs, but ignores work adapting deep learning methods to do just that: \"One policy to control them all: Shared modular policies for agent-agnostic control\". I'm not convinced that parameter count is a meaningful metric in this regime, but even if it were, \"Weight Agnostic Neural Networks\" could likely produce a single-parameter policy class with similar performance properties to NCAP.\n\n# Takeaway\n\nI would strongly encourage resubmitting after more general results are obtained. Alternatively, I think this work could be reframed as a contribution to neuroscience rather than AI, by emphasizing testing of the functional role of various nematode cells, etc., and downplaying the potential utility in AI applications. Though this would still require considerable reworking and would likely be better served by a different publication venue.
1) Would NCAP work when optimizing a different reward function, like reaching a goal location?\n\n2) Would NCAP work for non-swimmer environments? Particularly those with similarly structured action spaces (joint accelerations).\n\n3) Why should we care about the parameter counts of small models independent of data efficiency? Even the largest model considered here isn't bottlenecked by memory or compute, and it's unclear these gains would hold for larger models. In particular, why is the reward/params metric in Figure 5 meaningful?\n\n4) What do the transfer results (Figure 6) look like for a freshly initialized NCAP agent (i.e. no training)? Yes.", " This paper introduces a new neural network architecture, \"NCAP\" (Neural Circuit Architectural Priors), for performing a locomotion task (the Swimmer task from the DeepMind Control Suite). The network is inspired by the locomotion circuits of C. elegans. The authors train both NCAP and various MLP architectures to perform the Swimmer task, and do so using 3 different training methods (PPO, DDPG, and ES, an evolutionary black-box optimization method). The NCAP architecture consistently achieves higher average reward earlier in training compared to the MLP architectures, while having significantly fewer trainable parameters.\n\nStrengths:\n1. Originality: despite the connectome of C. elegans having been available for years, I don't think there are many papers trying to analyze the key components of the anatomy of C. elegans that enable these worms to learn efficiently, as well as the implications of the C. elegans connectome for building AI systems. In this sense, the paper tackles interesting, under-explored questions.\n2. Quality: the paper is relatively well written. The authors are thorough in their review of Related Work, and the figures - especially the early figures that give the overview of their framework and the nematode - are clear and easy to understand.\n\nWeaknesses: \n1. Clarity: Please clarify what the state at time $t$, $s_{t}$, corresponds to on line 240. You give the inputs to NCAP in the equations of Section 3, but please clarify the exact inputs you are giving to the MLP comparison networks. Is $s_{t}$ for the MLP networks just $\{o, q_{1}, q_{2} ,..., q_{N}\}$ or does this also include the speed $s$ and $r$ and $l$? From Appendix B, it looks like you don't give the speed $s$ and $r$ and $l$ to the MLP. \n2. Clarity/quality: It seems strange to me that speed is given as an input to NCAP, given that reward is based on speed. This is especially true if you only give speed as an input to NCAP and not to the MLP networks. To what extent can the early success of NCAP relative to the MLP networks be attributed to the fact that speed is given as an input to NCAP? It would have been nice to see an ablation study where the authors removed the speed input in order to get a sense of how crucial this input is. Furthermore, it would have been nice to see an expanded x-axis for Figure 4A so as to know when NCAP performance first reached the avg reward of 800 (was NCAP achieving this reward from time=0 or time=$10^{4}$?)\n3. Significance: while the results of NCAP on the Swimmer task are impressive, I am not sure of the broader significance of this paper. This is especially true because of the fact that it appears that the asymptotic performance of NCAP is lower than that of some MLP architectures (Figure 4). Are there other tasks where the NCAP architecture can be applied?
Or is the take-away message from this paper that sign constraints on weights in neural networks are important, and enable faster learning? I am excited to hear what the authors have to say on this point: what do you think the broader implications of your paper are - beyond the Swimmer task? \n\nMinor points:\n1. Clarity: Are there not 6 parameters for NCAP? $w_{prop}, w_{speed}, w_{turn}, w_{osc}, w_{ipsi}, w_{contra}$? In Table 5, you say that there are 4 parameters.\n2. Clarity: there are many constrained parameters and outputs for the NCAP network. While you clarify how weights are appropriately constrained to have the desired signs, I wasn't clear on how you constrain the output of the network to be in [-1, 1] (line 214).\n\nExtremely minor point:\n1. Quality (I only include this here because I noticed it when I zoomed into the figure and I would want to correct it if this were my paper): the x tick labels on Figure 6 are not vertically aligned.\n\nMy questions were outlined above, but I repeat these here for completeness:\n\n1. Clarity: Please clarify what the state at time $t$, $s_{t}$, corresponds to on line 240. You give the inputs to NCAP in the equations of Section 3, but please clarify the exact inputs you are giving to the MLP comparison networks. Is $s_{t}$ for the MLP networks just $\{o, q_{1}, q_{2} ,..., q_{N}\}$ or does this also include the speed $s$ and $r$ and $l$? From Appendix B, it looks like you don't give the speed $s$ and $r$ and $l$ to the MLP. \n2. Clarity/quality: It seems strange to me that speed is given as an input to NCAP, given that reward is based on speed. This is especially true if you only give speed as an input to NCAP and not to the MLP networks. To what extent can the early success of NCAP relative to the MLP networks be attributed to the fact that speed is given as an input to NCAP? It would have been nice to see an ablation study where the authors removed the speed input in order to get a sense of how crucial this input is. Furthermore, it would have been nice to see an expanded x-axis for Figure 4A so as to know when NCAP performance first reached the avg reward of 800 (was NCAP achieving this reward from time=0 or time=$10^{4}$?)\n3. Significance: while the results of NCAP on the Swimmer task are impressive, I am not sure of the broader significance of this paper. This is especially true because of the fact that it appears that the asymptotic performance of NCAP is lower than that of some MLP architectures (Figure 4). Are there other tasks where the NCAP architecture can be applied? Or is the take-away message from this paper that sign constraints on weights in neural networks are important, and enable faster learning? I am excited to hear what the authors have to say on this point: what do you think the broader implications of your paper are - beyond the Swimmer task? The Discussion section for this paper was a little lackluster, and the authors could have been more explicit about the limitations of their work. For example, one limitation is that you've only demonstrated the utility of the NCAP architecture on the Swimmer task. Another limitation is that the asymptotic performance of NCAP seems to be lower than that of the MLP architectures (Figure 4). ", " \nNeural Circuit Architectural Priors for Embodied Control\nIn this paper, the authors are inspired by the Caenorhabditis elegans locomotion circuits and design a structurally similar neural network architecture.
This prior structure provides valuable starting motion-control knowledge and enables faster training on the swimmer task from the DeepMind Control Suite.\n \nOriginality: This paper draws a very interesting connection between neuroscience and reinforcement learning. Since I have no previous knowledge in biology or neuroscience, I cannot comment on how novel this idea is. From the reinforcement learning point of view, I think it’s pretty novel and inspiring.\n\nQuality and clarity: The paper is well written. The related work is quite adequate, and the figures in the paper (for example Figure 1 and Figure 2) are intuitive and easy to understand.\nThe experiments are well designed, with every experiment having clear purposes and explanations. \n\nSignificance: How well does it scale to high-dimensional agents? How well does it scale to other tasks? I think these are some of the questions that would greatly affect how significant the proposed algorithm is in the community of reinforcement learning (I cannot comment on the neuroscience part unfortunately).\nWithout experiments on more creatures (for example the ant-like and fish-like creatures in the DeepMind Control Suite), or more experiments on different tasks that aim to show locomotion skills not only in swimming but also in reaching, avoiding, and even 3D swimming, the significance of the algorithm remains unclear.\n 1) From a neuroscience perspective, maybe the authors can comment on the connection between the proposed algorithm and some other neuro-inspired network structures? For example, the feedback alignment networks in [3], [4] and [5].\n2) Back in the “Strengths And Weaknesses”, I mentioned the lack of experiments. Could the authors provide some comments on whether or not the algorithm can be applied to other creatures and other tasks?\n3) The periodic phase generation module, while not exactly the same, was proposed and used in some character animation research such as [1], [2]. These are some works that I think are related to the oscillator units discussed in the paper.\n \nI don’t see any potential negative societal impact in this work.\n\n[1] Van de Panne, Michiel, Ryan Kim, and Eugene Fiume. \"Virtual wind-up toys for animation.\" In Graphics Interface, pp. 208-208. CANADIAN INFORMATION PROCESSING SOCIETY, 1994.\n[2] Holden, Daniel, Taku Komura, and Jun Saito. \"Phase-functioned neural networks for character control.\" ACM Transactions on Graphics (TOG) 36, no. 4 (2017): 1-13.\n[3] Lillicrap, Timothy P., Daniel Cownden, Douglas B. Tweed, and Colin J. Akerman. \"Random synaptic feedback weights support error backpropagation for deep learning.\" Nature Communications 7, no. 1 (2016): 1-10.\n[4] Nøkland, Arild. \"Direct feedback alignment provides learning in deep neural networks.\" Advances in Neural Information Processing Systems 29 (2016).\n[5] Frenkel, Charlotte, Martin Lefebvre, and David Bol. \"Learning without feedback: Fixed random learning signals allow for feedforward training of deep neural networks.\" Frontiers in Neuroscience 15 (2021): 629892.\n\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 3 ]
[ "1HIvb7CxFy2", "9MkKyMMvqmI", "PcLmsQinVGi", "3bLsGGywgz9", "xx-clmzelKJ", "qmyFXQbBUB4", "lNp1mAfUXhp", "uAI6X6hm1wo", "nips_2022_KblXjniQCHY", "nips_2022_KblXjniQCHY", "nips_2022_KblXjniQCHY", "nips_2022_KblXjniQCHY" ]
nips_2022_zUbMHIxszNp
Micro and Macro Level Graph Modeling for Graph Variational Auto-Encoders
Generative models for graph data are an important research topic in machine learning. Graph data comprise two levels that are typically analyzed separately: node-level properties such as the existence of a link between a pair of nodes, and global aggregate graph-level statistics, such as motif counts. This paper proposes a new multi-level framework that jointly models node-level properties and graph-level statistics, as mutually reinforcing sources of information. We introduce a new micro-macro training objective for graph generation that combines node-level and graph-level losses. We utilize the micro-macro objective to improve graph generation with a GraphVAE [41], a well-established model based on graph-level latent variables, that provides fast training and generation time for medium-sized graphs. Our experiments show that adding micro-macro modeling to the GraphVAE model improves graph quality scores up to 2 orders of magnitude on five benchmark datasets, while maintaining the GraphVAE generation speed advantage.
Accept
This paper proposes a new generative model for the generation of graphs. Different from most existing approaches, the proposed method considers both node- and graph-level properties to capture high-order connectivity and overcome the sparsity of any observed graph. The writing is generally clear and the results are convincing. The reviewers are overall positive, with some concerns about the motivation, which were addressed well by the authors in the rebuttal. Some other questions raised by the reviewers were also appropriately addressed, which led some reviewers to increase their scores. The downside of the approach lies in the time complexity of collecting the macro-level statistics. But overall, it is a good paper worth accepting.
train
[ "VH0F61NuNt0", "sKVGOiAFJZy", "yZekHQodo7A", "WD4mLL53cR", "FnkSlUhCQBP", "2O0Lerhi0RJ", "BRaYTS4_Dqt", "HwV6yeHVfqa", "sDr2O_5pHY5", "3g2iLFWAbLb", "AKFpBkPsL4q", "qdKNOG0RoIy", "aj7J4lEPx-N" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for re-evaluating our work and increasing their rating. Below are responses to the follow-up questions.\n\n**Q.** Can you explain why your model performs poorly when you use only one graph statistic? I wonder why the performance improved rapidly when you use all of three statistics. Does this result imply that using more and more graph statistics brings better performance, or combination of those three statistics you used shows specifically good performance?\n\n\nAs the second block of table 5 shows, and as expected, different statistics are more important for different datasets and have different effects. However, as mentioned in lines 574-576 no single graph statistic has the power of all three combined.\n\nThe framework uses a combination of the three graph statistics as default and the empirical experiment shows the improvement ranges up to 2 orders of magnitude improvement in 8 studied datasets with different properties. However as mentioned in lines 168 and 169, the default graph statistics can be extended for specific target statistics. Our experimental result shows that, depending on the dataset, extending the default statistics by adding more graph statistics generally results in better performance, however, it arises additional computational overhead. \n\nWe agree there is an interesting research question around the interaction of different graph statistics that are opened up by the framework. \n In Section 7, Conclusion and future work [lines 331-335], we mention that “micro-macro modeling opens a number of fruitful avenues for future work. i) Investigating which graph statistics are important for generating which types of graphs. This connects with the rich area of graph kernels [34] that are often based on graph statistics. ii) Investigating which graph statistics are important for particular domains.”\n\n\n", " Thank you for your answers on my questions and your additional experiments on our suggestions. Lots of my doubtful points have been fulfilled.\nHowever, when I look thoroughly at the results of target statistics ‘Degree’, ‘Cluster’, and ‘orbit’ of ‘ogbg-molbbbp’ dataset in Table 5 (page 16, section 8.7.), it can be seen that the performance of your model got worse if you use only one of the graph statistics, while your model showed a good performance if you use all of three suggested statistics together.\n\nI have the following two questions regarding this observation.\n1. Can you explain why your model performs poorly when you use only one graph statistic?\n2. I wonder why the performance improved rapidly when you use all of three statistics. Does this result imply that using more and more graph statistics brings better performance, or combination of those three statistics you used shows specifically good performance?\n\nOverall, I really appreciate your detailed and faithful responses.\nI would change my rating to Weak Accept, 6.", " We thank the reviewer for the care and attention devoted to the paper. Below our responses to the questions can be found. Please note that all references and citations refer to the paper. \n\n**Questions:** The proposed MM objective function is applied on GraphVAE for an AB design and its effectiveness is shown for graph generation. Do you think if the benefits of micro-macro modeling would generalize to other models or other graph tasks? Some related discussions in this regard are necessary.\n\nWe thank the reviewer for the attention devoted to the paper. 
Below you will find our responses to the question.\n\n**Part 1.** Do you think the benefits of micro-macro modeling would generalize to other *models*? \n\nThanks for the interesting question. The proposed model is a new multi-scale perspective on graph modelling, not a new GGM architecture. For evaluation we chose a specific GraphVAE. In fact, we expect part of the impact of our paper will be to stimulate research into using MM modelling to improve the performance of many graph architectures, including autoregressive models and GANs (see section 6, lines 310-317, also future work, line 334). \n\n\n**Part 2.** Do you think the benefits of micro-macro modeling would generalize to other graph *tasks*? \n\nWe thank the reviewer for bringing up this interesting research question. We believe exploiting graph statistics can potentially be utilized in many studies of deep graph-level representation learning. Since the encoder of our GraphVAE-MM models is trained to produce a graph embedding, natural downstream tasks are graph classification, graph clustering, and visualization. We have started experiments on graph classification using GraphVAE-MM and are observing improvements. This paper focuses on graph generation, one of the research frontiers of graph representation learning.\n\nWe have discussed and added the possibility of using graph-level statistics in unsupervised graph-level representation learning and its effect on downstream tasks as future work; see lines 335-337 in the revised version. \n", " We thank the reviewer for the care and attention devoted to the paper. Below our responses to the questions and comments can be found. Please note that all references and citations refer to the paper. \n\n**Q1.** As a result of examining the paper or code, it seems that node features or edge features are not generated. In the case of using real-world datasets, it is considered important to generate not only the structure of the graph but also the node/edge features well. Can you suggest an experiment related to this part?\n\n\nThe model can simply be expanded to generate the node and edge features. For example, as in the GraphVAE [41], the decoder generates the node features, X, and edge features, E. The micro-macro (MM) loss would be of the form:\n\n$L_\theta(A, E, X) = L^{0}_{\theta}(A) + \gamma L^{1}_{\theta}(F_1, \dots, F_m) + \Gamma L^{2}_{\theta}(X, E)$\n\nwhere $\Gamma$ is a hyperparameter that balances the feature aspect.\n\nWe agree that attributed graphs are an important topic. As you point out, a strength of the GraphVAE [42] approach is that it accommodates node and edge features. However, this is not true of the auto-regressive baselines [43, 32, 11] and other traditional graph models (going back to Erdős and Rényi) that focus on the problem of learning structural information from graph data. As the auto-regressive baselines are considered SOTA by many (e.g., Hamilton [20]), we wanted to use exactly their setting. In our view, the ease with which GraphVAEs [41] can accommodate node/edge features is another advantage over current auto-regressive methods (in addition to the other GraphVAE [41] strengths we demonstrate in the paper). \nOur paper follows the graph structure learning research track and shows that jointly modeling local- and global-level properties can hugely affect learning the structure of graph data. We have added as a future direction evaluating the impact of micro-macro modeling on graphs with node or edge features; thank you for the suggestion (lines 335-337).
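To make the loss above concrete, here is a minimal PyTorch-style sketch of the micro-macro objective with the optional feature term. This is a hedged illustration only: the function and argument names are ours rather than the released code's API, the KL term of the ELBO is omitted, and the squared-error macro term stands in for the calibrated Gaussian log-likelihood (which it equals up to constants and variances):

```python
import torch
import torch.nn.functional as F

def micro_macro_loss(A_hat, A, stats_hat, stats, feat_hat=None, feat=None,
                     gamma=1.0, big_gamma=1.0):
    # Micro term L^0: edge-level reconstruction of the adjacency matrix.
    # A_hat is a soft (n, n) matrix in [0, 1]; A is the observed 0/1 matrix.
    loss = F.binary_cross_entropy(A_hat, A)
    # Macro term L^1: match each graph-level statistic F_1, ..., F_m; a
    # Gaussian log-likelihood reduces, up to constants and (calibrated)
    # variances, to a squared error per statistic.
    loss = loss + gamma * sum(F.mse_loss(s_hat, s)
                              for s_hat, s in zip(stats_hat, stats))
    # Optional feature term L^2 for attributed graphs (X and E).
    if feat_hat is not None:
        loss = loss + big_gamma * F.mse_loss(feat_hat, feat)
    return loss
```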
**Q2.** As described in the paper, graph statistics are reflected using the calibrated Gaussian framework. There are two questions about this.\n\n**a)** Are the graph statistics used for performance evaluation or reflected in the objective function actually following a Gaussian distribution? For example, if you think about 'degrees', it seems that a skewed distribution is likely to appear due to the nature of the graph dataset. If so, it seems necessary to verify whether 'degrees' can be assumed to follow a Gaussian distribution.\n\nThanks for the question. We agree with the reviewer that graph statistics may have non-Gaussian distributions. However, we should clarify that the Gaussian distribution is used for the conditional distribution $p(F_u|z)$ (the probability of the observed graph statistic given the graph embedding). General VAE theory indicates that the marginal distribution over graph statistics, $\int p(F_u|z)p(z)dz$, can in principle fit any distribution over graph statistics, including skewed ones. Empirically, the MMD metrics show a close match between the observed distribution over graph statistics and the distribution implicitly defined by our trained GraphVAE-MM model, including node degree. Next, we expand on this point in some detail.\n\nVAEs can capture high-dimensional complicated data distributions, and they are widely applied to various data, such as images, videos, audio, and speech. With respect to graph statistics (as opposed to edges), our model behaves like a VAE [26]. The VAE decoder outputs a Gaussian distribution over statistics/feature vectors given a latent variable z. This does not mean that the statistic/feature vector has an unconditional or marginal Gaussian distribution. The data distribution $p_\theta(X)$ is given by the unconditional/marginal distribution $\int p(X|z)p(z)dz$. The VAE can in principle (with a powerful enough decoder) model any input data distribution. Similar to a VAE, GraphVAE-MM can accommodate any distribution over graph statistics with a sufficiently powerful decoder.\n\n\n**b)** Can graph statistics be reflected in other ways without utilizing the calibrated Gaussian framework? Is there any evidence or experiment that supports using the calibrated Gaussian framework?\n\nWe could use a Gaussian VAE framework in the decoder where the variances are treated as hyperparameters (as in the original VAE paper). We obtained good empirical results with hyperparameter search, but it is a time-consuming process and we believe it would deter users from applying our method. Adapting the calibrated Gaussian (with a novel standardization for graph statistics with divergent scales) allows us to avoid hyperparameter search without a loss of performance. \nIt would be fairly simple to condition on graph statistics by using them as part of the input to the encoder (e.g., using the Graph Networks framework [DeepMind]). The difficulty is to generate graph statistics by adding to the decoder the ability to output them. Since we are using a GraphVAE to generate adjacencies, it is natural to use a VAE to generate graph statistics as well. \n\nWe believe that our results about the usefulness of adding graph statistics to a generative model will stimulate future research with other generative frameworks to answer your question. Modelling graphs at two different scales opens a new line of research. In section 6 we discuss micro-macro modelling for other GGM architectures [lines 310-317].
For example, for GANs we suggest that “A way to combine MM modeling with GANs is to augment the input to the discriminator with graph statistics computed for both real and generated graphs.” In this approach there is no explicit probabilistic model over graph statistics, hence no conditional (calibrated) Gaussian.\n\n**Q3.** Figure 1 shows 'connected components' as an example to be improved in this paper. It would be nice to show that the proposed model has improved this part well.\n\nThanks for the suggestion. In our experiments we reported the graph diameter, a new evaluation metric not previously used, which depends strongly on graph connectivity. \nEmpirical results for the graph diameter, as illustrated in Tables 2 and 8.b, show a significant improvement in the MMD between the diameters of the test graphs and the generated graphs when applying the micro-macro modeling of GraphVAE-MM. The improvement ranges up to 2 orders of magnitude in 7 out of 8 studied datasets.\n\n\n**Comment 2.** I think it would be easier to understand if the architecture overview was attached. I suggest adding a picture of the structure for the reader's clear understanding.\n\nThanks for the suggestion; the figure has been added to the paper, see Appendix Figure 4.b in the revised version.\n", " **Limitations:**\nThe positive social impacts presented in this paper include molecular representation and medical discovery. However, since the proposed model shows weak performance on real-world datasets, it is seen as an important limitation.\n\nWe are *puzzled* by this comment. If the comparison is between GraphVAE with and without micro-macro modelling, Table 1 shows a big improvement, *especially on the real-world graphs*. If the comparison is between GraphVAE and auto-regressive baselines, GraphVAE-MM beats the baselines on MMD RBF, the graph fidelity (realism) metric, and is very competitive on F1 PR, the diversity metric. In addition, in our experiments GraphVAE-MM (as well as GraphVAE) is much faster than the auto-regressive baselines both in generation and training time (Figure 3 and Table 9). This is true for all datasets, including the real-world ones. *The other reviews* seem to share our conclusions. Next, we go through the details of our evidence, including some new results following your helpful suggestions.\n \n### The improvement on the real-world datasets (Table 1)\n\n*Ogbg-molbbbp dataset.* Table 1 shows that the micro-macro modeling improvement on Ogbg-molbbbp is substantive and *bigger than* on any other dataset, including *synthetic datasets*. The magnitude of the increase is 39.35 in F1 PR and a 0.18 reduction in MMD RBF. In percentages, the improvement is 72% and 111% in F1 PR and MMD respectively, which indicates significant improvements in both the fidelity (realism) and diversity of generated graphs, achieving a near-perfect score.\n\n*Protein dataset.* The GraphVAE-MM MMD RBF score improves by 70% and achieves 0.03, which is almost the ideal score, i.e., the MMD RBF of a 50/50 split of the test set. Figure 10 in the appendix also supports our contention. The figure visually contrasts Protein graphs generated by GraphVAE and GraphVAE-MM, where GraphVAE-MM matches the complicated patterns in the target graphs best.\n\n### Choice of real graph datasets \nThe paper studies the Protein and ogbg-molbbbp real-world datasets. Previous studies have also used 2 real-world datasets [43, 32, 11].
We replaced the Cloud dataset from these studies with ogbg-molbbbp because it was not feasible to train the GraphRNN baseline on the Cloud dataset. Also, the ogbg-molbbbp dataset is well known in the community from the Open Graph Benchmark [23]. Protein and ogbg-molbbbp are from biology, with information about proteins and molecules respectively. \n\n### New Results Following the Reviewer Suggestions\nWe do agree that more experiments are always better, and especially experiments on real-world datasets. We had used the **QM9** dataset that you mentioned but were not able to train the auto-regressive baselines on it because they do not scale well in the number of graphs; this dataset has more than 130K graphs. The revision now shows the QM9 improvements from micro-macro modeling by comparing GraphVAE vs. GraphVAE-MM (Table 8 in the appendix). \nFor QM9, the MMD RBF of generated graphs improved, in percentage terms, by 15%. Given the already strong performance of GraphVAE on QM9, this is a substantive improvement and provides evidence of micro-macro modeling's effectiveness. Thank you for the suggestion.\n\nWe have added experiments on the real-world **MUTAG** and **PTC** benchmarks [44], and the performance of GraphVAE-MM has held up well (see Tables 8 and 9 in Section 8.10 of the revised appendix; we also updated our anonymous [*GitHub*](https://github.com/ddccbbee/GraphVAE-MM)). On MUTAG and PTC, the improvements from micro-macro modelling are even better than those we reported on the Protein and ogbg-molbbbp datasets. On MUTAG, we see substantive improvements over our strongest baseline, the BiGG auto-regressive method. On PTC, the pattern is similar to that reported in the main body of the paper: substantive improvement in generation quality over all baselines except BiGG, competitive quality, and much faster runtime and generation than the autoregressive baselines. ", " We thank the reviewer for the care and attention devoted to the paper. Below our responses to the comments and questions can be found. Please note that all references and citations refer to the paper. \n\n**Q1.a)** How does the graph generation task benefit from fitting graph statistics? In other words, what is the limitation of only fitting an adjacency matrix in graph generation? \n\nThanks for the important question. The “Motivation” section in the introduction was meant to address the general benefits of fitting graph statistics. We briefly review and expand on our arguments there. \n\n*General Motivation.* We list two key properties: user control and graph realism (lines 30-33). For user control, a user may know which target graph statistics are important in their domain (see also the discussion in ref. 33). In our MM framework, the user only needs to specify the target graph statistics and learning will automatically select graph models that match them (lines 36-37). For example, for a large payment graph recording economic transactions, a macro economist may be mainly interested in the average price level of a target goods basket. For a central bank managing a payment system, the total number of transactions and the maximum payment amount may be more important. Allowing the user to specify target statistics is a way to incorporate their domain knowledge. \n\nAs for graph realism, compared to standard GGM objective functions that are based on predicting individual adjacencies, matching graph statistics serves as a regularizer for latent representations that increases the realism of the generated graph structures (lines 31-32).
*This is because standard objective functions based on adjacency matrices alone weight all edges equally.* However, adjacencies (or non-adjacencies) have different roles in the graph's global structure. Some edges play a critical role in maintaining the connectivity/community structure, while the rest are less important. Figure 1 in the paper illustrates this difference. Graph statistics can reflect the different roles. For example, consider the K-step transition probability, which is utilized in the paper. The K-step transition probability matrix, k>1, encodes the connectivity information of the graph (see also the discussion in lines 192-193). \n\n*Another issue with modelling adjacency matrices only is that they are sparse.* They are highly imbalanced, with most (>90%) entries being 0. By directly matching the adjacency matrix, a model tends to generate an overly sparse graph. To address the sparse graph structure, Kipf and Welling [27] used weighted cross-entropy. However, it has been shown that weighted cross-entropy can result in distortion in measuring the quality of reconstructed data [38]. \nOn the other hand, global graph statistics are generally a scalar or a dense matrix/vector of real values. For example, the number of triangles studied in this paper is a scalar, and as mentioned in line 191, K-step transition probabilities are generally dense matrices. Our experiments show that the MM objective leads GraphVAEs to generate graphs with more realistic densities.\n\n*Empirical Improvement.* Our results show the benefits of adding graph statistics empirically; specifically, Tables 1 and 2, and the extended experiments in the appendix following Reviewer dtRG’s suggestions (Tables 8 and 9). Our experiments show that adding global properties to the GraphVAE, indicated as GraphVAE-MM, improves graph quality scores by up to 2 orders of magnitude on eight benchmark datasets.\n\n**Q1.b)** And what kind of graph statistics should be chosen as targets?\n\n\nThe paper uses three graph statistics as default statistics for applications where the user does not specify target statistics, and to evaluate the general idea of micro-macro modelling. \nAs we discussed in section 4 [lines 170-202], our criteria for choosing graph statistics are as follows. 1) Meaningful and easy to interpret. 2) Differentiable with respect to the entries in a reconstructed soft adjacency matrix. 3) Permutation-equivariance. 4) Known from prior research to be generally important for graph modelling in different domains. \n\nWe agree with the reviewer that investigating the statistics which should be chosen as targets is an interesting research question. In Section 7, Conclusion and future work [331-335], we mention that “micro-macro modeling opens a number of fruitful avenues for future work. i) Investigating which graph statistics are important for generating which types of graphs. This connects with the rich area of graph kernels [34] that are often based on graph statistics. ii) Investigating which graph statistics are important for particular domains.”\n\n", " \n**Q2.)** How to form descriptor functions with respect to the vector label histogram and triangle count? It is not clear how to form descriptor functions with respect to the vector label histogram and triangle count. And how to guarantee the descriptor functions are differentiable.\n\nThanks for the question. Here we clarify the definitions and explain how the descriptor functions' differentiability is guaranteed.
\n\nAs mentioned in line 114, “a descriptor function is the function which maps an adjacency matrix A to an l-dimensional graph statistic”. Next, we go over the default graph statistics and the corresponding descriptor functions used to calculate each of them. \n\n*Triangle count.* As explained in section 4, the number of triangles in a simple graph $A$ is a scalar computed by\n$Tri(A) = \sum_i (A^3)_{ii}$, where $A$ is a soft adjacency matrix with $A_{ij} \in [0, 1]$. Since matrix multiplication and summation are differentiable, so is the descriptor function $Tri(A)$ with respect to $A_{ij}$.\n\n*S-step transition probability kernel.* Similar to the triangle count, the S-step transition probability kernel is calculated by simple matrix multiplication, $P^s(A) = (D(A)^{-1}A)^s$. Since division, summation, and matrix multiplication are differentiable, $P^s(A)$ is also differentiable with respect to $A_{ij}$. Also see lines 186-191.\n \n*Vector Label Histogram (VLH).* As explained in lines 175-185, VLH is calculated by applying a soft histogram function to the degree vector $V$, where $V_i = \sum_j A_{ij}$. \nLearnable and differentiable histogram functions have been used and studied in classification methods with differentiable end-to-end deep architectures; see [\"Learnable Histogram: Statistical Context Features for Deep Neural Networks\"](https://arxiv.org/abs/1804.09398). (A minimal differentiable sketch of these three descriptors is included right after this response, for illustration.)\n\n\n**Weakness W2.** Both calculating and fitting graph statistics bring new computing costs, e.g., the complexity is $O(N^3)$ to compute the transition probability matrix.\n\nThe reviewer correctly pointed out that the computational complexity of graph generation increases when adding graph statistics. One of the contributions of our paper is a set of techniques to achieve fast generation nevertheless (Table 7). We discussed the computational overhead from different aspects:\n\n*Training Overhead vs. Generation Overhead.* The graph statistics only affect the model during training and do not cause any overhead in the generation/test phase.\n \n*Computation Time.* As discussed in section 4, the exploited graph statistics, including the transition matrix, can be calculated in parallel in near-constant time for small and medium-sized graphs.\n \n*GraphVAE-MM is substantially faster than GGM benchmarks.* Despite the computational overhead, compared to the popular benchmark GGMs, GraphVAE-MM has substantially lower training and generation time; see Figure 3 and Table 7. \n*Future Work.* As discussed in section 6, line 307, approximating graph statistics can reduce the computational cost significantly [24, 15, 31, 38] and can be exploited in future studies.\n\nWe also note that edge reconstruction and evaluating the edge reconstruction probability are already expensive, and tend to dominate the graph statistic computations; see Table 10.", " We thank the reviewer for the care and attention devoted to the paper. Below our responses to the question can be found. Please note that all references and citations refer to the paper. \n\n**Questions**. How is the proposed model different from the GAN-based models? \n\n\nThanks for the interesting question.\n\nIn section 6 under “Micro-Macro modeling for other GGM architectures” we discuss other GGM architectures. The general difference to a GraphVAE architecture is that a GAN does not have an encoder component but adds a discriminator. We expand on our discussion of GANs specifically.
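(Before the GAN discussion continues below, here is the promised minimal differentiable sketch of the three descriptor functions defined in the previous response. This is an illustration only: the function names are ours, not the paper's code, and the Gaussian-kernel soft histogram is one plausible instantiation of the soft binning the response refers to.)

```python
import torch

def triangle_count(A):
    # Tri(A) = sum_i (A^3)_{ii}; differentiable in the entries of a soft A.
    return torch.diagonal(torch.linalg.matrix_power(A, 3)).sum()

def transition_kernel(A, s, eps=1e-8):
    # P^s(A) = (D(A)^{-1} A)^s, the s-step transition probability kernel;
    # clamping the degrees avoids division by zero for isolated nodes.
    deg = A.sum(dim=1, keepdim=True).clamp(min=eps)
    return torch.linalg.matrix_power(A / deg, s)

def soft_degree_histogram(A, centers, bandwidth=1.0):
    # Soft histogram of the degree vector V_i = sum_j A_ij: Gaussian
    # kernels around the bin centers replace hard bin assignments,
    # keeping the statistic differentiable in A.
    V = A.sum(dim=1)                            # (n,)
    d = V.unsqueeze(1) - centers.unsqueeze(0)   # (n, n_bins)
    return torch.exp(-0.5 * (d / bandwidth) ** 2).sum(dim=0)
```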
\n\nTo the best of our knowledge, GAN-based GGMs either 1) directly work on the adjacency matrix [8] or 2) mimic the graph by generating random walks [7]. \nApproach 1) adapts GANs and operates directly on graph adjacency matrices. The approach is a likelihood-free generative model in which the generator maps a graph latent to an adjacency matrix, and the discriminator classifies the adjacency matrix as real or synthetic.\nIn approach 2), graphs are represented by generated random walks. The insight behind this approach is that transition counts can capture graph structure. NetGAN and MolGAN [7, 8] are among the popular GAN-based generative models mentioned in section 6 that adopt these approaches. \n\nAs discussed in the paper [lines 21-22] and in [9], both random walks and adjacency matrices are graph local-level information, which means the GAN models are limited to the local aspect of the graph, rather than using global graph-level statistics. The proposed micro-macro model is a new multi-scale perspective on graph modelling. In fact, we expect part of the impact of our paper will be to stimulate research into using MM modelling to improve the performance of many graph architectures, including autoregressive models and GANs (see Section 6). \n\nFor an empirical comparison, we compared the NetGAN method with the proposed model, GraphVAE-MM, in the statistics-based evaluation (Section 5.3, line 256). The table below compares NetGAN and GraphVAE-MM on the lobster and grid graph generation tasks. As shown, the proposed MM approach improves graph structure metrics by 1-4 orders of magnitude compared to NetGAN. \n\n| Method | Lobster Deg. | Lobster Clus. | Lobster Orbit | Lobster Spec. | Grid Deg. | Grid Clus. | Grid Orbit | Grid Spec. |\n|---|---|---|---|---|---|---|---|---|\n| NetGAN [7] | 1.56 | 0.03 | 0.86 | 3.20 | 1.97 | 1.31 | 0.95 | 0.46 |\n| GraphVAE-MM | 2e-4 | 0 | 0.008 | 0.017 | 5e-4 | 0 | 0.001 | 0.014 |\n\nOur experimental design closely followed that of recent SOTA papers in graph generation, which also did not compare with NetGAN. \n \n", " This paper studies the problem of graph generation, and proposes a new model using both micro- and macro-level supervision information in a GraphVAE architecture. Fitting the adjacency matrix is the micro supervision, and three kinds of graph statistics, i.e., degree histogram, number of triangles, and higher-order proximity relations, are adopted as macro supervision. The objective consists of ELBOs modeling the micro-macro loss and a KL-divergence between the prior and the approximate posterior of the hidden representation. The proposed model is validated on 3 synthetic and 2 real-world graph datasets. The experimental results show the proposed model generates graphs with a lower discrepancy between generated and test graph embeddings than graphs generated by competitors in terms of MMD RBF and F1 PR. Strong points:\n\nS1. The macro objective of fitting graph statistics in graph generation is novel to me. \n\nS2. The paper proposes a general micro-macro ELBO as the objective, and then implements the ELBO with graph neural networks.\n\nS3. The experimental results show the proposed model outperforms the competitors.\n\nWeak points:\n\nW1. It is not clear how the graph generation task benefits from fitting graph statistics. In other words, what is the limitation of only fitting the adjacency matrix in graph generation? From this line, I have a concern about what kind of graph statistics should be chosen as targets.\nThis paper selects three graph statistics, but does not present an explanation for this selection. \n\nW2. The efficiency.
Both calculating and fitting graph statistics bring new computing costs, e.g., the complexity is $O(n^3)$ to compute the transition probability matrix.\n\nW3. It is not clear how to form descriptor functions with respect to the vector label histogram and triangle count. And how to guarantee the descriptor functions are differentiable.\n\nQ1. How does the graph generation task benefit from fitting graph statistics? And what kind of graph statistics should be chosen as targets?\n\nQ2. How to form descriptor functions with respect to the vector label histogram and triangle count? Yes.", " The contributions of this paper were to model graph data jointly at two levels: a micro level based on local information and a macro level based on aggregate graph statistics. Positives:\n\n1. The idea of this work is interesting and novel: it tries to use a probabilistic model to explore local and global graph statistics.\n\n2. The performance of this work is very good compared to the existing GraphVAE. And the code is available.\n\nNegative:\n1. The scalability of this work may be a challenge; the complexity of the descriptors is either O(N^2) or O(N^3). Also, the algorithm requires pre-defining graph descriptors to compute the graph statistics.\n\n2. The algorithm part is straightforward. Basically, it designs an MM loss within one unified framework. It seems that many GAN-based models can achieve a similar function. Any discussion?\n 1. How is the proposed model different from the GAN-based models? Yes", " This paper jointly models micro and macro level graph information for graph generation. A principled joint probabilistic model for both levels is proposed and an ELBO training objective is derived for graph encoder-decoder models. Extensive experiments and visualization results validate the efficacy of adding micro-macro modelling to GraphVAE models for graph generation. Strengths:\n1. This paper is well motivated and the idea of utilizing node-level properties and graph-level statistics to constrain graph generation seems reasonable.\n2. The design of the micro-macro (MM) loss is clear and theoretically solid.\n3. The authors have done a thorough analysis of the proposed model and validated its effectiveness through qualitative and quantitative evaluation. The main claims are supported by the experimental results.\n\nWeaknesses:\nMy main concern is that the proposed objective function is only applied on GraphVAE following an AB design. Although the experimental results are satisfactory on graph generation, it remains unclear whether the benefits of micro-macro modeling would generalise to other models. \n The proposed MM objective function is applied on GraphVAE for an AB design and its effectiveness is shown for graph generation. Do you think the benefits of micro-macro modeling would generalise to other models or other graph tasks? Some related discussions in this regard are necessary. The authors have adequately discussed the limitations of their work.", " The authors of this paper present a new function that can reflect graph statistics in the graph generative model. They have shown various experiments and visualizations proving graph statistics are well reflected. In addition, designing a simple objective function that reflects different graph statistics is a significant contribution.
Originality: (Yes) \nThe proposed method seems to be original in that the authors proposed a new but simple VAE-based objective function to reflect graph statistics.\n\nQuality: (Neutral) \nSince the purpose of this study is to generate graphs that reflect graph statistics, theoretical support and experiments for this purpose are well presented. However, the performance on real-world datasets such as the molecule datasets is marginal. In particular, when only one graph statistic is used, the performance degradation is greater than that of GraphVAE, which needs clarification. Since it shows good performance only when all three statistics presented in the paper are used, it is necessary to explain why this combination of three was selected and what synergy they show.\n\nClarity: (Yes) \nThere was no difficulty in understanding what the paper was trying to say, and it shows sufficient proofs of the formulas. I think it would be easier to understand if an architecture overview was attached. I suggest adding a picture of the structure for the reader's clear understanding.\n\nSignificance: (Neutral) \nThis model seems to have particular strengths in experiments using synthetic datasets. In addition, it is a good contribution that it showed a higher performance improvement compared to GraphVAE. However, as discussed in the paper, performance on real-world datasets seems more important for contributing to practical areas such as molecular and medical discovery. However, the experimental results presented in the paper do not support this. Additional experiments will be needed to show that graphs are well generated using the QM9 dataset shown in GraphVAE.\n\n 1. As a result of examining the paper or code, it seems that node features or edge features are not generated. In the case of using real-world datasets, it is considered important to generate not only the structure of the graph but also the node/edge features well. Can you suggest an experiment related to this part? \n\n2. As described in the paper, graph statistics are reflected using the calibrated Gaussian framework. There are two questions about this. \n a) Are the graph statistics used for performance evaluation or reflected in the objective function actually following a Gaussian distribution? For example, if you think about 'degrees', it seems that a skewed distribution is likely to appear due to the nature of the graph dataset. If so, it seems necessary to verify whether 'degrees' can be assumed to follow a Gaussian distribution. \n b) Can graph statistics be reflected in other ways without utilizing the calibrated Gaussian framework? Is there any evidence or experiment that supports using the calibrated Gaussian framework? \n\n3. Figure 1 shows 'connected components' as an example to be improved in this paper. It would be nice to show that the proposed model has improved this part well.\n The positive social impacts presented in this paper include molecular representation and medical discovery. However, since the proposed model shows weak performance on real-world datasets, it is seen as an important limitation." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "sKVGOiAFJZy", "aj7J4lEPx-N", "qdKNOG0RoIy", "aj7J4lEPx-N", "aj7J4lEPx-N", "aj7J4lEPx-N", "3g2iLFWAbLb", "3g2iLFWAbLb", "AKFpBkPsL4q", "nips_2022_zUbMHIxszNp", "nips_2022_zUbMHIxszNp", "nips_2022_zUbMHIxszNp", "nips_2022_zUbMHIxszNp" ]
nips_2022_grzlF-EOxPA
Conformal Frequency Estimation with Sketched Data
A flexible conformal inference method is developed to construct confidence intervals for the frequencies of queried objects in very large data sets, based on a much smaller sketch of those data. The approach is data-adaptive and requires no knowledge of the data distribution or of the details of the sketching algorithm; instead, it constructs provably valid frequentist confidence intervals under the sole assumption of data exchangeability. Although our solution is broadly applicable, this paper focuses on applications involving the count-min sketch algorithm and a non-linear variation thereof. The performance is compared to that of frequentist and Bayesian alternatives through simulations and experiments with data sets of SARS-CoV-2 DNA sequences and classic English literature.
Accept
The paper proposes a method based on conformal inference in order to obtain confidence intervals for the frequencies of queried objects in very large data sets, based on sketched data. The applicability of the method relies solely on the exchangeability assumption for the data, not on the sketching procedure nor on the data distribution, and is therefore very general, as emphasized by all reviewers. The reviewers have done a great job and this should be (and has been) acknowledged by the authors. There have been some objections concerning the applicability of the main assumption (exchangeability), the meaningfulness of the experimental comparison with prior work and the interpretation of the resulting plots, or the amount of theoretical content of the paper. But the post-review discussion appears to have been very active and fruitful. It overall gives me the impression that the authors took the comments very seriously and will improve the manuscript accordingly, and that many objections could be answered by a more appropriate exposition. Given that this paper lies on the edge of the acceptance threshold, this improvement is very important, as the reviewers' concerns (which have some strong overlap) will otherwise probably be shared by the wider audience of readers. This is especially true given the statistics flavor of the paper, which does not target the main NeurIPS audience, implying that an even greater effort has to be put into the presentation. Very detailed answers, which I find meaningful from a layman's perspective, have been provided by the authors, and not all their content will fit in the additional page. There is thus important work of selection and re-writing ahead of the authors before publication.
train
[ "nAvgHpZlJLq", "hQR6degg0FO", "ZnBjGUF5eI", "dQu7kLuCmYm", "ZIfNxxjMQ7", "9Q_R4AiyNWc", "PXGTZflxzI", "ZgaFnabHy2V", "T4D8qGoXfV", "NqFLIxqgctm", "QsDri2zsT-Q", "dPQ2-h_x4K", "X1X0smeWk6f", "Ild25872VD" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer vGGR,\n\nThank you for taking the time to read our rather long response and for continuing the discussion. Please let us clarify that we did not mean to suggest any misunderstanding might be due to any fault on your side. By contrast, we were very pleased to see you read our paper carefully. What we meant to convey with the term \"misunderstanding\" was that some of the concerns you raise (e.g., the delicacy of exchangeability assumptions) are not problems of which we are unaware, or which we want to hide. The \"misunderstandings\" to which we referred are due to our less-than-perfect exposition, which we are however fully prepared to correct in the next round of revision or in the camera-ready version, if the paper is accepted. Specifically, we will clarify the following points (we can call them that if we don't like the word \"misunderstanding\"):\n\n - Our method is not specifically tied to the one-sided conformity scores currently employed. It is completely straightforward to apply our method with existing two-sided conformity scores such as those found in Sesia and Romano (2022), and all our theoretical results will remain equally valid. We will write down explicitly how to compute the two-sided scores and then we will repeat the experiments with these scores. We agree this will be a nice extension, and we are grateful for the suggestion, but it really isn't a big deal to implement it. The technical and conceptual novelty of our paper is the recasting of the sketching problem into conformal inference framework, not the choice of conformity scores.\n\n - The issue of exchangeability is indeed more subtle in the sketching context than in other applications of conformal prediction. We had mentioned this point in some places, but we agree it needs to be emphasized and explained even more clearly. In particular, we will better explain that the frequency-conditional methodology and the frequency-conditional performance metrics presented in our paper are specifically designed to mitigate the exchangeability issues. This is not to say we have completely resolved the exchangeability issues: to the contrary, we are already proposing in the discussion one interesting but more technically challenging way to further reduce our reliance on exchangeability in the future. However, we do not think the exchangeability assumptions are sufficiently problematic to justify discarding conformal inference as a viable and potentially very promising framework for uncertainty estimation with sketched data. To the contrary, exchangeability is a subtle and important topic that needs to be discussed carefully, and which will need to be researched further in this context. We would like to think our paper takes a first meaningful step in that direction, while providing motivation and ideas for further work.\n\n > One has to partition the examples a priori. In other words, one has to decide what is the collection of rare examples for which one wants conditional coverage for before seeing the data. \n\nWell, let's just clarify we only need to decide a priori what range of frequency values qualifies as \"rare\", not which specific examples are rare. That being said, it's true that we cannot achieve the \"full-conditional\" coverage that one would ideally hope for, but this is a fundamentally tough problem. It's not very reasonable, at least not at this point of the methodological development, to demand full conditional coverage in the generality that we work in. 
The only other existing approaches to solve our problem are limited to dealing with specific (linear) sketching algorithms or must operate under an extremely conservative \"adversarial data\" framework.\n\n> I also explicitly stated in the review that the extent of the technical contribution is NOT a major weakness. In the strengths, I actually pointed to the novelty of the construction of the conformity scores. \n\nThank you!\n\n> This was more of an observation about the difficulty of the problem when one is willing to assume a warmup phase - in certain cases of interest, conformal prediction is not even needed.\n\nThank you for pointing this out. We just wanted to say we also thought about it, and it was a distraction on our side not to mention it. We will fix it!\n\nThank you again for the very helpful comments!", "
", " Dear Reviewer vGGR,\n\nThank you again for your careful read and detailed comments about our manuscript. We have taken your feedback very seriously and we have as a result identified several areas in which our exposition could be made clearer. We feel that our paper will improve significantly as a result of your feedback, and we are grateful for that. We were just wondering whether you could kindly let us know whether you are satisfied by our responses, or whether you have any remaining questions which we might be able to address in the final few hours before this author discussion period ends.\n\nSincerely,\nThe authors", " Dear Reviewers HNPK, gvFV, and vGGR,\n\nThank you for reading our paper carefully and for providing many insightful comments. We have answered your questions point-by-point below, and we thank you in advance for taking the time to review our responses.\n\nWe have learnt from this first round of discussion that some parts of our paper could have been explained more clearly. We hope you will give us the opportunity to improve the exposition based on this discussion. We also hope that our responses address your concerns and clarify all possible sources of confusion. Of course, we would be very happy to continue the discussion if you have any remaining/follow-up comments or questions!\n\nThank you!\n\nThe anonymous authors", " Weakness 4. We are not trying to claim this paper introduces ground-breaking theoretical advances, because it does not, but perhaps it is not fair either to say our technical contributions are not substantial enough. There has been a lot of recent activity in conformal inference, while other unrelated papers have started to consider the problem of tightening the error bounds for the CMS by adopting randomized data perspectives; e.g., Ting (2018) and Cai et al. (2018). Yet, our rigorous mathematical formulation of the sketching problem into a conformal prediction framework had not been suggested by anyone else before. While we do build upon existing conformal prediction ideas, this is hardly a standard application of conformal prediction. Clearly, the connection was not sufficiently obvious to spur anyone else before to make it. Another reviewer has praised this novelty: “The paper's biggest strength is the sheer novelty of the combination of conformal prediction and data sketching. To my knowledge, the combination is completely new.” Further, our stronger notion of coverage defined in Equation (8) is also novel. This was inspired by prior work on Mondrian conformal prediction (Vovk et al., 2005), but it is original insofar as it specifically addresses your previous comment about the issue of exchangeability in the sketching problem. We understand that you might have not previously fully appreciated the importance in the context of sketching of the stronger coverage defined in Equation (8), which our method can provably guarantee. This is our fault: we didn’t explain it as well as we should have. However, given that this issue has been clarified, would you re-assess the technical contributions of the paper?\n\nWeakness 5. It is true that our procedure becomes trivial if the element of interest appears in the warmup phase, or if one is interested in a small number of elements, in which case one may just track these elements following the same reasoning used to justify the warmup phase. However, how is this a weakness of our method? There is definitely some weakness in our exposition, as we should have mentioned this explicitly. 
We will do so if given the opportunity to revise. However, this fact indicates a strength of our method, not a weakness: for some queries we can simply get perfectly tight frequency bounds for free. Note that we did not take advantage of this cool fact in our numerical experiments because it would not have seemed 100% fair towards the other methods, which do not have access to the warm-up data. However, if we did take advantage of this, as one should do in practice, we would see that our conformal prediction sets would become even better! Thank you for reminding us to mention this!\n\nWeakness 6. We are not going to argue with you on this final point: our exposition can be improved. You have already pointed out several opportunities for clarification in the above comments, and we are very grateful for that. We are convinced these improvements will make the paper much more broadly accessible, but they do not require major methodological or conceptual changes. Therefore, we are completely confident that we could take care of them before a camera-ready version of the paper is due, if we are given the opportunity to revise our submission.\n\nQuestion 1. Right, it is definitely possible (and easy) to apply our method to construct two-sided intervals. We have already discussed this in our answer to your comment above. In short, the reason why our applications focused on one-sided intervals was to ensure consistency with the classical approach, but we can easily add more implementation details and empirical results for the two-sided case.\n\nQuestion 2. You are right: the coverage should always be nearly tight in theory if the conformity scores have a continuous distribution. The conformity scores we use here are not continuous, and that makes sense because the problem is intrinsically discrete. However, it is also true that we should be able to obtain even shorter valid prediction intervals if we make the conformity scores continuous by adding some randomization; see for example Romano et al. (2020). We did not randomize the conformity scores because we thought it would unnecessarily complicate the notation, but we would be happy to explain that extension in the revised paper if you think it can be useful to do so.\n\nLimitations. We hope our previous comments clarified that indeed our paper does consider quite carefully the subtle issues involved with the data randomness and the exchangeability of the query points, even though our exposition did not clearly reflect it. We will make sure to improve the exposition accordingly if given the opportunity to do so.", " Weakness 3. Again, this comment is also due to some misunderstanding which we can resolve by improving the exposition, and we do not think it points to a true weakness in this paper.\nFirst, the assumption that the test point is exchangeable when combined with the training examples is never an entirely \"benign\" assumption. It is a very useful assumption, which has allowed many successful applications of conformal inference and has not prevented the rapid growth of this field, but the literature is very well aware of the delicacy of exchangeability. A lot of effort has been dedicated to relaxing it as much as possible; see for example Vovk et al. (2005), Tibshirani et al. (2019), Barber et al. (2022). The truth is that we have also taken rigorous and effective measures to mitigate our reliance on such an assumption in this paper, but perhaps we did not explain this important point as well as we could have. 
We are very grateful to have the opportunity to clarify this. \nIn hindsight, we believe the second part of Section 3.1, starting from line 152, should have been explained more carefully and placed into its own sub-section, with a sub-title such as \"Relaxing the exchangeability assumption with frequency-conditional coverage\". Our method is specifically designed to control the notion of frequency-conditional coverage defined in Equation (8). This notion of coverage is stronger than the standard marginal coverage in Equation (7), and it is specifically intended to deal with the fact that it may be preferable not to treat the test point as exchangeable with all the training data. Controlling the stronger coverage in Equation (8), as our method provably does, is essentially equivalent to (partially) relaxing the exchangeability assumption. More precisely, our method allows us to achieve provably valid coverage even conditional on a query being relatively rare. In other words, this means that our coverage guarantee still holds even if some covariate shift occurs in the test set, causing queries which were previously rare in the training data to suddenly become more common (or the other way around). In particular, this is precisely why it is not true that we \"can only construct prediction intervals for randomly chosen items, which is quite restrictive particularly if the dictionary is large.\" That would be accurate if we could only control Equation (7), but we can control Equation (8). In fact, this is a significant relaxation of the general exchangeability assumption which is very intuitive and useful in our frequency estimation problem. We have now realized that the value of this important but somewhat subtle component of our contribution might have been missed by a large audience because it was not explained very carefully. We hope we have answered your question to satisfaction, and that you will give us the chance to incorporate these clarifications into the paper. Of course, this is not to say that we have completely removed the exchangeability assumption. There is more work to do, such as to address the interesting open question mentioned in Section 5. However, we are leaving that to follow-up work because it is a challenging issue by itself.", " Weakness 1. This comment is due to some misunderstanding which we can resolve by improving the exposition, and we do not think it points to a true weakness in this paper. Indeed, our novel methodology is not at all limited to one-sided intervals. All the key methodological components of our paper, in Sections 3.1 and 3.2, are designed to accommodate two-sided intervals; see for example Algorithm 2 (in the appendix) and Theorem 2. The reason why our practical demonstrations, in Section 3.3 and Section 4, focus on one-sided intervals is the same reason why we pay special attention to the CMS and variations thereof: this facilitates the comparison with related prior work, and especially with the classical approach which gives us a probabilistic lower bound and a deterministic upper bound. Further, our conformity scores are a little easier to explain for one-sided intervals (Section 3.3), although the two-sided counterparts would not be much longer to explain because they are already available from Sesia and Romano (2021), for example. That being said, we agree that it would also be interesting to compare the performance of our conformal intervals to that of the Bayesian and bootstrap (Ting, 2018) ones in the two-sided case. 
We would be very happy to add such comparisons to this paper if we are given the opportunity to revise it. Even though we did not have time to add the two-sided comparisons during this rebuttal phase, this is a relatively straightforward change which does not require any additional methodological novelty. Therefore, there is no doubt it would only involve a minor revision. All we need to do is to replace the special conformity scores in Section 3.3 with their more general two-sided counterpart developed by Sesia and Romano (2021), and then re-run the experiments. In hindsight we wish we had already done so in the submitted manuscript, so thank you for bringing this opportunity for improvement to our attention. Finally, note that there is no reason to expect any surprises from these additional two-sided experiments. Our one-sided intervals are already much shorter than the classical ones, and they will become even shorter if we allow them to be two-sided. The Bayesian and bootstrap intervals have no more advantage against ours in the two-sided case than they have in the one-sided case; in fact, there is no substantial difference in how any of these methods operate under the one-sided vs. two-sided framework.\n\nWeakness 2. This comment is also due to some misunderstanding which we can resolve by improving the exposition, and we do not think it points to a true weakness in this paper. \nFirst, it is clear that the 2-gram data (or the k-mer data) would not be exchangeable if we were to process (e.g., sketch and query) the 2-grams in the same order in which they appear in the original natural language document. However, this is not what we do in our experiments (you can verify this by looking at the submitted code) and it is not how we would like practitioners to apply our method. Unfortunately, it appears that we accidentally forgot to explain this point in the paper. We are grateful for the opportunity to clarify. What we do in our 2-gram experiments is to sample i.i.d. 2-grams from the collection of all possible 2-grams in our data set. This data-generation mechanism automatically satisfies the exchangeability assumption, and it would not be unreasonable to approximately follow the same idea in practice. For example, one can always process a data set stored on a hard drive in a random order; this approach may be slower than sequential reading, but it is feasible and, combined with sketching, it still allows one to be memory-efficient. In general, we should make it even more explicit that practitioners should only apply our method to data that are either exchangeable to begin with, or which can be made exchangeable by suitable randomization as we do in our experiments. Of course, randomizing the data is not always feasible (e.g., it cannot be done in the case of an online data stream), but that is why we clearly state that our method is not going to be a panacea for all possible applications.\nSecond, the other existing methods (Bayesian and bootstrap) also assume i.i.d. data, so this limitation is not specific to our paper. The only existing approach that makes no i.i.d. assumptions is the classical one, but in that case the price to pay is that the bounds are always impractically wide.\nThird, we should clarify that our exchangeability assumption can be relaxed insofar as the query points are concerned, but this is discussed in more detail in our next comment.", " Question 3. The interpretation of our conformal confidence intervals satisfying Equation 7 is the following. 
Suppose we were to repeat infinitely many times the multi-step experiment consisting of (1) sampling data from the underlying data-generating distribution, (2) sketching it, (3) querying a random object sampled from the same data-generating distribution, and (4) constructing a conformal prediction interval for the true frequency of the queried object among the sketched data. Then, 90% of those infinitely many conformal prediction intervals would contain the corresponding true frequencies. In other words, the marginal probability (with respect to all randomness in this experiment, except the sketching randomness which can be safely fixed) that the true frequency of a queried object is in its corresponding conformal prediction interval is 90%. This is much weaker than the classical probabilistic guarantee, but we say so very explicitly. The point is that the classical probabilistic guarantee is often so strong as to become very impractical to achieve. \n\nIt is worth pointing out that our method can also produce conformal confidence intervals satisfying a stronger notion of coverage: Equation 8. In the paper, we refer to this notion of coverage as “frequency-conditional” coverage, and we evaluate it in the fashion of Figure 2. “Frequency-conditional” conformal coverage can be interpreted as follows. Suppose we were to repeat infinitely many times the multi-step experiment consisting of (1) sampling data from the underlying data-generating distribution, (2) sketching it, (3) querying a random object sampled from the same data-generating distribution, and (4) constructing a conformal prediction interval for the true frequency of the queried object among the sketched data. Suppose also that we discard the results of all experiments in which the random queried object falls outside of the desired frequency bin. Then, 90% of that infinite subset of remaining conformal prediction intervals would contain the corresponding true frequencies. In other words, the conditional probability that the true frequency of a queried object is in its corresponding conformal prediction interval given that the query is assigned to a specific frequency bin is 90%. This is still weaker than the classical probabilistic guarantee, but it is stronger than the marginal coverage discussed above. \n\nIt is also useful to recall that the guarantee of Ting (2018) is similar to our frequency-conditional coverage with bins of size 1, as mentioned in our paper. Therefore, Ting's guarantee is somewhat stronger than ours in most cases, but ours has the advantage of being applicable beyond the linear CMS. The Bayesian guarantee is a bit different, because it treats the query as fixed but models the data as random with a specific data-generating model. However, if the Bayesian prior is well-specified, and the experiment is repeated many times in the frequentist sense explained above, then the Bayesian intervals will still satisfy our notion of coverage. Our experiments confirm empirically that this is indeed the case, although they also highlight how the Bayesian solution fails if the prior is mis-specified. We will be happy to improve the paper exposition in light of this discussion. Further, the Bayesian approach is also theoretically limited to the linear CMS, at least for now.", " Question 2. As discussed above, the classical method can only make probabilistic statements with respect to the randomness of the hash functions; this inevitably results in extremely wide confidence intervals. 
Unfortunately, there is not much we can do to fix this issue, other than developing the alternative method proposed in this paper. The point of Ting (2018), Cai et al. (2018), and of our paper is precisely this: the classical confidence intervals for sketching are often too wide to be of much practical use. As far as we know, overcoming this limitation of the classical bounds requires taking a "learning from the data" perspective, and so that is what we do. Therefore, in a certain sense it is true that our confidence intervals are not designed to solve quite the same problem as the classical confidence intervals, and therefore the comparison should be interpreted carefully. However, this is hardly a weakness of our paper. If we had not compared our intervals to the classical ones, some reviewer would have almost certainly asked us to do it. In fact, it is very important to compare our intervals to the classical ones, because such a comparison demonstrates very clearly how the classical intervals are often extremely wide. Next, regarding the comparison between our intervals and those of Ting (2018), they are actually quite similar in the case of the linear CMS, as we discuss in the paper. Finally, regarding the comparison between the frequentist intervals vs. the Bayesian ones, this is also a fair comparison because we carry out repeated experiments. In other words, it makes perfect sense to evaluate the performance of a Bayesian method over repeated experiments according to frequentist metrics (average coverage and length). Such a comparison would be more problematic if our paper took a Bayesian perspective and did not repeat multiple experiments, but that is not what we do here. ", " Question 1. We believe your statement that "A coverage guarantee with respect to the sketching randomness feels natural for working with sketched data." can be debated. \n\nTrue, most of the literature on sketching has taken a worst-case view and treats only the sketching function as random. This historical perspective is acknowledged in the introduction, and we do not deny it may sometimes remain preferable. Yet, why should that worst-case view feel generally more natural to machine learning experts? Broadly speaking, one could say the shared goal of machine learning and statistics is to "learn from data" something meaningful about a distribution (e.g., how to make predictions, or how to estimate an unknown parameter). Intuitively, in order to learn one must typically assume at least some sort of randomness in the data, and it is not uncommon to go as far as to assume i.i.d. samples. Then, even more assumptions (e.g., parametric models) tend to be needed for more detailed theoretical studies. Thus, from the perspective of many (or perhaps even most) NeurIPS readers, it should feel counter-intuitive to look at our sketched frequency estimation problem from a worst-case perspective, especially if an alternative view is offered. \nThe results in our paper show that the deterministic approach always leads to very wide intervals, precisely because it cannot learn from data. This limitation was already known; for example, Ting (2018) writes: "Although the Count-Min sketch has been useful for estimating counts, the problem of returning a practical error bound has not been addressed before". That is because the classical bounds are not very practical. 
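Purely as an illustration of how loose the classical guarantee tends to be in practice, here is a self-contained toy sketch in Python (the parameters are arbitrary illustrative choices, and this is not code or a result from our paper) comparing the deterministic CMS error bound with the errors actually observed on a heavy-tailed stream:

```python
# Toy comparison: classical CMS additive bound vs. typical observed errors.
# All constants are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
d, w, n = 5, 200, 100_000               # hash rows, counters per row, stream length
stream = rng.zipf(1.5, size=n) % 5_000  # heavy-tailed stream over 5,000 items

# Simple universal hash family: h_j(x) = ((a_j * x + b_j) mod p) mod w.
p = 2_147_483_647
a = rng.integers(1, p, size=d)
b = rng.integers(0, p, size=d)

def hashes(x):
    return (a * x + b) % p % w

cms = np.zeros((d, w), dtype=np.int64)
true = np.zeros(5_000, dtype=np.int64)
for x in stream:
    cms[np.arange(d), hashes(x)] += 1   # increment one counter per row
    true[x] += 1

items = np.unique(stream)
est = np.array([cms[np.arange(d), hashes(x)].min() for x in items])
err = est - true[items]                 # the count-min estimate never under-counts

# Classical additive bound eps*n with eps = e/w, valid with probability >= 1 - e^(-d).
print("classical additive bound:", np.e * n / w)
print("95% of observed errors are below:", np.percentile(err, 95))
```

In runs of this kind, the classical additive bound typically sits far above the bulk of the realized errors; this gap is exactly what a data-driven approach can exploit.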
Now, it is true that sometimes one has no choice but to take a very pessimistic view (e.g., if the data collection mechanism may be adversarial), and in those cases one should stick with classical sketching. However, this is a machine learning conference, so it makes sense for us to try to learn something from the data. \n\nNext, it is worth repeating that our paper is not the first one to look at sketching from a machine-learning perspective in which the data are treated as random. Two related lines of work precede us. On the one hand, Ting (2018) took a frequentist perspective similar to ours but focused only on the linear CMS. Ting's method is acknowledged in Cormode and Yi's 2020 book "Small summaries for big data". (Note: Graham Cormode is one of the fathers of the CMS algorithm). On the other hand, there is a growing Bayesian line of work which started from Cai et al. in NeurIPS (2018); this work also looks at sketching from a "random data" perspective. The Bayesian approach computes a posterior distribution, but it has the downside of making modeling assumptions which can become problematic if they are mis-specified (Figure 1). The Bayesian approach is also limited to the linear CMS. This is where our method comes in: we can take the random-data perspective of Ting (2018) and Cai et al. (2018) one long step further, by completely removing all modeling assumptions and all restrictions on the form of the sketching algorithm. \n\nFinally, classical deterministic bounds only exist for a few relatively simple sketching algorithms. Even though the applications in this paper focus on the CMS and variations thereof for simplicity, our method is much more general: it can be applied to any arbitrarily complex and possibly unknown sketching algorithm. There is simply no existing alternative which can assess sketching uncertainty in a practical way and under such generality.\nIn conclusion, we agree that the introduction could be improved for clarity. In part, our conciseness was due to the space limitations. Fortunately, we can distill this discussion into the extra page allowed if our paper is accepted. \n\nRegarding your question about applications, we start by referring to Cai et al. (2018). Next, the demonstrations in Sections 4.2-4.3 provide more concrete examples. Our first example is about counting k-mers from DNA data, and the second one is about counting n-grams in text. These are classical sketching applications; see Cormode and Yi's 2020 book. Yet, the potential impacts of our work do not stop there. As mentioned above, we can deal with any sketching algorithm. This flexibility is very valuable because it is plausible that sketching algorithms will become more pervasive and diverse in the future. First, we have lots of "big data" that are expensive to process and transfer; sketching can make them more efficient to deal with. Second, there are privacy and fairness considerations creating incentives to work with different types of "sketched" data instead of raw sensitive data (Melis et al., 2016; Corrigan-Gibbs and Boneh, 2017). The conformal inference ideas discussed in this paper may thus become even more relevant to sketching in the near future, including in the rapidly growing field of federated learning (Li et al., 2019; Rothchild et al., 2020; He et al., 2020).", " Studying theoretically the length of the confidence intervals. This is a good question, but unfortunately it is not an easy one to answer. 
The challenge is that the length of our confidence intervals will generally depend on the unknown data distribution, on the sample size, on the specifics of the sketching algorithm, and on the chosen form of the conformity scores. These are a lot of complex moving parts. In fact, there are so many complex moving parts that it should already feel quite remarkable that we can get rigorous coverage guarantees for our algorithm. That being said, there are several works in the literature which study theoretically the length of conformal prediction intervals in some settings; see for example Lei et al. (2018), Sesia and Candes (2020), or Sesia and Romano (2021). However, those theories require so many additional assumptions that their results are only easy to interpret insofar as they are utilized to compare the relative efficiency of different conformal inference techniques. Therefore, even if we went through the rather time-consuming exercise of carrying out similar theoretical analyses in the context of sketching, it is not so clear what new insight we could possibly gain. At the moment, our method is the only one of its kind in the context of frequency confidence interval estimation from arbitrarily sketched data, and it offers technically different guarantees compared to other types of approach. Thus, we remain convinced that it is more meaningful to compare the length of each method's confidence intervals empirically, at least within the scope of this paper.\n\n\nExchangeability. This is also a very good question and it is related to a similar comment by Reviewer gvFV. In truth, we are already evaluating the performance of our method from a perspective that goes well beyond full exchangeability. For example, Figure 2 summarizes the performance of our confidence intervals separately for rarer and more frequent queries. Stratifying queries by their training population frequency means that we are moving beyond exchangeability. Let us elaborate on that. The left-hand-side of Figure 2 tells us that our confidence intervals would have the desired coverage even if all the test queries involved objects with relatively low frequency in the training set. Therefore, we are already doing at least some of what you are suggesting we should do. Of course, there are also other possible ways in which exchangeability may be violated, but a shift in the training frequency of the test queries is arguably the foremost concern that one should have in the context of sketching. Fortunately, our whole method is precisely designed to deal with that! To highlight that, we would like to refer you back to the second part of Section 3.1, the frequency-conditional notion of coverage in Equation (8), and the details of Algorithm 2. See also our response to a similar comment by Reviewer gvFV for further details on how we are dealing with the limitations of exchangeability. \nAs we also admit in our response to Reviewer gvFV, not everything in our paper was as clearly explained as it could have been. In particular, the subtle but crucial connection between the limitations of the exchangeability assumption and Equation (8) was perhaps not very accessible, and neither was the discussion of our solution for moving (partially) beyond exchangeability by guaranteeing the stronger frequency-conditional coverage defined in Equation (8). 
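To illustrate the binning idea schematically, consider the following self-contained Python sketch (it uses synthetic stand-ins for the sketched counts and a simplified score, not the exact conformity scores nor the finite-sample correction of Section 3.3):

```python
# Schematic frequency-binned conformal calibration on synthetic over-counts.
import numpy as np

rng = np.random.default_rng(1)
m, alpha = 2_000, 0.1
true_f = rng.zipf(1.3, size=2 * m)                   # synthetic "true" frequencies
est = true_f + rng.poisson(5 + 2 * np.sqrt(true_f))  # CMS-style estimates, never below truth

cal, test = slice(0, m), slice(m, 2 * m)             # split: calibration / evaluation
bins = np.digitize(est, [10, 100])                   # bin queries by their observable estimate

lower = np.zeros(m)
for bin_id in np.unique(bins):
    scores = (est - true_f)[cal][bins[cal] == bin_id]  # calibration over-counts in this bin
    if scores.size == 0:
        continue
    q = np.quantile(scores, 1 - alpha, method="higher")  # conservative quantile
    sel = bins[test] == bin_id
    lower[sel] = est[test][sel] - q                  # one-sided conformal lower bound

covered = lower <= true_f[test]                      # the upper end (est) is deterministic here
for bin_id in np.unique(bins[test]):
    sel = bins[test] == bin_id
    print(f"bin {bin_id}: empirical coverage {covered[sel].mean():.3f} over {sel.sum()} queries")
```

The point of the binning is visible in output of this kind: the nominal level is (approximately) maintained within each frequency bin separately, not merely on average over all queries.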
We would be very happy to incorporate these discussions and clarify the exposition if given the opportunity to revise the paper.\n\n\nWhy are our confidence intervals so short in the simulation results? For starters, our intervals are generally much shorter than the classical ones because the latter are extremely conservative, to the point of being often impractical. This is a known issue; for example, Ting (2018) writes: "Although the Count-Min sketch has been useful for estimating counts, the problem of returning a practical error bound has not been addressed before". By contrast, our intervals are not too much shorter than those obtained with the method of Ting (2018) or with the Bayesian approach. Partly, they can be a little shorter because our method can in principle take advantage of the specific data structure induced by any sketching algorithm, while the other two methods are designed for the linear CMS. Therefore, it is intuitive that we can achieve some improvements when we deal with data sketched through more powerful non-linear versions of the CMS. Partly, our intervals can be a little shorter than those of Ting (2018) simply because we guarantee a somewhat weaker form of coverage. As mentioned in the paper, Ting (2018) controls the strongest possible version of our frequency-conditional coverage in Equation (8), while we typically have to work with wider bins (weaker coverage). Of course, our advantage is that we are not theoretically limited to the linear CMS.", " This manuscript develops conformalized sketching: a method using conformal prediction to construct confidence intervals for frequency queries based on sketched data. The method works at any desired level, unlike the standard version of count-min sketch, and also incorporates conservative updates. Theoretical results and simulations (with both synthetic and real-life datasets) are provided which establish validity and showcase the appeal of the proposed method. \n\n\nUPDATE: after reading the author's response, I have increased my score to a 7 (accept). Strengths: the paper is well written and appears to be up to date with the literature. An overview of conformal prediction is provided and the results are self-contained. \n\nWeakness: more theoretical results would be helpful. For example, some theoretical results concerning the length of the confidence intervals, and also performance when the exchangeability assumption does not hold would be helpful.\n Further discussions about the length of the confidence intervals, and also performance when the exchangeability assumption does not hold would be helpful. For example, why are the confidence intervals for conformalized sketching so short in the simulation results? See questions and the weakness comment. Potential negative societal impact not applicable.", " The paper proposes a conformal method that, given a sketching function $\\phi$, computes a confidence interval for the frequency of a random query sampled exchangeably from the data. The paper's biggest strength is the sheer novelty of the combination of conformal prediction and data sketching. To my knowledge, the combination is completely new.\n\nUnfortunately, the method has at least two significant limitations. First, it is unclear whether the type of theoretical guarantee that the method can provide is of interest for working with sketched data since the probability of coverage is with respect to the randomness of the query and not of the sketching. 
Second, for this guarantee to hold, the random queries must be exchangeable. 1. A coverage guarantee with respect to the sketching randomness feels natural for working with sketched data. By contrast, it is still not obvious to me that a coverage guarantee with respect to the draw of data points can be useful in many situations. Could you provide additional examples in which the latter guarantee is of equal or greater interest? The more concrete the examples, the better. Examples from prior applications would be excellent.\n\n2. It looks like the proposed method and the comparison methods all have different theoretical guarantees. For example, for the classical method, the probabilistic statements are made with respect to the randomness of the hash functions, whereas for the proposed method, they are with respect to the sampling of the new query point and the sketched data. (I am not familiar with other comparison methods.) Should we exercise more caution in interpreting the experimental results if this is the case? To what extent are the proposed method and the comparison methods truly comparable?\n\n3. On a related note, how should I interpret these different confidence intervals? I know how to interpret the classical bound when I have an approximate answer based on the sketched data, and I want to relate it to the exact answer based on the un-sketched data. However, I am struggling to find an interpretation for the conformal interval.\n\n4. Minor\n- Figure 1: too mall -> too small\n- The captions for figures and tables could be made more informative. It was a bit annoying to keep flipping back and forth between an earlier figure and a later one to understand what the latter was about. The authors appear to be aware of the limitations of their method (cf. Section 5). However, I think that these limitations deserve a more extensive treatment.", " The authors propose a method for constructing confidence intervals for counts based on approximate counts generated by a sketching algorithm. The proposed method involves keeping track of exact counts for a sparse dictionary of items and computing conformity scores based on disparity between the sketched count and true count. Using these scores, a prediction region is generated for a randomly selected element of the dictionary using conformal prediction. The method is shown to be competitive on synthetic and real data examples. Strengths:\n- The proposed method inherits the flexibility of conformal prediction and thus works for any sketching algorithm and for any performance measure i.e. conformity score. \n- The construction of the ordered pairs to transform the sketching problem into a supervised learning problem amenable to conformal prediction is novel. \n- The fact that it suffices to keep track of $m_0 \\ll m$ counts in order to construct a prediction interval for the test object due to the exchangeability assumption is interesting. \n\nWeaknesses:\n- One of the main arguments that the authors make in favor of conformal prediction over deterministic approaches is that the deterministic approaches are too conservative. Yet, the procedures proposed use a deterministic upper bound. Thus, these prediction intervals are also conservative in the sense that the probability that the true quantity is greater than the upper bound is 0. \n- While it is typically a mild assumption, exchangeability is a nontrivial one for sketching problems. 
In fact, the 2-gram real data example does not appear to be exchangeable since consecutive 2-grams share a word and thus the joint distribution of 2-grams is not invariant under permutation. The authors are instead treating the 2-grams as fixed and are sampling IID from them, but then the interpretation of the conformal prediction guarantee is less clear, as the probability statement includes this sampling variation. In this setup, the prediction interval covers a sampled frequency instead of the true one.\n- Another typically benign feature of conformal prediction that may be problematic in this setting is that the test point is assumed to be exchangeable when combined with the training examples; this means that one can only construct prediction intervals for randomly chosen items, which is quite restrictive particularly if the dictionary is large. \n- The technical contributions of the paper are not substantial. This is not meant to be a major criticism since the combination of simplicity and generality is what makes conformal prediction useful in a wide variety of areas.\n- The procedure becomes trivial if the element of interest appears in the warmup phase or if one is interested in a small number of elements, in which case one may just track these elements following the same reasoning used to justify the warmup phase. \n- While the writing is clear, I believe the exposition can be improved in places. For example, the sketching problem should be first introduced in more generality before the count-min sketch is introduced, since the paper is not solely about CMS or CMS-CU. Also, it is misleading to refer to a constructed prediction set as one that “is as short as possible” on line 162 since optimality of prediction sets is a different topic that is not investigated here.\n - Is there a reason that you did not consider two-sided prediction intervals? One would think that it is possible to simply treat all entries of the hash matrix that the item was mapped to as a covariate and then use a standard regression non-conformity score.\n- Is there a reason that coverage for conformal prediction is (nearly) 100%? When the non-conformity scores are distinct, conformal prediction is supposed to have the upper bound $1- \\alpha + \\frac{1}{n+1}$ on coverage as well. Does randomizing the scores fix this? The authors are forthright about exchangeability being potentially restrictive for sketching problems, but do not discuss the subtle issues with interpreting the sampled items in the real data examples. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4 ]
[ "hQR6degg0FO", "ZnBjGUF5eI", "Ild25872VD", "nips_2022_grzlF-EOxPA", "9Q_R4AiyNWc", "PXGTZflxzI", "Ild25872VD", "T4D8qGoXfV", "NqFLIxqgctm", "X1X0smeWk6f", "dPQ2-h_x4K", "nips_2022_grzlF-EOxPA", "nips_2022_grzlF-EOxPA", "nips_2022_grzlF-EOxPA" ]
nips_2022_fDDTJakJKR7
A Single-timescale Analysis for Stochastic Approximation with Multiple Coupled Sequences
Stochastic approximation (SA) with multiple coupled sequences has found broad applications in machine learning such as bilevel learning and reinforcement learning (RL). In this paper, we study the finite-time convergence of nonlinear SA with multiple coupled sequences. Different from existing multi-timescale analysis, we seek scenarios where a fine-grained analysis can provide a tight performance guarantee for single-timescale multi-sequence SA (STSA). At the heart of our analysis is the smoothness property of the fixed points in multi-sequence SA that holds in many applications. When all sequences have strongly monotone increments, we establish the iteration complexity of $\mathcal{O}(\epsilon^{-1})$ to achieve $\epsilon$-accuracy, which improves the existing $\mathcal{O}(\epsilon^{-1.5})$ complexity for two coupled sequences. When the main sequence does not have a strongly monotone increment, we establish the iteration complexity of $\mathcal{O}(\epsilon^{-2})$. We showcase the power of our result by applying it to stochastic bilevel and compositional optimization problems, as well as RL problems, all of which recover the best known or lead to improvements over their existing guarantees.
Accept
This paper provides convergence analysis for nonlinear stochastic approximation with a "multi-sequence" update structure motivated by applications in reinforcement learning and bilevel learning. When all sequences have strongly monotone increments, the authors provide an iteration complexity of O(\epsilon^{−1}) to achieve \epsilon-accuracy, which improves the existing O(\epsilon^{−1.5}) complexity for two coupled sequences. When the main sequence does not have strongly monotone increments, they establish an iteration complexity of O(\epsilon^{−2}). The reviewers agreed that the techniques in this paper are novel, and that it is well-written. In addition, the paper improves upon existing results when applied to problems in reinforcement learning and bilevel optimization, and hence is likely to have broader impact. However, the reviewers felt that for the final version, the discussion of the smoothness assumption needs to be expanded, and the comparison with prior work needs to be improved.
train
[ "-1AWB3SJd1", "m0UZA7y0CvS", "NsUnMQusuaPe", "LhkKOE_W1mU", "cT_yovly1H4", "8r5g45Feb_g", "eI81sw7wkp-", "rSNw_JkC-U", "K998hbgvd90", "y--yDJCt1mA" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the responses. Overall, I think the writing of the current submission is not sufficiently clear due to the lack of the above important discussions. In my understanding, I still feel that Lipchitz assumption for $y*$ is a kind of stronger assumption, which gives a chance to directly get a single-timescale algorithm without modifying the algorithm. But obtaining a single-timescale algorithm by exploring such a stronger assumption is still an interesting contribution. Thus, I would like to keep my score unchanged.", " I find satisfactory by authors' response. I'd like to raise the score to 6.", " Thank you for the supportive comments! Our response to your questions follows.\n\n**1. I agree that the monotonicity assumption (Assumption 4 and Assumption 5) is standard in many existing work. But I am not clear how this assumption is connected to concave or strongly concave function. Are they just different name?**\n\nThanks for raising this question. Assumptions 4 and 5 are slightly weaker than the strong-concavity/convexity conditions. Let us explain. \n\nA function $h(x)$ is strongly-monotone on **a point $x'$** if there exists a constant $\\mu>0$ such that $$\n\\langle h(x)-h(x'),x-x' \\rangle \\leq -\\mu\\|x-x'\\|^2,~\\forall x.\n$$\n\nOn the other hand, as an implication of the strong-concavity of $g$, we have \n$$\\langle \\nabla g(x)-\\nabla g(x'),x-x' \\rangle \\leq -\\mu\\|x-x'\\|^2,~\\forall x,x'$$ holds for some $\\mu > 0$. Then if $h(x)=\\nabla g(x)$, we see that $h(x)$ is strongly-monotone on **any point $x'$**. While in Assumption 4, given any $y^{n-1}$, we only assume $h^n(y^{n-1},y^n)$ (here $y^{n}$ is the variable) to be strongly-monotone on the optimal point $y^{n,*}(y^{n-1})$. This assumption is weaker than assuming $h^n(y^{n-1},y^n)$ to be strongly-monotone on any point $y^n$. Therefore, if $\\nabla_{y^n} g^n (y^{n-1},y^n)=h^n (y^{n-1},y^n)$, Assumption 4 is weaker than the strong-concavity of $g^n (y^{n-1},\\cdot)$.\n\nThanks again for summarizing our paper and recognizing the merits of our paper!", " Thank you for appreciating our work! Our response to your questions follows.\n\n**1. If the smoothness assumption does not hold, can the existing complexity of SA be improved using the analysis in this paper?**\n\nIn the current analysis, the smoothness of lower level optimal solution is essential in the decomposing of the lower level drift term in (51), which is the key enabler for the refined bounds thereafter. Without this condition, an improved rate may be not possible.\n\n**2. In addition to stochastic bilevel and compositional optimization, what other new special class of stochastic optimization problems can benefit from this generic results?**\n\nAnother potential application is the min-max problem in the form of $\\min_x\\max_y f(x,y)$. Using stochastic gradient descent-ascent method will result in an update scheme in the form of STSA with two sequences, where $x_k$ is the main sequence and $y_k$ is the follower sequence. Under certain assumptions on $f(x,y)$ (e.g., smoothness of $f$, strong-concavity of $f$ with respect to $y$), we can cast the stochastic min-max method as a special case of the STSA. \n\n**3. It would be interesting to discuss and highlight in the main paper how the new proof improves the existing analysis. Now they are hidden in the supplementary document.**\n\nBy exploiting the smoothness of $y^*$, we decompose the optimality drift term (e.g., (51) and (79)) into two terms. 
Through the subsequent refined analysis, we show that the two terms are decreasing without relying on the decay of $\\alpha_k$. This allows $\\alpha_k$ to be in the same time-scale as $\\beta_{k,n}$ instead of being in a faster decaying time-scale, therefore improving the convergence rate. We agree that it is a good idea to highlight the crucial steps and will do so in the revision. \n\nThanks again for the support and recognizing the contribution of our work!", " We thank the reviewer for the support and the careful review. Our response to your comments follows.\n\n**1. I think Assumption 3 is a little bit weird because the LHS is a random variable while the RHS is a non-random term and without a high probability guarantee.**\n\nIndeed, we want to clarify that Assumption 3 needs to hold for any possible LHS. \n\n**And I think it is not the generalized version of Assumption 2.1 in [1] because the second-order condition in Assumption 3 can induce the moment assumption in [1].**\n\nWe are sorry for the confusion. We wish to make a correction here: Assumption 3 is not directly comparable with [Assumption 2.1, 1] since [Assumption 2.1, 1] assumes the independence between the noise variables across iterations. Assumption 3 is a generalization of the bias and variance assumption in stochastic programming [Assumption A1, 2] or the noise assumption in single-sequence SA [Assumption A4, 3] to the multi-sequence case. In particular, it is easy to check that [Assumption A1, 2] implies Assumption 3 by setting $\\xi_k=\\nabla f(x_k)-G_k$ and $c_0=0$. When applying STSA to the stochastic optimization problems (see, e.g., Lemma 8), the conditional independence between the samples of different levels along with the standard small bias and bounded variance assumption in optimization will imply Assumption 3. We have rephrased the justification of Assumption 3 in the revision.\n \n```\n[1] V. Konda and J. Tsitsiklis. Convergence rate of linear two-time-scale stochastic approximation.\n[2] S. Ghadimi and G. Lan. Stochastic first-and zeroth-order methods for nonconvex stochastic programming.\n[3] B. Karimi, B. Miasojedow, E. Moulines, and H. Wai. Non-asymptotic analysis of biased stochastic approximation scheme.\n```\n\n**2. I would appreciate it if the author could provide an example of a weakly dependent sequence as Assumption 3 describes.** \n\nConsider a two-sequence SA. At each iteration, given $x_k$, let $z^\\prime_k$ be a random variable with distribution $P(\\cdot|x_k)$, and for $t\\in \\mathbb{N}_0$, let $z_{t,k}$ be a random variable with distribution $P_t(\\cdot|x_k)$. Suppose the sequences are generated with\n\\begin{align}\nv(x_k,y_k)&= \\mathbb{E}_{z_{t,k}}\\big[\\sum_{t=0}^\\infty \\gamma^t f(x_k,y_k;z_{t,k})\\big],~\\xi_k=\\sum_{t=0}^H \\gamma^t f(x_k,y_k;z_{t,k})-\\mathbb{E}_{z_{t,k}}\\big[\\sum_{t=0}^\\infty \\gamma^t f(x_k,y_k;z_{t,k})\\big],\\nonumber\\\\\\\\\nh(x_k,y_k)&= \\mathbb{E}_{z_k^\\prime}[g(x_k,y_k;z_k^\\prime)], ~\\psi_k = g(x_k,y_k;z_k^\\prime)-\\mathbb{E}_{z_k^\\prime}[g(x_k,y_k;z_k^\\prime)]\n\\end{align}\nwhere $f$ and $g$ are bounded functions ($C_f =\\sup \\Vert f \\Vert$), $H \\in \\mathbb{N}$ and $\\gamma \\in (0,1)$. 
Suppose $z_{t,k}$ is conditionally independent of $z^\\prime_k$ given $\\mathcal{F}_k$. Then we have\n\\begin{align}\n \\Vert \\mathbb{E}[\\xi_k | \\mathcal{F}_k^1] \\Vert= \\Vert \\mathbb{E}[\\xi_k | \\mathcal{F}_k] \\Vert=\\big \\Vert \\mathbb{E}_{z_{t,k}}\\big[\\sum_{t=H+1}^\\infty \\gamma^t f(x_k,y_k;z_{t,k})\\big]\\big \\Vert \\leq \\sum_{t=H+1}^\\infty \\gamma^t C_f \\leq \\frac{\\gamma^{H+1} C_f}{1-\\gamma}.\n\\end{align}\nGiven $\\alpha_k < 1$, setting $H=\\lceil \\log_{\\gamma} \\alpha_k \\rceil$ leads to $c_0=\\frac{\\gamma C_f}{1-\\gamma}$ in Assumption 3. It is easy to check that $c_1=0$. Lastly, $\\sigma_0,\\sigma_1$ exist by the boundedness of $f,g$.\n\n**3. Is $y^n$ arbitrary in Assumption 4? If so, please use more precise language.**\n\nYes, $y^n$ is arbitrary in Assumption 4. We will make this clear in the revision. \n\n**4. Line 169, it is not $\\mathcal{O}(K^{-2})$ but $\\mathcal{O}(\\epsilon^{-2})$ or $\\mathcal{O}(K^{-1/2})$.**\n\nYes, it should be $\\mathcal{O}(\\epsilon^{-2})$ in line 169. Thank you for the correction!\n\nWe very much appreciate your time and effort in reviewing our paper. After reading our rebuttal, we hope you can re-evaluate the merits of our paper.", " We thank the reviewer for the support and the careful review. Our response to your comments follows.\n\n\n**1. I think this paper lacks sufficient discussions and comparisons on the assumptions, which makes the technical part of this paper not sufficiently clear.** \n\nThanks for your insightful comments and precise understanding of the technical details! Yes, we agree that more discussion on the assumptions will improve the clarity further. In the final version (with one additional page), we will add more discussion on the assumptions, and the detailed comparison with [1][2]. For now, we will respond to your questions below.\n\n**1a). $\\xi$ and $\\psi$ are mutually independent, otherwise, the second equation in (57) does not hold. In [1], they do not have this assumption. I think this mutual independence should be highlighted as a formal assumption and a comparison with the existing work is needed as well.** \n\nThe second equation in (57) requires Assumption 3 to hold. In general, the mutual independence between $\\xi_k$ and $\\psi_k$ is a sufficient but not necessary condition for Assumption 3. We agree that highlighting the mutual independence of $\\xi_k$ and $\\psi_k$, although it makes Assumption 3 slightly more restrictive, could be clearer and easier to understand for the audience. We can make this explicit in the revision if all the reviewers agree on this.\n\n**Can we still get a single-timescale algorithm when $\\xi$ and $\\psi$ are not mutually independent?**\n\nUnder the current framework, the small bias condition, that is, the upper bound condition on $\\|\\mathbb{E}[\\xi_k|\\mathcal{F}_k^1]\\|$ and $\\|\\mathbb{E}[\\psi_k^n|\\mathcal{F}_k^{n+1}]\\|$ in Assumption 3, enables the fast decrease of the LHS of (59). When bounding the LHS of (59), this condition allows us to treat $\\xi_k$ as a bias upper bounded by a decreasing sequence by Assumption 3, as shown in the proof. If the small bias assumption does not hold, that is, we do not have the bound for $\\mathbb{E}[\\xi_k|\\mathcal{F}_k^1]$, then $\\xi_k$ needs to be bounded as a variance instead. In the latter case, a faster decreasing step size $\\alpha_k$ is then needed to compensate for the non-decreasing variance, which results in slower convergence. 
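To make the single-timescale update scheme concrete, below is a toy two-sequence instance in Python (illustrative only: the quadratic maps, noise levels, and step sizes are made-up choices rather than the settings analyzed in the paper):

```python
# Toy single-timescale SA with two coupled sequences (illustrative constants).
import numpy as np

rng = np.random.default_rng(2)
A, C = 0.5, 0.3        # inner fixed point y*(x) = A*x is smooth; C couples y into v
x, y = 5.0, -3.0       # arbitrary initialization
K = 20_000

for k in range(1, K + 1):
    step = 0.5 / np.sqrt(k)                 # single timescale: alpha_k = beta_k
    xi, psi = rng.normal(0.0, 0.1, size=2)  # zero-mean noise on both increments
    v = x + C * y                           # main increment; v(x, y*(x)) = (1 + C*A) * x
    h = y - A * x                           # strongly monotone in y, with root at y*(x)
    x, y = x - step * (v + xi), y - step * (h + psi)

print(f"x_K = {x:.4f} (fixed point 0); y_K - A*x_K = {y - A * x:.4f} (tracking error)")
```

Running this, both sequences converge with the two step sizes kept on the same timescale; the smoothness of the map y*(x) = A*x is what lets the follower sequence track its moving target without a faster-decaying inner step size (and note the two noise draws above are sampled independently, in line with the small bias discussion).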
Nevertheless, it is very interesting to think about how the independence condition can be relaxed. \n\n\n**1b). Existing works, e.g., [1] [2], only need a Lipschitz assumption for $y^\\ast$, while this work makes a relatively stronger assumption that the gradient of $y^\\ast$ exists. Such a comparison is also necessary in the paper to clarify the major difference in the assumptions and the reason why we can obtain a single-timescale algorithm.**\n\n\nFrom our understanding, as is shown in Lemma B.4 of [1], the momentum update reduces the noise of the stochastic gradient, thus enabling a single time-scale algorithm. In [2], a large batch size of $\\Theta(\\epsilon^{-2})$ is used to ensure a small variance, which enables the constant step size. In this work, by contrast, we do not assume a variance reduction effect. Our analysis relies on the additional smoothness assumption on $y^*$. In particular, the smoothness condition enables the decomposition of the optimality drift terms (e.g., (51) and (79)) into two terms. As shown in the fine-grained analysis thereafter, as the iterates converge, the two terms decrease quickly without relying on the decay of $\\alpha_k$, allowing $\\alpha_k$ to be in the same time-scale as $\\beta_{k,n}$ rather than decreasing in a faster time-scale. In the final revision, we will make a formal comparison with [1,2], while highlighting crucial steps in the analysis that lead to the theoretical merits.\n \n```\n[1] S. Qiu, Z. Yang, X. Wei, J. Ye, and Z. Wang. Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning.\n[2] T. Lin, C. Jin, and M. Jordan. On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems. \n```\n\n**2. In this paper, the authors did not provide limitations of this work.**\n\nWe have added the limitation in the checklist and copied it below.\n\n> This work only considers the strongly-monotone increments for the follower sequences, but it will be interesting to also consider the monotone increments with a non-unique fixed point $y^*$. Right now the generic result in this work can only be applied to unconstrained stochastic optimization, while it will also be interesting to consider whether it is possible to establish a similar result that is applicable to constrained stochastic optimization problems. We will add discussion on the limitations in the revision.\n\nThank you again for the careful review. We hope these resolve your remaining concerns. ", " This paper studies the finite-time convergence of nonlinear SA with multiple coupled sequences. Different from existing multi-timescale analyses, the authors seek scenarios where a fine-grained analysis can provide a tight performance guarantee for single-timescale multi-sequence SA (STSA). When all sequences have strongly monotone increments, they establish the iteration complexity of O(eps^{−1}) to achieve eps-accuracy, which improves the existing O(eps^{−1.5}) complexity for two coupled sequences. When the main sequence does not have strongly monotone increments, they establish the iteration complexity of O(eps^{−2}). Strengths:\n1. This paper provides an improved rate for the case where all sequences have strongly monotone increments in single-timescale multi-sequence SA.\n2. This paper further provides an analysis for single-timescale multi-sequence SA when the main sequence does not have strongly monotone increments. \n3. The theoretical analysis is novel, which makes a significant contribution to the related area.\n\nWeaknesses:\n\n1. 
I think this paper lacks sufficient discussions and comparisons on the assumptions, which makes the technical part of this paper not sufficiently clear:\n\nThere is another work studying single-timescale minimax optimization [1], which is a special case of the single-timescale two-sequence SA when the main sequence does not have strongly monotone increments. In their Algorithm 1, they show that a momentum updating step is needed in order to obtain a single-timescale algorithm. However, this paper does not need the momentum updates. In my understanding, the main technique for the proof to get rid of momentum is in (57) and (59). And (57) and (59) depend on two important assumptions: \n\na) \\xi and \\psi are mutually independent. Otherwise, the second equation in (57) does not hold. This means we need to sample twice independently in the application of the algorithm. In [1], they do not have this assumption. I think this mutual independence should be highlighted as a formal assumption and a comparison with the existing work is needed as well.\n\nb) Existing works, e.g., [1] [2], only need a Lipschitz assumption for y*, while this work makes a relatively stronger assumption that the gradient of y* exists. Such a comparison is also necessary in the paper to clarify the major difference in the assumptions and the reason why we can obtain a single-timescale algorithm.\n\n[1] Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning\n[2] On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems\n\n2. According to the submission instructions, the authors need to complete the checklist after the reference section. In this paper, the authors did not provide limitations of this work. 1. Can we still get a single-timescale algorithm when \\xi and \\psi are not mutually independent? According to the submission instructions, the authors need to complete the checklist after the reference section. In this paper, the authors did not provide limitations of this work. I think the authors can further discuss the limitations of this theoretical analysis in their submission. ", " This work proves convergence guarantees for SA with two coupled sequences, extends them to the multi-sequence setting, and the resulting bounds seem tight. Besides, this paper also applies its general results to SBO and SCO problems and shows the improvement of its results. As I'm not an expert in the optimization area, I'm only able to give some common issues in this paper. \n\nQuestions:\n1. I think Assumption 3 is a little bit weird because the LHS is a random variable while the RHS is a non-random term and without a high probability guarantee. And I think it is not the generalized version of Assumption 2.1 in [30] because the second-order condition in Assumption 3 can induce the moment assumption in [30].\n2. I would appreciate it if the author could provide an example of a weakly dependent sequence as Assumption 3 describes.\n3. Is $y^n$ arbitrary in Assumption 4? If so, please use more precise language.\n4. Line 169, it is not $\\mathcal{O}(K^{-2})$ but $\\mathcal{O}(\\epsilon^{-2})$ or $\\mathcal{O}(K^{-1/2})$.\n\nIn conclusion, in my own opinion, I think this paper provides a solid result. But I will consider changing my score after reading other reviewers' comments and authors' responses.\n see Strengths And Weaknesses see Strengths And Weaknesses", " The paper studies the finite-time convergence of nonlinear stochastic approximation (SA) with multiple coupled sequences. 
While there are few work on analyzing the performance of SA in this setting, existing analysis all adopt a multi-timescale analysis in the sense that the stepsizes for all the sequences decay at the different rates, which leads to a convergence rate that is slower than that of the single-sequence SA. The focus of this paper is on finding settings where it suffices to using single-timescale updates for multi-sequence SA, leading to the same convergence rate as that of the classic single-sequence SA. The implications of the new results on applications to bilevel, compositional and reinforcement learning have been discussed. Strengths:\n1.\tThis paper presents the first $k^{-1}$ and $k^{-1/2}$ convergence rates for nonlinear SA under two settings: a) all sequences have strongly monotone increments; and b) all but the main sequence have strongly monotone increments. These results have been then extended to multiple coupled sequences with the same iteration complexity, which is also new.\n2.\tThe paper presents a unified and new perspective of understanding the recent theoretical advances in stochastic bilevel, compositional optimization and reinforcement learning. Applying the new results of nonlinear SA with multiple coupled sequences to those special cases lead to either improved iteration/sample complexity or relaxed assumptions. \n3.\tThe nice thing about this framework is that the iteration/sample complexity of new SA algorithms developed in these applications can be established by just verifying the assumptions in this paper. If those assumptions are verified, they automatically enjoy the same convergence rate as single-sequence stochastic approximation or SGD. \n4.\tThe simulation, although very simple, gets the key point of this paper – there is a gap between the existing theory and the actual performance of nonlinear SA. \n\nWeaknesses:\n1.\tThe new iteration complexity results for nonlinear SA hold under the smoothness assumption on the fixed point. While the paper has justified it in several applications, it does not improve the existing complexity of SA without this smoothness assumption. \n2.\tThe paper can do a better job on discussing and highlighting in the main paper how the new proof techniques improve the existing analysis.\n 1. If the smoothness assumption does not hold, can the existing complexity of SA be improved using the analysis in this paper?\n2. In addition to stochastic bilevel and compositional optimization, what other new special class of stochastic optimization problems can benefit from this generic results? \n3. It would be interesting to discuss and highlight in the main paper how the new proof improves the existing analysis. Now they are hidden in the supplementary document. \n NA", " This paper fomulates the stochastic approximation of multiple coupled sequences and builds the non-asymptotic convergence rate for both strongly monotone and non-monotone case. Multiple applications are introduced after providing the theoretical convergence guarantee, including bilevel optimization and compositional optimization. ***Strengths***\n* This work is sufficiently complete and well-written. It gives a clear definition of the multiple sequences stochastic approximation and builds the convergence rate for both of strongly monotone and non-monotone cases. Also, those derived bounds have matches the best known results. \n\n* It is nice to see there is a unifying framework that can include stochastic bilevel optimization and stochastic compositional optimization. 
I am not sure if this paper is the first work doing so, but this paper reveals the significance of studying the stochastic approximation with multiple coupled sequences. \n\n* The sngle-time-scale learning rate scheduling can be better than the standard two-time-scale one. This result is really new.\n\n***Weaknesses***\n* I have not found any major weakness of this work. * I agree that the monotonicity assumption (Assumption 4 and Assumption 5) is standard in many existing work. But I am not clear how this assumption is connected to concave or strongly concave function. Are they just different name? Since this work is purely theoretical, there is no any negative impact." ]
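To make the recursion under discussion concrete, the exchange above concerns coupled stochastic-approximation updates of roughly the following form. This is a schematic reconstructed from the rebuttal's notation ($\alpha_k$ for the main sequence, $\beta_{k,n}$ for the followers, noises $\xi$ and $\psi$); the exact signs and operator names are my guesses, not taken from the paper.

```latex
% Schematic multi-sequence SA updates (notation inferred from the rebuttal).
% Main sequence x_k; follower sequences y_k^n tracking fixed points y^{n,*}(x).
\begin{align*}
  x_{k+1}   &= x_k - \alpha_k \, v\big(x_k, y_k^1, \dots, y_k^N; \xi_k\big),\\
  y_{k+1}^n &= y_k^n - \beta_{k,n} \, h^n\big(x_k, y_k^n; \psi_k^n\big),
  \qquad n = 1, \dots, N.
\end{align*}
% "Single-timescale" means \alpha_k = \Theta(\beta_{k,n}) for all n
% (e.g., both \Theta(1/k) in the strongly monotone case), instead of
% \alpha_k / \beta_{k,n} -> 0 as in multi-timescale analyses.
```

The debate above about Lemma B.4 of [1] versus the smoothness of $y^*$ is precisely about which conditions make such equal stepsizes admissible.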
[ -1, -1, -1, -1, -1, -1, 6, 6, 8, 8 ]
[ -1, -1, -1, -1, -1, -1, 4, 1, 5, 3 ]
[ "8r5g45Feb_g", "cT_yovly1H4", "y--yDJCt1mA", "K998hbgvd90", "rSNw_JkC-U", "eI81sw7wkp-", "nips_2022_fDDTJakJKR7", "nips_2022_fDDTJakJKR7", "nips_2022_fDDTJakJKR7", "nips_2022_fDDTJakJKR7" ]
nips_2022_Sxk8Bse3RKO
Reconstructing Training Data From Trained Neural Networks
Understanding to what extent neural networks memorize training data is an intriguing question with practical and theoretical implications. In this paper we show that in some cases a significant fraction of the training data can in fact be reconstructed from the parameters of a trained neural network classifier. We propose a novel reconstruction scheme that stems from recent theoretical results about the implicit bias in training neural networks with gradient-based methods. To the best of our knowledge, our results are the first to show that reconstructing a large portion of the actual training samples from a trained neural network classifier is generally possible. This has negative implications on privacy, as it can be used as an attack for revealing sensitive training data. We demonstrate our method for binary MLP classifiers on a few standard computer vision datasets.
Accept
This paper proposes a new algorithm to reconstruct a subset of training examples from a trained homogeneous binary classification neural network. Although there are still some limitations, such as the zero-training-loss and homogeneity assumptions, as well as limited experiments beyond MLPs, the reviewers also acknowledge that this result is very interesting and reveals an important property of deep neural networks that could potentially have far-reaching implications for privacy and security.
train
[ "ucD4TqIqVWh", "9XXT7psdBh", "3HeztmcDzNH", "63EXePafW12", "yOGfW7ISlDM", "Y__-glNWavz", "bbZp3CXgreN", "h9fJEFxx75J", "7eCYGUJ9cPA", "P6-vzallHhk" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I'd like to thank the authors for their answers and I have no further questions. While I understand the concerns of other reviewers, I'm weighting the conceptual contribution stronger than the practical limitations such that I am keeping my score. ", " Thanks for your elaboration. I updated my rating after reading your response and the other reviewers. I am still sceptical as layed out in the initial review, and the response was only partially helpful to eliviate the concerns.", " We thank the reviewer for the feedback, but would like to point to a few factual errors in the review:\n\n1) “The paper proposes a method and demonstrates its effectivenss on 2 layer MLPs”:\nIn our experiments we use up to 5 layer MLPs, please see Figure 4.\n\n2) “trivial reconstructions (e.g. just visualizing weights) give already good results” : \nWe devote Section 5.4 and Appendix C.2 specifically to show that this is *not* the case. In particular, we visualize all the weights of the first layer (in Figures 14 and 15, as well as pages 23-24 in the Appendix), and it is clear from those figures and from Figure 5 that this method has far worse reconstruction performance than ours.\n\n3) “The authors use only 500 samples. If they used just one sample per class, it would be quite likely that it resembles a class-average, i.e. that sample.”: \nFirst, we train up to 1000 samples, not 500 (see Figure 4 top right). Second, in our settings there are many samples per class, therefore reconstructing class averages would not be meaningful. One sample per class, as suggested by the reviewer, is a different and possibly much simpler setting than what is shown in our experiments.\n\nRegarding the CIFAR-10 test set, indeed it contains 10,000 samples. However, we consider binary classification of vehicles vs. animals (see line 193). Since the vehicles class contains only 4/10 of the total classes in CIFAR-10, there are only 4000 samples from the original test set for the vehicles class. Since we evaluate using a balanced test set, we only use 4000 samples for the animals class (even though there are a total of 6000 images in the original CIFAR-10 test set for this class). This sums to a total of 8000 samples in our test set.\n", " We thank the reviewer for the feedback and the comments.\n\nRegarding extension to CNNs: We emphasize that Theorem 3.1 works as-is on convolutional neural networks, hence our reconstruction scheme should (in theory) be able to work on them. \nIn this paper we focused on fully-connected networks, and we agree that generalizing our results to CNNs is a very interesting future research direction.\n\nRegarding training with differential privacy guarantees: Training a model using a differentially private algorithm gives very strong privacy guarantees against any data reconstruction attack, hence it should protect from our reconstruction scheme. We note that such algorithms inject noise to the training (usually to the gradient updates) which hurts the performance of the model. In particular, Theorem 3.1 and the KKT conditions will no longer work under this regime, hence we cannot expect our scheme to work. We also note that the noise from using mini-batch SGD is not enough to prevent our reconstruction scheme, as shown in Appendix C.\n", " We thank the reviewer for the comments and for the positive feedback. To answer your questions:\n\n1) Thanks for pointing this out, this is in fact a typo. 
The inputs are represented in [-1,1] (and not [0,1] as is erroneously written), and the prior bounds are [-1,1] respectively. This will be fixed (both in lines 213-214 and line 280).\n\n2) There is no particular reason, this penalty seemed to work well in practice. In addition to the suggestion made by the reviewer, it is also possible to optimize over $\\lambda^2$ instead of $\\lambda$, and thus enforce those variables to be non-negative.\n\n3) This is indeed a typo and will be fixed, it should be $\\lambda_i=0$ if $y_i\\Phi(\\tilde{\\theta},x) \\neq 1$. We will also change line 159 as the reviewer suggested.\n", " We thank the reviewer for the thorough review and the comments.\n\nExtension to CNNs: We emphasize that Theorem 3.1 works as-is on convolutional neural networks, hence our reconstruction scheme should (in theory) be able to work on them. \nIn this paper we focus on fully-connected networks. Generalizing our results to CNNs is a very interesting future research direction.\n\nExtension to skip connections: We note that although the theory is limited to homogeneous networks, in practice our reconstruction scheme may work on non-homogeneous networks as well. In particular, Figure 4 bottom right shows that we are able to reconstruct CIFAR images even for a non-homogeneous network. We hope that extending our scheme to skip connections should also be possible.\n", " The authors in this paper address a challenging, and interesting problem in the current deep neural network literature. Though it is widely believed that neural networks essentially memorize, thereby overfit to training samples, it is usually not very clear how one might be able to achieve this. This paper proposes a relatively clean and straightforward formulation to compute the training data points. The authors rely on the results of [Ji and Telgarsky, 2020] to frame the problem as discussed in Equations (2) - (5). The technique hinges on the fact that at the training data points, a linear combination of the derivatives of the network parameters can predict the parameters themselves. A very interesting observation in my opinion.\n Strengths : \n\n1. Clean formulation of the problem in terms of reconstruction loss to find the training points.\n2. The approach produces convincing results in some of the standard computer vision datasets like MNIST and CIFAR10.\n\nWeakness : \n\n1. This work cannot be extended to CNNs with skip connections easily due to the homogeneity assumption. Which limits the test error on CIFAR10 due to the above limitation is 71%. \n \nI wonder if the authors have suggestions, for a possible way to extend the current work to networks with skip connections ?\n Not general enough to be applicable to Resnet networks.", " This paper proposes a new method for reconstructing parts of the training data from a given trained homogeneous binary classification network without any further prior knowledge. It relies on a theoretical result by Lyu and Li as well as Ji and Telgarsky, who showed that the gradient flow of the training loss of such a network converges in direction to parameters that solve a maximum margin problem. The optimality condition to the latter is used to phrase an optimization problem for recovering training data on the margin. The resulting approach is tested in various contexts for a 2d toy example, a binary MNIST image classification, and a binary CIFAR-10 classification, and demonstrates significant abilities to recover some parts of the training data. 
To my mind this is a really nice paper, because it exploited a theoretical result to propose a completely new method for reconstructing the data a network has been trained on, which yields significantly better results than competing model inversion schemes. I agree with the authors that a significant amount of training data can be reconstructed surprisingly well. While the investigated network architectures as well as the allowed settings (requiring all training examples to be classified correctly such that the loss converges to zero) remain limited, I still believe the results to be very significant as they lead to several further questions, including why the optimization problems works well at all, how many training data points typically are on the margin, and if training data points that are not on the margin are protected from possible recovery attacks as well as possible further characterizations for stochastic gradient descent, other training methods, multi-class classification network, or regression networks. It reveals interesting properties of the trained weights which is why I consider this to be a strong paper. \n\nIn terms of weaknesses, the paper is of course limited to a specific setting, which is, however, understandable considering it is the first work in this direction. Besides this, I only have some minor points that I am phrasing as questions below. - The training images to be reconstructed are initialized from a zero mean Gaussian distribution, i.e. roughly half of the values being negative besides an explicit regularization term the tries to enforce a [0,1] box constraint. This seems contradictory. Is it not possible to initialize with a uniform distribution in [0,1]^d?\n- Constraints (such as lambda \\geq 0 or the abovementioned box constraint) are implemented as penalties instead of running projected gradient descent (i.e. enforcing hard constraints explicitly). Is there a specific reason?\n- To be very picky I suggest to write (5) as $\\lambda_i (y_i \\phi(\\tilde{\\theta};x_i)-1) = 0$ because strictly speaking I do not think $\\lambda_i$ has to be greater than 0 if $y_i \\phi(\\tilde{\\theta};x_i)=1$. And in line 159 I suggest not to use the word \"converge\" in \"we converge to some point $\\theta$\" because the training setting is designed such that no minimizer exists (the norm of $\\theta$ should slowly go to infinity and it can only converge in direction). Yes, there are several limitations in terms of the investigated network architectures and a number of open research questions, but the authors have adequately addressed them. ", " This paper gives a principled and novel scheme for reconstructing training set image only use parameters of a network. The approach is elegant and principled and the results are convincing. Strengths: Principled and elegant approach, with convincing experimental results\n\nWeakness: One thing I am wondering about is other more complicated network architectures, especially those that have been used in practice today. For example, how about we apply this to CNN? What will happen? Note that even CNN should be regarded as a \"baby version\".\n\nSo to this end, I do have some concerns about the experiments regarding effectiveness of the approach on state-of-the-art neural networks 1. Being curious, have the authors considered other network architectures? (well I feel it is natural to consider them, and maybe the author have already tried the algorithm on these networks?) If not, why?\n\n2. 
If a network is trained with differential privacy (there are plenty work study differentially private neural network training), how well would this approach work? Note that, it seems to me that, successful reconstruction seems to be in contradiction with DP Potential negative societal impact because the adversaries can reconstruct training data just from network parameters. But this is for good reasons because it means this phenomenon should receive attention and one should defense mechanisms", " The paper deals with an interesting problem. Reconstructing training data from weights. Mostly, it has been attempted to reconstruct data based on logit outputs only. The paper has far reaching impact on privacy. The paper proposes a method and demonstrates its effectivenss on layer MLPs. + The problem is highly relevant.\n+ Showing that training data can be accurately reconstructed from weights would be ground breaking and have far reaching practical implications with respect to privacy.\n+ The optimization procedure is interesting with some nice tweeks. \n- Unfortunately, and this is a key aspect, the paper falls short in evaluation. Evaluating on a practical network and a practical training setup (in terms of amount of data) is essential for the method to have far reaching impact and also be convincing. Since for simple architectures as those used in the paper trivial reconstructions (e.g. just visualizing weights) give already good results, the additional value of the method seems limited and it is also not clear whether it works at all in a practical setup - see detailed comments.\n\nDetailed comments:\n- Evaluation should be on a reasonable architecture: not just a two layer MLP, e.g., a CNN like ResNet or VGG. The reason why I am saying this, is for linear networks, it is known that weights might resemble class averages/ prototypes - the authors state this themselves (Figure 5b) - another reference showing this in a different context is, e.g., https://arxiv.org/abs/2202.08299. So it appears that 2 layer MLPs encode samples / class prototypes. The authors somewhat confirm this in Figure 5b, though admittedly their reconstructions are clearly superior to showing weights only. However, it is not clear, whether the approach would work on deep networks, i.e., is the optimization only good for fine-tuning already good prototypes? Would it fail (completely) if they did not exist? Where failure could be that only a very small fraction (I am talking 1/10^6 or so) of reconstructed images are actual training data. The authors themselves state that not only optimization attempts are successful and given that classifiers are subject to adversarial samples, the confidence in the generalizability of the findings to more realistic networks is very low. \n- Evaluation should use a larger training dataset: The authors use only 500 samples. If they used just one sample per class, it would be quite likely that it resembles a class-average, i.e. that sample. It is not obvious whether their method works if trained on 5000 or 50000 samples.\n- \"not an actual examples\" -> not actual samples\n- we use the original test sets of MNIST/CIFAR10 with 10000/8000 images respectively, and labeled accordingl -> Cifar10 has 10000 test samples, does not it? Could you add the needed evaluation (see above) or explain why the method works well in a practical setup? I recommend that the author have a dedicated paragraph or subsection named \"Limitations\", where they discuss them." ]
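For readers trying to connect the discussion of Equations (2)-(5) above to an implementation, here is a minimal sketch of the kind of KKT-stationarity objective being described. All names and hyperparameter choices are illustrative guesses (including the $\lambda^2$ reparameterization mentioned in the rebuttal and the $[-1,1]$ box penalty); this is not the authors' code.

```python
import torch

def reconstruction_objective(model, x, lam, y):
    """Stationarity residual of the max-margin KKT conditions.

    x: (m, d) candidate training points, lam: (m,) multipliers,
    y: (m,) labels in {-1, +1}; model maps (m, d) -> (m, 1) margins.
    """
    params = list(model.parameters())
    # By linearity of the gradient, grad_theta of sum_i lam_i^2 y_i phi(x_i)
    # equals sum_i lam_i^2 y_i grad_theta phi(x_i); squaring lam keeps the
    # multipliers non-negative (one of the two options in the rebuttal).
    weighted_margins = (lam ** 2 * y * model(x).squeeze(-1)).sum()
    grads = torch.autograd.grad(weighted_margins, params, create_graph=True)
    stationarity = sum(((p.detach() - g) ** 2).sum() for p, g in zip(params, grads))
    # Soft box prior keeping reconstructed inputs in [-1, 1] (per the rebuttal).
    box_penalty = (torch.relu(x - 1) + torch.relu(-1 - x)).sum()
    return stationarity + box_penalty
```

Typical usage would freeze the trained model, initialize `x` from a Gaussian and `lam` near zero, and run gradient descent on `(x, lam)` only — exactly the kind of loop the reviews probe with their questions about penalties versus projections.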
[ -1, -1, -1, -1, -1, -1, 7, 8, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "yOGfW7ISlDM", "3HeztmcDzNH", "P6-vzallHhk", "7eCYGUJ9cPA", "h9fJEFxx75J", "bbZp3CXgreN", "nips_2022_Sxk8Bse3RKO", "nips_2022_Sxk8Bse3RKO", "nips_2022_Sxk8Bse3RKO", "nips_2022_Sxk8Bse3RKO" ]
nips_2022_36Yz37cEN_Q
Redeeming intrinsic rewards via constrained policy optimization
State-of-the-art reinforcement learning (RL) algorithms typically use random sampling (e.g., $\epsilon$-greedy) for exploration, but this method fails in hard exploration tasks like Montezuma's Revenge. To address the challenge of exploration, prior works incentivize the agent to visit novel states using an exploration bonus (also called intrinsic rewards), which led to excellent results on some hard exploration tasks. However, recent studies show that on many other tasks intrinsic rewards can bias policy optimization, leading to poor performance compared to optimizing only the environment reward. The low performance results from the agent seeking intrinsic rewards and performing unnecessary exploration even when sufficient environment reward is provided. This inconsistency in performance across tasks prevents widespread use of intrinsic rewards with RL algorithms. We propose a principled constrained policy optimization procedure to eliminate the detrimental effects of intrinsic rewards while preserving their merits when applicable. Our method automatically tunes the importance of intrinsic reward: it suppresses intrinsic rewards when they are not needed and increases them when exploration is required. The end result is a superior exploration algorithm that does not require manual tuning to balance intrinsic rewards against environment rewards. Experimental results across 61 Atari games validate our claim.
Accept
Balancing between extrinsic rewards and intrinsic rewards is an important challenge for exploration in RL. This paper proposes a simple yet effective way to automatically adjust the balance between them. The large-scale empirical result across 61 Atari games shows a strong improvement over the baseline approaches. All of the reviewers agreed that the proposed method is novel, and the empirical results are convincing. The reviewers had no major concern about the paper. Thus, I recommend accepting this paper.
train
[ "UMz5Vj70Qdn", "tATNbHr971", "73vPV5OxzK1", "5SGXaYZIb2_T", "NNp0Mv5M4s_", "Q-j0W2KkmeG", "bpPQQl6sbLj", "4WnFu4IOfSd" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your response.\n\nI must have missed those explanations in the paper. Indeed it sufficiently covers my concerns.\nOverall I liked this work, very intuitive and simple scheme.", " I thank the authors for their response and for clarifying my question. ", " We’re happy to hear that you enjoyed the breadth of the empirical results, and found the core concept and methodology compelling.\n\n> I thought the abstract was a little weak. Some may not be familiar with \"intrinsic rewards,\" so maybe highlight the link with reward sparsity. Maybe include a few more details about your approach beyond just \"constrained policy optimization\". Very minor, but each word in your title should be capitalized except for \"via\".\n> \n\n**Answer:** Thanks for pointing it out. We appreciate your feedback concerning the abstract. We have updated the abstract with an explanation of intrinsic rewards and reward sparsity, and more details on our approach. We have capitalized each word in the title as well. We are happy to modify the abstract further based on the reviewer’s feedback. \n\n> I don't feel like anything in this work is particularly ground breaking, but rather an elegant solution to a specific problem. That said, I do appreciate the breadth of evaluation and it will be interesting to see this approach applied to more real-world scenarios.\n> \n\n**Answer:**\n\nWe are glad the reviewer finds that our method provides an elegant solution to a problem. While we agree that we have demonstrated results in the specific context of balancing intrinsic rewards against extrinsic rewards, we would like to position our contribution in context of its broader significance: \n\nExploration-exploitation is among the core challenges in RL. Prior to our work, when measured across Atari games, the standard exploration strategy for “on-policy” policy gradient algorithms was to randomly sample actions from a gaussian or boltzmann policy $\\pi(a|s)$. While intrinsic reward formulations like RND and ICM improved performance on a few sparse reward games, when measured on average across all ATARI games, they were not found to be better than naive random sampling strategies (i.e., they improved performance on some tasks and led to worse performance on others). Due to inconsistent performance across tasks and the need for extensive tuning of intrinsic against extrinsic reward, methods like RND/ICM have not found widespread use by RL practitioners. To the best of our knowledge, EIPO provides the first demonstration of a more sophisticated exploration bonus (i.e, RND) outperforming a naive exploration strategy without the need for manual tuning. As such, we believe it has the potential of widespread adoption as a core-component of on-policy policy gradient algorithms such as PPO. At the same time, we agree with the reviewer that it would be interesting to see the performance of our method on real-world tasks. \n\nAnother way to look at our work is: A major problem in RL is reward design. Practitioners often spend a substantial amount of time tuning the relative weighting of different reward terms. Intrinsic rewards can be seen as a “special case” of reward shaping where EIPO has been demonstrated to be successful. Our results suggest that EIPO may provide an elegant solution to the more general problem of reward shaping — something we are very excited to explore in the future. 
\n\n> I would have liked to see more discussion around the limitations of this approach, or possible applications where this may not be appropriate.\n> \n\n**Answer:** Thanks for asking about the limitations. In our view these are: \n\n- EIPO is dependent on a good choice of an intrinsic reward function (such as RND). We show that our method performs well using two state-of-the-art intrinsic reward functions: RND and ICM (see Appendix A.8). However neither of these intrinsic reward functions have a theoretical justification, nor are they guaranteed to fully explore the state space. In such cases, EIPO wouldn’t be able to overcome the inherent limitation of the intrinsic reward function itself. Coming up with “good” intrinsic reward functions remains an open challenge. Furthermore, EIPO makes the assumption that the policy $\\pi_{E+I}$ is close to $\\pi_{E}$ in the optimization, which we were able to successfully leverage with state-of-the-art intrinsic rewards. Although unlikely, it’s possible that with some intrinsic reward, the assumptions made by EIPO are violated.\n- While we have presented strong empirical results, we are still working on a theoretical analysis of convergence of our algorithm. We believe such an analysis will further boost confidence that our method can scale to more complex real-world tasks.\n\nWe have included a note of the limitations of EIPO in the revised version of our paper.\n\nIn summary, based on empirical evidence so far, we believe EIPO along with RND, can be used instead of PPO with naive exploration strategies such as $\\epsilon$-greedy, wherever practitioners use PPO today.", " We’re pleased to find that the reviewer found the empirical results to be very good, and that our motivation and approach are clear and well presented. We clarify the reviewer’s concerns below. \n\n-----\n> Wouldn't expect strong theoretical justification, but overall this is an optimization objective with 3 parts that can be presented as a 2-3 time-scale stochastic optimization scheme and convergence can be analyzed. Convergence analysis might shed some light on section 3.3 which seems a bit hard to comprehend.\n> \n\n**Answer:** \n\nThanks for raising this point. The convergence analysis requires two parts:\n\n**Part 1: Does the stationary point solution of the optimization problem in Eq. 5 remove the bias of intrinsic rewards from the mixed policy $\\pi_{E+I}$? In other words, do $\\pi_{E+I}$ and $\\pi_{E}$ achieve the same expected return when we optimize $\\pi_{E+I}$ using EIPO?** \n\nOur optimization objective is expressed as follows:\n$$\n\\begin{align}\n&\\min_{\\alpha}\\max_{\\pi_{E+I}} \\min_{\\pi_E} {L}(\\pi_{E+I}, \\pi_E, \\alpha) \\\\\\\\\n{L}(\\pi_{E+I}, \\pi_E, \\alpha) &= \\mathbb{E}_{\\pi_\\{E+I\\}}\\Big [\\sum^\\infty_\\{t=0\\} \\gamma^t ((1+\\alpha)R_E(s_t,a_t) + R_I(s_t, a_t))\\Big] - \\alpha \\mathbb{E}_\\{\\pi_E\\} \\Big[ \\sum^\\infty_\\{t=0\\} \\gamma^t R_E(s_t,a_t) \\Big]\n\\end{align}\n$$\n\nOne of the necessary conditions of the stationary solution to the above optimization problem is $\\partial \\mathcal{L} / \\partial \\alpha = 0$. 
Satisfying this condition implies the following identity:\n\n$$\n\\begin{align}\n0 &=\\dfrac{\\partial L}{\\partial\\alpha} = \\mathbb{E}_{\\pi_\\{E+I\\}}\\Big[\\sum^\\infty_\\{t=0\\} \\gamma^t R_E(s_t,a_t) \\Big] - \\mathbb{E}_\\{\\pi_E\\}\\Big[\\sum^\\infty_\\{t=0\\} \\gamma^t R_E(s_t,a_t) \\Big] \\\\\\\\\n&\\implies \\mathbb{E}_\\{\\pi_\\{E+I\\}\\}\\Big[\\sum^\\infty_\\{t=0\\} \\gamma^t R_E(s_t,a_t) \\Big] = \\mathbb{E}_\\{\\pi_E\\}\\Big[\\sum^\\infty_\\{t=0\\} \\gamma^t R_E(s_t,a_t) \\Big] \n\\end{align}\n$$\n\nAs the expected extrinsic returns of $\\pi_{E+I}$ and $\\pi_{E}$ are equal at some stationary point $\\alpha^*$, it means that at the optimal point, the mixed policy $\\pi_{E+I}$ is not biased by intrinsic rewards.\n\n**Part 2: Can our algorithm reach the stationary solution of Eq. 5?**\n\nAs non-convexity of Eq. 5 complicates the convergence analysis, we are still working on the analysis of our alternating algorithm for solving the min-max problem, and we will update our analysis in the next version of the manuscript.\n\nWe hope that our analysis takes a first step in clarifying the reviewer’s question, and we are happy to address any follow-up questions.\n\n\n-----\n> Why not optimize both agents in parallel at each time?\n> \n\n**Answer:** We indeed train both policies in parallel. However, we only use one policy to collect data and we update both policies with the same data for sample efficiency. The policy for collecting data is switched when the improvement in the optimization objective of the current data-collecting policy plateaus. For a more detailed description, pleas see Line 5 and Line 10 in Algorithm 1, as well as Section 3.3.\n\n\n-----\n> Why collect data using pi_{E+I} when optimizing pi_{E}? Why not optimize pi_{E} when using data collected by pi_{E}?\n> \n\n**Answer:** Thanks for your question. Lets consider the min-stage $\\min_{\\pi_E} J^\\alpha_\\{E+I\\}(\\pi_\\{E+I\\}) - \\alpha J_E(\\pi_E)$. As pointed out by the reviewer, it is definitely possible to train $\\pi_{E}$ using trajectories $\\tau_{E\n}$ collected from $\\pi_{E}$. However, for data efficiency, if we want to simultaneously train $\\pi_{E+I}$ with $\\tau_{E}$, one would require either off-policy importance weight corrections on the trajectories [1] or the state density function of policies [2]. It is well known that the trajectory correction terms increase in variance with longer horizon leading to training instabilities. On the other hand, state density correction requires estimating state densities, which is known to be difficult in practice with high-dimensional state observations such as images. \n\nWe can overcome these problems by making an approximations and the results described in Equations 6-8. The consequence is that we can train both $\\pi_{E+I}$ and $\\pi_{E}$ using $\\tau_{E+I}$ using the correction ratio $\\dfrac{\\pi_{E}(a|s)}{\\pi_{E+I}(a|s)}$ (irrespective of the problem horizon) which is easy to compute. This is the reason why we train both $\\pi_{E}$ and $\\pi_{E+I}$ using $\\tau_{E}$ in the min-stage. Similarly, we train $\\pi_{E}$ and $\\pi_{E+I}$ using $\\tau_{E+I}$ in the max stage. \n\n[1] Notes on Importance Sampling and Policy Gradient ([https://nanjiang.cs.illinois.edu/files/cs598/note6.pdf](https://nanjiang.cs.illinois.edu/files/cs598/note6.pdf)) \n\n[2] Liu, Yao, et al. 
\"Off-policy policy gradient with state distribution correction.\"  **(2019).", " We are glad that the reviewer found our paper to be interesting, novel, well-written and with strong empirical results. \n\n> How does EIPO relate with (or compare against) the line of works in [Zheng et al. 2018; Hu et al. 2020] that also consider automatic learning/adjustment of intrinsic reward component? \n[Zheng et al. 2018] Zheng et al. On learning intrinsic rewards for policy gradient methods. 2018. \n[Hu et al. 2020] Hu et al. Learning to utilize shaping rewards: A new approach of reward shaping. 2020.\n> \n\n**Answer:** \n\nThanks for pointing to these references. The primary focus of Zheng et al. 2018 and Hu et al. 2020 is on learning an intrinsic reward function. EIPO is agnostic to the choice of intrinsic reward. It can be used with the intrinsic rewards proposed by Zheng et al. 2018 and Hu et al. 2020., or any other intrinsic reward formulation such as ICM or RND that we used in the paper. We select RND because it achieves state-of-the-art performance in hard exploration Atari games. Furthermore, RND is easier to implement than Zheng et al. 2018 and Hu et al. 2020, and has an open source implementation.\n\nWe have included a discussion of these works in our related work section in Appendix A.9 (we will fit this section into the main paper during the next revision). \n\nPlease let us know if we can answer any other questions or provide further clarifications.", " This paper focuses on the exploration-exploitation trade-off in reinforcement learning. In particular, the authors aim to devise a method that automatically adjusts the importance of the intrinsic reward component (exploitation) and the extrinsic reward component (exploration). To this end, they pose an (extrinsic optimality) constrained policy optimization. Then, they write the Lagrangian dual problem and solve it iteratively. For implementation purposes, they leverage tools from TRPO/PPO literature. The proposed EIPO method is compatible with any intrinsic reward method; for experiments, they use RND. Finally, they conducted an extensive empirical investigation of their proposed method on all 61 Atari games. Strengths:\n\nThe proposed method is interesting and novel. The existing intrinsic reward-based exploration methods (e.g., RND) need to heavily tune the parameter $\\lambda$ in $r_E + \\lambda r_I$ for each environment to perform better. EIPO alleviates this manual tuning effort and automatically adjusts the importance scale $\\alpha$ between the intrinsic and extrinsic rewards via optimization. Further, EIPO is compatible with any intrinsic reward method. In this work, the authors do not directly learn or optimize the intrinsic reward $r_I$. \n\nThe paper is overall well written; the motivation and the story are well conveyed. It was easy to follow.\n\nOn the empirical side, the authors have done a large-scale study on all 61 Atari games and reported the results using rigorous metrics as prescribed in [18]. In particular, the empirical results validate the following: (i) the probability of improvement EIPO over PPO is higher than the baselines (variants of RND), (ii) EIPO performs better than all the baselines in the majority of games (as well as in terms of strict probability of improvement), (iii) EIPO performs comparable/better than heavily $\\lambda$-tuned RND (for each environment). These empirical findings make the paper strong. \n\n****\n\nWeakness:\n\nThe paper is missing a detailed discussion of related work. 
Question: \n\nHow does EIPO relate with (or compare against) the line of works in [Zheng et al. 2018; Hu et al. 2020] that also consider automatic learning/adjustment of intrinsic reward component? \n\n[Zheng et al. 2018] Zheng et al. On learning intrinsic rewards for policy gradient 376 methods. 2018.\n\n[Hu et al. 2020] Hu et al. Learning to utilize shaping rewards: A new approach of reward shaping. 2020. Their proposed method does not seem to cause any direct potential negative societal impacts.", " The authors pose the problem of exploration using intrinsic motivation as one of constrained optimization.\nNaturally, the goal of the agent maximizing the intrinsic+extrinsic reward should be to perform AT LEAST as good as an agent simply maximizing the intrinsic reward.\n\nAs such, they present a constraint where the E+I agent needs to perform at least as good as the E agent when measured only on E (extrinsic) rewards.\n\nEmpirical evidence shows this method indeed works. Strengths:\n\n- Motivation is clear. The approach is solid.\n- Empirical results look very good.\n- Paper is overall well written.\n\nWeaknesses:\n\n- Wouldn't expect strong theoretical justification, but overall this is an optimization objective with 3 parts that can be presented as a 2-3 time-scale stochastic optimization scheme and convergence can be analyzed. Convergence analysis might shed some light on section 3.3 which seems a bit hard to comprehend. My main question is around section 3.3 (and 3.2)\n\nWhy collect data using pi_{E+I} when optimizing pi_{E}? Why not optimize pi_{E} when using data collected by pi_{E}?\nWhy not optimize both agents in parallel at each time? Is there a restriction against optimizing pi_{E} and pi_{E+I} in both max and min stages, just making sure to use the appropriate importance sampling ratios? -", " Difficult exploration tasks which rely on sparse rewards can be particularly challenging for traditional RL methods. Injecting intrinsic reward functions can increase reward density, but also introduces bias that can hinder performance in more tractable exploration tasks. Regulating this bias often reduces to a balancing act between intrinsic vs extrinsic reward influence using manual hyperparameter tuning. This paper proposes an objective function that explicitly minimizes the difference between the learned, combined intrinsic + extrinsic policy and an optimal policy under only the extrinsic reward. The objective function is reformulated using lagrangian duality and the policy learning is constrained using TRPO to provide tractability. The method is then evaluated on 61 Atari exploration tasks that vary in difficulty. Results show that the proposed method can match state of the art performance on difficult tasks while still achieving PPO-level performance on simpler tasks, all without manual tuning of the reward balance. ## Originality\n\nThe core concept is simple but elegant. Formulation of the minimization constraint between mixed- and extrinsic-trained policies, reformulation using langrange duals, and the use of a TRPO-like objective to improve trainability were all creative. Overall, I like that the idea is approachable but very effective.\n\n## Quality\n\nThe overall quality of the paper is excellent, with a few minor caveats:\n\n- I thought the abstract was a little weak. It's more pronounced because the rest of the paper is very solid, so the abstract feels thin by comparison. 
A few suggestions:\n\t- Some may not be familiar with \"intrinsic rewards,\" so maybe highlight the link with reward sparsity. \n\t- Maybe include a few more details about your approach beyond just \"constrained policy optimization\"\n- Very minor, but each word in your title should be capitalized except for \"via\".\n\nOn the positive:\n\n- As mentioned above, the core concept is solid and well-presented.\n- Detailing of the methodology was excellent, from preliminaries through implementation details.\n- The number of experiments and the detail of the analysis were great.\n\n## Clarity\n\nThe paper was clearly written and approachable. The description of the problem and the formulation of the solution follow a coherent narrative and are straightforward to understand.\n\n## Significance\n\nI don't feel like anything in this work is particularly ground breaking, but rather an elegant solution to a specific problem. That said, I do appreciate the breadth of evaluation and it will be interesting to see this approach applied to more real-world scenarios. I have no major questions or suggestions, other than the minor suggestions provided above. I would have liked to see more discussion around the limitations of this approach, or possible applications where this may not be appropriate." ]
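As a concrete illustration of the min-max structure and the $\alpha$ update discussed in the rebuttal above ($\partial L/\partial\alpha = J_E(\pi_{E+I}) - J_E(\pi_E)$), here is a self-contained toy on a 4-armed bandit. This is my stand-in, not the authors' implementation: the actual method uses PPO-style updates and alternates which policy collects data, while this toy always samples from $\pi_{E+I}$.

```python
import numpy as np

rng = np.random.default_rng(0)
r_e = np.array([0.0, 0.2, 0.5, 1.0])   # extrinsic reward per arm
r_i = np.array([1.0, 0.5, 0.2, 0.0])   # intrinsic "novelty" bonus per arm

theta_mix = np.zeros(4)                # logits of pi_{E+I}
theta_e = np.zeros(4)                  # logits of pi_E
alpha, lr, lr_a = 0.0, 0.5, 0.05

def softmax(z):
    p = np.exp(z - z.max())
    return p / p.sum()

for _ in range(5000):
    p_mix, p_e = softmax(theta_mix), softmax(theta_e)
    a = rng.choice(4, p=p_mix)                      # data comes from pi_{E+I}
    onehot = np.eye(4)[a]
    # REINFORCE ascent on (1 + alpha) * r_E + r_I for pi_{E+I}
    theta_mix += lr * ((1 + alpha) * r_e[a] + r_i[a]) * (onehot - p_mix)
    # Off-policy ascent on r_E for pi_E via the ratio pi_E / pi_{E+I}
    theta_e += lr * (p_e[a] / p_mix[a]) * r_e[a] * (onehot - p_e)
    # Descend dL/d(alpha) = J_E(pi_{E+I}) - J_E(pi_E), exact in a bandit
    alpha -= lr_a * (p_mix @ r_e - p_e @ r_e)

print(softmax(theta_mix) @ r_e, softmax(theta_e) @ r_e)  # extrinsic returns align
```

The ratio `p_e[a] / p_mix[a]` is the single-step analogue of the per-action correction $\pi_E(a|s)/\pi_{E+I}(a|s)$ from Equations 6-8; in a bandit it yields an exactly unbiased gradient estimate.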
[ -1, -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, 3, 5, 3 ]
[ "5SGXaYZIb2_T", "NNp0Mv5M4s_", "4WnFu4IOfSd", "bpPQQl6sbLj", "Q-j0W2KkmeG", "nips_2022_36Yz37cEN_Q", "nips_2022_36Yz37cEN_Q", "nips_2022_36Yz37cEN_Q" ]
nips_2022_NdpUjzwsHp
S-PIFu: Integrating Parametric Human Models with PIFu for Single-view Clothed Human Reconstruction
We present three novel strategies to incorporate a parametric body model into a pixel-aligned implicit model for single-view clothed human reconstruction. Firstly, we introduce ray-based sampling, a novel technique that transforms a parametric model into a set of highly informative, pixel-aligned 2D feature maps. Next, we propose a new type of feature based on blendweights. Blendweight-based labels serve as soft human parsing labels and help to improve the structural fidelity of reconstructed meshes. Finally, we show how we can extract and capitalize on body part orientation information from a parametric model to further improve reconstruction quality. Together, these three techniques form our S-PIFu framework, which significantly outperforms state-of-the-art methods in all metrics. Our code is available at https://github.com/kcyt/SPIFu.
Accept
All reviewers consider this a novel and effective contribution to the increasingly important subfield of 3D human reconstruction, particularly from unusual poses, or, as explained in the rebuttal, with loose clothing. The key technical questions of the reviewers (both positive and negative) were about dependence on accurate pose parameters, and dependence on accurate surface fits, which would be incorrect for e.g. baggy clothing. The rebuttal does a thorough and convincing job in exploring these questions. Reviewer FRso says: - The three proposed methods "seem more like tricks". This does not refute their novelty - that would be achieved by pointing to specific prior art. - "What happens if [16] fails?" This is now well answered in the rebuttal, and the answer is satisfactory. - "ablated versions perform worse" - the new tables show this can be the case on some datasets, but not on others. Of course, it would be ideal if some mechanism could downweight these contributions where appropriate, but that is not a task for this paper. Reviewer 49L7 says: - "It is not clear how this approach performs for the pixels that do not belong to SMPLX body". Now answered well in the rebuttal. - "seems to require very accurate underlying SMPLX fitting". Now answered well in the rebuttal. - "misses one important work, ICON". As the rebuttal notes, the code for this work was released very shortly before the deadline. The rebuttal is careful to give the timeline for the code release, rather than just using the CVPR conference date. The rebuttal also includes a preliminary but useful comparison to ICON, showing that in fact the paper outperforms ICON (when trained on similar data), but also noting that they are very different architectures, which further argues for both being exposed to the community. I agree with the authors that the later objections of R1 are "moving the goalposts". I would not necessarily dismiss those later objections if they were fundamental, but again, the rebuttal answers them convincingly. Reviewer sxwr was overall in favour of accept, but had some queries, again well responded to in the rebuttal.
train
[ "mAfKWp0r5XT", "ZEQAawtee2R", "kShovhKxQYF", "avud2F5kti", "y7xF-QlO7YN", "w_lQjI1sF9F", "tN6dcbIXXKu", "cn0NpekAirx", "hxvH918cbbk_", "VRMU-rVXc1r", "Fvcrhtf5wnM", "RmT7tLbq56L" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you, we take your comments very seriously. But ray-based sampling alone (i.e. PIFu + M + C) is not sufficient for a paper because it is only able to cleanly outperform the SOTA (PIFuHD) in the THuman2.0 dataset and not in the BUFF dataset (See below; lower values are better for all metrics shown). \n\n| |\\|| THuman2.0 || \\| | BUFF ||\n|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|:-----:|\n| **Method** |\\| | **CD** (${10^{-4}}$) | **P2S** (${10^{-4}}$) | \\|| **CD** (${10^{3}}$) | **P2S** (${10^{3}}$) |\n|PIFuHD |\\| | 2.758 | 2.215 | \\|| 1.964 | 1.845 |\n|PIFu + M + C |\\| | 1.999 | 1.626 |\\| | 2.022 | 1.840 |\n|S-PIFu (i.e. PIFu + M + C + B + N) |\\| | 2.035 | 1.629 | \\|| 1.923 | 1.726 |\n\nGiven that the models are all trained on the THuman2.0 train set, it is very important that the our models do well in a dataset with an entirely new data distribution (i.e. BUFF dataset) as it tests their ability to generalize correctly.\nThe table above shows that adding B and N is pivotal in ensuring that our final model (S-PIFu) is able to significantly outperform the SOTA (PIFuHD) in both datasets, without which we cannot really make a case to publish a paper.\n\nQuantitatively, if we choose to exclude B and N from S-PIFu, then the CD and P2S in BUFF dataset would worsen by 5.1% and 6.6% respectively. The slight tradeoff is that the CD and P2S in THuman2.0 test set would improve by 1.7% and 0.18% respectively.", " Thanks for the detailed responses as well as additional experiments to address the concerns! \n\nAfter reading other reviews and rebuttal, while my initial concerns are well addressed, I am now more concerned about negligible improvement by adding blendweights and surface normal as R3 mentioned. In fact, Tab. 8 in Supp. Mat shows that PIFu + C already performs nearly state-of-the-art performance without B or N, and it's not clear at all if B and N offer any conclusive contributions beyond adding C. As the performance itself is impressive, I would recommend making the paper simpler by focusing on the ray-based sampling as a key contribution and removing specific feature engineering from the main contributions unless adding each feature makes significant improvement. For this reason, I keep my initial recommendation of reject while encouraging to resubmit to a future conference/journal. ", " Dear Reviewers,\n\nIt is the last day of the Author-Reviewer discussion period. We would like to ask the reviewers if any part of our rebuttal requires more attention or if the reviewers have any additional concern regarding the paper? \n\nBest regards, \\\nAuthors", " We have added the full results comparing ICON and S-PIFu when both models are trained and evaluated on the entire THuman2.0 dataset. Please refer to the newly added section 1.4 in our supplementary materials.", " We have added the full results comparing ICON and S-PIFu when both models are trained and evaluated on the entire THuman2.0 dataset. Please refer to the newly added section 1.4 in our supplementary materials.", " We have added the full results comparing ICON and S-PIFu when both models are trained and evaluated on the entire THuman2.0 dataset. Please refer to the newly added section 1.4 in our supplementary materials.", " Thank you for your helpful comments.\n\nPlease refer to Section 1.3 of NIPS_Supplementary_Materials_SPIFu_After_Rebuttal.pdf (i.e. our updated supplementary materials). 
\n\nWe will include all rebuttal materials in either the main paper or the supplementary materials should our submission gets accepted.", " Thank you for your kind comments.\n\nPlease refer to Section 1.2 of NIPS_Supplementary_Materials_SPIFu_After_Rebuttal.pdf (i.e. our updated supplementary materials). \n\nWe will include all rebuttal materials in either the main paper or the supplementary materials should our submission gets accepted.", " Thank you for your strong comments.\n\nPlease refer to Section 1.1 of NIPS_Supplementary_Materials_SPIFu_After_Rebuttal.pdf (i.e. our updated supplementary materials). \n\nWe will include all rebuttal materials in either the main paper or the supplementary materials should our submission gets accepted.", " This paper presents several input features derived from parametric human model to PIFu-like 3D reconstruction task of clothed humans. At each pixel from the input view, this work propose to compute the multiple ray intersection with the parametric body model and extract positions, skinning weights, and surface normal at the intersections as auxiliary pixel-aligned features. The experiments show that the proposed features significantly improve the robustness and accuracy of reconstruction over prior methods in THuman2.0 and BUFF datasets. The strengths of the paper can be summarized as follows:\n- The simplicity of the proposed method is a great advantage. Especially the plug and play nature of the proposed feature encoding allows us to boost the performance regardless of the baseline method (e.g, PIFu, PIFuHD).\n- The experimental results support the effectiveness of the proposed approach. \n\nOn the other hand, the paper has the following weaknesses:\n- It is not clear how this approach performs for the pixels that do not belong to SMPLX body. Non-overlapped regions are quite common for clothed human as silhouette of clothed humans is larger than minimally clothed SMPLX. Intuitively, the proposed approach provide no information for these pixels. \n- Related to this, the proposed approach seems to require very accurate underlying SMPLX fitting. However, as PaMIR and ICON mentioned, off-the-shelf body pose estimation networks do not provide very accurate fitting and thus additional optimization is required. It is not clear how well the proposed approach handles inaccurate pose parameters. Qualitative results from images in the wild and evaluation with noise perturbed pose parameters are highly recommended. \n- The paper misses one important work, ICON, which improves robustness over PIFu by conditioning SMPL body. Please discuss and compare.\n\nICON: Implicit Clothed Humans Obtained From Normals\nYuliang Xiu, Jinlong Yang, Dimitrios Tzionas, Michael J. Black; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 13296-13306\n\nDespite its promising performance, I think the paper is not ready for publication due to its incomplete evaluations and comparisons. Also the illustration and exposition can be further improved. - Please address the concerns above.\n- After reading through the paper, I still couldn’t figure out what “S” in S-PIFu stands for. Please clarify.\n- Since intersection locations can be obtained via barycentric interpolation, why is simply averaging of 3 vertices used? Does the performance improve by taking barycentric interpolation?\n- What value is inserted if the ray does not hit surface? \n- What is the motivation behind conditioning vertex positions as input features? 
Please elaborate.\n\nOther comments:\n- Citation format is not the official one. \n- L5: blendweight-based soft human parsing label is merely skinning weights . Please use common term.\n- L87: The method that proposes to use front and back normals are PIFuHD not PIFu.\n- L168: principal component analysis Neither limitation nor its societal impact is discussed in the paper.", " This paper introduces S-PIFu which incorporates a SMPL-X model into PIFu for single-view clothed human reconstruction. Specifically, the authors propose a ray-based sampling technique to transform a SMPL-X model (fitted to the input image) into a set of 2D feature maps embedding coordinate, parsing label, and surface normal information. These feature maps are concatenated with the input image (together with the estimated front and rear normal maps) and fed to the encoder of PIFu. Occupancy prediction then follows the normal pipeline of PIFu. Both quantitative and qualitative evaluations show SOTA results. Pros:\n+ The proposed ray-based sampling technique for transforming a SMPL-X model into a set of 2D feature maps sounds novel. It provides a simple and effective way to encode both geometric and semantic information of the 3D parametric model. This 2D feature map representation works seamlessly with the PIFu framework.\n+ The proposed soft parsing labels based on blending weights sounds novel. It is more informative than discrete parsing label and can help to improve the labels around the boundary between two body parts.\n+ Other than spatial information (i.e., 3D coordinates) which have been exploited by previous works, this work also extract human parsing labels and surface normals from the parametric model which provide further semantic and geometric information for occupancy prediction.\n+ Quantitative and qualitative evaluations show the proposed method outperforms PIFu, PIFuHD, and ARCH++.\n+ Experiments are included to demonstrate the improvement brought by the coordinate, parsing label, and surface normal feature maps, respectively, over PIFu.\n+ The paper is well written and easy to follow.\n\nCons:\n- The authors argue that ARCH++ depends on having a near-perfect mapping between the canonical and posed spaces. Note that such a mapping is defined by the skinning weights and hence it depends on the accuracy of the model fitting. Likewise, this work constructs 2D feature maps from the fitted SMPL-X model and therefore it also depends on the accuracy of the model fitting. However, there is no analysis showing how the model fitting accuracy/error will affect the reconstruction.\n- Unlike ARCH++ where pixel-aligned features (256 channels) and spatial features (96 channels) are computed independently before concatenation for occupancy prediction, 2D feature maps (~198 channels) are constructed from the fitted SMPL-X model which are concatenated with the input image (3 channels) and the estimated front and back normal maps (6 channels), and fed to the encoder of PIFu. Note that only 3 out of ~207 channels in the input come from the RGB image. This suggests the occupancy prediction may heavily depend on the fitted SMPL-X model. There is no analysis on any possible bias on the source of input.\n- Meshes with extreme self-occlusion are excluded in training. This reduces the number of usable meshes in THuman2.0 from 526 to 362. 
With reduced training data, the performance of the models (both the proposed model and existing models for comparison) will be affected (to different extends), particularly in handling self-occlusion.\n- For each mesh in THuman2.0, only 10 RGB images at different yaw angles have been used in training. Previous works typically use 360 images around each models for training. This difference in training setting makes it not possible to directly/easily compare the quality of models with previous works. For instance, the results of ARCH++ presented in this paper look much worse than in the original paper.\n In addition to the points listed as cons above, \n- The original PIFu does not take estimated front and rear normal maps as input. They are only being used in PIFuHD. Please clarify your implementation of PIFu in the comparison and update the results if necessary.\n- Average coordinates of the face vertices are being used in producing the coordinate feature map. It is actually possible to compute the exact intersection of a ray with a face. How does this compare with using average coordinates?\n- Similarly, average normal is computed from the face vertices in producing the normal feature map. How does this compare with simply using normal of a face (triangle)? Alternatively, normal at the point of intersection can be computed using some shading model (e.g., Phong shading). How does this compare with using an average normal?\n- Following the last two comments, it seems average parsing label (although not explicit mentioned) is computed from the face vertices in producing the parsing feature map. It is possible to compute an interpolated parsing label by modifying a shading model to work with parsing labels (e.g., Phong shading). How does this compare with using an average paring label?\n- Table 1 shows that S-PIFu performs worse than PIFu+C on THuman2.0. This suggests that most improvement is actually brought by the coordinate information, and the parsing and normal information may actually harm the occupancy prediction (due to the increased channel number?). Further analysis and discussion are needed. It might be desirable to show the comparison between S-PIFu, S-PIFu - C, S-PIFU - B, and S-PIFu - N in the ablation study. Since the estimated front and rear normal maps are not used in PIFu, another variant to be considered is S-PIFu - 'front & rear normal'.\n No specific limitation and negative societal impact need to be addressed.", " This paper proposes a new method for single-view clothed human reconstruction by incorporating a parametric body model into a pixel-aligned implicit model. The proposed method has a ray-based sampling method to transform a parametric model into a set of feature maps, proposes blendweight-based soft human parsing labels to improve the structural fidelity, and a method to extract body part orientation information from a parametric model to improve the reconstruction quality. Strengths.\n\n1. A ray-based sampling method is proposed to transform a parametric model into a set of pixel-aligned 2D feature maps.\n2. Blendweight-based soft human parsing labels are used.\n3. A method that extract and capitalize on body part orientation information from a parametric model is proposed.\n\nWeakness.\n\nThe main idea of this paper is to enrich the 2D features with more information derived from SMPL-X parametric body model, before feeding the 2D features into PIFU model for reconstruction. 
However, the three proposed methods seem more like tricks than novel techniques to me.\n\nI am not sure why the estimation of body part orientation can help correct errors caused by shape variations of a body part (explained in L175-176).\n\nThe proposed method relies on [16] to generate the SMPL-X parametric body model. What happens if [16] fails?\n\nBased on Table 1, it seems that PIFu+C performs better on the THuman2.0 dataset than the S-PIFu method. Besides, on the BUFF dataset, all three ablated versions (PIFu+C, B, N) perform (slightly) worse than PIFu or PIFuHD. Can the authors explain this?\n\nPlease see the above section.\n\nIt seems not." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 4 ]
[ "ZEQAawtee2R", "w_lQjI1sF9F", "nips_2022_NdpUjzwsHp", "tN6dcbIXXKu", "cn0NpekAirx", "hxvH918cbbk_", "RmT7tLbq56L", "Fvcrhtf5wnM", "VRMU-rVXc1r", "nips_2022_NdpUjzwsHp", "nips_2022_NdpUjzwsHp", "nips_2022_NdpUjzwsHp" ]
nips_2022_48Js-sP8wnv
Use-Case-Grounded Simulations for Explanation Evaluation
A growing body of research runs human subject evaluations to study whether providing users with explanations of machine learning models can help them with practical real-world use cases. However, running user studies is challenging and costly, and consequently each study typically only evaluates a limited number of different settings, e.g., studies often only evaluate a few arbitrarily selected model explanation methods. To address these challenges and aid user study design, we introduce Simulated Evaluations (SimEvals). SimEvals involve training algorithmic agents that take as input the information content (such as model explanations) that would be presented to the user, to predict answers to the use case of interest. The algorithmic agent's test set accuracy provides a measure of the predictiveness of the information content for the downstream use case. We run a comprehensive evaluation on three real-world use cases (forward simulation, model debugging, and counterfactual reasoning) to demonstrate that SimEvals can effectively identify which explanation methods will help humans for each use case. These results provide evidence that SimEvals can be used to efficiently screen an important set of user study design decisions, e.g., selecting which explanations should be presented to the user, before running a potentially costly user study.
Accept
The paper proposes simulated evaluations (SimEvals) to guide explainable AI (XAI) researchers about what explanations to include in a user study. All the reviewers agreed that this is a novel contribution to a significant and timely problem. There were common questions around the empirical evaluations that the authors clarified during the feedback phase. The reviewers have acknowledged the authors' responses and have confirmed that their questions were adequately addressed. By adding the new table contrasting prior work that the authors included in their feedback, as well as the clarifications from the reviewer discussion, the paper will be substantially stronger.
test
[ "CyG677CxgF", "x7tA1bTydLY", "cu_SZo7zEeg-", "gzQ0CNWLLed", "QZf7i2ltfhm", "YzB7Zo23iyb", "6eAy9Pc8lho", "gGp5kZnJ74V", "ZCEroW-EtQq" ]
[ "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your review and for acknowledging the strengths of this work. \n\nWe first respond to the weaknesses below:\n\n**Snippet:** *“Test accuracy of the agent may not be a wholistic measure of the performance, might need to augment with other metrics as well.”*\n- We agree that including additional metrics in future work may paint a more complete picture of the algorithmic agent’s behavior. However, we show in our experiments that __test accuracy is an informative measure for selecting explanations__ for a target use case. \n\n**Snippet:** *“demographics of the AMT workers could have a bearing on the performance validation scheme”*\n- We emphasize that our validation scheme also considers findings from the users of the other two studies that we cite: the forward simulation user study (Hase and Bansal) was conducted with undergraduates and undergraduate college students and the data bugs user study (Kaur et al) was conducted with data scientists.\n- Additionally, while there may be variability between individual AMT workers’ demographics that may affect their ability to complete the task, we account for this variability by assigning Turkers to each condition uniformly at random and enforcing that each individual worker can only complete the HIT once.\n\n**Snippet:** *“Might have to consider the influence of cognitive human biases including aspects related to subjectivity, confirmation bias, priming effects among others in evaluation”*\n- Please see __Point #3__ in the general response. \n\n**Snippet:** *“information related to institutional ethical clearance should be included in the main paper.”*\n- We will move information about the IRB to the main text; it is currently in Appendix H.1.\n\nWe now respond to the questions:\n\n**Snippet:** *“What were the demographics of the AMT workers who participated in the user study? Would this not have a bearing on the effectiveness of the validation and in turn the proposed framework?”* \n- We recruited AMT workers by filtering for those with a higher than 97% acceptance rate and those from US or Canada, and otherwise used all default settings without applying additional selection criteria. We responded to the second question above.\n\n**Snippet:** *“How was it ensured that presenting the AMT workers with the set of scenarios similar to the agent's training and validation phase did not induce a \"priming effect\"? If the users were not presented with similar situations, is it possible that the results could have changed?”*\n- We ensured that each experimental condition used the __same__ instructions, interface, and formulation to avoid “priming” users. Thus, the differences we observe in user performance should be attributed to the usefulness of the information contained in the explanation itself. \n- We intentionally have the same “scenario” in training and validation for the AMT workers intentionally so that the user, like the algorithmic agent, can learn some heuristic in train and apply it in validation. This is what user studies like Hase and Bansal do. It could be interesting in future work (whether through simulations or user studies) to look at how well agents/humans can generalize to even more dissimilar “scenarios”. 
\n\n**Snippet:** *“How does the performance of the framework change with the number of explanations…what kind of advantage or performance benefit the proposed framework can provide?”* \n- While the performance of the algorithmic agent may not change in the same way as the performance of a human user as we increase the number of explanations (e.g., the training set size), we clearly state in L56-57 that the goal of SimEval is not to attempt to model such human factors, which may include fatigue or cognitive overload. We explicitly encourage researchers to consider these additional human factors when interpreting SimEval results (see L153-158 of the initial submission).\n- We believe that we demonstrate SimEvals can still be very helpful for user study design, as explanations that are not predictive for a use case will not be helpful to humans, regardless of these additional human factors.", " Thank you for your review. We respond to your two concerns below, which we believe also address your questions:\n\n**Snippet:** *“While this is novel, experiments like those the framework suggests have been run in the past… How much is this formalization likely to help other researchers relative to just continuing to have researchers run one-off, domain-dependent experiments?”* \n- Please see __Point #1__ in the general response for how our formalization is concretely different from prior algorithmic evaluations along multiple dimensions.\n\n**Snippet:** *“But I’m not sure if this means that future researchers will run these experiments faster, be more likely to run this kind of experiment, or so on.”*\n- We believe this work will help researchers be more likely to run algorithmic evaluations. Presently, researchers who run human subject studies to evaluate explanations (e.g., [14, 16, 18, 20, 35] from our citations) by and large do not run any algorithmic evaluation(s) to inform their user study. We hypothesize this is because of the lack of a framework that is generalizable across use cases and explanations and of guidance on how to run such evaluations, which is exactly the problem our work addresses. We are also working on a tutorial as a resource for researchers.\n\n**Snippet:** *“At best the results seem to match in terms of the rough ranking (particularly the first place) when comparing user accuracy to agent accuracy. However, there’s cases where the ranking doesn’t match up.”* \n- Thank you for bringing up this concern, and please see __Point #2__ in the general response where we clarify our contribution claim and demonstrate how our results strongly support that claim. Thus, we believe Spearman’s rank correlation would not be an appropriate test (furthermore, such a test would not work well for a small N, where N = 3 or 4 for our use cases). \n\n**Snippet:** *“The possibility of a SimEvals-like approach being used to avoid user studies entirely.”*\n- Please see our response to __Point #3__ in the general response.", " Thank you for your review. We are excited that the reviewer believes that this work “is an important contribution to the XAI literature”. We respond to your comments below:\n\n**Snippet:** *“I think it's important for the authors to engage with [the cited] literature”.* \n- Thank you for the pointers to these papers (the first of which is an XAI user study and the second of which is a position paper motivating more utility-focused evaluations of explanations). We added them to our related work. 
\n\n**Snippet:** *“I could imagine a version of SimEval that exists completely independently of users, where the purpose of SimEval is to assess how informative an explanation is about a particular model behavior”.* \n- We agree with the reviewer that an interesting direction for future work is to explore how SimEvals, which measure the predictive information in an explanation, could potentially be used for purposes other than human subject study design. We chose to motivate our work with the goal of aiding human subject studies as we believe it is a very timely problem with high impact, as evidenced by the large number of XAI user studies (e.g., [14, 16, 18, 20, 35] from our citations and more in the HCI space, as you mention) that we believe could benefit from improved evaluation workflows.\n\nWe now address your questions:\n\n**Snippet:** *“What if the SimEval Agent's behavior differs from the user's behavior? That is, even if an explanation is informative (in a statistical or ML sense) about some aspect of a model, how do we know that this information will also be useful for a human user?”* \n- Please see __Point #3__ of our general response.\n\n**Snippet:** *“The SimEval framework assumes that the evaluator knows what \"good\" and \"bad\" downstream behavior looks like…what about use cases that aren't easily quantified?”*\n- We agree with your concern and note that quantitatively evaluating the utility of explanations in such settings is difficult not only to do algorithmically (using SimEvals) but also more broadly in user studies, as it is also unclear how to measure a human’s success in such settings. We believe that exploring metrics for “success” in such settings is an important direction for future work that can inform how SimEvals can be applied.\n", " Thank you for the review. We are glad that you agree that our work addresses a “timely and important topic”. We first respond to your question and then the 2 main weaknesses.\n\n**Snippet:** *“why a comparison with human studies (done by the authors, there's no need of replication of previous results) is not necessary for the 2 scenarios where it has not been performed”*: \n- Our initial submission __did__ compare the algorithmic agent performance to human performance from prior user studies for forward simulation (L252, 262 of the original pdf) and data bugs (L274, 284 of the original pdf). That said, we agree that these comparisons can be made clearer in the main text. We __added a “Human Test Accuracy” column__ to Tables 1 and 2 and describe these comparisons in more detail in Appendix G. \n- Even without replicating the user studies for the 2 scenarios, we argue that there is value in comparing the results of SimEval to human results from __independently run, peer-reviewed user studies__, showing that SimEvals identifies promising explanations even across a diverse population of study participants. \n- When considering your question, we realized that there would be value in running an additional experiment: __conducting our own user study for the data bugs use case__. The original study conducted by Kaur et al. led participants through a semi-structured exploration of various explanation tools. Thus, the original study did not individually evaluate each explanation’s usefulness in aiding users to identify bugs and did not consider potential baseline explanations to justify the need for explanations. 
\n- In our new MTurk experiment for data bugs, we follow a similar evaluation procedure as the counterfactual reasoning use case and introduce 2 new experimental settings to the original study. We find that SimEval can again distinguish promising explanations from unhelpful ones: based on agent test accuracy, SimEval would select SHAP and GAM over LIME or a Model Prediction baseline, and we find that humans achieve 20% better test accuracy (on average) with SHAP/GAM than with LIME/Model Prediction. Our complete study results are in Table 3 of the revised draft.\n\nWe now respond to the 2 main weaknesses in light of these results:\n\n(1): *“the lack of necessary extensive experiments”*\n\n**Snippet:** *“Since the evaluation method proposed is not of much technical novelty, the main strength of this work was supposed to lie in showing with extensive experiments that the results of SimEvals correlate with human results.”*\n- We strongly disagree with this comment; please see __Point #1__ in the general response, which outlines multiple ways in which our work is novel. \n\n**Snippet:** *“there are very few experiments: only 3 or 4 different explanations are considered for each of the 3 scenarios.”*\n- We respectfully disagree that our work has “very few” experiments and believe that the reviewer’s summary of the experiments oversimplifies our evaluation, where the goal was to provide guidance on how researchers can instantiate SimEvals for 3 diverse use cases and verify those results. \n- In addition to training a SimEval agent for each of the “3 or 4 different explanations” per use case (as was done in previous work), we also studied the effect of varying multiple parameters (e.g., the strength of bug type, number of training set observations, number of data points per observation, agent architecture) on agent performance.\n\n**Snippet:** *“the authors motivate that a better hyperparameter search … but they do not perform any experiments and leave this for future work”*\n- As stated in the Introduction (L60-62), while the SimEvals framework can be used for hyper-parameter selection, we focused on rigorously evaluating SimEvals for selecting explanation methods as this has been the exclusive focus of existing literature that runs user studies.\n\n(2): *“the unconvincing results even among the performed experiments”*\n\n**Snippet:** *“the authors claim that \"there is a match in the relative rankings\" in Table 3, however, we see that the ordering of SHAP, GAM and No explanation are different in SimEvals than in Human studies, and that the only ordering that matches is in the best explanations (from LIME).\"*\n- Please see **Point #2** in the general response where we clarify our contribution claim and demonstrate how our results strongly support that claim. \n\n**Snippet:** *“they do not mention if there's a statistical difference between SHAP, GAM, and No explanation in both settings”*\n- There is no statistical difference between the average accuracy when given SHAP, GAM, and No explanation for human subjects. We added p-values to Table 8.\n- There are also significant overlaps in the error bars for Agent performance on SHAP, GAM, and No explanation (see Table 6). \n\n**Snippet:** *“The authors do not mention anything about potential negative societal impact.”*\n- Please see our response to **Point #3** in the general response.\n", " We would like to thank all reviewers for their invaluable feedback. We appreciate their efforts to strengthen our work. 
We respond to related comments in a general post:\n\n__Point #1: Novelty + usefulness of framework__ (Reviewers hGqz and jL67):\n- As shown in the table which contextualizes related work that we discuss in Appendix A, no prior work both (1) provides a general algorithmic evaluation framework that reflects the diverse set of user studies that a researcher may consider and (2) verifies their proposed framework using multiple human subject studies.\n\n| |Algo Eval: Agent learns how to use explanation (i.e., no heuristic needed)?|Algo Eval: Framework generalizes to different use cases?| Human Eval: Runs a user study?|Human Eval: Agent results match human?|Human Eval: Number of explanations and baselines evaluated?|\n|:-:|:-:|:-:|:-:|:-:|:-:|\n|Expo|No|No|Yes|No|1 and 1|\n|Influence Functions|No|No|No|-|-|\n|Student-Teacher Models|No| No| Yes| No| 2 and 1|\n|Anchors|No| No| Yes| Yes| 2 and 0|\n|User studies e.g. [14, 16, 18, 20, 35]|-|-|Yes|-|Avg: 2-4 and 1-2|\n|Ours (SimEvals)|Yes|Yes|Yes (we run or compare to multiple studies)| Yes|4 and 2|\n- In this work, we also provide extensive technical guidance to researchers (Section 3, Appendix E) to illustrate how to generate data and set up SimEvals for a wide variety of explanation types, use cases, and data types.\n\n__Point #2: Contribution framing__ (Reviewers hGqz and jL67):\n- Reviewers hGqz and jL67 state that our claim that agent and human rankings “match” is inaccurate. We agree that the current phrasing may be misleading, and we have rewritten our contribution as “SimEvals helps distinguish between which explanations are promising vs. unhelpful to humans” (L73). \n- When there is a significant gap between SimEval performance on two explanations, we observe a *similarly significant gap in human performance* when using the same explanations. We find that the difference between the “best” (i.e., most promising) explanation(s) and all others is statistically significant. Even though we observe in Table 2 (formerly 3) that the exact ordering of the 3 “worst” (i.e., unhelpful) explanations differs between the agent and humans, these differences are small and not statistically significant (see Table 8). This same reasoning can be applied to the other use cases, as shown in the agent vs. human results in Tables 1 and 3 of the revised draft.\n- More broadly, we believe that researchers should focus on __statistically significant differences__ between explanation methods when interpreting their SimEvals, and we caution researchers from concluding that smaller differences may generalize to human performance (as highlighted in Section 6).\n\n__Point #3: The gap between agent and human__ (all reviewers):\n- While SimEvals can identify promising explanations for downstream user studies, we find in our experiments and analyses that raw agent and human performances are not equal. This is unsurprising given that SimEval intentionally captures the predictiveness of the information content and does not model human factors. \n- However, given that SimEvals may be adopted in future user study workflows, we agree with the reviewers’ suggestions that we should more clearly outline potential misinterpretations or misuses of SimEvals. We moved the existing discussion to the main text from Appendix J and extended it in Section 6 of the revised draft.\n\n", " The paper introduces an automatic evaluation framework, called SimEvals, to assess the usefulness of explanations in 3 use cases (forward simulation, model debugging, and counterfactual reasoning). 
SimEvals mimics user studies by training agents instead of humans. Thus it is cheaper and intended to be used before a more costly user study. \n\n=====\nAfter authors' response: My questions have been properly addressed and I am satisfied with the addition of the newly performed user study and the other clarifications being added to the paper. I have increased my score accordingly.\n\nStrengths\nThe paper addresses a timely and important topic, that of evaluating the usefulness of explanations in 3 real-world scenarios.\nIt is well written, easy to read, and clearly states its scope. \n\nWeaknesses\nThe 2 main weaknesses I see are: (1) the lack of necessary extensive experiments, and (2) the unconvincing results even among the performed experiments. Since the evaluation method proposed is not of much technical novelty, the main strength of this work was supposed to lie in showing with **extensive experiments** that the results of SimEvals correlate with human results. However, there are very few experiments: only 3 or 4 different explanations are considered for each of the 3 scenarios. And more importantly, only 1 scenario has a comparison with human studies (Table 3). The authors justify the lack of comparison with human studies in footnote 3 by saying that \"(a) their interfaces are not publicly available and (b) we do not have access to the same set of participants to compare our study results to theirs.\". However, both arguments are extremely shallow: for (a), they could have devised a reasonable interface to do this; for (b), there's absolutely no need to have the exact same participants to draw the conclusion of whether their automatic evaluation correlates with human studies. Finally, the authors claim that \"there is a match in the relative rankings\" in Table 3, however, we see that the ordering of SHAP, GAM and No explanation are different in SimEvals than in Human studies, and that the only ordering that matches is in the best explanations (from LIME). The authors mention that \"an ANOVA test and found a statistically significant difference between the Turkers’ accuracy using LIME vs. all other explanation settings\" but they do not mention if there's a statistical difference between SHAP, GAM, and No explanation in both settings, which, if it is, would have indicated different rankings of the explanatory methods between SimEvals and Humans. Also, the authors motivate that a better hyperparameter search for the explanatory methods can be done using SimEvals, but they do not perform any experiments and leave this for future work. Overall, I find the experimental setup very unconvincing, for a work in which this would have been its main focus. \n\nMinor:\n* Tables don't have bold numbers for the best results\nL272 evaluates --> evaluate\nL280 implements --> implement\nInclude results from human studies in Table 1\nbold results in Table 1\nL993 that,\nL725&728 ??\nL856 Table ??\n\nMaybe the authors could give convincing arguments as to why a comparison with human studies (done by the authors, there's no need of replication of previous results) is not necessary for the 2 scenarios where it has not been performed. The authors do not mention anything about potential negative societal impact. ", " The authors propose a procedure for evaluating explanations using \"simulated evaluations\" (SimEvals). The authors demonstrate SimEval using common explanation methods, and demonstrate that their findings align with (human) user responses from a small MTurk survey. 
\nThis paper clearly presents an idea that I believe is an important contribution to the XAI literature. Their simulations and user study are both relatively small, but their results are convincing. It should be noted that many other researchers have thought about using downstream/user behavior to guide the design of \"good\" explanations. To my knowledge these studies live mainly in the HCI space. I think it's important for the authors to engage with this literature, because it is very similar in motivation to this work. Some examples:\n\n- https://doi.org/10.1109/TREX51495.2020.00005\n- https://doi.org/10.1145/3301275.3302265\n\nIt's interesting that the authors chose to frame their method as an alternative or supplement to user studies. I could imagine a version of SimEval that exists completely independently of users, where the purpose of SimEval is to assess how informative an explanation is about a particular model behavior---which is itself a prediction problem. (SimEval does this, but framed in terms of users.)\n\nSince the authors choose to center on users, this raises some important questions:\n\n1) What if the SimEval Agent's behavior differs from the user's behavior? That is, even if an explanation is informative (in a statistical or ML sense) about some aspect of a model, how do we know that this information will also be useful for a human user? I would like to see the authors engage with this question somewhere in the main body of their paper.\n\n2) The SimEval framework assumes that the evaluator knows what \"good\" and \"bad\" downstream behavior looks like. This is not always the case---the three use cases presented here (forward simulation, counterfactual reasoning, and data bugs) are certainly well-motivated, but what about use cases that aren't easily quantified? For example with debugging, we often don't know what the source of a bug is! Is this framework still useful in this case?\n\n See above\n\nYes", " In this paper the authors propose a procedure for evaluating explanations using \"simulated evaluations\" (SimEvals) meant to help guide explainable AI (XAI) researchers in terms of what explanations to include in a human subject study. The authors introduce the framework, which involves preparing a dataset with the outputs of the base model (the model that we want to explain) along with any explanations and then training another model as a soft surrogate for human evaluation. The authors go on to show how the performance of the second model on a test set can be seen to roughly correspond to human performance in two existing datasets and then in the results of one novel human subject study. \n The primary strengths of the paper are its formalization of the SimEvals framework, including the guidance and discussion around how and when to apply the framework, and the results section. While other researchers have run experiments that can be understood as instantiations of this framework, to the best of this reviewer’s knowledge, there have been no existing attempts to define this particular framework. That is therefore novel. The results do also demonstrate some value to this approach, though I do have concerns with them that I’ll address below. \n\nI have two major concerns in terms of the weaknesses of this paper, and they relate to the strengths above. First, there’s the framework. While this is novel, experiments like those the framework suggests have been run in the past. As such, the question then turns to the strength of this formalization. 
In other words, how much is this formalization likely to help other researchers relative to just continuing to have researchers run one-off, domain-dependent experiments? This is unclear to me after reading the paper, particularly because much of the framework is itself domain-dependent. The authors speak to this point in the related work and Appendix A but there’s still no specific argument as to the benefits of this framework in comparison to running domain-dependent versions of these experiments without the framework. The closest we get is that the authors hope that this framework/paper will help future researchers run these experiments, which is certainly possible. But I’m not sure if this means that future researchers will run these experiments faster, be more likely to run this kind of experiment, or so on. \n\nMy second major concern is with the results, which appear to be somewhat weak. At best the results seem to match in terms of the rough ranking (particularly the first place) when comparing user accuracy to agent accuracy. However, there’s cases where the ranking doesn’t match up. For example, in Table 3 the agent is able to get useful information from SHAP and GAM, while these explanations appear actively harmful in the average human case. These results also do not support the claims made by the authors (e.g., “we find strong correspondences between the SimEval agents’ and humans’ accuracy on the use case”). Given the claims, I was expecting something more like a correlation test (e.g., Spearman’s rho) comparing the individual human and agent predictions for the specific individual cases. If there’s some reason such a test would be inappropriate, it would be helpful for the authors to offer an explanation. But the average accuracy doesn’t seem to be particularly representative of human performance, as the authors themselves indicate given the high variance of LIME results in Table 3. \n\nOverall, I’m torn between believing there is value to this paper (and more value in the underlying work), but having that tempered by the weak results that do not match up with the claims, and the lack of clarity around what use cases the framework supports over the existing examples. I’ve settled on a weak accept for now but it’s a shaky weak accept.\n 1. Why not run a correlation test between the human predictions and the agent’s predictions on the same problems? \n2. What use cases does the framework enable that weren’t already possible, or what benefits does it confer?\n The authors overall do a good job expressing the limitations of their approach (particularly in the appendices). However, one issue they do not discuss is the possibility of a SimEvals-like approach being used to avoid user studies entirely. While the authors argue against this (“We note that the agent’s performance (i.e. test set accuracy) with a given explanation method is not intended to be interpreted as an anticipated human subject’s performance.”), other claims in the paper would seem to support this use case (“In our experiments, we show that the relative ordering of explanation method performance is consistent for SimEvals and human subject evaluations.”). Some discussion about how this could be avoided in the XAI research field would be appreciated. ", " The paper presents a framework called \"SimEvals\" for conducting use-case grounded algorithmic evaluations of information content presented in a user study. 
SimEvals involves training an agent that learns to predict the ground truth label given the same information that would be presented to a human user. The test accuracy of the trained agent is used as a measure for the effectiveness of the information content provided.\n\nStrengths:\n\n1. The paper presents an interesting framework that could potentially aid future user studies in terms of assessing which aspects need actual human user evaluation, thereby saving manual effort and costs\n\n2. The framework also expands the scope of XAI studies in the sense that those explanation methods which are often overlooked may also find a place through this scheme\n\n3. The method will offer promising pathways to further research in this area\n\nWeakness:\n\n1. Test accuracy of the agent may not be a holistic measure of the performance, might need to augment with other metrics as well. \n\n2. demographics of the AMT workers could have a bearing on the performance validation scheme\n\n3. Might have to consider the influence of cognitive human biases including aspects related to subjectivity, confirmation bias, priming effects among others in evaluation\n\n4. As the study involved human users, information related to institutional ethical clearance should be included in the main paper. What were the demographics of the AMT workers who participated in the user study? Would this not have a bearing on the effectiveness of the validation and in turn the proposed framework?\n\nHow was it ensured that presenting the AMT workers with the set of scenarios similar to the agent's training and validation phase did not induce a \"priming effect\"? If the users were not presented with similar situations, is it possible that the results could have changed?\n\nHow does the performance of the framework change with the number of explanations--typically even human users experience fatigue/cognitive overload when presented with a number of explanations. So in the light of this, what kind of advantage or performance benefit the proposed framework can provide?\n\n\n\n Please refer to comments under the last two questions" ]
[ -1, -1, -1, -1, -1, 5, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "ZCEroW-EtQq", "gGp5kZnJ74V", "6eAy9Pc8lho", "YzB7Zo23iyb", "nips_2022_48Js-sP8wnv", "nips_2022_48Js-sP8wnv", "nips_2022_48Js-sP8wnv", "nips_2022_48Js-sP8wnv", "nips_2022_48Js-sP8wnv" ]
nips_2022_s_PJMEGIUfa
LIFT: Language-Interfaced Fine-Tuning for Non-language Machine Learning Tasks
Fine-tuning pretrained language models (LMs) without making any architectural changes has become a norm for learning various language downstream tasks. However, for non-language downstream tasks, a common practice is to employ task-specific designs for input, output layers, and loss functions. For instance, it is possible to fine-tune an LM into an MNIST classifier by replacing the word embedding layer with an image patch embedding layer, the word token output layer with a 10-way output layer, and the word prediction loss with a 10-way classification loss, respectively. A natural question arises: Can LM fine-tuning solve non-language downstream tasks without changing the model architecture or loss function? To answer this, we propose Language-Interfaced Fine-Tuning (LIFT) and study its efficacy and limitations by conducting an extensive empirical study on a suite of non-language classification and regression tasks. LIFT does not make any changes to the model architecture or loss function, and it solely relies on the natural language interface, enabling "no-code machine learning with LMs." We find that LIFT performs comparably well across a wide range of low-dimensional classification and regression tasks, matching the performances of the best baselines in many cases, especially for the classification tasks. We also report experimental results on the fundamental properties of LIFT, including inductive bias, robustness, and sample complexity. We also analyze the effect of pretraining on LIFT and a few properties/techniques specific to LIFT, e.g., context-aware learning via appropriate prompting, calibrated predictions, data generation, and two-stage fine-tuning. Our code is available at https://github.com/UW-Madison-Lee-Lab/LanguageInterfacedFineTuning.
Accept
The paper demonstrates that pre-trained language models can be competitive at classifying non-textual data, where the input features are linearized into a text-like sequence and used as the conditional prefix for the language model. While the method is still not competitive with supervised learning methods, the fact that LLMs are able to do the task is intriguing. During the rebuttal, the authors also provided convincing answers to when one might prefer this approach (which is computationally expensive) over traditional methods. This paper provides timely insights into empirical research on LLMs. Therefore, I recommend acceptance.
train
[ "tEHI8v4XalB", "68IvMrXZe8K", "h7cWz5dA44Y", "tdXrxhZCzj", "MtKK6QgsXHH", "07YqmvObNK-", "8982ANhJXUt", "tGxoeI5btJ", "5Z_sESvb0Bi", "IbS_25gelwj", "NoHiMg3LN0", "8n7ke4l1jpT", "TOvSLEmNzwbV", "NXIFhzvcTVw", "oTIK6sgQfiI", "ixShrhTCz73", "aAiGrns56xR", "pO_xKfI8Dt_", "stqW0ZaxOU", "x5t4o414sia" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the Reviewer for increasing the score. \nWe appreciate your valuable feedbacks for our work and we will happily keep improving our paper.\nBest, ", " We thank the Reviewer and really appreciate your support for our paper. ", " I have read all the comments by the authors, and they mostly clear up my questions on the experiment settings and formatting issues. While I agree that this paper has contribution to the ML community, I cannot lean strongly toward acceptance due to its massive content. I think the content is more suitable for a journal instead of a 10-page conference paper since a lot of details are placed in the appendix, making the main content not very self-contained. Also, I still feel like this paper is more of an empirical report and it is hard to know how LIFT can be used in real-life use cases.", " Thanks for incorporating the comment and tuning down the claims. I support acceptance after revision and updated my score to 7. ", " **`Q. I am afraid that I will have to disagree with the claim on Line 219 that \"LIFT/GPT outperforms in-context learning\".`**\n\nWe agree that the provided result shows that 'subset-LIFT' (LIFT performed on the subset of data) has no significant statistical improvements with in-context learning. \nHowever, compared to in-context learning, the performance of LIFT on the full data is almost always staticially better. \nHere, we provide the updated result with in-context learning to support this claim.\n\n We include three training schemes: \n* `In-context learning`: few-shot prompting;\n* `LIFT/Subset`: LIFT on the subset of data used in the corresponding in-context learning setting;\n* `LIFT/Full-data`: LIFT on the full dataset. \n\n| Dataset (ID) | #Prompts | ODC | GPT-J | GPT-J | GPT-J | GPT-3 | GPT-3 | GPT-3 |\n|------------------|----------|-------|-------------|-------------|----------------|------------|-------------|----------------|\n| | | | In-Context | LIFT/ Subset | LIFT/ Full-data | In-Context | LIFT/ Subset | LIFT/ Full-data |\n| Breast (13) | 35 | 70.69 | 56.90±19.51 | 58.62±2.44 | 64.94±11.97 | 62.07±1.41 | 70.69±0.00 | 71.26±1.62 |\n| TAE (48) | 50 | 35.48 | 34.33±1.47 | 32.26±9.50 | 61.29±4.56 | 37.64±4.02 | 33.33±1.52 | 65.59±6.63 |\n| Vehicle (54) | 14 | 25.88 | 25.49±0.55 | 26.04±1.69 | 64.31±2.37 | 28.82±2.10 | 23.73±2.27 | 70.20±2.73 |\n| Hamster (893) | 43 | 53.33 | 48.89±3.14 | 60.00±10.88 | 55.55±16.63 | 57.78±6.29 | 53.33±0.00 | 53.33±0.00 |\n| Customers (1511) | 29 | 68.18 | 56.06±17.14 | 59.85±2.84 | 85.23±1.61 | 60.61±1.42 | 63.26±6.96 | 84.85±1.42 |\n| LED (40496) | 33 | 68.67 | 10.00±0.82 | 13.04±3.27 | 65.33±0.47 | 8.00±1.63 | 11.33±2.62 | 69.33±2.05 |\n\n**`Q. There is an important reference that is missing: Kao and Lee (2021) [4] have already fine-tuned BERT on non-language downstream tasks as standard fine-tuning on natural language downstream tasks and show that the performance is decent.`**\n\nThanks for bringing this work to our attention. We note that this work adds a linear layer on top of pretrained BERT for fine-tuning, which requires architectural changes that are not needed for LIFT. We have added this work to the related work.\n\n-----\n\n**Final notes:** We hope that our responses and new experimental results on (i) LIFT/Rand-GPT-J, (ii) LIFT/`LM pretrained on non-human languages`, and (iii) improving the performance of LIFT by utilizing feature names can help you better appreciate our work as well as considering increasing your score and supporting accepting our paper. 
We agree that LIFT has some limitations, but with its novelty, significance, and potential for future research, we believe LIFT can have a good impact on the machine learning community. Thanks again for your careful reading! ", " **`Q. The main content seems fragmented, incomplete, not self-contained, and hard to follow.`**\n\n1. The \"non-deterministic predictions\" that appeared in Line 18 and Line 70 do not appear in the rest of the paper, and it is unclear what \"non-deterministic predictions\" refers to. Is it related to the Bayesian prediction in Section 5.1?\n\n Yes, the \"non-deterministic predictions\" refers to the uncertainty of predictions evaluated in Section 5.1. We show that if we allow non-deterministic predictions of LIFT by setting a non-zero [`temperature`](https://beta.openai.com/docs/quickstart/add-some-examples), the prediction variance reflects its prediction confidence. We will reword this as \"calibration\", which is a more precise term for this property. \n\n2. How is the \"increase the generation randomness\" in Line 137 done? Perhaps by changing the temperature for sampling?\n\n The reviewer is correct that we control the generation randomness by changing the [`temperature`](https://beta.openai.com/docs/quickstart/add-some-examples) for sampling. \n\n3. What do the \"inductive biases of language models\" refer to in Section 4.2?\n\n The inductive biases of language models refer to the preference bias (axis-parallel decision boundaries) and the behavior of language models in making predictions. \n\n4. What is a median kNN in Line 241?\n\n The standard kNN makes predictions by taking the *mean* of the sample's $k$ nearest neighbors' predictions. Median kNN here refers to predicting with the *median* instead of the *mean*, which is more robust.\n\n5. The paragraph \"Robustness to adversarial samples\" in Line 244 includes no experiment results, making the paper not very self-contained.\n\n In the revised manuscript, we moved the results in Section C.1.1 of the Appendix to the main body.\n\n6. It is unclear how the mixup is done in Section 6.2; the details of this experiment are presented neither in the main content nor in the appendix.\n\n We apply mixup on LIFT for binary classification problems as follows. Given each sample $(x,y)$ having feature $x$ and label $y \\in \\{0,1\\}$, we convert it to $(x, \\tilde{y})$ for $\\tilde{y} = 10y$, so that the label is either 0 or 10. Given $(x_1, \\tilde{y}_1)$ and $(x_2, \\tilde{y}_2)$, we construct the mixup sample $(x^{\\text{mix}}, y^{\\text{mix}}) = (\\lambda x_1 + (1-\\lambda)x_2, \\lambda \\tilde{y}_1 + (1-\\lambda)\\tilde{y}_2)$, where the convex combination coefficient $\\lambda \\in \\{0, 0.1, 0.2, \\cdots, 1.0\\}$. In this way, we set the mixup label to be an integer, i.e., $y^{\\text{mix}} \\in \\{0, 1, 2, \\cdots, 10\\}$. We fine-tuned GPT using the input-output pair $(x^{\\text{mix}}, y^{\\text{mix}})$. We included this detail in the revised manuscript.\n\n7. It is unclear how the in-context learning is done in Section 4.3. For example, how is the training/testing split determined? Is there an evaluation set? Where does the variance shown in Table 10 come from?\n\n We used the same split of training/validation/testing throughout, for all datasets used in different tasks. In in-context learning, we randomly selected input-output pairs from the training dataset and concatenated them in front of each testing sample. 
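For concreteness, here is a minimal Python sketch of the in-context setup just described: one helper serializes a tabular sample into a LIFT-style text prompt, and another concatenates labeled training pairs in front of an unlabeled test sample. The template wording and the feature values below are illustrative placeholders rather than the exact strings used in our experiments.

```python
from typing import Optional, Sequence, Tuple

def serialize_row(features: Sequence[object], label: Optional[object] = None) -> str:
    """Turn one tabular row into a text prompt; the label is appended for training shots."""
    # NOTE: illustrative template -- the exact serialization in the paper may differ.
    feats = ", ".join(f"x{i + 1}={v}" for i, v in enumerate(features))
    prompt = f"When we have {feats}, what should the y be?"
    return f"{prompt} {label}" if label is not None else prompt

def few_shot_prompt(shots: Sequence[Tuple[Sequence[object], object]],
                    query: Sequence[object]) -> str:
    """Concatenate labeled input-output pairs in front of the unlabeled test sample."""
    context = "\n".join(serialize_row(x, y) for x, y in shots)
    return context + "\n" + serialize_row(query)

# Example: two randomly drawn training pairs followed by one test query.
print(few_shot_prompt([([5.1, 3.5], 0), ([6.2, 2.9], 1)], [5.9, 3.0]))
```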
We used a fixed group of 3 random seeds and ran in-context learning on each dataset 3 times. We took the average and standard deviation over all 3 runs. We did not use the validation dataset in this case, and all results reported were based on the testing dataset. \n\nWe have added the clarification above to our updated manuscript.\n\n**`Q. This paper is poorly formatted, and some of the figures are extremely small for reading.`**\n\nWe acknowledge the formatting issues in the last few pages of our paper. We have revised the paper to make it properly formatted.\n\n**`Q: Do the authors manually change the style file or change the font sizes of the table captions?`**\n\nNo, we didn't change the style file. We only used some LaTeX commands (e.g., `\\wrapfigure`) to save space and resize the tables to fit them horizontally on a page. Table captions are below the tables because we used the `\\caption` command after the tabular data. We updated them in the new version.\n\n**`Q. The sentence in Line 109 \"For tabular data, ... non-language tasks\" seems odd.`**\n\nWe revised this sentence for clarity: \"To the best of our knowledge, we are one of the first works studying the adaptation of LMs for non-language tasks on tabular data\".\n\n**`Q. During inference, how is the answer generated? By argmaxing or sampling? If it is sampled, what is the sampling scheme? When will the generation end?`**\n\nHow the answer is generated depends on the LM we are using. For GPT, the answer is generated via a sampling mechanism, where the generation randomness is controlled by [`temperature`](https://beta.openai.com/docs/quickstart/add-some-examples). GPT will generate `max_tokens` tokens and then stop.\n\n", " We would like to thank the Reviewer for your constructive comments as well as the appreciation of our interesting findings and the impressive number of downstream tasks in our experiments. \n\n**`Q. It is unclear whether the benefit of LIFT is just because the model is gigantic.`**\n\nAs per your great question, we ran extensive experiments to better explain the reasons for the good performance of LIFT. Please find our response to this question in our response to Q2 in the response to all reviewers & AC. In short, we observe that gigantic language models that are not pretrained on human language do not necessarily lead to as good performance as language models that are pretrained on human languages. Therefore, we conclude that the gigantic model itself is not the only reason for the good performance of LIFT. \n\n**`Q. Why would one want to use a PLM to fine-tune on non-language downstream tasks?`**\n\nThanks for your question. Please find our response to this question in our response to Q1 in the response to all reviewers & AC. \n\n**`Q. Dependency of LIFT on the input length of LMs.`** \n\nThanks for the great question. In the updated version, we acknowledge this limitation and the dependency of LIFT on the chosen language model. We provide two techniques for mitigating this issue.\n1. **Subsampling the features and keeping fewer digits.** For example, to handle MNIST, we (i) crop each image and keep the center with a size of 18x18, and (ii) use the integer representation of each pixel (from 0 to 255). So, each image then has only 18x18 = 324 pixels and fits in the context length of the input. \n2. 
**Leveraging long-range Transformer-based language models.** \nThere are LMs with long context lengths based on long-range Transformer architectures, such as Transformer-XL (Dai et al., 2019), Longformer (Beltagy et al., 2020), BigBird (Zaheer et al., 2020), or Perceiver IO (Jaegle et al., 2021).\n\n*References:*\n* Zaheer, M., Guruganesh, G., Dubey, K. A., Ainslie, J., Alberti, C., Ontanon, S., ... & Ahmed, A. (2020). Big bird: Transformers for longer sequences. Advances in Neural Information Processing Systems, 33, 17283-17297.\n* Beltagy, I., Peters, M. E., & Cohan, A. (2020). Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.\n* Dai, Z., Yang, Z., Yang, Y., Carbonell, J., Le, Q. V., & Salakhutdinov, R. (2019). Transformer-xl: Attentive language models beyond a fixed-length context. arXiv preprint arXiv:1901.02860.\n* Jaegle, A., Borgeaud, S., Alayrac, J.-B., Doersch, C., Ionescu, C., Ding, D., Koppula, S., Zoran, D., Brock, A., Shelhamer, E., et al. (2021). Perceiver IO: A general architecture for structured inputs and outputs. arXiv preprint arXiv:2107.14795.\n\n**`Q: The memory of LIFT is quadratic in the input length, indicating that LIFT is highly memory inefficient compared to other models.`**\n\nWe thank the reviewer for the detailed comment. The reviewer is correct that the memory of LIFT is quadratic in the input length, given the use of the attention mechanism in GPT-J and GPT-3. To mitigate the memory issues of transformers, we can make use of LMs with a more memory-efficient implementation of attention, e.g., FlashAttention (Dao et al., 2022), or with efficient or sparse attention methods (Linformer (Wang et al., 2020), BigBird (Zaheer et al., 2020), or ETC (Ainslie et al., 2020)) that reduce the quadratic dependency on the input length to linear. \n\n*References*:\n* Zaheer, M., Guruganesh, G., Dubey, K. A., Ainslie, J., Alberti, C., Ontanon, S., ... & Ahmed, A. (2020). Big bird: Transformers for longer sequences. Advances in Neural Information Processing Systems, 33, 17283-17297.\n* Ainslie, J., Ontanon, S., Alberti, C., Cvicek, V., Fisher, Z., Pham, P., ... & Yang, L. (2020). ETC: Encoding long and structured inputs in transformers. arXiv preprint arXiv:2004.08483.\n* Wang, S., Li, B. Z., Khabsa, M., Fang, H., & Ma, H. (2020). Linformer: Self-attention with linear complexity. arXiv preprint arXiv:2006.04768.\n* Dao, T., Fu, D. Y., Ermon, S., Rudra, A., & Ré, C. (2022). FlashAttention: Fast and memory-efficient exact attention with IO-awareness. arXiv preprint arXiv:2205.14135.\n\n", " We thank Reviewer iBBJ for the detailed review and very helpful suggestions. \nWe thank the Reviewer for appreciating the importance, novelty, and impact of our research problem, our extensive experimental results, and our efforts in making a well-written paper. \nOur responses are detailed below. \n\n**``Q. The performance of LIFT is not impressive. Why does the paper claim that LIFT performs relatively well compared to other methods?``**\n\nWe wrote \"LIFT performs well\" because we found that it is ranked within the top three approaches among all the tested baselines on average. That being said, we agree with the reviewer that we could tone down our claim by making our claims more precise and specific. \n\nBesides, we would like to note that achieving impressive performance is not the only goal of LIFT. 
We believe that LIFT has a lot of potential to be useful -- please find our response in Q1 of the common response.\n\n**`Q. Can LM successfully use the feature names?`**\n\nWe thank the reviewer for the sharp question. To better support our claim, we have expanded the corresponding experiments. We compare LIFT when feature names are correctly incorporated (`Correct-Names`) with a version of LIFT where feature names are incorporated in a randomly shuffled order (`Shuffled-Names`). \nAlso, we evaluate the performance of LIFT without feature names specified (`Without Names`). (In fact, we tried two different templates for each method in our revision to reduce the unwanted effect of prompt designs -- please see the revision for more detailed comparisons.) The results are reported in the table below.\n\n| Dataset \\ Schemes | `Without Names` | `Shuffled-Names` | `Correct-Names` | \n|:-----------------:|:-----------:|:-----------:|:-----------:|\n|CMC| $\\mathbf{57.74 \\pm 0.89}$ | $57.06 \\pm 4.24$ | $57.40 \\pm 1.09$ | \n|TAE| $65.59 \\pm 6.63$ | $64.52 \\pm 8.53$ | $\\mathbf{69.89 \\pm 9.31}$ | \n|Vehicle| $70.20 \\pm 2.73$ | $69.22 \\pm 2.72$ | $\\mathbf{75.29 \\pm 2.04}$ |\n\nThe new experiment results support our claim that the LM can successfully use the feature names. \n\n**`Q. Table 3 caption tries to conclude that larger models generally perform better.`**\n\nThanks for pointing this out. To clarify, our purpose was to show that, compared to the results with GPT-J and GPT-3-Ada (the smallest version) used throughout the paper, the use of larger versions of GPT-3 (3 models) may improve the performance of LIFT. We did not mean to claim a comparison between the three larger versions of GPT-3. Based on the reviewer's comment, we have revised the caption to make it more clear.\n\n**`Q. Why does LM perform bayesian inference? ... This only implies that LIFT is calibrated.`** \n\nWe thank the Reviewer for pointing this out. We have replaced the conclusion that \"LM is performing Bayesian inference\" with the statement that \"LIFT is calibrated\", as per the reviewer's comment. \nAlso, we have replaced the term \"Bayesian inference\" with \"calibration\" in our paper. \n\n**`Q. Rename \"non-language machine learning tasks\" to \"real-valued machine learning tasks\"?`**\n\nThanks for engaging so deeply! Note that the datasets LIFT handles are not necessarily real-valued. For example, the categorical attributes are not real-valued, and we do not process the categorical attributes to be real-valued. To resolve the reviewer's concern, we have elaborated on what non-language tasks are earlier in the paper, in both the abstract and the introduction.\n\n-----\n**Comment. '`I am very willing to increase the score to 8 if the author can either 1) justify why their claims are correct, or 2) re-position the paper as a \"negative empirical finding\", rather than advertising \"LIFT\" as a very promising approach for real-value-input ML tasks.`'**\n\nThanks for the suggestion and your willingness to update the score based on our response. In the revision, we have toned down most claims, making them fairer and more accurate. We believe that with our modifications to the writing and our new experimental results, we have justified the correctness of all our updated claims. Moreover, we will add \"negative empirical findings\" on regression tasks and emphasize various limitations of LIFT. We hope that our revision can help you reconsider your score and support accepting our paper. 
Thanks again for your careful reading of our work and for providing valuable feedback! ", " **`Q. From figure 4, it looks like this might not work for few-shot prompting. Did you run that experiment?`**\n\nThanks for the sharp observation. Yes, we observe that few-shot prompting also does not work well in these cases. In the following table, we report the performance of three schemes:\n* `In-context learning`: few-shot prompting;\n* `LIFT/Subset`: LIFT trained with few shots;\n* `LIFT/Full-data`: LIFT trained with the full dataset. \n\n| Dataset (ID) | # Shots | ODC | GPT-J | GPT-J | GPT-J | GPT-3 | GPT-3 | GPT-3 |\n|------------------|----------|-------|-------------|-------------|----------------|------------|-------------|----------------|\n| | | | In-Context | LIFT/ Subset | LIFT/ Full-data | In-Context | LIFT/ Subset | LIFT/ Full-data |\n| Breast (13) | 35 | 70.69 | 56.90±19.51 | 58.62±2.44 | 64.94±11.97 | 62.07±1.41 | 70.69±0.00 | 71.26±1.62 |\n| TAE (48) | 50 | 35.48 | 34.33±1.47 | 32.26±9.50 | 61.29±4.56 | 37.64±4.02 | 33.33±1.52 | 65.59±6.63 |\n| Vehicle (54) | 14 | 25.88 | 25.49±0.55 | 26.04±1.69 | 64.31±2.37 | 28.82±2.10 | 23.73±2.27 | 70.20±2.73 |\n| Hamster (893) | 43 | 53.33 | 48.89±3.14 | 60.00±10.88 | 55.55±16.63 | 57.78±6.29 | 53.33±0.00 | 53.33±0.00 |\n| Customers (1511) | 29 | 68.18 | 56.06±17.14 | 59.85±2.84 | 85.23±1.61 | 60.61±1.42 | 63.26±6.96 | 84.85±1.42 |\n| LED (40496) | 33 | 68.67 | 10.00±0.82 | 13.04±3.27 | 65.33±0.47 | 8.00±1.63 | 11.33±2.62 | 69.33±2.05 |\n\nAs one can see, the performance of few-shot prompting is similar to (or slightly worse than) the performance of LIFT trained with the same number of examples. \n\n**`Q. How are adversarial examples generated?`**\n\nWe generated adversarial examples using a proxy network (LeNet-5 or MLP) and transferred them. Please see Section C.1.1 of the Appendix for more details. \n\n**`Q. Measuring the embedded bias in LIFT.`**\n\nThanks for engaging so deeply and providing such a good suggestion! The reviewer is correct that language models are known to be biased, so the classifier obtained with LIFT could also be biased. In fact, we also observed a few cases where LIFT generates biased predictions (and explanations as well) in our own experiments. We plan to further explore this bias issue of LIFT in future work, while leaving a remark on this issue in the current paper. \n\n----\n**Final notes**: We want to thank you again for such exemplary and detailed comments. We are excited that you find our idea novel, and we hope that our response makes you consider increasing your score and further supporting the acceptance of our paper.\n\n", " We thank Reviewer GQen for the detailed review and very helpful suggestions. We appreciate that Reviewer GQen finds our idea and the use of feature names novel and interesting. \nMaking use of feature names with LMs is exactly our very first motivation for this work.\nWe also thank the Reviewer for appreciating our presentation in Table 1 and Figure 4. \n\n**`Q. It's intriguing as a method for understanding the pre-training dataset and also harnessing contextual information. I'd like to see more focus and additional tasks explored or an expansion on experiments on using the feature names with LMs.`**\n\nAs per your great suggestion, we have expanded this section with more experiment results and provided stronger evidence that including feature names in the prompts indeed improves the performance of LIFT. 
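To make the two prompt formats concrete before the comparison below, here is a small illustrative sketch; the template wording and the feature names are hypothetical placeholders, not the exact prompts from our experiments.

```python
def prompt_without_names(values):
    # Nameless template: features are referred to only by their position.
    feats = ", ".join(f"x{i + 1}={v}" for i, v in enumerate(values))
    return f"When we have {feats}, what should the y be?"

def prompt_with_names(names, values):
    # Named template: each value is paired with its dataset feature name,
    # letting the LM exploit semantics it may have learned during pretraining.
    feats = ", ".join(f"the {n} is {v}" for n, v in zip(names, values))
    return f"When {feats}, what should the y be?"

# Hypothetical feature names, for illustration only.
names = ["class size", "semester", "instructor experience"]
print(prompt_without_names([31, "summer", "high"]))
print(prompt_with_names(names, [31, "summer", "high"]))
```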
We compare LIFT with feature names correctly incorporated (`Correct-Names`) against a variant of LIFT in which the feature names are randomly shuffled, so that names no longer match their values (`Shuffled-Names`). Also, we evaluate the performance of LIFT without feature names specified (`Without Names`). (In fact, we tried two different templates for each method in our revision to reduce the unwanted effect of prompt designs -- please see the revision for more detailed comparisons.)\n\n\n| Dataset (DID) \\ Schemes | `Without Names` | `Shuffled-Names` | `Correct-Names` | \n|:-----------------:|:-----------:|:-----------:|:-----------:|\n|CMC (23)| $\mathbf{57.74 \pm 0.89}$ | $57.06 \pm 4.24$ | $57.40 \pm 1.09$ |\n|TAE (48)| $65.59 \pm 6.63$ | $64.52 \pm 8.53$ | $\mathbf{69.89 \pm 9.31}$ | \n|Vehicle (54)| $70.20 \pm 2.73$ | $69.22 \pm 2.72$ | $\mathbf{75.29 \pm 2.04}$ | \n\n\nAs we can see from the table, the performance of LIFT is maximized when feature names are correctly specified. We plan to expand this analysis in the final version.\n\n\n**`Q. \"what should the y be\" vs \"what should be the y\"`**\n\n| Dataset (DID) \\ Schemes | `Original template` (what should be the y) | `New template` (what should the y be) |\n|:-----------------:|:-----------:|:-----------:|\n|CMC (23)| $57.74 \pm 0.89$ |$57.40 \pm 1.37$ |\n|TAE (48)| $65.59 \pm 6.63$ | $66.67 \pm 5.48$ |\n|Vehicle (54)| $70.20 \pm 2.73$ |$71.96 \pm 3.09$ |\n\nWe thank the reviewer for an interesting suggestion. The reviewer is absolutely correct that small changes in prompt design can make a large difference in the overall performance when using large language models. We tried your particular suggestion, and we saw some (yet marginal) differences in our experiments. Inspired by your suggestion, we plan to further investigate the effect of prompt design on the performance of LIFT and include a more thorough analysis in the final version. \n\n\n\n**`Q. The motivation of LIFT: when and why should we prefer it.`**\n\nPlease find our response to Q1 in our common response. \n\n\n**`Q. Suggestions on the writing.`**\n\nThanks for your helpful suggestions, which aided us greatly in improving the clarity of our writing. We have made corresponding edits to address each of your comments. We (1) added an early explanation of non-language tasks and the core idea, (2) added detailed descriptions of the pretrained language models and kept only concise, important methods in figures and tables to make them more readable, (3) added the dataset names, and (4) moved a lot of less significant experiments to the appendix. 

*References:*\n* Arik, S. Ö., & Pfister, T. (2021, May). Tabnet: Attentive interpretable tabular learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 35, No. 8, pp. 6679-6687).\n* Huang, X., Khetan, A., Cvitkovic, M., & Karnin, Z. (2020). Tabtransformer: Tabular data modeling using contextual embeddings. arXiv preprint arXiv:2012.06678.\n* OPT: https://github.com/facebookresearch/metaseq/tree/main/projects/OPT\n\n-----\n**Final notes**: We are excited that you find our work interesting and potentially impactful. We will integrate these answers into our new version. Thanks again for your careful reading of our work and for providing encouraging feedback!", " We thank the Reviewer evpg for the detailed review and constructive suggestions. 
We appreciate your acknowledgement that the paper is well-written and clear, with interesting investigation directions and findings. We also share the Reviewer's belief that our findings could lead to interesting future research. Please find our answers to your comments and questions below. \n\n**`Q. Effect of replacing the input/output layers?`**\n\n**==Updated results on the effect of input layer and output layer replacements (last 2 columns)==**\n| Dataset (ID) | LIFT/GPT-J |FPT/GPT-J|FPT(Output Only)/GPT-J|FPT(Input Only)/GPT-J\n|:------------------:|:------------:|:------------:|:------------:|:------------:|\n|Blobs (2)|96.17±0.59|96.75±0.00|96.67±0.12|96.75±0.00\n|Two Circles (6)|75.92±1.65|74.33±0.31|76.33±2.49|69.83±1.31\n|Iris (61)|96.67±0.00|96.67±0.00|97.78±1.57|81.11±3.14\n|Customers (1511)|85.23±1.61|87.88±0.54|88.26±0.54|86.74±1.42\n|Wine (187)|93.52±1.31|100.00±0.00|99.07±1.31|92.59±3.46\n|LED (40496)|65.33±0.47|73.00±2.94|71.67±1.25|68.67±1.89\n\nThanks for your suggestions. We ran additional experiments by changing the input/output layers of GPT-J, following the methods used in Frozen Pretrained Transformers (FPT) [https://arxiv.org/pdf/2103.05247.pdf]. Specifically, we reinitialized a trainable input layer and a trainable output layer, with the frozen pretrained GPT-J transformer architecture in the middle. As in the FPT paper, the input dimension equals the number of features and the output dimension equals the number of classes, both of which vary across tasks. We report the results in the table above and compare them with our own method, LIFT/GPT-J. As expected, FPT performs slightly better than LIFT in almost all tested cases. \n\nAs per the reviewer's suggestion, we are also working on investigating the effect of having a trainable output layer only (or input layer only). Once our experiments are completed, we will report additional results here. \n\n**Updated**: We added experiments to test the effect of the input layer and the output layer separately. Both FPT and FPT with only the output layer perform slightly better than LIFT in almost all tested cases. FPT with only the input layer performs the worst. Our justification for these observations is that training an output layer is similar to training a linear classifier, which might be easier than training an input layer as an encoder.\n\n**`Q. Open question: Reasons for good performance of LIFT.`**\n\nAs per your great question, we ran extensive experiments to better explain the reasons for the good performance of LIFT. Please find our response in Q2 of the common response to all reviewers & AC. In short, it is important to use large language models that are pretrained on human language data. \n\n\n**`Q. Are there baselines with more comparable numbers of parameters? What is the effect of model capacity?`**\n\nAs per your suggestion, we added a new comparison with baseline models of larger capacities. (Though they are still much smaller than GPT models!) In particular, we provide the results of deep neural network models based on architectures specifically designed for tabular data: TabNet and TabTransformer.\n* TabNet (Arik and Pfister, 2021) is one of the SoTA deep network architectures specifically designed for tabular data. \n* TabTransformer (Huang et al., 2020) is a deep network architecture specifically designed for tabular data with the use of Transformer architectures. 
To run TabTransformer, one has to specify the categorical features (with the number of unique values) and the continuous features. For simplicity in this evaluation, we treat all features as continuous and use TabTransformer directly. \n\nAs shown in the table below, these deep-learning-based baselines perform similarly to XGBoost and Random Forest.\n\n| Dataset (DID) | ODC | LIFT/GPT-3 | LIFT/GPT-J | TabNet | TabTransformer |\n|------------------|-------|------------|------------|-------------|----------------|\n| Blobs (2) | 25.00 | 96.67±0.24 | 96.17±0.59 | 96.75±0.00 | 50.00±0.00 |\n| Two Circles (6) | 50.00 | 81.42±0.82 | 75.92±1.65 | 74.25±12.39 | 49.25±1.29 |\n| Iris (61) | 33.33 | 97.0±0.00 | 96.67±0.00 | 97.78±1.92 | 72.22±5.09 |\n| Customers (1511) | 68.18 | 84.85±1.42 | 85.23±1.61 | 85.22±3.93 | 87.12±0.66 |\n| Wine (187) | 38.89 | 92.59±1.31 | 93.52±1.31 | 94.44±5.56 | 90.74±13.70 |\n| LED (40496) | 11.0 | 69.33±2.05 | 65.33±0.47 | 67.00±2.46 | 41.00±12.49 |\n\nIn addition to these experiments, we are planning to compare the performance of LIFT across a larger range of model sizes. This can be done by comparing the performance of LIFT with various open-sourced language models. Specifically, Facebook's OPT models range in size from 125M to 175B parameters. We could not complete the experiments during the rebuttal period, but we will keep you posted once we see any new results. \n\n", " **Q2. (paraphrased) Does LIFT really need a large \"pretrained\" model on \"natural language\" data?**\n\nTo answer this, we decompose the question into two parts. \n\nIn Q.2.1, we first answer \"Does LIFT really need a large **pretrained** model?\" To answer this, we compare the performance of LIFT applied to pretrained GPT-J and that applied to randomly initialized GPT-J. We observed that LIFT applied to randomly initialized GPT-J does not work. \n\nIn Q.2.2, we then answer \"Does LIFT really need a large pretrained model on **natural language** data?\" To answer this, we compare the performance of LIFT applied to natural-language-pretrained GPT-J and that applied to program-code-pretrained GPT. (Note that ideally, we wanted to compare it with GPT pretrained on gibberish, but we could not perform a large-scale pretraining on our own given limited time and compute resources.) As a result, we found that there is a significant gap between the two cases, showing the necessity of our choice of **natural language models**.\n\n\n\n**Q.2.1 Does LIFT really need a large **pretrained** model?**\n\nThe following table presents our results. Here, we obtain baselines by applying LIFT to different pretrained LMs. ODC is the Optimal Deterministic Classifier, which always outputs the majority class as the classification. We use ODC as a simple baseline for comparison.\n\n| Dataset (DID) | LIFT/GPT-J | LIFT/Rand-GPT-J | ODC | \n|------------------|-------|-------------|------------|\n| Blobs (2) | 96.17±0.59 | 25.65±1.58 | 25.00 |\n| Two Circles (6) | 75.92±1.65 | 49.88±5.01 | 50.00 |\n| Iris (61) | 96.67±0.00 | 27.78±20.79 | 33.33 | \n| Customers (1511) | 85.23±1.61 | 52.47±7.15 | 68.18 | \n| Wine (187) | 93.52±1.31 | 22.22±15.71 | 38.89 | \n| LED (40496) | 65.33±0.47 | 11.68±4.44 | 11.0 | \n\n\nWe compare our models with LIFT/Rand-GPT-J, where Rand-GPT-J is the GPT-J model with randomly initialized weights. More specifically, we randomly initialized a GPT-J model and fine-tuned the whole model (instead of LoRA). 
We tried various learning rates (0.1-0.5) and report the performance of the model that achieved the highest average validation accuracy. We also tuned the temperature, as we observed that a temperature of zero consistently gave poor performance. As a result, we used a temperature of 1. Even after we set the temperature to 1, only 10%~15% of outputs are valid. The accuracies listed in the table are calculated only among valid outputs. \n\nAs one can see from the results, LIFT/Rand-GPT-J achieves much lower performance than LIFT/GPT-J. Thus, we believe that the good performance of LIFT comes from pretraining, not just from the large model size. \n\n**Q.2.2 Does LIFT really need a large pretrained model on *natural language* data?**\n\n| Dataset (DID) | LIFT/GPT-J (pretrained on natural language) | LIFT/CodeParrot (pretrained on program code) | LIFT/CodeGen (pretrained on program code) | ODC | \n|------------------|-------|------------|------------|------------|\n| Blobs (2) | 96.17±0.59 | 93.39±1.82 | 93.67±0.72 | 25.00 | \n| Two Circles (6) | 75.92±1.65 | 50.08±2.47 | 53.02±0.66 | 50.00 |\n| Iris (61) | 96.67±0.00 | 60.00±8.82 | 43.31±6.67 | 33.33 |\n| Customers (1511) | 85.23±1.61 | 43.11±3.34 | 45.96±8.96 | 68.18 | \n| Wine (187) | 93.52±1.31 | 33.88±3.87 | 77.78±0.00 | 38.89 |\n| LED (40496) | 65.33±0.47 | 23.46±13.85 | 11.00±4.00 | 11.0 |\n\nTo answer the second question, we apply LIFT to models that are pretrained on non-human-language data. CodeParrot (https://huggingface.co/codeparrot) and CodeGen (Nijkamp et al., 2022) are language models pretrained on programming source code. \n\nAs one can see from the table, LIFT applied to CodeGen or CodeParrot does not perform well in most cases. These results imply that LIFT works best with language models that are trained on natural language, not just any data. \n\n*References*: \n* Nijkamp, E., Pang, B., Hayashi, H., Tu, L., Wang, H., Zhou, Y., ... & Xiong, C. (2022). A conversational paradigm for program synthesis. arXiv preprint arXiv:2203.13474.", " (ii) **Updatability via information retrieval**. Another drawback of current ML models is that it is difficult to \"update\" them when distribution shift occurs. Handling such distribution shifts in a compute-efficient manner has recently become one of the most active research areas in the field. Recently, researchers have come up with a very efficient way of updating language models by augmenting them with a retrieval mechanism. With such a retrieval mechanism equipped, a language model can be efficiently updated, since one can simply update its associated database. One can also connect language models with the Internet. See Retrieval-Enhanced Transformer (Retro) (Borgeaud et al., 2022), SPALM (Yogatama et al., 2021), FiD (Izacard et al., 2021), or $EMDR^2$ (Singh et al., 2021) for recent advances. 
arXiv preprint arXiv:2205.11916.\n* Borgeaud, S., Mensch, A., Hoffmann, J., Cai, T., Rutherford, E., Millican, K., ... & Sifre, L. (2022, June). Improving language models by retrieving from trillions of tokens. In International Conference on Machine Learning (pp. 2206-2240). PMLR.\n* Yogatama, D., de Masson d’Autume, C., & Kong, L. (2021). Adaptive semiparametric language models. Transactions of the Association for Computational Linguistics, 9, 362-373.\n* Izacard, G., & Grave, E. (2020). Leveraging passage retrieval with generative models for open domain question answering. arXiv preprint arXiv:2007.01282.\n* Singh, D., Reddy, S., Hamilton, W., Dyer, C., & Yogatama, D. (2021). End-to-end training of multi-document reader and retriever for open-domain question answering. Advances in Neural Information Processing Systems, 34, 25968-25981.\n\n", " | | Prompt or Generated Text | \n|:-----------------:|:-----------:|\n|Raw input|Checking Account: less than 200 DM; Loan Duration: 48 months; Credit History: no credits/paid back duly; Loan Purpose: education; Number of applying for credits: 18424; Savings Account: less than 100 DM; Employment Time: 1~4 years; Installment Rate: 1%; Sex: Female; Other Debtors: No; Property: building society savings agreements/life insurance; Age: 32; Housing: self-owned house; Job: highly qualified employee/ self-employed; Foreigner: no|\n|First prompt (soliciting prediction)| A 32-year-old female is applying for a loan of 18424 credits for 48 months for other purposes. She has a checking account with less than 200 Deutsche Mark and a savings account with less than 100 Deutsche Mark. She had no credits or paid back all credits duly. The installment rate of this applicant in percentage is 1%. She is employed for more than one year but less than four years and is recognized as a highly qualified employee or is self-employed. She has lived in a self-owned house. She owns some building society savings agreements or life insurance. Given such information, will the bank accept or decline this loan application? |\n|First response|accept|\n|Second prompt (soliciting explanation)|A 32-year-old female is applying for a loan of 18424 credits for 48 months for other purposes. She has a checking account with less than 200 Deutsche Mark and a savings account with less than 100 Deutsche Mark. She had no credits or paid back all credits duly. The installment rate of this applicant in percentage is 1%. She is employed for more than one year but less than four years and is recognized as a highly qualified employee or is self-employed. She has lived in a self-owned house. She owns some building society savings agreements or life insurance. Given such information, will the bank accept or decline this loan application? **\\n The bank will accept this loan application. Let’s think logically. This is because**|\n|Second response (random seed 1)|the applicant is employed and is qualified in a high level. Since she is employed, she is able to make all the payments. Furthermore, she has a self-owned house and owns some building society savings agreements or life insurance. She has less than 2000 Deutsche Mark in checking and savings account. She had no credits or paid back all credits duly. She is able to pay the loans. All of these are positive factors to this loan. The bank will accept this loan.\n|Second response (random seed 2)| the applicant has a high income. She has lived in a self-owned house. She has some savings agreements or life insurance. She is a highly qualified employee. 
She has no negative credits. All of these facts show that she is not in danger of not being able to pay back the loan. Therefore, the bank will accept this loan application.|\n|Second response (random seed 3)|the applicant has a checking account with less than 200 Deutsche Mark and savings account with less than 100 Deutsche Mark. Hence she is a low-risk customer. Furthermore, she has no credits or paid back all credits duly. And the installment rate of this applicant is 1%. She is employed for less than four years and is recognized as a highly qualified employee. She has lived in a self-owned house. She owns all the building society savings agreements or life insurance. Hence she is considered a good customer. So the bank will accept her loan application |\n|Second response (random seed 4)| the applicant is employed with a high salary and is economically stable. She is not likely to file for bankruptcy. She can pay back the loan. All in all, we can see that the bank will accept this loan application.|\n|Second response (random seed 5)| the bank will accept all applications with a very low risk of default. There is a very low probability that she will be unemployed in the near future. She can afford to pay off this loan. Furthermore, this applicant has a high annual income.|\n\nNote that these explanations are generated without seeing *any* reasoning examples, i.e., they are based solely on the zero-shot reasoning capability of language models. While these explanations are not even close to perfect, they are still very reasonable. Improving the few-/zero-shot reasoning capabilities of language models is a fast-growing research field, so incorporating new techniques will further improve the explainability of LIFT. Given its inherent compatibility with the zero-shot reasoning capability of language models, we strongly believe that it is worth sharing our LIFT framework with the research community.\n\n", " ## To AC and All Reviewers\n\nWe thank the Reviewers for their insightful feedback and constructive comments and for providing suggestions that would improve our paper. \n\nFirst of all, we are encouraged that the reviewers found that:\n(i) the direction of investigation is interesting (R-evpg, R-GQen, R-iBBJ, R-igVi), novel (R-GQen, R-iBBJ), and potentially impactful (R-iBBJ);\n(ii) the number of evaluated downstream tasks and results is impressive (R-evpg, R-iBBJ, R-igVi), with interesting and non-intuitive findings on different aspects of the set-up (R-evpg, R-igVi), which could help practitioners gain intuitions (R-iBBJ) and lead to some interesting future research (R-evpg); \n(iii) our paper is generally well written and clear (R-evpg, R-iBBJ), with clear tables and figures (R-GQen, R-iBBJ), and details of the appendix, scope of the study, and societal impact (R-iBBJ).\n\nAs for the concerns/questions raised, we believe that we successfully addressed every single one, as detailed in our replies to each reviewer. **We integrated most of the answers and new results in the newly updated version (attached).**\n\nIn particular, we found that there are two major questions raised by the reviewers. To answer them, we ran extensive additional experiments; summarized below are the two most important questions and our responses to them.\n\n**Q1. (paraphrased) Why should one use LIFT? Isn't it too big/slow/expensive compared to small models? 
It supports \"No-code-ML\", but it alone does not seem sufficient to justify its employment in practice.**\n\nWhile we mostly emphasized the \"no-code-ML\" property of LIFT, indeed it has a lot of potentials to be more useful and powerful than many of the current ML models. Particularly, we believe that LIFT can bring a completely novel approach to enable (i) explainability and (ii) updatability via information retrieval. We will incorporate these additional benefits in the final revision.\n\n\n(i) **Explainability**. Most ML models cannot explain their own predictions. While there are specific algorithms developed to enable the explainability of such models, their efficacy is still in question. \n\nOn the other hand, LIFT, which is based on a large pretrained language model, can be made to explain its prediction using its reasoning capabilities (Wei et al., 2022), (Kojima et al., 2022). While we have not explored this aspect of LIFT in the current paper, we provide a small toy example to demonstrate the explainability of LIFT below. \n\nConsider the German-credit dataset, whose goal is to predict whether the bank should approve or decline loan applications. After asking GPT3 if one should approve/decline a loan application via LIFT, one can also ask GPT3 to explain its own prediction result. This can be implemented by making two consecutive inference calls as shown in the following table. Note that we provide five different responses generated with different random seeds.\n\n\n\n\n\n\n", " The paper investiagtes the application of pre-trained language models on the task of classifying non-textual data, without any architecture changes.\nThe input features are linearized into a text-like sequence and given as context, the output is then collected as a language model prediction.\nWhile it doesn'g perform best overall, it does achieve surprisingly competitive performance. The paper is well written and clear.\nThe direction of investigation is interesting.\nThere are many experiments presented, analysing different aspects of the set-up.\nThe findings are interesting and non-intuitive. For example, the model performing well on MNIST by describing each pixel in a textual sequence. \nWhile the setup is clearly not optimal for the tasks, the findings could lead to some interesting future research.\n\nPrevious work is mentioned, which replaces the input and output layers of the LM, whereas the current work keeps entirely the same architecture. \nThe performance of that system should be reported as well. \nAt the moment it is unclear whether keeping the original input or output layers actually helps or hurts the performance on different tasks.\n\nFurthermore, the replacement of the input and output layerst should be investigated separately.\nWhile giving the input as text might actually have some benefits (as experiments in Section 5.2 also indicate), using only language model predictions as output could be hurting performance. Even models that work with natural language generally fine-tune pre-trained language models by replacing the LM output layer with a task-specific version.\n\nAn open question left by the paper is whether these results are down to the architecture (and the large number of parameters) or the fact that the model is pre-trained on large amounts of textual data.\n\nThere should also be a discussion on the difference in the number of parameters.\nAs I understand, all the baseline models are quite simple, with perhaps a few hundred trainable parameters. 
In contrast, GPT-3 has billions of parameters.\nHow much does model capacity affect performance? Are there no baselines with more comparable numbers of parameters that could also have been reported?\nYes.", " This paper fine-tunes pre-trained language models to perform non-language tasks, like regression of functions and classification. They test a large variety of tasks, and test against several standard algorithms (like KNN, SVM, RF). Additionally, they investigate sample efficiency, ability to extrapolate, and robustness of these models. ### Strengths\n\nSection 5.2 is the most interesting part of this paper to me, and to me, this is one of the best motivations for fine-tuning a large language model to perform this task. I haven't seen this idea before, and it's intriguing as a method for understanding the pre-training dataset and also harnessing contextual information. I'd like to see more focus directed here and maybe additional tasks explored or an expansion on experiments in this direction. \n\n\n* I appreciate the summary listed in table 1!\n* I appreciated figure 4 (comparing # training samples vs. accuracy for different classes of tasks).\n\n### Weaknesses\nI don't doubt the results of this work, but after having read the paper, I'm still unsure what advantage we gain by finetuning a pre-trained language model to perform non-language tasks. It currently reads like the paper makes the argument that finetuning pre-trained language models can perform well on non-language tasks. However, these models are orders of magnitude more costly to train and run inference on, so when and why should we prefer them? Could the authors better motivate this?\n\nAdditionally, please explain what non-language tasks are earlier in the paper. Specifically, also say that the paper will test both classification and regression tasks. Please give an example of each in the abstract and also in the intro.\nAlso, the sentence: \"We note that though achieving tremendous success with natural data, deep learning still faces difficulties with standard machine learning tasks, especially on tabular data...\" seems to be at the core of this paper. I would include this idea earlier in the paper (abstract or intro).\n\n\nThis paper covers a lot of ground. I think it could be more focused by moving additional experiments to the appendix (for example, sec 6.2). \n\n### Additional Feedback\n* Suggestion to use \"what should the y be\" as opposed to \"what should be the y\" in the template. Language models are still fairly sensitive to prompts, and wording may change results. \n* Include the sizes of GPT-J (6b) and GPT-3 (175b) in the section on \"pretrained language models\" as a reference for the reader. \n* Figure 4 text should be larger. \n * Also, table 3 should highlight interesting results, as the text is too small to read.\n* It would make the paper easier to read if the authors could include the names of the tasks (in addition to the IDs) in the text to better understand the context of the task. For example, in section 5.2, which DID is the Medical Insurance dataset?\n * Did the authors try other templates? How did they perform? What was the sensitivity of the model performance to different prompts?\n* From figure 4, it looks like this might not work for few-shot prompting. Did you run that experiment?\n* I would include additional information about how adversarial examples were generated. 
Currently the social impacts section says that \"LIFT might have embedded bias targeting certain social groups.\" This seems possible to measure. In particular, for tasks where text is used in the prompt, the authors could compare LIFT with all other models they compare with to see if the examples that are mis-classified are systematically biased in some way. Additionally, this could be an interesting experimental setup to evaluate bias in the pre-trained model.", " This paper studies an interesting and important topic: can we use large pre-trained language models for machine learning tasks where the inputs are unstructured real vectors? Through a series of experiments on real/synthetic classification/regression tasks, the author concludes that\n- fine-tuning LLMs yields performance on par with many traditional algorithms.\n- the LLM is performing Bayesian prediction\n- two-stage fine-tuning improves the performance\n- feature names help the LLM perform better.\n\nStrength:\n- The paper is generally very well-written, with clear figures and a lot of experimental details in the appendix. It also addresses its scope of study and societal impact properly.\n- The topic is interesting, potentially impactful, and novel. \n- There are a lot of experimental results, which could help practitioners gain intuitions.\n\nWeakness:\n- The performance of LIFT is not impressive.\n- Most of the important claims in the paper are not well-supported by the empirical evidence. See questions below.\n\n**I am very willing to increase the score to 8 if the author can either 1) justify why their claims are correct, or 2) re-position the paper as a \"negative empirical finding\", rather than advertising \"LIFT\" as a very promising approach for real-value-input ML tasks.**\n\nminor: \n- Non-language can also refer to image, sound, or haptic information, and I was confused about what non-language task the paper was discussing when reading the title and the abstract. Probably rename \"Non-Language Machine Learning Tasks\" to \"Real-Valued Machine Learning Tasks\" (or something similar). For example, Flamingo (from DeepMind) also uses an LM as an interface, but they used it for image + text.\n- Table 2: why does the paper claim that LIFT performs relatively well compared to other methods, even though in many or even most cases it cannot outperform a 3-layer MLP? It does not seem that the algorithm is successfully making use of the LLM internal representation.\n- Table 3 caption tries to conclude that larger models generally perform better -- but it seems that GPT-3 Davinci is overall worse than GPT-3 Curie, and the performance increase w.r.t. scaling is extremely unclear. \n- Section 5.1 tries to claim that the LLM is performing Bayesian inference. I thought it meant \"the LLM has a prior about what the function looks like and it performs posterior inference based on true labels\". However, the paper only experimented with sampling a lot of times and seeing whether the variance is correlated with the noise level. This only implies that LIFT is calibrated (where prediction logits/samples reflect the confidence), rather than that the LLM is performing Bayesian inference. \n- Table 5 tries to claim that the LLM can successfully use the feature names -- for only 2 out of 3 classification datasets does this seem to hold, and when it holds, the margin is very small (compared to the noise), so it is unclear whether the gains are just noise. In particular, what are datasets 48 and 54, and what is the intuition for why LIFT can help? 
One experiment you could potentially run is to find a dataset where we have a very strong prior notion of what features are good and test the models' behavior on that as well. \nYes, the author successfully addressed their limitations. ", " This paper aims to answer whether pre-trained language models (PLMs) can be used for fine-tuning on non-language downstream tasks without changing the input and output format of the original PLM. To answer the above question, this paper introduces a framework, Language-Interfaced FineTuning (LIFT), which formats non-language classification and regression tasks into natural language-like sentences by predefined templates. By extensive experiments on various datasets, they show that fine-tuning autoregressive PLMs on those downstream tasks can achieve relatively good performance. This paper also presents a series of analyses on the properties of LIFT. Update after rebuttal\n===========\nI have read all the comments by the authors, and they mostly clear up my questions on the experiment settings and formatting issues. While I agree that this paper has contributions to the ML community, I cannot lean strongly toward acceptance due to its massive content. I think the content is more suitable for a journal instead of a 10-page conference paper, since a lot of details are placed in the appendix, making the main content not very self-contained. Also, I still feel like this paper is more of an empirical report, and it is hard to know how LIFT can be used in real-life use cases.\n***\n***\n\n### Strengths\n1. This paper provides some interesting findings on directly fine-tuning PLMs on non-language downstream tasks.\n2. The number of downstream tasks evaluated in this paper is impressive.\n\n### Weaknesses\n(If I am misunderstanding any parts of the paper in the following, please let me know.)\n1. **It is unclear whether the benefit of LIFT is just because the model is gigantic**\n * Since the goal of the paper is to answer \"can fine-tuning PLMs work on non-language downstream tasks\", it should be verified whether the exceptional performance of LIFT is really because *\"LIFT fine-tunes a model **pre-trained on human language**\"*. Chances are the good performance actually stems from *\"LIFT uses a gigantic transformer model with billions of parameters, and it doesn't matter whether it is pre-trained on human language or not.\"* To exclude the latter case, the paper should include an experiment to train a transformer model with the same architecture as GPT-3/GPT-J from scratch on the downstream task, instead of initializing the weights from a PLM. Since the trained-from-scratch model is not pre-trained with human language, if this model indeed fails to perform well on those non-language downstream tasks, we can rule out the hypothesis that \"LIFT performs well simply because it uses a gigantic model\". Next, one should also make sure that it is pre-training on **human language** that makes LIFT successful. For example, one can use artificial datasets to train GPTs and fine-tune them on non-human-language downstream tasks, as in [1], [2], and [3]. If training from scratch and fine-tuning from a model pre-trained with artificial datasets both result in bad non-language downstream performance, one will be able to arrive at the conclusion that \"PLM fine-tuning\" solves non-language downstream tasks. 
\n * According to the paper, it seems that the goal is to achieve \"no-code machine learning with language models\". But I cannot see why that would be beneficial, for two reasons. First, no-code machine learning can be achieved by AutoML (e.g., Vertex AI by Google) without relying on a PLM. It is unclear why one would want to rely on a bulky PLM to achieve no-code machine learning. Second, as illustrated in the experiment results, the performance of LIFT is still largely inferior to other simpler and smaller machine learning models; this will prevent LIFT from being used in real-world cases. It is unclear why one would resort to a larger and not always better model when one can just use smaller and better models (for example, always use logistic regression for tabular data and always use MLP for other classification tasks).\n3. The input length of LIFT is upper-bounded by the maximum sequence length of the PLM (2048 for GPT-J and GPT-3), making it hard to use for high-dimensional data. Moreover, considering that the input of LIFT is not only the raw features but also the words in the templates ($\mathbf{x}_1 = v_1$, ..., what should be the $\mathbf{y}$?), the actual acceptable feature length will be significantly less than 2048. It will be even less if each feature $v_i$ is tokenized to multiple tokens by the model. For example, consider MNIST, which has 784 features per image, and assume that each feature is tokenized to only one token. Then encoding an image using the template $\mathbf{x}_i=v_i$ will require at least $784\times4=3136$ tokens, which is significantly longer than the maximum input length of the GPT models used in the paper. It is unclear how this is handled in the paper. (A worked version of this arithmetic is sketched after this review.)\n4. The memory of LIFT is quadratic in the input length, indicating that LIFT is highly memory-inefficient compared to other models. \n5. This paper tries to include too many things in the main content, making the snippets included in the main content seem fragmented, incomplete, not self-contained, and hard to follow. The following list gives pieces of evidence from the paper to justify the above claim.\n 1. The \"non-deterministic predictions\" that appeared in Line 18 and Line 70 do not appear in the rest of the paper, and it is unclear what \"non-deterministic predictions\" refers to. Is it related to the Bayesian prediction in Section 5.1? \n 2. How is the \"increase the generation randomness\" in Line 137 done? Perhaps by changing the temperature for sampling?\n 3. What do the \"inductive biases of language models\" refer to in Section 4.2?\n 4. What is a median kNN in Line 241?\n 5. The paragraph of \"Robustness to adversarial samples\" in Line 244 includes no experiment results, making the paper not very self-contained.\n 6. It is unclear how the mixup is done in Section 6.2; the details of this experiment are presented neither in the main content nor in the appendix.\n 7. It is unclear how the in-context learning is done in Section 4.3. For example, how is the training/testing split determined? Is there an evaluation set? Where does the variance shown in Table 10 come from?\n6. This paper is poorly formatted, and some of the figures are extremely small for reading.\n 1. 
Ill-formatted parts\n * According to the formatting instructions for NeurIPS 2022, \"the table number and title always appear before the table\"; all table captions in the paper appear **after** the table.\n * The size of the caption is sometimes significantly smaller than expected, for example, in Table 3, Table 5, Table 6, and Table 7. I am not sure whether this violates the guideline of \"not changing the font sizes except for the reference\".\n * The bottom of Figure 7 on page 8 is occupying part of Line 329 on page 9.\n * The figures in Table 3 are extremely small.\n * The legends in Figure 4 and Figure 5 are too small to read when printed on A4 paper.\n * According to the formatting instructions for NeurIPS 2022, \"Place one line space before the table title, one line space after the table title, and one line space after the table\". But the margin of Table 3 is quite odd and seems to violate the instructions.\n\n\n[1] Papadimitriou, Isabel, and Dan Jurafsky. \"Learning Music Helps You Read: Using Transfer to Study Linguistic Structure in Language Models.\" Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 2020.\n\n[2] Chiang, Cheng-Han, and Hung-yi Lee. \"On the transferability of pre-trained language models: A study from artificial datasets.\" arXiv preprint arXiv:2109.03537 (2021).\n\n[3] Ryokan Ri and Yoshimasa Tsuruoka. 2022. Pretraining with Artificial Language: Studying Transferable Knowledge in Language Models. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 7302–7315, Dublin, Ireland. Association for Computational Linguistics.\n\n ### Questions \n1. Do the authors manually change the style file or change the font sizes of the table captions?\n2. The sentence in Line 109 \"For **tabular data**, ... non-language tasks\" seems odd.\n3. During inference, how is the answer generated? By argmaxing or sampling? If it is sampled, what is the sampling scheme? When will the generation end?\n4. I am afraid that I will have to disagree with the claim on Line 219 that \"LIFT/GPT outperforms in-context learning\". This is because, in Table 10, the variances are very large. Considering the variance of both in-context learning and LIFT/GPT, I would rather say that there is no statistically significant difference between the performance of the two methods.\n5. Why can the experiment results in Section 5.1 imply that LIFT \"may\" perform Bayesian inference? \n6. There is an important reference that is missing: Kao and Lee (2021) [4] have already fine-tuned BERT on non-language downstream tasks in the same way as standard fine-tuning on natural language downstream tasks and showed that the performance is decent.\n\n\n[4] Wei-Tsung Kao and Hung-yi Lee. 2021. Is BERT a Cross-Disciplinary Knowledge Learner? A Surprising Finding of Pre-trained Models’ Transferability. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2195–2208, Punta Cana, Dominican Republic. Association for Computational Linguistics. The authors have addressed their limitations and potential social impacts. 
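To make the token-budget arithmetic in Weakness 3 above concrete (flagged there with a forward note), here is a minimal back-of-the-envelope sketch. It assumes the reviewer's estimate of roughly 4 tokens per "x_i = v_i" fragment and the 2048-token context limit cited for GPT-J/GPT-3; no real tokenizer is involved, so the constants are illustrative only.

```python
def lift_prompt_budget(num_features, tokens_per_feature=4, context_limit=2048):
    """Back-of-the-envelope length check for a LIFT-style prompt.

    tokens_per_feature approximates one "x_i = v_i" fragment (index or
    name, '=', value, separator); 4 follows the reviewer's estimate.
    """
    needed = num_features * tokens_per_feature
    return needed, needed <= context_limit

# MNIST: 28 x 28 = 784 pixel features.
needed, fits = lift_prompt_budget(784)
print(needed, fits)  # 3136 False -> exceeds the 2048-token context window
```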
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "h7cWz5dA44Y", "tdXrxhZCzj", "MtKK6QgsXHH", "tGxoeI5btJ", "07YqmvObNK-", "8982ANhJXUt", "x5t4o414sia", "stqW0ZaxOU", "IbS_25gelwj", "pO_xKfI8Dt_", "8n7ke4l1jpT", "aAiGrns56xR", "NXIFhzvcTVw", "oTIK6sgQfiI", "ixShrhTCz73", "nips_2022_s_PJMEGIUfa", "nips_2022_s_PJMEGIUfa", "nips_2022_s_PJMEGIUfa", "nips_2022_s_PJMEGIUfa", "nips_2022_s_PJMEGIUfa" ]
nips_2022_aqLugNVQqRw
Class-Aware Adversarial Transformers for Medical Image Segmentation
Transformers have made remarkable progress towards modeling long-range dependencies within the medical image analysis domain. However, current transformer-based models suffer from several disadvantages: (1) existing methods fail to capture the important features of the images due to the naive tokenization scheme; (2) the models suffer from information loss because they only consider single-scale feature representations; and (3) the segmentation label maps generated by the models are not accurate enough without considering rich semantic contexts and anatomical textures. In this work, we present CASTformer, a novel type of adversarial transformers, for 2D medical image segmentation. First, we take advantage of the pyramid structure to construct multi-scale representations and handle multi-scale variations. We then design a novel class-aware transformer module to better learn the discriminative regions of objects with semantic structures. Lastly, we utilize an adversarial training strategy that boosts segmentation accuracy and correspondingly allows a transformer-based discriminator to capture high-level semantically correlated contents and low-level anatomical features. Our experiments demonstrate that CASTformer dramatically outperforms previous state-of-the-art transformer-based approaches on three benchmarks, obtaining 2.54%-5.88% absolute improvements in Dice over previous models. Further qualitative experiments provide a more detailed picture of the model’s inner workings, shed light on the challenges in improved transparency, and demonstrate that transfer learning can greatly improve performance and reduce the size of medical image datasets in training, making CASTformer a strong starting point for downstream medical image analysis tasks.
Accept
The paper proposes a generative adversarial approach to 2D medical image segmentation. The problem is a standard one for MRI and improvements in this direction can have real-world impact. The reviewers were on the whole positive in their opinions of the paper. They found the design to be well-motivated and the paper to be well-written and easy to follow. The improvement to current methods was significant enough to be a good reason for acceptance. The reviewers generally found the feedback period to be helpful in swaying them in a more positive direction for accepting the paper.
train
[ "TTugiUuBw0d", "AYv8WOKK9MY", "sqrA2i7lj46", "OVMg6d44IA", "_Ov2FEMMg8L", "qshAlPFP8M9", "yqRNFAW7UV", "ys85Fb7fOQ", "WxHiFk6leKy", "PIefzigE9aI", "u4vrvyFOVG_", "45GIIcbQkHl", "FbG4KGpMrQ", "Bm0igmgj4dw" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nI thank the authors for responding to the concerns and questions I have raised.\nI carefully read the rebuttals and some of them convince me.\nI would like to increase my score to weak acceptance based on my evaluation.\n\nThank you for your replies and I have no more concerns.", " We thank the reviewer for acknowledging the positive changes we have made to the paper. We are genuinely happy that our major revision with more explanations and experiments properly addresses fellow reviewers' feedbacks and has made a difference. We thank the reviewer again for the constructive feedback which helps shape this revision!\n", " I appreciate the authors' responses as well as their additional experiments. My concern has been addressed and my overall opinion about the paper remains unchanged. Although the reviewer aCmr expressed concerns about the novelty and contribution of the submission, I believe the study has its own merits and incremental contributions and has the potential to benefit the relevant community.", " Dear Reviewer oVPV:\n\nThanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper!\n\nSince the author-reviewer discussion period will end soon in 2 days, we appreciate it if you take the time to read our rebuttal and give us some feedback. Please don't hesitate to let us know if there are any additional clarifications or experiments that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. If our response resolves your concerns, we kindly ask you to consider raising the rating of our work.\n\nThanks for your time and efforts!\n\nBest,\n\nAuthors of Paper4486\n", " Dear Reviewer aCmr:\n\nThanks again for all of your constructive suggestions, which have helped us improve the quality and clarity of the paper!\n\nSince the author-reviewer discussion period will end soon in 2 days, we appreciate it if you take the time to read our rebuttal and give us some feedback. Please don't hesitate to let us know if there are any additional clarifications or experiments that we can offer, as we would love to convince you of the merits of the paper. We appreciate your suggestions. If our response resolves your concerns, we kindly ask you to consider raising the rating of our work.\n\nThanks for your time and efforts!\n\nBest,\n\nAuthors of Paper4486", " Thanks for your helpful comments! If you have further concerns, please feel free to contact us.\n\n> **Q5**: Motivation of adversarial networks.\n\n**A5**: Medical image semantic segmentation can be formulated as a typical dense prediction problem, which aims at performing pixel-level classification on the feature maps. Despite recent advances in medical image segmentation, it still remains unclear whether it is sufficient to learn both low-level anatomical features and high-level semantics since all label variables are independently predicted from each other. To this end, the motivation of the adversarial network is to reinforce the spatial contiguity between the output label maps and the ground-truth segmentation maps by detecting and correcting the higher-order inconsistencies [2,34]. In this study, we propose a transformer-based discriminator. 
Such designs have the following benefits: (1) it enables the discriminator to model long-range dependencies, allowing it to better assess medical image fidelity; (2) it can prioritize the most informative demonstrations on interesting anatomical regions, and differentiate irrelevant regions (i.e., background) from the category label regions; (3) it essentially endows the model with a more holistic understanding of the anatomical visual modality (categorical features). Moreover, we experimentally demonstrate the necessity of the adversarial network by comparing against the model without it (please see Table 3, Lines 282 - 283). For your convenience, the following Table shows the comparison results of “w/ adversarial network” (CASTformer) and “w/o adversarial network” (CATformer) on the Synapse multi-organ CT dataset.\n\n| Model | Dataset | DSC | Jaccard | 95HD | ASD |\n| :----------- | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |\n| w/o adversarial network | Synapse | 82.17 | 73.22 | **16.20** | **4.28** |\n| w/ adversarial network | Synapse | **82.55** | **74.69** | 22.73 | 5.81 |\n| | | | | | |\n| w/o adversarial network | LiTS | 72.39 | 62.76 | **22.38** | 11.57 |\n| w/ adversarial network | LiTS | **73.82** | **64.91** | 23.35 | **10.16** |\n| | | | | | |\n| w/o adversarial network | MP-MRI | 94.17 | 86.50 | **6.55** | 3.33 |\n| w/ adversarial network | MP-MRI | **94.93** | **87.81** | 8.29 | **3.02** |\n\nAs we can see, our CASTformer w/ adversarial network outperforms the variant w/o adversarial network in terms of DSC and Jaccard, and achieves comparable performance in terms of 95HD and ASD. As shown in Figures 3 (Line 236 - 237), 4 (Line 258 - 259), and 5 (Line 677 - 678), our method is capable of predicting high-quality object segmentation, considering the fact that improvements in such settings are challenging. This demonstrates: (1) the necessity of adaptively focusing on the regions of interest; and (2) the efficacy of semantically correlated information. Moreover, we conduct a thorough analysis of different GAN-based loss functions in Appendix I.\n\nOverall, we greatly improved the clarity of the paper and employed a more appropriate set of experiments in this revision. We hope that the changes will leave this work in better shape for publication. Please feel free to contact us for further concerns.\n", " Thanks for your helpful comments! If you have further concerns, please feel free to contact us.\n\n> **Q5**: Ablation study. As you shown in the analysis, the proposed train processes (such as pre-training on natural image dataset and adversarial loss) seems very effective to improve the performance dramatically. Then, to support the proposed architectural excellence, it is necessary to compare between SwinUnet(or other SOTA architecture) with the proposed training process(adversarial loss+pre-training with computer vision dataset) and the proposed CASTformer. It is necessary to show that the proposed architecture (class-aware Transformer module, Multi-scale feature extraction, etc) is superior to others.\n\n**A5**: Thank you for suggesting the comparison between other SOTA architectures (e.g., SwinUnet) equipped with the proposed training process and the proposed CASTformer. Following your great advice, we conducted the ablation study on the Synapse multi-organ CT dataset. 
The tables below show that our proposed architecture (e.g., the class-aware transformer module and multi-scale feature extraction) is superior to the other state-of-the-art methods on the Synapse multi-organ CT dataset. All the experiments are conducted under the same experimental setting in Section 4. For brevity, we refer to our CATformer and CASTformer with Swin-Unet as the backbone as Swin-CATformer and Swin-CASTformer. We have highlighted the comparison in Appendix Section O in the latest manuscript.\n\n| Method | DSC | Jaccard | 95HD | ASD |\n| :----------- | :-----------: | :-----------: | :-----------: | :-----------: |\n| Swin-CATformer (w/o pre-trained) | 76.82 | 65.44 | 29.58 | 8.58 |\n| Swin-CATformer (w/ pre-trained) | 80.19 | 70.61 | 22.66 | 6.02 |\n| | | | | |\n| Swin-CASTformer (both w/o pre-trained) | 71.67 | 61.08 | 43.01 | 13.21 |\n| Swin-CASTformer (*only* w/ pre-trained $D$) | 76.55 | 64.27 | 34.62 | 12.13 |\n| Swin-CASTformer (*only* w/ pre-trained $G$) | 77.12 | 65.39 | 30.99 | 11.00 |\n| Swin-CASTformer (*both* w/ pre-trained) | 80.49 | 71.19 | 23.94 | 6.91 |\n\n| Model | DSC | Jaccard | 95HD | ASD |\n| :----------- | :-----------: | :-----------: | :-----------: | :-----------: |\n| Baseline (SwinUnet) | 76.33 | 65.64 | 27.16 | 8.32 |\n| Swin-CATformer (w/o Swin-class-aware transformer module) | 77.76 | 68.47 | 25.26 | 7.15 |\n| Swin-CATformer (w/o multi-scale feature extraction) | 78.45 | 78.26 | 24.94 | 7.08 |\n| Swin-CATformer | 80.19 | 70.61 | 22.66 | 6.02 |\n| Swin-CASTformer | 80.49 | 71.19 | 23.94 | 6.91 |\n\nAs we can see, using Swin-Unet as the backbone, the following observations can be drawn: (1) “w/ pre-trained” consistently achieves significant performance gains compared to “w/o pre-trained”, which demonstrates the effectiveness of the pre-training strategy; (2) incorporating the adversarial training can boost the segmentation performance, which suggests the effectiveness of the adversarial training strategy; and (3) our Swin-CASTformer with different modules also achieves consistently improved performance. The results demonstrate the superiority of our proposed method on the medical image segmentation task.\n\n\n> **Q6**: The term ‘class-aware’ is confusing.\n\n**A6**: Thank you for the great suggestion. The name `class-aware' comes from the observation that the progressive sampling component results in pixel samples that concentrate at nearby organ locations, even when they belong to different classes. This is best illustrated in Figure 8 (Line 662 - 663), where the right kidney (cyan) and liver (magenta) are adjacent: we can see that the model learns to attend to both regions. Since the model tends to move to discriminative organ regions instead of the background, we refer to our model as class-aware or organ-aware. In addition, when two organs are adjacent, we observe that the samples near the boundary often move to their corresponding organ instead of crossing the boundary. In this study, we don’t expect the sampling module to yield information for classification, as it might only need to move samples to proper attention areas. However, we totally agree with the reviewer that additional quantitative proof would solidify the term. For example, we could analyze the offsets' behavior along organ boundaries. 
Due to the limited response period and computational resources, we will work on it in future work.\n\nOverall, we’ve made a substantial revision to the paper, which addresses all the issues, with emphasis on the clarity of the explanation of our method and a proper choice of baselines for comparison. We hope that the changes will leave this work in better shape for publication. Please feel free to contact us for further concerns.\n\n\n\n", " Thanks for your helpful comments! If you have further concerns, please feel free to contact us.\n\n> **Q2**: Is there any performance drop using the bilinear interpolation in the feature domain? How much the performance drop for discrete sampling such as nearest neighbor sampling? I have some worries about the existence of continuity in 2D grid features. Please, refer some theoretical background proof or practical evidence for that. \n\n**A2**: Thank you for the great suggestion! We agree with your point that using the bilinear interpolation in the feature space might cause performance drops [8]. But the key motivation for using bilinear interpolation in this study is to build a differentiable sampling mechanism, which enables the backpropagation of the loss not only to the feature maps but also to the sampling grid coordinates [8]. A similar practice is also adopted in recent works [33, 67, 88-90]. On the other hand, the major difference between nearest neighbor sampling and bilinear interpolation is as follows: nearest neighbor sampling first *quantizes* a floating-number sampling location $\textbf{s}_t$ to the discrete granularity of the input feature map $\textbf{F}_t$. Such quantizations introduce misalignments between sampling locations and the extracted feature maps, leading to a large negative effect on pixel-wise prediction tasks [67]. In contrast, bilinear interpolation allows us to remove the harsh quantization by computing the exact floating-number (fractional) values of the sampled location, in order to avoid discontinuities in the sampling function.\n\nWe compare our bilinear interpolation with nearest neighbor sampling on the Synapse multi-organ CT dataset. All the experiments are conducted under the same experimental setting in Section 4.\n\n| Model | DSC | Jaccard | 95HD | ASD |\n| :----------- | :-----------: | :-----------: | :-----------: | :-----------: |\n| CATformer (w/ nearest neighbor) | 79.81 | 69.64 | 28.97 | 8.31 |\n| CATformer (w/ bilinear interpolation) | **82.17** | **73.22** | **16.20** | **4.28** |\n\nAs we can see, adopting bilinear interpolation improves the segmentation accuracy over nearest neighbor sampling by a large margin. This is because bilinear interpolation better preserves localization accuracy, to which segmentation is highly sensitive. This also highlights that proper interpolation is critical to segmentation performance.\n\n> **Q3**: Fair comparison between pre-trained model and from-scratch model.\n\n**A3**: We pre-train the model for 100 epochs and then fine-tune it for 300 epochs, for a total of 400 epochs. As for the from-scratch model, we train the model for 400 epochs. Therefore, it is a fair comparison between the pre-trained model and the from-scratch model on the Synapse multi-organ CT dataset. All the experiments are conducted under the same experimental setting in Section 4. 
The following table shows that using the pre-training strategy provides us with a good set of initial parameters that quickly adapt to new downstream medical segmentation tasks without re-building billions of anatomical representations, and further boosts the performance.\n\n| Model | DSC | Jaccard | 95HD | ASD |\n| :----------- | :-----------: | :-----------: | :-----------: | :-----------: |\n| CATformer (w/o pre-trained) | 74.84 | 65.61 | 31.81 | 7.23 |\n| CATformer (w/ pre-trained) | **82.17** | **73.22** | **16.20** | **4.28** |\n\n> **Q4**: Why is the term ‘generative’ used for your model? Does your model generate images which do not exist in the input image?\n\n**A4**: Thank you! We thank the reviewer for acknowledging the advantage of our proposed method for the improved segmentation performance. Indeed, the images used for adversarial training are pixels from the input image with organs, as opposed to being generated from some latent distribution. We agree with the reviewer and appreciate them for raising the concern about the wording. We have rephrased it in our latest revision.\n\n\n", " We thank the reviewer for acknowledging our contribution to the medical image analysis field, appreciating the dramatic performance improvement on our multi-class medical segmentation task, and providing constructive suggestions for the presentation of our work! We’ve made a substantial revision to the paper, which addresses all the issues, with emphasis on clarity of the explanation of our work. If you have further concerns, please feel free to contact us.\n\n> **Q1**: Comparison between your model and recent SOTA methods.\n\n**A1**: Thank you for the great suggestion! We follow your constructive advice to evaluate these works. Our work is related to DCN (Deformable Convolutional Networks) [88], Deformable DETR (Deformable DETR: Deformable Transformers for End-to-End Object Detection) [89], and DAT (Vision Transformer with Deformable Attention) [90]. However, our goals and motivations are different. In particular, the motivation and the sampling strategy are different from these works [88-90]. Our motivation comes from accurate and reliable clinical diagnosis, which relies on meaningful radiomic features from the correct “region of interest” instead of other irrelevant parts [91-94]. The process of extracting different radiomic features from medical images proceeds in a progressive and adaptive manner [92-93]. We have highlighted the comparison in Appendix Section-N in the latest manuscript.\n\nDCN [88] proposed to learn 2D spatial offsets to enable the CNN-based model to generalize the capability of regular convolutions. Because CNNs only have limited receptive fields compared to Transformers, DCN focuses on local information around a certain point of interest. In contrast, our CATformer/CASTformer take advantage of the Transformer-induced features by leveraging the local attention at the lower layers and the highly non-local (global) attention at the higher layers to formulate powerful representations.\n\nDeformable DETR [89] incorporated the deformable attention to focus on a sparse set of keys (i.e., global keys NOT shared among visual tokens). This is particularly useful for its original experiment setup on object detection. Since there are only a handful of query features corresponding to potential object classes, Deformable DETR learns different attention locations for each class. In contrast, our approach aims at refining the anatomical tokens for medical image segmentation.
To this end, we proposed to iteratively and adaptively focus on the most discriminative regions of interest. This essentially allows us to obtain effective anatomical features from spatially attended regions within the medical images, so as to guide the segmentation of objects or entities.\n\nDAT [90] introduces deformable attention to make use of global information (i.e., global keys shared among visual tokens) by placing a set of supporting points uniformly on the feature maps. In contrast, our approach introduces an iterative and progressive sampling strategy to capture the most discriminative regions and avoid over-partitioning anatomical features.\n\nFollowing your constructive advice, the table below shows the comparison results between DCN, Deformable DETR, DAT, and ours (CATformer/CASTformer) on the Synapse multi-organ CT dataset.\n\n| Model | DSC | Jaccard | 95HD | ASD |\n| :----------- | :-----------: | :-----------: | :-----------: | :-----------: |\n| DCN | 73.19 | 62.81 | 33.46 | 10.22 |\n| Deformable DETR | 79.13 | 66.58 | 30.21 | 8.65 |\n| DAT | 80.34 | 68.15 | 26.14 | 7.76 |\n| CATformer (ours) | 82.17 | 73.22 | **16.20** | **4.28** |\n| CASTformer (ours) | **82.55** | **74.69** | 22.73 | 5.81 |\n\nAs we can see, our approach (i.e., CATformer/CASTformer) can outperform the existing state-of-the-art models, i.e., DCN, Deformable DETR, and DAT.", " We thank the reviewer for acknowledging our contribution and providing suggestions for the presentation of our work! In particular, we agree that comparisons with the reference papers are necessary to position this work properly. If you have further concerns, please feel free to contact us.\n\n> **Q1**: Relevant papers. \n\n**A1**: Thank you for the great suggestion. We agree with your point that the relevant papers share some similarities with our work. We follow your constructive advice to evaluate these works. We have highlighted this in Section “Method”, and have extended two sections (see Appendix L and M) in the latest manuscript to analyze and discuss these works. \n\nWe explore another state-of-the-art backbone proposed by Lin et al. [73], termed Feature Pyramid Network (FPN). FPN utilizes a top-down pyramid with lateral connections to construct a semantically strong multi-scale feature pyramid from a single-scale input. The major differences between FPN and our work are as follows:\n\n(1) The former utilizes a CNN-based decoder, and ours uses an All-MLP-based decoder. In particular, our motivation comes from the observation that the attention of lower layers tends to be local, while that of the higher layers is highly non-local [74]. As the decoder design plays an important role in determining the semantic level of the latent representations [9] and Transformers have larger receptive fields compared to CNNs, how to use large receptive fields to include context information is the key issue [68-70,74]. The key idea is to essentially take advantage of the Transformer-induced features by leveraging the local attention at the lower layers and the highly non-local (global) attention at the higher layers to formulate powerful representations [74]. To this end, we utilize an MLP-based decoder for preserving more contextual information, specifically for medical imaging data, which include more anatomical variances.\n\n(2) We devise the class-aware transformer module to progressively learn interesting anatomical regions correlated with semantic structures of images, so as to guide the segmentation of objects or entities.
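To make the progressive sampling idea concrete, one refinement iteration can be sketched roughly as follows (a simplified PyTorch sketch with assumed tensor shapes, not our exact implementation). Note that the bilinear `grid_sample` keeps the step differentiable with respect to the sampling locations:

```python
import torch
import torch.nn.functional as F

def progressive_sampling_step(feat, locs, offset_head):
    """One iteration: sample features at current locations, then move the
    locations by predicted offsets toward more discriminative regions.

    feat:        (B, C, H, W) feature map
    locs:        (B, N, 2) sampling locations, normalized to [-1, 1]
    offset_head: module mapping sampled tokens (B, N, C) -> offsets (B, N, 2)
    """
    # differentiable bilinear sampling at fractional locations
    tokens = F.grid_sample(feat, locs.unsqueeze(2), mode='bilinear',
                           align_corners=True)        # (B, C, N, 1)
    tokens = tokens.squeeze(-1).transpose(1, 2)        # (B, N, C)
    locs = (locs + offset_head(tokens)).clamp(-1.0, 1.0)
    return tokens, locs
```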
We study the model’s qualitative behavior through the learnable sampling locations inside the class-aware module in Figure 8 (Line 705-706). As indicated, the sampling locations are adaptively adjusted according to the interesting regions.\n\nThe table below shows the results of using an FPN decoder, an MLP-based decoder, and the class-aware transformer (CAT) module, all of which use the same backbone feature extractor (ResNet50), on the Synapse multi-organ CT dataset. All the experiments are conducted under the same experimental setting in Section 4.\n\n| Encoder | Decoder | DSC | Jaccard | 95HD | ASD |\n| :----------- | :-----------: | :-----------: | :-----------: | :-----------: | :-----------: |\n| ResNet50 w/o CAT | FPN | 74.64 | 63.91 | 29.54 | 8.81 |\n| ResNet50 w/ CAT | FPN | 78.11 | 65.63 | 28.06 | 8.08 |\n| ResNet50 w/o CAT | MLP | 80.09 | 70.56 | 25.62 | 7.30 |\n| ResNet50 w/ CAT | MLP | 82.17 | 73.22 | 16.20 | 4.28 |\n\nAs we can see, adopting the MLP-based decoder outperforms the state-of-the-art FPN decoder in terms of DSC, Jaccard, 95HD, and ASD, respectively. Similarly, incorporating the CAT module also consistently improves the segmentation performance by a large margin on the Synapse multi-organ CT dataset. The results prove the robustness of our MLP-based decoder and the effectiveness of our proposed CAT module for medical image segmentation.\n\nTo deal with imbalanced medical image segmentation, Lin et al. [72] proposed the Focal loss, which reshapes the standard cross entropy to address the extreme foreground-background class imbalance by focusing on the hard pixel examples. The table below shows the results for the loss functions. We follow the original paper and set $\gamma = 2$.\n\n| Model | DSC | Jaccard | 95HD | ASD |\n| :----------- | :-----------: | :-----------: | :-----------: | :-----------: | \n| Focal Loss | 82.08 | 73.52 | 16.14 | 4.99 |\n| Dice + Focal Loss | 81.88 | 72.94 | 16.52 | 5.00 |\n| Dice + Cross-Entropy (ours) | 82.17 | 73.22 | 16.20 | 4.28 |\n\nAs we can see, using the Focal loss and using Dice + Cross-Entropy achieve similar performance. Due to the limited response period and computational resources, we will investigate different hyperparameter settings of the Focal loss in future work. We will make corresponding revisions to cite these works and provide detailed comparisons with them.\n\nOverall, thank you again for your suggestions and review! We believe that the papers suggested by the reviewers contain solid contributions and are highly relevant to our work in some respects. We have included the references and discussion of these relevant papers in our latest revision. We hope that the revision puts our work in better shape for publication. Please feel free to contact us for further concerns.", " Thanks for your helpful comments! If you have further concerns, please feel free to contact us.\n\n> **Q1**: Which computer vision dataset is employed?\n\n**A1**: In our experiments, we adopt the parameters pre-trained on ImageNet-21k to initialize our model.\n \n> **Q2**: Is the pre-trained model necessary?\n\n**A2**: In the medical imaging domain, the data are scattered across various hospitals. Thus, it’s rather difficult to construct a large dataset. On the other hand, it has been noticed that transformer-based methods generally need to be pre-trained on large-scale datasets (e.g., ImageNet) to perform well [4,5]. To this end, it is necessary to use the pre-trained weights as a good starting point.
Moreover, we further experimentally demonstrate the necessity of using the pre-trained model compared to the from-scratch model (please see Table 2, Line 282 - 283). We can see that using pre-trained weights to initialize $G$ and $D$ both contributes to the performance gains, which further justifies our parameter initialization scheme. For your convenience, the following table shows the comparison results of the pre-trained model and the from-scratch model on the Synapse multi-organ CT dataset.\n\n| Model | DSC | Jaccard | 95HD | ASD |\n| :----------- | :-----------: | :-----------: | :-----------: | :-----------: |\n| CATformer (w/o pre-trained) | 74.84 | 65.61 | 31.81 | 7.23 |\n| CATformer (w/ pre-trained) | **82.17** | **73.22** | **16.20** | **4.28** |\n| | | | | |\n| CASTformer (*both* w/o pre-trained in $G$ and $D$) | 73.64 | 62.68 | 42.77 | 11.76 |\n| CASTformer (*only* w/ pre-trained in $D$) | 78.87 | 69.36 | 30.54 | 9.17 |\n| CASTformer (*only* w/ pre-trained in $G$) | 81.46 | 71.80 | 27.36 | 6.91 |\n| CASTformer (*both* w/ pre-trained in $G$ and $D$) | **82.55** | **74.69** | **22.73** | **5.81** |\n\nGiven the above ablation study, we observe that using “w/ pre-trained” leads to higher accuracy than “w/o pre-trained”, with significant improvements for smaller datasets, suggesting that using “w/ pre-trained” provides us with a good set of initial parameters for the medical image segmentation tasks. With pre-trained weights, our CATformer outperforms the setting without pre-trained weights by a large margin and achieves $7.33\%$ and $7.61\%$ absolute improvements in terms of Dice and Jaccard. Our CASTformer (“w/ pre-trained”) also yields big improvements ($+8.91\%$ and $+12.01\%$) in Dice and Jaccard. This suggests that the pre-trained model can contribute to satisfactory segmentation performance.\n\n> **Q3**: The comparison of experimental results is unfair.\n\n**A3**: It is worth noting that, in this study, all the other transformer-based methods [7,10,13,56] use the pre-trained weights of the ImageNet-21k dataset. Such practice is commonly used in recent medical image segmentation models, which all utilize ImageNet-21k pre-trained parameters as a starting point. Therefore, it’s a fair comparison with previous works in Tables 1 (Line 236 - 237), 6 (Line 667 - 668), and 7 (Line 671 - 672). \n\n> **Q4**: Limited novelty and contribution.\n\n**A4**: We focus on improving the interpretability of medical segmentation models, which is the key aspect of successful medical image analysis. We make the **first attempt** to build an adversarial training framework using a transformer-based architecture for solving this task, resulting in the **c**lass-**a**ware adver**s**arial **t**rans**former**s (CASTformer), which includes a class-aware transformer module to progressively learn interesting regions correlated with semantic structures of images. The key challenge of learning-based medical image segmentation is to address the issue of interpretability. To this end, we study the model’s qualitative behavior through the learnable sampling locations inside the class-aware module in Figure 8 (Line 705 - 706). As indicated, the sampling locations are adaptively adjusted according to the interesting regions.\n\nTherefore, this work could provide a good basis or starting point for the research of interpretable medical image segmentation.
The good robustness of our proposed model with respect to the pre-trained model and a relatively small medical dataset illustrates the benefits of leveraging pre-trained models from the computer vision domain and provides suggestions for future research that could be less susceptible to the confounding effects of training data from the natural image domain. It also enables our method to potentially be applied to other medical image analysis tasks, such as medical image enhancement (CT/MRI/PET reconstruction) and registration, where labeled medical image data could be much more limited or even unavailable.", " This paper proposed a transformer-based model for medical image segmentation, which adopts a pyramid structure and adversarial training. The proposed method is validated on three benchmark datasets and obtains strong results. Strengths: \n1. the class-aware transformer module is interesting. \n2. the paper is well written, and there is careful experimental analysis conducted.\n\nWeakness:\n1. the novelty and contribution are limited. The key contribution is the class-aware transformer module, a revised transformer, to learn the class-aware context. Pyramid structure and adversarial training are common approaches used in medical image analysis.\n2. The motivation is unclear. Why is the adversarial network needed in this model?\n3. the comparison of experimental results is unfair. The proposed model is equipped with the newly-added CAT and GAN, which makes it a bigger model than others. Even the pre-trained model is compared with other models. As indicated in this paper, it is a model pre-trained on public computer vision datasets. In the experiments, which computer vision dataset is employed? For the medical image segmentation task, is the pre-trained model necessary? see weakness", " The authors introduce an effective generative adversarial transformer called CASTformer for 2D medical image segmentation. The main idea behind the design principle is to integrate the multi-scale pyramid structure to capture rich global spatial information as well as local multi-scale context information. The suggested class-aware transformer module enables CASTformer to discover useful aspects of objects incrementally and selectively. The generator-discriminator design is used to improve segmentation performance, enabling the transformer-based discriminator to capture both low-level anatomical information and high-level semantics. The proposed generative adversarial transformer called CASTformer for 2D medical image segmentation sounds interesting and useful. Both quantitative and qualitative results show the effectiveness of the proposed model. A number of ablations are performed to show that the suggested mechanisms are worth considering.\n\nThe authors provide clear justification for their work. This is a well-written manuscript and easy to follow indeed. The supplementary improves the overall attractiveness of the manuscript as well. The suggested mechanism is straightforward but effective. The authors validate their claims through extensive experiments and significant performance gains.\n\nThe study leverages a pyramid structure to construct multi-scale representations and handle multi-scale variations. Some of the ideas here may be familiar to readers of the following papers [1, 2]. If these studies have any relevance to the topic at hand, it would be great if the authors would highlight it.\n\n[1] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie.
Feature pyramid networks for object detection. In CVPR, 2017.\n\n[2] T.-Y. Lin, P. Goyal, R. Girshick, K. He and P. Dollár, \"Focal Loss for Dense Object Detection,\" in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 42, no. 2, pp. 318-327, 1 Feb. 2020, doi: 10.1109/TPAMI.2018.2858826. - Could you please check the reference papers and comment on them with respect to your work? Due to the nature of the work, as per the authors, the research will not pose significant risks of harm to society. Rather, the study has the potential to positively contribute to a number of real-world clinical applications.", " This paper proposed a Transformer-based segmentation model for medical images called CASTformer. Compared to the other segmentation algorithms, it utilizes 1) the multi-scale approach with a CNN+Transformer hybrid architecture, 2) the class-aware transformer module for a progressive sampling strategy, and 3) the adversarial training scheme. Also, the model pre-trained on a computer vision dataset helps to start their training from a good starting point. The results show that CASTformer improves the segmentation performance impressively. - (significance) CASTformer shows a dramatic improvement of segmentation performance with a large gap over the other algorithms. The authors proposed better training processes that utilize the adversarial loss and a computer vision dataset for pre-training to improve the performance of the segmentation task for the (relatively) limited medical dataset.\n\n- (originality & quality) Some readers might claim that they combined many concepts that already exist in the computer vision field, such as multi-scale feature extraction, feature sampling, adversarial loss, and transfer learning. However, it is important that they made them work and reached a dramatic performance improvement on their task.\n\n- (clarity) They provide various ablation studies and comparisons with many deep network algorithms. However, there is some lack of clarity, and I ask those questions in the following. - Please refer to and explain the differences between your model and the deformable sampling CNNs/Transformers: DCN (Deformable Convolutional Networks), DAT (Vision Transformer with Deformable Attention), Deformable DETR (Deformable DETR: Deformable Transformers for End-to-End Object Detection), etc. They use the same concept of feature sampling with an irregular grid. The comparison results with the other deformable sampling modules and the reason why yours works better might be helpful for the readers.\n\n- Is there any performance drop using the bilinear interpolation in the feature domain? How much does the performance drop for discrete sampling such as nearest neighbor sampling? I have some worries about the existence of continuity in 2D grid features. Please refer to some theoretical proof or practical evidence for that. \n\n- Fair comparison between the pre-trained model and the from-scratch model.\nI wonder whether the epoch numbers for training the from-pretrained model and the from-scratch model are the same or not. \n\n- Ablation study.\nAs shown in the analysis, the proposed training processes (such as pre-training on a natural image dataset and the adversarial loss) seem very effective in improving the performance dramatically. Then, to support the proposed architectural excellence, it is necessary to compare SwinUnet (or another SOTA architecture) trained with the proposed training process (adversarial loss + pre-training on a computer vision dataset) against the proposed CASTformer.
It is necessary to show that the proposed architecture (class-aware Transformer module, multi-scale feature extraction, etc.) is superior to others.\n \n- Why is the term ‘generative’ used for your model? Does your model generate images which do not exist in the input image? The proposed method utilizes a discriminator to improve the segmentation performance with an adversarial auxiliary task for ROI reconstruction, not the generation of parts that are not in the input. The outputs are reconstructions of the attended regions of the input image, not generations. In my opinion, the term ‘generative’ should be used at a minimum to maximize the persuasive power of your claim, because quite a large portion of people in real clinics in the medical field have a lot of worries about using generative models for medical imaging. \n\n\n- The term ‘class-aware’ is confusing. \nSome readers in the computer vision or machine learning field might expect that the proposed ‘class-aware’ module dynamically/adaptively utilizes the class information, as in StyleGAN. However, the authors used the term ‘class-aware’ for the module which samples discriminative locations by iteratively estimating the sampling offsets based on the current input features. \nThe authors might say the class-aware Transformer module samples different points according to the type of organ. However, does the class-aware module have the ability to distinguish kidney/liver/spleen? I think the sampling module does not distinguish the type of organ. It focuses on whether the current feature pixel is related/connected to the other feature pixels or not according to the global information. \nTo support your claim, the authors should provide experimental support that adding a linear layer on top of the proposed class-aware module with frozen weights has some ability to classify the organ type.\n
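For instance, a minimal version of such a linear-probe experiment might look like the following sketch (PyTorch; `cat_module`, `feat_dim`, `num_organs`, and `probe_loader` are hypothetical names, not from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

cat_module.eval()                        # the trained class-aware module
for p in cat_module.parameters():
    p.requires_grad = False              # freeze its weights

probe = nn.Linear(feat_dim, num_organs)  # single linear layer on top
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

for feats, organ_labels in probe_loader:
    with torch.no_grad():
        tokens = cat_module(feats)       # (B, N, feat_dim) sampled tokens
    logits = probe(tokens.mean(dim=1))   # pool tokens, then classify
    loss = F.cross_entropy(logits, organ_labels)
    opt.zero_grad(); loss.backward(); opt.step()
```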
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 5 ]
[ "OVMg6d44IA", "sqrA2i7lj46", "PIefzigE9aI", "Bm0igmgj4dw", "45GIIcbQkHl", "u4vrvyFOVG_", "ys85Fb7fOQ", "WxHiFk6leKy", "Bm0igmgj4dw", "FbG4KGpMrQ", "45GIIcbQkHl", "nips_2022_aqLugNVQqRw", "nips_2022_aqLugNVQqRw", "nips_2022_aqLugNVQqRw" ]
nips_2022_p4xLHcTLRwh
SALSA: Attacking Lattice Cryptography with Transformers
Currently deployed public-key cryptosystems will be vulnerable to attacks by full-scale quantum computers. Consequently, "quantum resistant" cryptosystems are in high demand, and lattice-based cryptosystems, based on a hard problem known as Learning With Errors (LWE), have emerged as strong contenders for standardization. In this work, we train transformers to perform modular arithmetic and mix half-trained models and statistical cryptanalysis techniques to propose SALSA: a machine learning attack on LWE-based cryptographic schemes. SALSA can fully recover secrets for small-to-mid size LWE instances with sparse binary secrets, and may scale to attack real world LWE-based cryptosystems.
Accept
The authors propose SALSA: a machine learning attack on cryptographic schemes based on Learning With Errors (LWE) as the underlying hard problem. They show that SALSA recovers secrets of small and medium size LWE instances with sparse binary secrets. The main selling point is that if the attack could scale up, it may pose a real threat to real-world LWE-based cryptosystems. The reviewers found the use of transformers to perform modular arithmetic interesting. At the same time, I agree with the reviewers that the current computational complexity of the attack and its poor scalability in the dimension of the lattice could potentially prevent its application to real-world systems. As such, despite the merit of the paper, I find it to be a borderline submission.
train
[ "forYYc9TBq", "MqZhiJd1St", "Do1MYIaVEao", "53_rj-Q1_uE", "2_f2VmaBThl", "R__DxQYS5t", "r09HBhTDzA4", "1DzGu1REf-k", "d7xBDCaFugg", "cGzR-4LYfB5", "_VYZ-fH3v5M" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are happy to add the number of possible secrets as a baseline for Table 2.\n\nFor Table 4, we believe the number of possible secrets for a given $n$/density would be the most appropriate baseline, since our experiments in Table 4 restrict the range of values in $\mathbf{a}$ but not the number of possible secrets. As a preview, for the first line of Table 4, the $\log_2$ number of possible secrets is $29$ ($\binom{50}{8}$), while for the last line, the $\log_2$ number of secrets is $41$ ($\binom{50}{15}$). We appreciate this suggestion. Adding this baseline will further highlight the improvement in SALSA’s performance when $\mathbf{a}$ values are restricted and strengthen our argument that finding better ways of teaching modular arithmetic to models would enable SALSA to scale. ", " Thank you! I've read through the response and am happy with the comments.", " Thank you for your remarks. Even if the Hamming weight isn't given to the algorithm, I think the results in Table 2 should be put in perspective by mentioning the number of possible secrets.\n\nRe Table 4, can the authors think of a baseline to compare the performance?", " We thank the reviewer for this provocative question and helpful feedback.\n\n__Limitations of gradient descent for solving LWE__: We have read the paper by Shalev-Shwartz, Shamir, and Shammah (SSSS henceforth), and we will add a discussion of it in the camera-ready version. Meanwhile, we would like to make the following observations. \n1. The LWE problem amounts to (discrete) linear regression on an $n$-dimensional torus with radius $q$. When $q$ is infinite, LWE reduces to pure linear regression, and gradient methods will succeed. For finite values of $q$ the gradients are informative, unless the model's prediction for one coordinate is on the opposite side of the torus from the true solution (e.g. $q$=100, true value is 0, prediction is 50). At this point, the projection of the gradient along the coordinate axis is uninformative, since either direction on the torus points towards the true solution. For large $q$, this situation is very uncommon, and so gradient methods should work. For very small $q$, this will happen more often, and gradient methods will be perturbed. In the degenerate case $q$=2, gradient methods will always fail, as you observe. \n\t For the $q$=251 used in SALSA, gradient methods can recover secrets (i.e. solve the modular linear regression problem) for $n$ up to 128 so far. For fixed values of $q$ and $\sigma$, we do not think the situation should be worse for larger $n$ or Hamming weight. In other words, we do not believe that, apart from the specific case of very small $q$ ($q$<10), LWE belongs to the “failing gradient” class of problems described in SSSS. We are inspired by your comments to test SALSA on larger $q$ values. \n\t Interestingly, we see this intuition about the size of $q$ also in the classical lattice reduction approach to solving LWE. [Laine-Lauter](https://eprint.iacr.org/2015/176) use LLL to show concrete polynomial-time attacks against large $q$ and explore the boundary of how small $q$ can be before these attacks start to fail. The intuition there is that when $q$ is large, LLL directly finds a vector which is “small enough” compared to $q$ to break the system. When $q$ is small it cannot find a small enough vector.
This is further explored in [Chen-Chua-Lauter-Song](https://eprint.iacr.org/2020/539.pdf), which gives concrete running times for generalizations of LLL and finds the border for the size of $q$ where these attacks start to fail.\n2. While it is true that small $q$ reduces the information content of the gradient, this is aggravated in SSSS because they focus on SGD with batch size 1. By using larger batches (at least 128 examples), we average gradients over several examples, thus reducing the noise. The use of the Adam optimizer probably helps as well, since it averages each gradient step with previous steps. We had noticed, at the beginning of this research, that Adam outperformed simpler schemes like SGD, and we believe your comment provides the explanation. Thank you again. We will mention this in the paper. \n3. The Flat Activation problem mentioned in section 5 of SSSS was one of the main reasons why ReLU became popular. We only use ReLU activation. The layer normalization performed in the transformer also helps avoid vanishing gradients.\n\n__Comment about plateau being exponential in length $n$__: We find this comment interesting, but experimentally, we have observed that the plateau length appears to scale more with Hamming weight than dimension. For example, the plateau is much longer for an $n=30$, Hamming weight 5 secret than for an $n=110$, Hamming weight 3 secret. This reinforces our intuition that Hamming weight, not problem dimension, is the key obstacle to scaling SALSA (see Section 5.4). ", " Thank you for your insightful feedback. Our responses to your questions and comments are below. \n\n__Measuring the train/test overlap__: We randomly generate RLWE samples during training and testing (line 211). There are $q^n$ possible RLWE samples for a given problem setting. For an example to have a significant probability of appearing twice, we would need to generate on the order of $q^{n/2}$ examples (this is an instance of the [birthday attack](https://en.wikipedia.org/wiki/Birthday_attack)). Since our training and test sets never exceed a few billion examples, the probability of overlap is negligible.\n\n__Hamming weight 5 secret, scaling secret density__: SALSA has recovered Hamming weight 5 secrets for $n=30$, but these results were infrequent and thus not included in the paper. Since the submission, SALSA has also solved $n=80$, Hamming weight 4 problems. We agree that secret density is the main scaling challenge for SALSA, and we believe Table 4 provides strong intuition for how we can scale density. Table 4 shows that SALSA can recover secrets with much higher density when values in input $\mathbf{a}$ have restricted range (i.e. < $p \cdot q$). Our interpretation of this result is that higher density secrets are more difficult because they cause $b$ values to “wrap around” the modulus more times. When $\mathbf{a}$ values are restricted, the $b$ values “wrap” less for higher density secrets, enabling high density recoveries (Table 4). Since submitting this paper, we have been experimenting with methods that account for this wraparound behavior during learning, and have recovered secrets with higher Hamming weight (up to Hamming weight 10 for $n=20$). \n\n__Can SALSA handle higher Hamming weight and dimension simultaneously?__ Scaling $n$ and Hamming weight together is a key goal for future work based on SALSA. Our plan for scaling Hamming weight is outlined above, while our plan for scaling $n$ is mainly focused on increasing encoder/decoder dimensions.
This is motivated by results in Tables 3, 16, and 17, which indicate that increasing the encoder/decoder dimension would allow SALSA to handle higher dimensions. We do not think these two directions (changing the learning algorithm to account for modular arithmetic and scaling the model architecture) are incompatible, and we are working to unify them.\n\n__Training examples seen multiple times?__ As mentioned previously, we randomly generate training examples, so the probability that the same example appears twice is negligible so long as our training set has fewer than $q^{n/2}$ examples. As discussed in Section 6 (lines 305-317), we experiment with sample reuse, in which samples are reused up to $K$ times before being discarded, and find that this significantly improves SALSA’s efficiency (Figure 4).\n\n__Relation to SVP sieving problem:__ Indeed, recovering the secret vector given many LWE samples amounts to recovering the shortest non-zero vector in a certain lattice, which we show can be done with machine learning via SALSA. Alternatively, in a well-known existing approach, the uSVP lattice attack, the key idea is to encode the information of the secret vector into the description of a lattice, and to use (exponential time) sieving algorithms to find the shortest vector in that lattice. With SALSA, we don’t need to sieve in the lattice to find the secret; we are using a different approach (model training plus designing a distinguisher which uses the trained model). So it’s unclear how the SVP sieving algorithm can play a role in SALSA, and we see it as a different way of solving the same problem. We would appreciate any further clarification on the question.\n\n__Modular arithmetic not well-connected to other results:__ We apologize if the presentation is confusing: Section 3 shows that we can achieve modular multiplication with high accuracy, which we need to make SALSA work. However, Section 3 also shows that in the multi-dimensional analogue, we do not directly achieve high accuracy. So we are forced to come up with a different approach: designing a distinguisher which can use the half-trained (low accuracy) model for the multi-dimensional multiplication plus addition problem (i.e. inner product of vectors) to recover the secret. The modular arithmetic results themselves are novel and so we presented them separately first, and they are also a core concept and a key ingredient for SALSA (multidimensional modular arithmetic). Prior work on training ML to do modular arithmetic did not succeed [50,51], so we thought it important to showcase our experiments which successfully performed modular arithmetic using ML models. We can strengthen the connections between this part of the paper and SALSA in the camera-ready version.\n\n__Non-monotonicity in Table 2:__ We believe the decrease in samples required and runtime for $n > 90$ vs. $n=90$ is due to the increased architecture size used for the $n > 90$ experiments (see caption of Table 2). ", " While we fully acknowledge that SALSA cannot yet attack real-world cryptographic implementations, our current results remain significant because they *open a new field of machine learning-based cryptanalysis of post-quantum cryptography (PQC) algorithms*. This is the first work to succeed in demonstrating an ML-based attack against lattice-based cryptosystems, specifically the underlying LWE hardness assumption.
A major hurdle to overcome was showing that ML models can be trained to do modular arithmetic (see our Section 3), which is a trivial task for number theorists. \n\nNow that we have demonstrated the possibility of training ML models to do modular arithmetic (previously thought not possible in earlier work – see [50,51] in our paper), we use this as a building block to design and explore attacks on lattice-based systems. Given that LWE has recently been selected by NIST as its PQC standard, it is very important that the community investigate all angles of attack on this problem. We have shared our work in this direction so far with the NIST PQC team. It is widely acknowledged that cryptanalysis of LWE methods is a critical problem, and our method introduces a new approach which will open a new line of research in the area. \n\n__Runtime of secret recovery corresponds to the number of possible secrets?__\nThe running time of SALSA does not depend on the Hamming weight (e.g., the value 3 is never given as a parameter), so you cannot make a direct runtime comparison to the number of possible secrets for any fixed Hamming weight. We show a wide range of experiments, including one set achieving much higher Hamming weight in a restricted setting (see Table 4, with recoveries up to Hamming weight 15). Interestingly, we observe that the number of samples used for training is relatively flat as the dimension increases from 30 to 128 (Table 2), which is radically different from the case for lattice reduction algorithms. This in itself is motivation for further study in this direction.\n\n__Other points re: SALSA’s runtime and comparison to exhaustive search__:\nOverall, we do not have enough information yet to accurately quantify SALSA’s runtime. Our current attack is largely unoptimized. We train a transformer from scratch for each attack, so many LWE samples are “wasted” in teaching the model what tokens to expect, etc. Pretraining transformers to use in our attacks may significantly reduce SALSA’s runtime and sample requirements. Additionally, we are conservative about memory use (e.g. we use small models and only a few GPUs per training run), which likely increases runtime. Overall, much more work is required before we feel confident making a definitive statement about SALSA’s secret recovery runtime and how it scales.\n\nIndeed, the exhaustive search approach you reference is always an option and may run faster than our current experiments for low Hamming weight. However, an exhaustive search requires knowing the exact Hamming weight in advance, or iterating through all possible Hamming weights. Even though we have only succeeded unconditionally with low Hamming weight examples so far, SALSA does not a priori know the secret Hamming weight.\n\nAdditionally, our results suggest that SALSA’s difficulty with higher Hamming weights comes not from the increase in the dimension or the number of possible keys (as is the case for exhaustive search), but rather from the fact that modular arithmetic becomes more difficult for models to learn as Hamming weight increases, as discussed in Section 5.4. Once we achieve desirable accuracy on testing samples, our secret recovery algorithms run in time *linear with respect to the dimension*, regardless of Hamming weight.
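For concreteness, the secret-space sizes discussed in this thread are easy to reproduce (plain Python; the counts below match the $\binom{128}{3}$ figure quoted for Table 2 and the $\log_2$ baselines quoted for Table 4):

```python
from math import comb, log2

# Table 2, last instance: n = 128 binary secret with Hamming weight 3
print(comb(128, 3))               # 341376 possible secrets
# Table 4 baselines: n = 50 secrets at the reported densities
print(round(log2(comb(50, 8))))   # ~29 bits (first row)
print(round(log2(comb(50, 15))))  # ~41 bits (last row)
```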
In conclusion, we agree that Hamming weight is a limiting factor for SALSA which we are continuing to improve, but it doesn’t necessarily make the problem exponentially hard for SALSA, as it does in the case of exhaustive search.\n\n__Comparison to Darmstadt challenges__:\nIn our text, we say that SALSA’s results are comparable to a version of the Darmstadt challenges with “sparse, binary secrets” (lines 369-370). In other words, we do not compare SALSA’s performance to the Darmstadt challenges themselves. We will edit that sentence to clarify that we are attacking lattice dimensions in the same range as the Darmstadt challenges, but that we restrict to the easier case of sparse binary secrets. The point we are trying to make in referencing Darmstadt and other LWE-based homomorphic encryption (HE) methods is that the dimensions and the densities we consider are connected to real-world problems, even though not when combined. If we were breaking any outstanding Darmstadt challenges or HE schemes, we would say so.", " Thank you for your feedback. Our responses to your questions and comments are below. \n\n__Runtime in Table 2__: Table 2 contains the total time to secret recovery for each parameter setting. This includes the runtime of model training and secret recovery, which is run during the validation step of each training epoch. \n\n__Compare to baseline methods__: Our methods are so different from other LWE attacks that it would be a significant engineering effort to convert these to run on our cluster. Beyond this, in a head-to-head time comparison, existing lattice attacks would win for some of the parameter choices, as mentioned in Section 6 (for runtimes of other lattice attacks, see reference [20] in our paper). However, we are not claiming that SALSA outperforms existing methods. Instead, it represents a novel approach to a longstanding cryptographic problem, with promising initial results and the potential to scale. \n\n__Ethics and broader impact__: We can add a discussion of ethics and broader impact to Section 8 containing the following points. We will first discuss the value of our work in alerting the cryptographic and ML communities to the risk of ML-based attacks on PQC. Even if current attacks do not succeed, providing early warning of potential threats is an important contribution. Second, we will emphasize that SALSA represents a proof of concept that cannot be used against real-world implementations (i.e. the PQC schemes which NIST standardized on July 5) and note that additional scaling work will be necessary before these techniques will be relevant to attacking real-world cryptosystems. Finally, we will disclose that we sent a copy of our paper to the NIST PQC standardization group before submitting. In their response, they shared that they value this type of work, since it can indicate potential attack directions. ", " - The authors propose SALSA, a machine-learning attack on LWE-based cryptographic schemes, which have emerged as quantum-resistant cryptosystems.\n- The authors demonstrate that the process of modular arithmetic can be learned using a transformer.\n- Using the trained transformer, the authors show that the instances from LWE can be distinguished from random instances.\n- The authors propose two secret recovery algorithms, `direct secret recovery` and `distinguisher secret recovery`, and they justify these algorithms theoretically.
# Strengths\n- The idea to perform modular arithmetic using a transformer seems interesting and novel.\n- The method can recover the secret even though the trained transformer has low accuracy.\n- Tables in the appendix are very helpful. For example, Table 20 and Table 21 help me understand the broad effectiveness of the method.\n\n# Weaknesses\n- The algorithms require large computational resources and long runtime.\n- Scalability of the method with respect to the dimension of the lattice seems poor. - Does the runtime in Table 2 contain the training time? If not, I want to see the training time of the transformers used in the paper.\n- If the authors can run other baseline methods on their machine, I want to see the comparison of the runtime required for SALSA and the baseline methods on the same device. \n\nPOST REBUTTAL COMMENTS: The authors answer the questions. My concerns have been addressed and I decide to maintain my score. That said, I wish that the authors include an ethics or broader-impact statement in the paper or appendix.", " The paper proposes a machine learning approach to modular arithmetic and to breaking LWE-based cryptosystems based on said arithmetic.\n The solved instances are trivial. For example, the last instance in Table 2 corresponds to finding a 128-bit secret of which all but three bits are zero. There are only $\binom{128}{3} = 341,376$ such secrets. I suspect this number of secrets can be tried in much less than the 23 hours it took this work. I think it's not appropriate to say this is comparable to the Darmstadt challenge, where the secret is chosen uniformly at random from the whole domain, that is, $n$ values modulo $q$, where $n \in [40,120]$ and $q$ is several thousand. The density of 0.002 is used with $n$ more than 10,000, leading to a much, much larger number of possibilities.\n\nThe authors only show figures for reducing density with increasing dimension. This limits the possibilities for the secret. I think it's therefore overblown to say that the solution \"may scale to attack real-world LWE-based cryptosystems\".\n Do you agree that the running time of secret recovery roughly corresponds to the number of possible secrets?\n yes", " The paper performs cryptanalysis of the LWE problem using transformer models. They show that transformers can learn modular arithmetic. This justifies that, in principle, a transformer can learn the mapping from LWE inputs to outputs. They then show two ML-based cryptanalysis strategies which involve learning the LWE function and using this function to recover the secret. They evaluate on moderately-sized LWE problems. Strengths:\n\nThe paper is a pleasant read. It includes a good amount of problem-specific background.\n\nSALSA is a novel attack on LWE. Regardless of whether SALSA in its current incarnation will be able to scale to larger problems, I think this is an interesting path for further investigation. SALSA also appears to have some principled justification.\n\nThere is a healthy amount of ablation and understanding of how SALSA performs at different parameter settings. This is important for any new application of ML, and especially for a cryptanalysis technique.\n\nWeaknesses:\n\nThe modular arithmetic results are not very well connected to the LWE results. My understanding of models' struggles with arithmetic is more that they're bad at extrapolating beyond the arithmetic they see in training.
Modular arithmetic doesn't really require extrapolation, which might make this less interesting a result.\n\nSALSA is not as interpretable as existing cryptanalysis techniques. That is, while it may be able to argue that a certain parameter setting is breakable, it is difficult to extrapolate how well SALSA will work on more real-world problems. Indeed, there is already some strange non-monotonicity in SALSA's performance: from n=90 -> n=110 (in Table 2), SALSA requires fewer samples and less running time.\n\nIt is unclear whether SALSA can simultaneously handle higher Hamming weights and higher dimension. The strategy relies on being able to make predictions on these a_i vectors given random LWE samples. Could you attempt to measure the train/test overlap here? What is the closest vector in the (expanded) training set to the a_i vectors?\n\nDoes SALSA ever manage to extract a Hamming weight 5 secret? I see Hamming weight 4 secrets are possible if n < 70. Can you argue a bit more why this small Hamming weight limitation seen in the experiments is not inherent to the approach?\n\nThe training data expansion technique seems to bear some resemblance to sieving-based SVP algorithms. Could you comment on this? Could it be helping?\n\nSince an epoch consists of 300000 examples (rather than a full pass through the training set), are there any examples seen multiple times? I think the authors discuss limitations with SALSA as it exists, but I think they are more optimistic about SALSA than is backed up by their experiments.", " This paper tackles LWE, a so-called *hard learning problem*, by training transformer models. The general idea is to feed a model with tuples of data/labels that are instances of an LWE distribution. If the model learns something, that means that it has implicitly guessed the secret variable $s$. The authors then propose two ways to *extract* the secret $s$ from a model, in order to solve LWE.\n\nAfter first demonstrating the efficiency of transformer models on modular arithmetic, the authors apply their technique on a special instance of LWE, namely with low-density binary secrets, in dimensionalities not yet close to cryptographic use, but still far from trivial. Based on these experimental outcomes, the authors argue to what extent the technique could scale to more realistic instances of LWE for cryptographic purposes. Except for a few typos, the paper is well-written and easy to follow. The authors concisely introduce all the cryptographic material necessary to grasp the LWE problem. The principles of the attack are intuitive to follow. The parameters of the problem are clearly stated and discussed. The only drawback I could notice is that it does not introduce the transformer model, nor its variants that are yet mentioned and discussed throughout the paper. I guess that most of the target audience is familiar with such models, but I wish it had not been assumed.\nOverall, I enjoyed reading this paper. Moreover, the contribution seems original enough to me, and to the best of my knowledge, I am not aware of missing references close to this work.\n\nMy main concern about this paper covers the significance of the results. The authors deal with some parameters of the problem (the dimensionality $n$, the density of the secret, the modulus $q$) that are not yet cryptographic standards, although this is honestly and clearly stated by the authors. And still, the parameters remain far from trivial.
\nYet, the latter point sets the ground for the authors to argue to what extent their work could be extended to higher parameters in future work. Whether this extension is credible determines the significance of the paper. And so far, I am not fully convinced that the difficulty only results from the intrinsic hardness of the LWE problem (as hypothesized l.272). I elaborate hereafter.\n\nI am aware of some works [1,2] explaining and quantifying the length of the initial plateau in the loss function in Figure 3. For example, it is known that the *learning parity* problem (i.e. for $q=2$, and $\sigma=0$) can be efficiently solved thanks to Gaussian elimination, but cannot be efficiently solved with gradient descent, regardless of the underlying model (i.e., whether using transformers or not). Indeed, the initial plateau may have a length exponential in the parameter $n$ (removing the sparsity assumption). I assume that with low noise levels such as those considered in the paper, and with higher values of $q$, the same argument might apply. If so, it would be provably hard to solve (some instances of) LWE with gradient descent, whereas those LWE instances would not be cryptographically *hard*.\n\n[1] https://proceedings.neurips.cc/paper/2020/hash/e7e8f8e5982b3298c8addedf6811d500-Abstract.html\n\n[2] https://proceedings.mlr.press/v70/shalev-shwartz17a.html Any thought regarding my comment on the limitation of gradient descent? Do you think that it formally applies to your problem? Regardless of your answer, I think it would be valuable to add a discussion about that.\n No big limitation, except maybe the one I raised in the strengths/weaknesses.
[ -1, -1, -1, -1, -1, -1, -1, 7, 3, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 3, 3 ]
[ "Do1MYIaVEao", "2_f2VmaBThl", "R__DxQYS5t", "_VYZ-fH3v5M", "cGzR-4LYfB5", "d7xBDCaFugg", "1DzGu1REf-k", "nips_2022_p4xLHcTLRwh", "nips_2022_p4xLHcTLRwh", "nips_2022_p4xLHcTLRwh", "nips_2022_p4xLHcTLRwh" ]
nips_2022_huT1G2dtSr
Robust Imitation via Mirror Descent Inverse Reinforcement Learning
Recently, adversarial imitation learning has shown a scalable reward acquisition method for inverse reinforcement learning (IRL) problems. However, estimated reward signals often become uncertain and fail to train a reliable statistical model since the existing methods tend to solve hard optimization problems directly. Inspired by a first-order optimization method called mirror descent, this paper proposes to predict a sequence of reward functions, which are iterative solutions for a constrained convex problem. IRL solutions derived by mirror descent are tolerant to the uncertainty incurred by target density estimation since the amount of reward learning is regulated with respect to local geometric constraints. We prove that the proposed mirror descent update rule ensures robust minimization of a Bregman divergence in terms of a rigorous regret bound of $\mathcal{O}(1/T)$ for step sizes $\{\eta_t\}_{t=1}^{T}$. Our IRL method was applied on top of an adversarial framework, and it outperformed existing adversarial methods in an extensive suite of benchmarks.
Accept
This work proposes imitation learning via the route of mirror descent inverse RL. Mirror descent is a well-understood optimization algorithm, and framing IRL via it is a good theoretical exercise. Using an expert to help schedule learning is a novel theoretical contribution in the context of adversarial imitation learning. This directly guides the design of the approach. The current concern is that the experimental results are not statistically significant, and even though the theoretical properties of mirror descent are nice to potentially leverage, they are not coming through strongly yet. One suggestion is to drastically increase the number of random seeds (say 25) and report 2*standard error instead of standard deviation, especially when comparing to RAIRL. The promising innovation is the idea of multiple discriminators which can better account for distribution shift. The authors are encouraged to bolster the experiments with this in mind and frame this as the central point of the work, with the theory of MD as supporting evidence. The writing of the paper can also be improved, and the idea of estimating experts for a curriculum can be highlighted better, as this is a significant contribution and is currently a bit buried in the text. A significant refactoring of Section 4 and a running example that connects Fig 2 and 3 to the algorithm in Section 5 will greatly help. Lines 130-151 are not the main contribution and can be moved to the appendix or cut. This is also mainly an imitation learning paper and not an IRL paper, as the reviewers have noted. While the naming follows the convention of other IL papers like GAIL and RAIRL, it can be a bit misleading. Perhaps the authors can reconsider the name.
train
[ "oWZ5U2fDJri", "QcWr32wiGG3", "bFOeN0qjuR", "aFcxAnSYa1f", "TvGtDv5kW9", "EZ__lX6cylc", "gAaa1K94fV-", "3qUVzyq3xRy", "pqwWbhvJAId", "TM4-bftkWIc", "I7Hgr3Y1oL", "y2QhDF3YA3w", "BqRHebuJYdH", "e9JDF8MnZp3", "kKdmrpsVyPT", "zqPT3-RfnTGE", "xvHvoKEKEV5c", "GgjNxHw0lqY", "nRy16duSBny", "GM5MhZ5GFJ", "Y7Q3c7LIPIB", "pN2lz8SZ53U", "wwC2puPOSNZ", "Xr6sWyeWRad", "BeVQi2AXyQs", "2Oza7PzS0OK", "gJn2URmrtbt", "zM6dbJ9kE4", "66n29AexUI-I", "tr-TsEEOSl3", "n9cTLm5YHK", "FJKnNBsE06o", "icz7sDf18xZ", "CidadnfoeNt", "jqnc-rarpx-" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your insightful comments and discussion.\n\n\nSincerely, Authors.", " Thanks for the clarifications. I will take these into account when discussing the paper with the other reviewers. I don't have any additional questions at this point.", " We are very grateful that our response cleared many of your concerns. We will do our best to answer further questions until the end of the discussion period.\n\n**The question is whether AIL (i.e., MD-AIRL with an L2 divergence) has those same nice properties. For example, does the analysis fail for the L2 divergence? Or, does MD-AIRL have faster convergence rates than AIL? Said in other words, does the theory give us any reason to prefer MD-AIRL over AIL?**\n\nThank you for the insightful question. We now understand your point better. We would like to answer this question and explain our claims in more detail with the following points.\n\n1. **MD-AIRL with an L2 divergence.** MD-AIRL with the L2 divergence is possible, and the analyses also hold in the Euclidean space. As we mentioned in Discussion #1, it is difficult to estimate the L2 divergence.\n2. **AIL with a good learning rate scheduling mechanism (both a policy and a discriminator).** \n \n We think this is what you are referring to. We think this is a good idea, but there are a few concerns.\n \n - **Discriminator**\n - **Convergence.** The neural network parameters will converge to a certain point.\n - **Policy**\n - **Convergence.** The neural network parameters will converge to a certain point since the discriminator converges. The policy $\pi_\theta$ finds the most suitable parameter $\theta^\ast$.\n - **Is $\pi_{\theta^\ast}$ similar to $\pi_E$?** While $\theta^\ast$ would be the best solution that incorporates the historical sequence of $\theta_t$ where $t\in[1,\infty)$ (according to our discussion), the relationship between $\pi_{\theta^\ast}$ and $\pi_E$ is unknown due to the nonlinearity of neural networks. The Euclidean distance $\lVert \theta_1-\theta_2\rVert_2$ is not proportional to $\lVert \pi_{\theta_1} - \pi_{\theta_2}\rVert_2$ (the matrix norms induced by the vector 2-norm).\n3. **MD-AIRL with statistical divergences (the proposed model).**\n - **Policy**\n - **Convergence.** The neural network parameters will converge to a certain point since the IRL reward function converges. We claimed that the policy $\pi_\theta\in\Pi$ finds the most suitable policy $\pi_\ast \in \Pi$.\n - **Is $\pi_{\ast}$ similar to $\pi_E$?** We claimed that $\pi_\ast$ would be similar to $\pi_E$ (or the best solution in $\Pi$) in terms of the given Bregman divergence.\n4. **Reason to prefer MD-AIRL over AIL?** \n - $\pi_E\in\Pi$ **and the data are sufficient.** In this ideal situation, both algorithms are suitable.\n - **The expert parameterization is unknown, or the data are insufficient.** Our theoretical analyses are essential to solving this real-world problem. We claimed that MD-AIRL is suitable because (1) it is convergent and (2) the convergence guarantees the best imitation learning performance for $\Pi$ under the regularized MDP assumption.\n\n**Do the baselines use the same architectures?**\n\nYes, the RAIRL-DBM model [1] shares the same reward architecture of $\psi_\phi$, and it also uses an additional neural network; hence, the numbers of neural network parameters are also equal (also, BC does not have a reward architecture).
This architecture setting was intentional; hence, the differences in the experimental results are caused by different learning mechanisms. \n\n**Standard deviation.**\n\nThank you for the instructive comment. We are pretty sure that the evaluation is correct; we respectfully explain our intention behind the evaluation `StdDev([[AverageReturn(pi, i) for pi in pi_each_seed_list] for i in last_1e5_steps])`. Let us consider the evaluation matrix:\n| | pi_1 | pi_2 | … | pi_5 |\n| --- | --- | --- | --- | --- |\n| i=0 | AvgRet(pi_1, 0) | AvgRet(pi_2, 0) | | AvgRet(pi_5, 0) |\n| i=5000 | AvgRet(pi_1, 5000) | AvgRet(pi_2, 5000) | | AvgRet(pi_5, 5000) |\n| … | … | | | … |\n| i=100000 | AvgRet(pi_1, 100000) | AvgRet(pi_2, 100000) | … | AvgRet(pi_5, 100000) |\n\nWe believe that each element of the table can be considered as an independent trial since **(1)** the numbers are evaluated at the end phase of learning, **(2)** there is no correlation between policies trained with different random seeds, and **(3)** a considerable number of steps (5,000) was put between consecutive evaluations. \n\nAgain, thank you for your great participation in this discussion.\n\nSincerely,\n\nAuthors.\n\n[1] Wonseok Jeon, Chen-Yang Su, Paul Barde, Thang Doan, Derek Nowrouzezahrai, and Joelle Pineau. Regularized inverse reinforcement learning. In 9th International Conference on Learning Representations, 2021.", " Thanks for continuing the discussion! This has been helpful for me to get an even better understanding of the paper. Thanks for responding to all of the questions. Unless mentioned below, the answers have resolved my concerns.\n\n> Are there guarantees that AIL fails to be a \"good\" online learning algorithm in this same sense?\n\nI think I might not be asking this question in the same way. At a high level, the theoretical results of the paper seem to say that MD-AIRL has certain nice properties. The question is whether AIL (i.e., MD-AIRL with an L2 divergence) has those same nice properties. For example, does the analysis fail for the L2 divergence? Or, does MD-AIRL have faster convergence rates than AIL? Said in other words, does the theory give us any reason to prefer MD-AIRL over AIL?\n\n> architectures\n\nThanks for the pointer! I had somehow missed this in reading the paper. Do the baselines use the same architectures?\n\n> standard deviations\n\nI'm not sure `StdDev([[AverageReturn(pi, i) for pi in pi_each_seed_list] for i in last_1e5_steps])` is correct. Written in terms of checkpoints, I think it should be `StdDev([Average([AverageReturn(pi, i) for i in last_1e5_steps]) for pi in pi_each_seed_list])`. \n", " … **but it seems like historical averaging could be applied to either standard AIL updates or MD-AIRL updates**\n\nWe respectfully point out that MD works much more meaningfully in policy optimization. For example, suppose a parameterized Gaussian policy $\pi_\phi$ and its neural network function $f_\phi$ that outputs the parameters of a multivariate Gaussian policy, i.e., $f_\phi(s) = (\mu_\phi(s), \Sigma_\phi(s))$, such that $a \sim \mathcal{N}(\mu_\phi(s), \Sigma_\phi(s))$. 
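\n\nFor concreteness, here is a minimal sketch of such a policy head; the layer sizes and variable names are hypothetical and only illustrate the parameterization (for brevity it uses a diagonal covariance, whereas our actual implementation parameterizes a full covariance via an LDL decomposition, as described later in this thread):\n\n```python\nimport tensorflow as tf\n\nstate_dim, action_dim = 11, 3  # hypothetical sizes\n\n# f_phi maps a state s to the Gaussian parameters (mu_phi(s), Sigma_phi(s)).\ninputs = tf.keras.Input(shape=(state_dim,))\nhidden = tf.keras.layers.Dense(64, activation=\"tanh\")(inputs)\nmu = tf.keras.layers.Dense(action_dim)(hidden)         # mean mu_phi(s)\nlog_sigma = tf.keras.layers.Dense(action_dim)(hidden)  # log stddevs of a diagonal Sigma_phi(s)\nf_phi = tf.keras.Model(inputs, [mu, log_sigma])\n\ndef sample_action(s):\n    # Draws a ~ N(mu_phi(s), Sigma_phi(s)) via the reparameterization trick.\n    mu_s, log_sigma_s = f_phi(s)\n    eps = tf.random.normal(tf.shape(mu_s))\n    return mu_s + tf.exp(log_sigma_s) * eps\n```\n\n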
Suppose $\\phi_t$ as the neural network parameters of $f_\\phi$ at the iteration $t$.\n\n- From the example in Discussion #2, we assumed that the expression $\\lim_{t\\to\\infty}\\mathbb{E}[\\nabla\\Omega(\\pi_{\\phi_t}(\\cdot\\vert s))]$ resembles a mapping of Guassian mixture (by consistently changing its mode).\n- **Historical averaging on $(\\phi_t)_{t=1}^\\infty$.**  The technique favors the “average” of historical neural network parameters. It stabilizes the learning process, but analyzing its meaning in the probabilistic space is difficult.\n- **MD of a policy $\\pi_\\theta$ for the target sequence $(\\pi_{\\phi_t})^\\infty_{t=1}$.** The MD algorithm finds the best policy $\\pi_\\theta$ that is closest to $\\nabla\\Omega^\\ast(\\lim_{t\\to\\infty}\\mathbb{E}[\\nabla\\Omega(\\pi_{\\phi_t}(\\cdot\\vert s))])$, where the Bregman divergence is used measuring distance, e.g. forward KL divergence when $\\Omega$ is the negative Shannon entropy. Therefore, we claim that MD brings pleasing outcomes regarding statistical divergence induced by $\\Omega$.\n\n**Where are the architectures discussed on page 9? The word \"architecture\" only seems to appear on page 6, but the architectures aren't defined there.**\n\nWe respectfully remind you that we referred to **Fig. 9** on page 25 in the appendix (Supplementary Material: [pdf](https://openreview.net/attachment?id=huT1G2dtSr&name=supplementary_material)), which illustrates detailed reward architecture of **Eq. (15)**.\n\n**Quick clarification about the \"stochastic policies\" -- the standard deviation reported in these tables is** `StdDev([AverageReturn(pi) for pi in pi_each_seed_list])`,not `StdDev(Return(pi))`**, right?**\n\nSpecifically, **we report the results scores on the last $10^5$ steps with five different seeds.** Therefore, the standard deviation is calculated from `StdDev([[AverageReturn(pi, i) for pi in pi_each_seed_list] for i in last_1e5_steps])` with `last_1e5_steps = [0, 5000, ..., 100000]`. \n\n`AverageReturn(pi, i)` indicates the average return score (five episodes) of `pi` at the i-th step starting from the final step. Compared to deterministic policies, the environmental scores of a stochastic policy tend to have high variance by its nature.\n\n**Thanks for clarifying this point! I agree that algorithms can be analyzed at different levels of analysis, and that improving our understanding of any level of analysis can be useful.**\n\nWe are very glad that this important concern is clarified.\n\nSince this probably could be the final response considering the situation that the discussion period ends soon, we would like to thank you once again for the constructive feedback and discussion. Overall, we will reflect your comments in the final submission of our paper to improve clarity.\n\nSincerely yours,\n\nAuthors.", " As authors, we really appreciate your time and effort spent reviewing, especially your excellent participation during this discussion period. This has been a genuinely great experience that we have been able to discuss this submission actively and address many of the reviewers’ concerns. We will improve our manuscript based the discussion and your comments for the final submission.\n\n**My rough understanding is that $D_\\Omega(\\pi\\Vert\\pi^\\prime)$ is defined as a divergence between between $\\psi_\\pi$ ($r_\\pi$) and $\\psi_{\\pi^\\prime}$ ($r_{\\pi^\\prime}$). Is this correct?**\n\nYes, this is correct. 
The core idea stems from **Lemma 1**: a stochastic policy function $\pi\in\Pi$ and an associated reward function $\psi_\pi \coloneqq \Psi_\Omega(\pi)$ (from **Definition 1**) have a “one-to-one relationship” (the regularizer $\Omega$ determines this correspondence). In our setting, the policy space and the reward space are connected and jointly optimized to solve the imitation learning problem. This is a novel perspective in the IRL domain.\n\n- From this perspective, two mappings are presented in **Fig. 1:** $\Psi_\Omega$ and $\nabla\Omega^\ast$.\n- Similarly, $\pi_\phi$ can be used in **Eq. (14)** since the regularized reward function $\psi_\phi$ is analytically drawn in a closed-form expression using the shared parameter $\phi$.\n- Inspired by previous studies of regularized IRL, the detailed computation methods for the mappings are implemented and addressed in the **appendices** and [Official Response to Reviewer 4DdJ (1/2)](https://openreview.net/forum?id=huT1G2dtSr&noteId=BeVQi2AXyQs).\n\nConsequently, let us suppose that there are two pairs of functions $(\pi, \psi_{\pi})$ and $(\pi^\prime, \psi_{\pi^\prime})$. We respectfully point out that the two Bregman divergences $D_\Omega(\pi\Vert\pi^\prime)$ and $D_{\Omega^\ast}(\psi_{\pi^\prime}\Vert\psi_\pi)$ are equivalent, and this can be proved straightforwardly for strongly convex $\Omega$. In the paper, we mostly focus on the use of the Bregman divergence in the (primal) policy space ($D_\Omega(\cdot \Vert \cdot)$) due to its simplicity for derivations and computations.\n\n**Are there guarantees that AIL fails to be a \"good\" online learning algorithm in this same sense?**\n\nIn **Lines 31-45**, we pointed out the two drawbacks of AIL based on our observations: \n\n> **L37.** We claim that there are two issues leading to unconstrained policy updates: (1) a statistical divergence often cannot be accurately obtained for challenging problems and (2) an immediate divergence between agent and expert densities does not guarantee unbiased learning directions.\n> \n\nAs a result, we were able to develop a novel online learning algorithm that utilizes AIL and improves AIL’s overall robustness for imitation learning. Based on the minimax formulation of AIL and our experimental results, it is likely that there are certain situations in which AIL might fail in terms of regrets, and we pointed out three technical issues of AIL during the discussion (mode collapsing, non-convergence, inappropriate architecture). We respectfully remind you that a complete theoretical understanding of AIL is beyond the scope of our paper and remains future work.\n\n**If I understand correctly, this same line of reasoning would apply to any ML paper that applies SGD with a fixed step size.**\n\nWe respectfully point out that adversarial learning and inverse reinforcement learning should be regarded as exceptions to this statement. For both problems, the uncertainty of the intermediate models (discriminators and IRL reward functions) governs and hinders the convergence of the primary learning models (generators and policies), and vice versa. To solve these issues, we applied the MD-based algorithm to improve the robustness of an IRL reward function, and it brought pleasing outcomes to the policy optimization in terms of imitation learning performance.\n\n**I'm still a bit confused here. 
… but this seems like a known fact about SGD, rather than a surprising, emergent property of mirror descent.**\n\nWe agree, since SGD can be considered an instance of online MD algorithms (**L96**). However, we respectfully point out that the theoretical contributions of our work can be better evaluated as an imitation learning study. Our analyses are useful for generalizing the statistical divergences that are actively used in the RL and IRL domains. At the same time, our approach has multiple contributions to IRL-related problems, such as:\n\n- we proposed the fundamental relationship between a policy and a reward function in the imitation learning problem.\n- we proposed the RL/IRL methodology as a combined optimization process that theoretically guarantees the robustness of learning in sequential decisions.\n- we proposed a novel formalization of the online imitation learning problem and a practical algorithm to solve this problem.", " Thanks to the authors for continuing the discussion! This is useful for helping me get an even better understanding of the paper and its contributions.\n\n> My understanding of the method was that it regularized the policy updates, not the reward/discriminator updates (Eq. 7).\n\nI'm still a bit confused on this point, and would recommend revising the paper to clarify this. Based on this discussion, my rough understanding is that $D_\Omega(\pi \| \pi')$ is defined as a divergence between $r_\pi$ and $r_{\pi'}$. Is this correct? Part of the confusion here stems from the fact that $D_\Omega$ is defined on L106 without reference to a reward function, and the reward function $r$ defined in the preliminaries does not appear in either Eq 7 or 14.\n\n> The claim seems to imply that MD-AIRL does not do adversarial learning.\n\nOK, I think this makes sense: it's OK that the discriminator learning is noisy because MD-AIRL is an online learning algorithm, and hence can handle noise in the \"labels.\" Are there guarantees that AIL fails to be a \"good\" online learning algorithm in this same sense?\n\n> The concern with AIL methods is that they won't converge because of the adversarial learning dynamics? Would it be possible to elaborate on how the proposed method avoids this?\n\nI'm still a bit confused here. The argument that MD-AIRL converges because of decreasing step sizes seems a bit odd because it would be straightforward to apply Robbins-Monro step sizes to AIL. If I understand correctly, this same line of reasoning would apply to any machine learning paper that applies SGD with a fixed step size. Yes, the decreasing step sizes are needed to guarantee convergence, but this seems like a known fact about SGD, rather than a surprising, emergent property of mirror descent.\n\n> To make sure I understand, the idea is that a poor choice of reward architecture will cause AIL to fail, but that the proposed method automatically suggests how to choose the reward architecture?\n\nWhere are the architectures discussed on page 9? The word \"architecture\" only seems to appear on page 6, but the architectures aren't defined there.\n\n> I don't entirely follow why regularizing the discriminator updates would make it allocate its capacity better.\n\nTo make sure I understand, the argument here is that the regularized discriminator updates are similar to historical averaging? I'm not sure I fully understand this argument. 
I agree that historical averaging is necessary to guarantee convergence in two-player games, but it seems like historical averaging could be applied to either standard AIL updates or MD-AIRL updates.\n\n> I'm a bit confused here, because I interpreted Table B1 as saying that the gains were not statistically significant.\n\nQuick clarification about the \"stochastic policies\" -- the standard deviation reported in these tables is `StdDev([AverageReturn(pi) for pi in pi_each_seed_list])`, not `StdDev(Return(pi))`, right? I.e., this is measuring the variance across training runs, not the variance across different rollouts of the same policy.\n\n> there were two levels of “optimization” problem during the discussion\n\nThanks for clarifying this point! I agree that algorithms can be analyzed at different levels of analysis, and that improving our understanding of any level of analysis can be useful.\n\n", " Again, we thank you for your time spent reviewing and outstanding work on this discussion.\n\n**Regarding the necessity of RL:** We do not strongly argue for the necessity of RL **in theory**. Instead, we respectfully emphasize that RL is chosen **in practice** for the following reasons.\n\n- The derivation of **Eq. (7)** is under the assumptions that the expectation $\mathbb{E}_\pi$ is fully achieved and all states in $\mathcal{S}$ are taken into consideration. In this case, the closed-form solution exactly recovers the next MD iteration based on our theory.\n- However, **Eq. (14)** is quite different in practice. It indicates that $\psi_\phi$ is trained with finite stochastic trajectory samples ($s\sim\bar{\tau}$), where the expert trajectories are genuinely limited. Also, since the IRL reward function is actually implemented with neural networks, numerical errors and some overestimations might accumulate in sequential decisions.\n- From the perspective of IRL, RL is (1) an efficient trajectory sampling method for making training trajectory samples seamless and (2) a practical on-policy algorithm that fine-tunes the IRL solution by optimizing with its value measures.\n\n**Regarding Fig. 1:** We agree that the schematic illustration of MD in **Fig. 1** does not fully depict the technical details of MD-AIRL. As we mentioned, we are working on a new figure that provides a detailed illustration for **Section 6**.\n\nOverall, we will reflect your comments in the final submission of our paper to improve clarity.\n\nBest Regards,\n\nAuthors", " Thank you for the reply.\n\nRegarding Fig. 1: From what I understand, neither of the two paths is a very clear illustration of the proposed method, since the solid line misses the \"detour\" of estimating the policy between $\psi_t$ and $\psi_{t+1}$. \n\nRegarding the necessity of RL:\nWhile I totally understand the practical necessity, I still do not see how the RL step is necessary due to the derivations. For Gaussian policies and Shannon-entropy regularization, we can solve Eq. 7 and Eq. 14 in closed form, without requiring any samples. And the Max-ent optimal policy that maximizes the expected reward given by $\log \pi(a|s)$ would actually be $\pi(a|s)$ again (due to reward shaping). So at least for this setting (Gaussian, Max-Ent) the reinforcement learning only seems to become necessary due to the additional state reward, which was introduced \"in the middle of the derivations\" and is not derived from the original problem.\n\n", " We thank you for the time spent reviewing the paper and for the discussion. 
We are glad that most of your concerns have been addressed. Again, we highly appreciate your suggestions for new experiments, as these provide further insights for our paper.\n\nBest, Authors.", " **I don't entirely follow why regularizing the discriminator updates would make it allocate its capacity better.**\n\nThe argument “a regularized reward function **trained with MD** shows better allocation of its capacity” can be better understood in the context of classical GAN studies for dealing with mode collapsing, especially the **historical averaging** technique proposed by Salimans et al. [1]: a model deviating from its time average is penalized for improving stability.\n\nWe would like to explain our claim by providing the following simple, intuitive example:\n\n- For an iteration $t\in\mathbb{N}$, let $\pi_t\in\Pi$ be a stateless Gaussian agent policy.\n- Let the expert policy $\pi_E$ be a mixture of $K$ Gaussians, i.e., $\pi_E = \frac{1}{K}\sum_{k=1}^K\pi_{E}^{(k)}$ where $\pi_E^{(k)} = \mathcal{N}(\mu^{(k)}, \Sigma^{(k)})$. Suppose an estimation process $\bar{\pi}_{E,t} \in \Pi$ does not converge due to its limited capacity and perturbs largely among local solutions.\n- Suppose $\bar{\pi}_{E,t} = \bar{\pi}_E^{(\kappa_t)}$ where $D_\Omega(\bar{\pi}_E^{(k)}\Vert\pi_E^{(k)}) \approx 0$ for $k\in\{1,\dots, K\}$.\n- Following our analyses in **Section 5**, the distribution $\pi_t$ trained with MD will converge to $\pi_\ast\in\Pi$ such that $\nabla\Omega(\pi_\ast) = \lim_{t\to\infty}\mathbb{E}[\nabla\Omega(\bar{\pi}_{E,t})]$.\n- Since $\nabla\Omega$ forms an isomorphism, we claim that $\pi_t$ will cover most of the modes that the estimation process $\bar{\pi}_{E,t}$ produces over a long time series (e.g., by having a relatively high entropy), which is similar to the underlying idea of the **historical averaging** techniques by Salimans et al. [1].\n\nWe respectfully highlight that one of the important theoretical results in **Section 5** is that the regret defined in **Eq. (11)** is bounded by $\mathcal{O}(1/T)$. Therefore, our claim is that our MD updates optimally allocate their limited capacity, embracing the discriminator's error in adversarial learning, which is also verified in our experiments.\n\n**I'm a bit confused here, because I interpreted Table B1 as saying that the gains were not statistically significant.**\n\nWe respectfully highlight the following points.\n\n- For the following reasons, the standard deviations in Table B1 might appear to be relatively high compared to other popular metrics (losses) in machine learning.\n 1. In the MuJoCo locomotion tasks, the scores indicate the physical distance a robot travels by moving its joints over a long period (≤ 1,000 steps). Each environment has a stopping condition, so the trajectory length can be inconsistent.\n 2. We used stochastic policies for all algorithms. 
The actions are stochastically sampled for each step, even for evaluation.\n \n Therefore, we would like to point out that consistently showing better performance than various AIL algorithms on average is still a pleasing result that aligns with our claim.\n \n- Since RAIRL is the most compatible algorithm in terms of IRL architecture and regularization choice, the extensive comparative experimental results with RAIRL for various $\Omega$ in the main paper should be taken into account for statistical significance.\n\n**The \"Euclidean\" update I was referring to was the unregularized policy update typically performed by GAIL/AIL/etc.**\n\nThank you for the detailed comment. We now understand this point more clearly. As you noticed, there were two levels of “optimization” problem during the discussion. Therefore, we would like to summarize our method from both perspectives below.\n\n- **Optimization for “high-level” probability distributions: the main contributions are mostly at this level.**\n - AIL models can be seen as solving *unconstrained* optimization for their discriminators. We claimed that this does not deal with some challenging situations in imitation learning.\n - We propose MD-AIRL, which performs *constrained* MD updates, utilizing adversarial learning as a close but unreliable estimation process of the target densities.\n- **Optimization for “low-level” neural networks. MD-AIRL also uses (Euclidean) gradients with an optimizer (ADAM) in its training. We do not specifically argue for or against training using gradient-based learning algorithms in neural networks.**\n - **MD-AIRL.** The neural network is optimized with Euclidean gradients using an optimizer, but the proximal local target of learning is the MD update at the high-level probabilistic perspective, which makes the policy learning more robust.\n - **AIL.** Even if the policy is updated with a small learning rate, we claim that optimizing AIL rewards might have practical issues, and these signals can be unreliable depending on the problems.\n\n[1] Tim Salimans, Ian J. Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. Improved techniques for training GANs. *CoRR*, abs/1606.03498, 2016. URL http://arxiv.org/abs/1606.03498.", " Again, we thank you very much for your reply and outstanding work on this discussion. We would like to answer your questions in detail; apparently, there have been some misunderstandings of our claims. We address your comments and questions below.\n\n**My understanding of the method was that it regularized the policy updates, not the reward/discriminator updates (Eq. 7).** \n\n1. This is an important comment that has to be clearly addressed first. We clarify that **Eq. (7)** describes the **reward learning** mechanism of MD-AIRL that is connected to **Eq. (14)**.\n - In our paper, the updates in **Eq. (7)** define the learning objective for the **reward function** in $\Psi_{\Omega}(\Pi)$, as stated in **Lines 180-181.**\n \n > … the MD update for the subsequent **reward function** $\psi_{t+1}=\Psi_{\Omega}(\pi_{t+1})$ is derived by solving a problem …\n > \n - Note that an alternative expression with the equivalent meaning can be used: $\eta_t D_{\Omega^\ast}(\bar{\psi}^s_{E,t}\Vert \psi^s) + (1-\eta_t) D_{\Omega^\ast}(\psi^s_t\Vert \psi^s)$, where $\Omega^\ast$ is the Legendre-Fenchel transform of $\Omega$. We omitted this expression since it might be redundant.\n2. 
The function $\\psi_{t+1}\\in\\Psi_\\Omega(\\Pi)$ outputs rewards that a regularized RL algorithm with the reward function $\\psi_{t+1}$ converges to the next MD step. \n3. Since the regularized reward function $\\psi_{t+1}$ typically trained via offline trajectory data in practice (IRL), the current policy $\\pi_t$ is trained by an RL algorithm with sufficient environmental interactions, yielding a policy of the next MD step $\\pi_{t+1}$. Fig. 1 illustrates this entire process. \n\n**The claim seems to imply that MD-AIRL does not do adversarial learning. Is this correct?**\n\nThank you for the question. To put it simply, MD-AIRL **does do** adversarial learning, but the non-convergence of adversarial learning **does not** affect the convergence of MD. To explain this statement, we would like to highlight that the theoretical framework of MD-AIRL does not specify the estimation algorithm as stated in **Lines 160-163**.\n\n> That is, it is fundamentally uncertain to model global objectives which are not attainable for both RL and IRL. Instead, we hypothesize on existence of a random process where each estimation $\\bar{\\pi}_{E,t}$ resides in a closed, convex neighborhood of $\\pi_E$, generated by an arbitrary estimation algorithm.\n> \n\nTherefore, we respectfully emphasize the following points.\n\n1. **Agnostic to the estimation process.** Since the online learning hypothesis only requires an erroneous estimation target of $\\pi_E$ at each iteration, the estimation process can be arbitrary, e.g., the maximum likelihood estimation algorithm in the toy experiment in **Fig. 2**. In the other experiments, we usually consider the logistic regression of AIL discriminators as an instance of such processes, mainly due to its great popularity in the IRL domain and the fairness of comparison experiments.\n2. **Learning a specific action (mostly) depends on MD.** In MD-AIRL, we have two neural networks trained by logistic regression of adversarial learning: $\\pi_\\nu$ and $d_\\xi$. For $\\pi_\\nu$, its only usage is in the MD updates of $\\psi_\\phi$ in Eq. (14), so $\\pi_\\nu$ does not affect the actual learning of the policy updates of RL. For $d_\\xi$, it reduces the covariance shift on *states* when state visitation is misaligned. Therefore, adversarial learning only has indirect effects on $\\pi_\\theta$.\n\n**The concern with AIL methods is that they won't converge because of the adversarial learning dynamics? Would it be possible to elaborate on how the proposed method avoids this?** \n\nYes, this is related to the general arguments in **Lines 185-187**. As illustrated in **Fig. 3**, we claimed that $\\pi_t$ finds a certain form of convergence $\\pi_\\ast$ even if the estimation process $\\bar{\\pi}_{E,t}$ is not convergent (depicted as the red region). This is because the constrained IRL updates are scheduled with the step size conditions in **Eqs (9)** and **(10)**, which are based on our theoretical analyses. Therefore, we respectfully point out that the estimation errors of adversarial learning’s logistic regression ($D_\\nu$ in the algorithm) are clearly decoupled in our anlayses by learning a MD-based reward function ($\\psi_\\phi$ in the algorithm).\n\n**To make sure I understand, the idea is that a poor choice of reward architecture will cause AIL to fail, but that the proposed method automatically suggests how to choose the reward architecture?**\n\nYes. 
The reward architecture of $\\psi_\\phi\\in\\Psi_\\phi(\\Pi)$ is determined by the parameterization of the policy $\\Pi$ (such as softmax or Gaussian), so learning actions is inherently optimized to train a policy in $\\Pi$. The state reward of $d_\\xi(s)$ is auxiliary and only delivers gradual effects on learning to facilitate densities to be matched. Please see **Fig. 9** for the architectures.", " Thank you for providing the detailed response to my question! I want to examine a number of the points more closely.\n\n > [AIL has the ] notorious complexity of adversarial learning. ... Novel theoretical framework of IRL that greatly abstracts a policy and an associated reward function into a single point in an optimization process.\n\nThe second part of this claim seems to imply that MD-AIRL does not do adversarial learning. Is this correct? My understanding was that it was still doing adversarial learning, but with a certain additional regularizer applied to the policy update.\n\n> Our empirical evidence sufficiently shows the necessity of MD. ... MD-AIRL shows better across all other imitation learning algorithms we have tested\n\nI agree that statistically-significant empirical evidence that MD-AIRL outperforms all prior methods would be a strong argument. I'm a bit confused here, because I interpreted Table B1 as saying that the gains were not statistically significant. Am I mistaken in interpreting these results?\n\n> Mode collapsing.\n\nTo make sure I understand, the argument is that mode collapse is caused by insufficient discriminator capacity, and that regularizing the discriminator updates might allow the discriminator to allocate it's finite capacity better? I'm a bit confused about this point, for two reasons. **First**, I don't entirely follow why regularizing the discriminator updates would make it allocate it's capacity better. Intuitively, if the network capacity is poorly allocated initially (e.g., it is highly accurate on states that don't matter for the task, but inaccurate on the important states), then regularizing the updates would make the discriminator at the next iteration behave similarly to this initially-bad discriminator. **Second**, my understanding of the method was that it regularized the policy updates, not the reward/discriminator updates (Eq. 7). It's not clear to me why regularizing the policy updates would result in regularizing the discriminator updates.\n\n> Non convergence.\n\nTo make sure I understand, the concern with AIL methods is that they won't converge because of the adversarial learning dynamics? Would it be possible to elaborate on how the proposed method avoids this? My understanding was that the proposed method was still adversarial in nature, but the theoretical analysis in Section 5 doesn't seem to analyze the adversarial learning dynamics.\n\n> Inappropriate design of reward architecture. ... most hyperparameter selection is grounded in theory.\n\nTo make sure I understand, the idea is that a poor choice of reward architecture will cause AIL to fail, but that the proposed method automatically suggests how to choose the reward architecture?\n\n> Limitations of gradient descent for probability distributions\n\nTo clarify, the \"Euclidean\" update I was referring to was the unregularized policy update typically performed by GAIL/AIL/etc, not a literal Euclidean metric on probability distributions.\n", " Thanks again for the efforts. 
", " Thanks for providing the detailed response to my question! I want to examine a number of the points more closely.\n\n> [AIL has the] notorious complexity of adversarial learning. ... Novel theoretical framework of IRL that greatly abstracts a policy and an associated reward function into a single point in an optimization process.\n\nThe second part of this claim seems to imply that MD-AIRL does not do adversarial learning. Is this correct? My understanding was that it was still doing adversarial learning, but with a certain additional regularizer applied to the policy update.\n\n> Our empirical evidence sufficiently shows the necessity of MD. ... MD-AIRL performs better than all other imitation learning algorithms we have tested\n\nI agree that statistically-significant empirical evidence that MD-AIRL outperforms all prior methods would be a strong argument. I'm a bit confused here, because I interpreted Table B1 as saying that the gains were not statistically significant. Am I mistaken in interpreting these results?\n\n> Mode collapsing.\n\nTo make sure I understand, the argument is that mode collapse is caused by insufficient discriminator capacity, and that regularizing the discriminator updates might allow the discriminator to allocate its finite capacity better? I'm a bit confused about this point, for two reasons. **First**, I don't entirely follow why regularizing the discriminator updates would make it allocate its capacity better. Intuitively, if the network capacity is poorly allocated initially (e.g., it is highly accurate on states that don't matter for the task, but inaccurate on the important states), then regularizing the updates would make the discriminator at the next iteration behave similarly to this initially-bad discriminator. **Second**, my understanding of the method was that it regularized the policy updates, not the reward/discriminator updates (Eq. 7). It's not clear to me why regularizing the policy updates would result in regularizing the discriminator updates.\n\n> Non convergence.\n\nTo make sure I understand, the concern with AIL methods is that they won't converge because of the adversarial learning dynamics? Would it be possible to elaborate on how the proposed method avoids this? My understanding was that the proposed method was still adversarial in nature, but the theoretical analysis in Section 5 doesn't seem to analyze the adversarial learning dynamics.\n\n> Inappropriate design of reward architecture. ... most hyperparameter selection is grounded in theory.\n\nTo make sure I understand, the idea is that a poor choice of reward architecture will cause AIL to fail, but that the proposed method automatically suggests how to choose the reward architecture?\n\n> Limitations of gradient descent for probability distributions\n\nTo clarify, the \"Euclidean\" update I was referring to was the unregularized policy update typically performed by GAIL/AIL/etc, not a literal Euclidean metric on probability distributions.\n", " Thanks again for the efforts. Although the current evaluation is under the stochastic policy setting, I think this work still has its merit.", " **Why do we need to perform reinforcement learning if we know the (valid) distribution for the current reward? I understand that we need to mitigate covariate shift and take into account the auxiliary state reward, but how does this relate to the theory?**\n\nThank you for the comments. We delineate the justifications for our RL/IRL procedures below.\n\n- **The necessity of Reinforcement Learning.** \nRL is a canonical method for policy updates in our MD-AIRL for the following reasons.\n - **Equations.** We would like to clarify that many equations in this paper (including Eq. (4) and the equation after Eq. (15)) define objectives and costs that inherently require interaction with the environment by the definition of the expectation $\mathbb{E}_\pi$ (**L113**). Also, this is a restriction of sequential problems, where **Eq. (7)** will typically be implemented with offline non-i.i.d. data. As stated in **Lines 182-185**, we respectfully point out that the RL process plays an essential role in sequential learning through the induced discounted value measures.\n - **Theory.** We respectfully point out that our key hypothesis is online imitation learning with temporal costs. This means the ideal-case objective in **Eq. (7)** is certainly not attainable for the agent. As a result, the temporal cost function in **Eq. (8)** is presented, which involves nonstationary estimations of the expert density and on-policy trajectory samples. Due to our theoretical setting, applying an online policy learning algorithm is the most suitable way of incorporating our theory, as it is unbiased in minimizing the defined temporal costs in a sequential decision problem. \n - **Algorithm.** As we mentioned, one of our aims was to verify the excellence of MD updates in the IRL domain. Therefore, the algorithm was designed on top of the AIL framework with discriminators and policy functions. Consequently, the trajectory sample in **Eq. (14)** does **not** cover all the on-policy data required to update the policy offline, so RL is fundamentally needed in the algorithm. \n - Learning with on-policy trajectories, as in RL, can generally prevent compounding errors [3] in sequential and unexpected problems caused by incomplete trajectories. Consistently fitting the models with RL is important in practice because our algorithm involves neural nets with a limited number of parameters.\n\n- **How does the auxiliary state reward fit into the theory?** \n We provide the following reasoning below.\n 1. Let us define the auxiliary temporal cost of the state reward as $h(\pi_t,\tau_t) = \sum_{i=0}^\infty d_\xi(s_i)$.\n 2. Due to the auxiliary cost, the learning objective in the actual algorithm is separated into the following.\n - Regularized reward function ($\psi_\phi$):\n - minimize $f(\pi_t, \tau_t)$ where $\pi_t=\pi_\phi$.\n - The convergence of $\psi_\phi$ is achieved at $\pi_\ast$ according to our theoretical analyses in Section 5.\n - Policy function ($\pi_\theta$):\n - minimize $g(\pi_t, \tau_t) = \lambda f(\pi_t, \tau_t) + h(\pi_t, \tau_t)$ where $\pi_t = \pi_\theta$.\n - This mitigates the state misalignment issues, which is useful in practice.\n 3. 
Suppose all of the states are visited sufficiently many times during training.\n - Regularized reward function ($\psi_\phi$):\n - The convergence will be found at a unique fixed point of $\Psi_\Omega(\pi_\ast)$.\n - Policy function ($\pi_\theta$):\n - **Ideal case:** $\pi_\theta$ converges to $\pi_E$.\n - **General case:** $\pi_\theta$ finds the most reasonable point according to $g(\pi_t, \tau_t)$ that (1) is similar to the convergence point $\pi_\ast$ derived by MD, and (2) covers most of the visited states in the trajectory data. In many benchmarks, both points are important factors in performance measures.\n - **For both cases,** MD-AIRL has much lower variance than RAIRL in estimating the expert density since the variance induced by logistic regression for actions is decoupled and replaced with MD updates.\n- **Revision for the final version.** We will include a new paragraph on these additional justifications in Appendix C.2 on page 25 and will rename the title of Appendix C.2 as “Algorithmic Considerations.”\n\nPlease let us know if there are any remaining questions.\n\nBest Regards,\n\nAuthors of Submission #4477\n\n[1] Matthieu Geist, Bruno Scherrer, and Olivier Pietquin. A theory of regularized Markov decision processes. In *Proceedings of the 36th International Conference on Machine Learning*, volume 97 of *Proceedings of Machine Learning Research*, pages 2160–2169. PMLR, 2019.\n\n[2] Wenhao Yang, Xiang Li, and Zhihua Zhang. A regularized approach to sparse optimal policy in reinforcement learning. In *Advances in Neural Information Processing Systems*, pages 5938–5948, 2019.\n\n[3] Stéphane Ross, Drew Bagnell: Efficient Reductions for Imitation Learning. AISTATS 2010: 661-668.\n", " We highly appreciate the insightful feedback and your dedication to the discussion. We also thank you for raising the questions that were accidentally missed in the official response. We provide the following answers to the questions below.\n\n**How does the update procedure relate to the paths in Fig. 1? … where in the Fig. can I find the crucial transition from the estimated expert policy to the reward function?**\n\n- **Detailed description of Fig. 1.** \nThank you for your insightful feedback. **Fig. 1** illustrates our general framework: formalizing the regularized reward space and describing MD-based reward updates. We respectfully highlight each procedure in Fig. 1 and its corresponding implementation in the actual algorithm.\n 1. The current $\pi_t$ is the starting point of the optimization step at iteration $t$.\n - This corresponds to the density model of the current RL network $\pi_\theta$.\n 2. The policy $\pi_t$ is mapped to its isomorphic counterpart $\psi_t$ by the regularized operator $\Psi_\Omega$.\n - We implement this part with the reward function $\psi_\phi \in \Psi_\Omega(\Pi)$ using a dedicated neural network in the actual adversarial algorithm.\n 3. The model $\psi_t$ is updated with the MD objective in Eq. (7).\n - This corresponds to learning with Eq. (14).\n - This involves the aforementioned derivations from $\pi_\phi$ to $\psi_\phi$.\n 4. The trained $\psi_{t+1}$ is projected onto the policy space, yielding $\pi_{t+1}$.\n - According to [1], the computation of $\nabla \Omega^\ast(\cdot)$ can be performed by a regularized RL procedure.\n - We implemented the regularized actor-critic algorithm to serve this purpose.\n- **Revision for the final version.**\n \n We fully agree with your comment that detailed computation techniques are crucial. 
At the same time, there is a minor concern for the presentation: the derivations themselves may not be considered as our contributions since each has previously appeared in one of the references. Therefore, we are working on additional figures and text in Appendix C.\n \n - **For Appendix C:**\n - On page 26, we will include a new subsection titled “How $\pi_\phi$ is computed from $\psi_\phi$,” which itemizes the detailed computation methods and pseudocode for three different cases: (A) softmax, (B) Gaussian (Shannon), and (C) Gaussian (Tsallis).\n - We will also include a new figure illustrating each computation method on page 26. The figure will consist of subfigures that contain detailed computation blocks for each derivation.\n - **For the main paper:**\n - In Line 251, we will include a footnote that explicitly references the new figure and the appendix we are working on.\n \n **Please note:** these changes are minor and can be finished within only a few days.\n", " **Limitations of gradient descent for probability distributions**\n\nThe Euclidean gradient descent algorithm produces “unbiased” update rules, which are useful for updating neural network parameters in practice. More specifically, the algorithm itself can be classified as an MD algorithm where the strongly convex Euclidean ($\ell_2$) norm is the regularizer $\Omega$ and induces the metric of the parametric space. However, as we are dealing with optimization problems in the space of probability distributions (**not** the underlying neural networks), we respectfully highlight that this metric may not be useful in measuring distance between probability distributions (**L94**). Theoretically, drawing the gradient descent algorithm in a probability space $\Delta_{\mathcal{X}}$ is _possible_ by replacing **Eq. (6)** with the divergence\n$$D(p,q) = \sqrt{\int_{\mathcal{X}} \vert p(x) - q(x)\vert^2 \mathrm{d} x}.$$\nHowever, it usually requires solving an intractable integration; thus, assuming $\Omega$ to be an entropic regularizer is generally a better approach for probability densities.\n\n**Statistical significance of the additional results** \n\nThe DAC reward function is known to remove some biases in the GAIL rewarding mechanism and is a highly specialized method for solving MuJoCo benchmarks in terms of performance. Similarly, the GAIL algorithm with the DAC-style reward function is the best performing algorithm in our comparison. Since our goal was to verify the overall robustness of the algorithm (**not** beating the DAC algorithm by a large margin), we believe the results are meaningful and align with our theory. We respectfully emphasize that MD-AIRL performs better than all other imitation learning algorithms we have tested, and these overall results significantly strengthen our claim.\n\nPlease let us know if there are remaining concerns.\n\nBest Regards,\n\nAuthors of Submission #4477\n\n[1] Seyed Kamyar Seyed Ghasemipour, Richard Zemel, and Shixiang Gu. A divergence minimization perspective on imitation learning methods. In Conference on Robot Learning, pages 1259–1277. PMLR, 2020.\n\n[2] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida, Spectral Normalization for Generative Adversarial Networks, ICLR 2018.", " We highly appreciate the thoughtful feedback and your dedication to the discussion. \n\n**Justification on MD-based IRL.** Thank you for the question. 
We respectfully emphasize the following points.\n\n- **The necessity of abstraction.** AIL is one of the most popular topics in the imitation learning domain. However, its mechanism and theoretical analyses of parametric updates are not fully revealed, mainly due to the notorious complexity of adversarial learning combined with sequential decision problems. We propose a novel theoretical framework of IRL that greatly abstracts a policy and an associated reward function into a single point in an optimization process. With this abstraction, our work exhibits a promising direction toward the interpretability of imitation learning performance, thanks to the rich foundation of optimization studies.\n- **A good generalization has its own merits.** Previous works have successfully presented many divergence formulations; naturally, these concepts can be compared, distinguished, and categorized. We respectfully point out that this work applies a more general concept based on a well-formulated theory, which is useful since theoretical or empirical improvements on this generalization can affect multiple derived models.\n- **Theoretical approach with solid empirical evidence.** Classical IRL studies focused on the problem definition of IRL and convergence analyses with simple linear models. In contrast, newer studies of AIL focused on algorithmic designs and performance. We respectfully point out that this work strikes a balance between these previous lines of study. Our empirical evidence sufficiently shows the necessity of MD, and our experiments include challenging problems such as large-scale bandit problems and the multi-goal environment.\n- **Revision for the final submission.** We will carefully include these justifications by adding a new paragraph in Section 8 on page 10, as authors are allowed to add one more page at that stage.\n\n**Limitations of AIL.** We highly appreciate the thoughtful feedback. We would like to point out the technical issues we observe regarding AIL. These issues stem from the typical characteristics of adversarial learning.\n\n- **Technical Drawbacks of AIL**\n - **Mode collapsing.**\n - **(+)** AIL shows good performance in single-objective tasks such as MuJoCo locomotion benchmarks.\n - **(-)** Like GANs, the mode collapse phenomenon is one of the major problems in AIL when the target density is multimodal. One cause of this problem is the unstable reward signals due to limited discriminative capability.\n - **(-)** While GAN studies partially solved this problem using large-scale neural networks, this approach would likely be restricted in the RL/IRL setting.\n - **(ours)** When the expert behaves with multiple objectives, MD-AIRL has a clear advantage since the progress of the IRL reward function is governed by constrained updates. This point is empirically verified in the multi-goal experiment.\n - **Non-convergence.**\n - **(+)** AIL formulates its objective with minimax problems and presents tangible solutions when convergence is found.\n - **(-)** AIL does not analyze the convergence itself; consequently, it does not deal with multiple issues of unreliable/finite trajectories.\n - **(-)** In GANs, sophisticated normalization and regularization techniques have been proposed to solve this issue. 
However, this does not perfectly translate to AIL models because the imitation learning agent is restricted to certain parameterizations and learning schemes.\n - **(ours)** The convergence of our reward learning mechanism is guaranteed even for challenging problems.\n - **Inappropriate design of reward architecture.**\n - **(+)** AIL relies on the nonlinearity of neural networks, which is flexible and scalable enough to represent an arbitrary reward function in a certain form.\n - **(-)** Designing an appropriate reward architecture is challenging; an inappropriate architecture may result in underfitting or overfitting of the imitation learning agent.\n - **(-)** It is difficult to improve AIL's architecture because the performance results of adversarial learning are composed of multiple factors specific to tasks.\n - **(ours)** The reward model has the same expressiveness as the policy. Also, most hyperparameter selection is grounded in theory.\n- **Revision for the final submission.** We will also carefully include these justifications in Section 8. We will also carefully revise the supplementary materials by including a new appendix for detailed analyses on page 25.\n", " **Justifications for the BC implementation.**\n\nThank you for the insightful comment. First, we will make sure to include an ablation applying $\ell_2$ and other regularization techniques in the final version. At the same time, we provide justifications for our BC implementation as follows.\n\n- **Not fine-tuned for deterministic tasks.**\n 1. Ziniu et al. [1] presented some remarkable analyses of the worst-case performance of offline imitation learning algorithms. Personally, we find their findings and analyses intriguing as an imitation learning study.\n 2. We respectfully highlight one of the paper's vital observations (https://arxiv.org/pdf/2202.02468.pdf, pp. 7).\n \n > ***Observation 2.** For deterministic tasks (e.g., MuJoCo locomotion tasks), BC has no compounding errors if the provided expert trajectories are complete.*\n \n This implies that their formalizations are designed in favor of deterministic tasks, especially MuJoCo benchmarks. We would like to point out that the deterministic BC implementation of [1] is more geared toward MuJoCo benchmarks than ours by utilizing additional information regarding the environments.\n 3. More importantly, we respectfully highlight one of the drawbacks of deterministic benchmarks explicitly pointed out by the authors (pp. 9) that we fully agree on.\n \n > **Benchmarks.** *Our study points out several drawbacks of existing MuJoCo locomotion benchmarks: deterministic transitions and limited initial states. … future imitation learning studies could benefit from more challenging benchmarks with stochastic transitions and diverse initial states***.**\n > \n \n For standard stochastic MDPs, we believe that the actions of an agent have to be modeled with a stochastic policy. Otherwise, compounding errors will become one of the significant problems. Therefore, we claim that our implementation of BC is appropriate since our MDP definition and theoretical claims are not limited to deterministic settings.\n 4. Lastly, we respectfully remind you that this paper also includes imitation learning in the multi-goal benchmark, which heavily favors stochastic policies. Therefore, the general premise of stochastic policies was needed for consistency between the continuous action space experiments.\n- **About** $\ell_2$ **Regularization.**\n 1. Following the analyses of Ziniu et al. 
[1], it is safe to say that we can apply the proposed regularization when we are informed that the given MDP is deterministic. \n 2. Applying the $\ell_2$ regularization is closely related to enforcing Lipschitz continuity on the network outputs [2,3]. Specifically, we think this regularization is certainly beneficial to simple deterministic networks in MuJoCo benchmarks for a few reasons.\n - Deterministic policy networks produce a single, direct output of actions.\n - In MuJoCo locomotion tasks, the outputs are real vectors, and each action dimension has a relevant meaning for a physical system, such as the magnitude of forces.\n 3. From a technical perspective, a stochastic policy model such as a Gaussian network produces multiple vectorized outputs, and some of the outputs might not be adjusted to the regularization method. Therefore, we are not fully sure this specific regularization can guarantee performance gains for arbitrary stochastic policy parameterizations. We believe the relationship between regularization and parameterization needs to be extensively studied.\n 4. Again, Ziniu et al. [1] left interesting remarks on imitation learning benchmarks and regularization. We respectfully point out that the contributions of our paper are complementary to the work done by Ziniu et al. [1]. We believe our methodology of MD-IRL and its implementation can benefit from these results, or vice versa.\n\n- **Revision for the final submission.** In Appendix D, we will include ablation studies for regularization techniques. The experiments can be finished in a couple of weeks.\n\n**Minor (Line 16).** Thank you for your effort on this point. We corrected this typo and uploaded a revised version.\n\nPlease let us know if there are any remaining questions!\n\nBest Regards,\n\nAuthors of Submission #4477\n\n[1] Ziniu et al., Rethinking ValueDice - Does It Really Improve Performance?, ICLR Blog Track 2022.\n\n[2] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, Yuichi Yoshida, Spectral Normalization for Generative Adversarial Networks, ICLR 2018.\n\n[3] Zhiming Zhou, Jiadong Liang, Yuxuan Song, Lantao Yu, Hongwei Wang, Weinan Zhang, Yong Yu, Zhihua Zhang, Lipschitz Generative Adversarial Nets, ICML 2019.", " We highly appreciate the insightful comments and your dedication to the discussion. We address the comments below.\n\n**About using deterministic parameterizations for evaluation.**\n\nThank you for the insightful and instructive comments. We clarify that this paper assumed stochastic policies when evaluating our experiments for the sake of consistency. As suggested, we report the evaluation scores in the deterministic setting below.\n\n**Table B3. Performance on MuJoCo with different policy types (100 expert demonstrations).**\n\n| | Sto. Policy ($a \sim \pi(a\vert s)$) | Det. Policy ($a = \mu(s)$) | Difference (Det - Sto) |\n| --- | --- | --- | --- |\n| BC (Hopper-v3) | 0.76 | 0.88 | **0.12** |\n| BC (Walker2d-v3) | 0.39 | 0.48 | **0.09** |\n| RAIRL (Hopper-v3) | 0.95 | 0.96 | **0.01** |\n| RAIRL (Walker2d-v3) | 0.67 | 0.73 | **0.07** |\n| MD-AIRL (Hopper-v3) | 0.96 | 0.97 | **0.01** |\n| MD-AIRL (Walker2d-v3) | 0.89 | 0.91 | **0.02** |\n\n- We believe these results are coherent with your experience: in MuJoCo, we can expect some performance gains when a policy behaves deterministically, regardless of the algorithm. 
We think these gains are strongly related to the innate characteristics of MuJoCo tasks (combined with Gaussian policies) since similar phenomena have often been reported in previous imitation learning and offline RL papers.\n- It is a genuinely good practice to check scores on deterministic versions and to compare them with the expected expert's scores. These results would be helpful as validation metrics, especially for grasping how the modes of actions (i.e., mean actions) deviate from the expert policy.\n- That being said, it would also be hard to interpret and analyze these performance differences and consider them as actual performance benefits because the inconsistency between training and evaluation settings is intentionally introduced by the practitioner's experiment design. We think this criticism could be further validated when solving real-world problems (such as robot hand manipulation and autonomous driving).\n- We respectfully emphasize that we presented a parameterization-agnostic algorithm for stochastic policies. The Gaussian parameterization in this paper is much more expressive than diagonal Gaussians when it comes to modeling correlations among the dimensions of multivariate actions. Therefore, it is likely that our experiments can successfully determine the performance of the tested algorithms for probability distributions. We believe this paper's evaluation strategy is sound and appropriate.\n", " Thanks very much for the general & personal responses and efforts! These clarifications make it clearer for me to understand the purpose and postulation of this work. The detailed explanations of the functionality of the learned reward function are important. \n\nAs for the comparisons, I wonder whether the current evaluations on MuJoCo are under the stochastic policies or the deterministic policies (the mean of the Gaussian); if they are stochastic, how about the scores of the deterministic ones? From my experience, the deterministic policies often achieve higher scores on MuJoCo even though they are parameterized by a Gaussian during the learning. \n\nBesides, I am not fully convinced by the authors' implementation of BC: if we can observe that adding a simple $\ell_2$ will help significantly improve the performance, why can we not adopt this technique? So I suggest the authors add an $\ell_2$-regularized BC as the BC baseline.\n\nMinor: BTW, I noticed an extra typo in Line 16 (\"often described as learning expert behavior fro finite number of demonstrations\" [sic]). fro -> for", " Thank you very much for your reply.\n\nI could not find an answer to the following questions:\n1) How does the update procedure relate to the paths in Fig. 1? Where do we start in each iteration, and how does each step relate to a step in the Fig.? While I understand the mechanics of the proposed algorithm, I cannot find it in the Fig. For example, where in the Fig. can I find the crucial transition from the estimated expert policy to the reward function?\n\n2)\n> If we know $\pi_{t+1}$ already (because it is the policy that we used to compute $\psi$), why do we need to perform reinforcement learning (cf. Eq. 4 and the equation after Eq. 15) if we know the (valid) distribution for the current reward? I understand that we need to mitigate covariate shift and take into account the auxiliary state reward, but how does this relate to the theory? 
", " Dear authors,\n\nThank you for all of the revisions made to the paper, and for the detailed responses to my questions!\n\nMy main concern was about whether the theoretical and empirical results sufficiently motivate the use of the (more general) mirror descent framework. That is, why is (Euclidean) gradient descent, a special case, insufficient? The response to question 2 suggests that optimization using mirror descent updates will be \"different,\" but doesn't argue/prove that it will be better. Similarly, the new results in Table B1 seem to suggest that the gains from MD-AIRL are not statistically significant; GAIL (DAC rewards) seems to always be within one standard deviation of MD-AIRL.\n\n**Can the authors make the case for why the more general mirror descent framework is necessary?**", " **The mismatch between theory and algorithm.** \n\nThis is a good point. The simple experiments in Fig. 2, and Section 7.1 is the example of our approach that works purely and does not rely on state density matching. In generally, **we could not directly perform the MD update by optimizing the policy with respect to Eq. (7)**, because \n\n1. the data collected from the expert and the agent in the adversarial framework is not on-policy samples.\n2. the data does not fully cover all possible visitable states in an episode, phrased as misaligned state densities.\n\nWe used state-density in the challenging continuous control benchmarks in Eq. (15), which is similar to $\\lambda$ of the RAIRL paper. We respectfully point out that MD itself was not the main cause of this mismatch. The main technical hurdle was how to efficiently provides on-policy state samples covering all expert and agent demonstrations. This is very challenging in practice since our MD formulation in Eq. 14 works on stat densities, and usually, these densities are heavily misaligned. To overcome such issues that are caused by the nature of the off-line setting of IRL, we designed a dual discriminative architecture and utilized the state density discriminator to ensure that the realms of observation are properly aligned as fast as possible. Without matching state densities, MD-AIRL could perform slower, especially in the early phase of learning. We believe that MD-AIRL draws balance this inherits the technical advantage of AIRL in the early learning phase and convergent behavior MD-IRL that is derived from our theoretical analyses. \n\n**The effects of estimation errors.** \n\nTo answer this question, we provide detailed analyses on the noisy demonstration experiments of **Table 3**. Let the Bregman divergence between agent and expert policies be the error, and we analyze these errors by increasing the given noise level for the expert trajectories. **Table B2**. shows general tendencies of errors in RAIRL and MD-AIRL methods.\n\n**Table B2. Additional results on IRL with noisy demonstrations for different regularizers and noise scales (the table consists of changes of noise levels ε: 0.01 → 0.5).**\n\n| Settings | Bregman Div. (RAIRL) | Bregman Div. (MD-AIRL) | Average score difference (from Tab. 
3) |\n| --- | --- | --- | --- |\n| Hopper, Shannon | 1078.19 ± 1885.02 → 1257.60 ± 2144.88 | 697.26 ± 1820.28 → 781.15 ± 3015.54 | 33.22 → 79.57 |\n| Hopper, Tsallis | 81.613 ± 63.769 → 84.08 ± 64.361 | 80.593 ± 63.473 → 81.356 ± 63.243 | 59.02 → 125.07 |\n| Walker2d, Shannon | 2614.029 ± 4113.463 → 4163.818 ± 5771.339 | 2239.664 ± 2893.15 → 3224.986 ± 6439.13 | 529.82 → 801.56 |\n| Walker2d, Tsallis | 7.142 ± 3.767 → 7.667 ± 4.076 | 6.782 ± 3.727 → 7.109 ± 3.813 | 892.16 → 658.44 |\n| HalfCheetah, Shannon | 71.545 ± 118.263 → 229.079 ± 287.606 | 65.591 ± 107.82 → 224.707 ± 330.412 | 19.02 → 120.19 |\n| HalfCheetah, Tsallis | 807.545 ± 1014.531 → 814.094 ± 1035.657 | 807.895 ± 1013.572 → 814.199 ± 1036.893 | 24.74 → 30.77 |\n| Ant, Shannon | 283.718 ± 1586.597 → 775.331 ± 1915.778 | 108.17 ± 1223.28 → 221.098 ± 1652.558 | 164.55 → 506.6 |\n| Ant, Tsallis | 6542.044 ± 7022.796 → 6548.301 ± 7026.204 | 6541.958 ± 7022.89 → 6546.35 ± 7027.377 | 345.63 → 467.15 |\n\nIn the table, we can find an evident correlation between the average Bregman divergence and performance, since imitation learning converges when the divergence is 0. Therefore, this is another piece of empirical evidence for our theoretical arguments, and we greatly appreciate your suggestions. We will fully reflect these results in the final submission. Meanwhile, displaying various Bregman divergences for different tasks might be overwhelming and not always intuitive at a glance. Hence, we will also make sure to include a new figure with proper visualization of these analyses in the final submission, as authors are allowed to add one more page at that stage.
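In the Shannon case, the Bregman divergence we report coincides with the KL divergence (a standard identity for the negative-entropy regularizer). The following minimal NumPy check (our own illustrative code, added here only for clarity; the helper names are not from the released implementation) confirms this numerically:

```python
import numpy as np

def bregman_divergence(p, q, omega, grad_omega):
    """D_Omega(p || q) = Omega(p) - Omega(q) - <grad Omega(q), p - q>."""
    return omega(p) - omega(q) - np.dot(grad_omega(q), p - q)

# Negative Shannon entropy as the convex regularizer Omega.
neg_entropy = lambda p: np.sum(p * np.log(p))
grad_neg_entropy = lambda p: 1.0 + np.log(p)

rng = np.random.default_rng(0)
p = rng.dirichlet(np.ones(4))  # e.g., expert action distribution at one state
q = rng.dirichlet(np.ones(4))  # e.g., agent action distribution at one state

kl = np.sum(p * np.log(p / q))
assert np.isclose(bregman_divergence(p, q, neg_entropy, grad_neg_entropy), kl)
```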
\n\n[1] Wonseok Jeon, Chen-Yang Su, Paul Barde, Thang Doan, Derek Nowrouzezahrai, and Joelle Pineau. Regularized inverse reinforcement learning. In 9th International Conference on Learning Representations, 2021.\n\n[2] Frank Nielsen and Richard Nock. On Rényi and Tsallis entropies and divergences for exponential families. arXiv preprint arXiv:1105.3259, 2011.", "Thank you very much for your detailed feedback and the time spent reviewing our work. We address your comments below.\n\n**Changes compared to prior work.**\n\nThe design choice of the algorithm was intentional; we controlled the amount of algorithmic change compared to RAIRL, since the primary goal of proposing MD-AIRL was to (1) demonstrate that MD in IRL settings can be readily reproduced in modern AIL implementations, and (2) clearly verify our claim by comparative experiments. Nevertheless, we still put a lot of effort into incorporating the constrained optimization problem of MD into the regularized IRL settings. That is, we believe the changes involve a fundamental structural change, and we newly proposed our dual discriminative architecture, which is well-grounded by theoretical reasoning.\n\n**About Eqs. (14) and (15).**\n\n1. **How $\pi_\phi$ is computed from $\psi_\phi$.**\n \n The underlying concept in Eq. 14 is that the bidirectional transformation between $\pi_\phi$ and $\psi_\phi$ happens via shared parameters without extra computation cost in our setting. For example, in our experiments, both $\pi_\phi$ and $\psi_\phi$ are analytically drawn in a closed-form expression using the shared parameter $\phi$ by the following transformation (a short numerical check is given after this list):\n \n - **Discrete policy.** Let the policy for a state $s\in\mathcal{S}$ be $\pi_\phi(a|s)=p(a)$ for a discrete probability distribution on the action space, typically parameterized by a softmax distribution. Since the action space is finite in this case, we can directly compute Definition 1, i.e., $\psi_\phi(s, a) = \Omega(p)+\nabla_p\Omega(p) -\sum_{a\in\mathcal{A}}p(a)\nabla_p\Omega(p)$, for arbitrary $\Omega$.\n - **Continuous policy (Shannon).** For both discrete and continuous policies the following equation holds: $\psi_\phi(s,a) = \log \pi_\phi(a|s)$ (pp. 4, Jeon et al. 2021 [1]). For multivariate Gaussians, we can analytically compute the log-likelihood. As stated in Appendix B.4, we applied an LDL decomposition on $\Sigma$, a variant of the Cholesky decomposition that guarantees invertibility and positive-definiteness of the covariance matrix. For example, in our TensorFlow 2.0 code, this part is actually implemented as\n \n ```python\n import tensorflow as tf\n import numpy as np\n \n numact = 3 # The dimension of the action space for the Hopper-v3 benchmark\n log_denom = tf.constant(-numact * 0.5 * np.log(2 * np.pi), tf.float32)\n perm = tf.expand_dims(tf.range(numact, dtype=tf.int32), 0)\n \n # Log-likelihood of multivariate Gaussians\n # https://en.wikipedia.org/wiki/Multivariate_normal_distribution\n # Cholesky-based multivariate Gaussians: \n # https://arxiv.org/pdf/2102.13518.pdf\n @tf.function\n def logp_gaussian(x, mu, log_sigma, unit_lower_triangular):\n # -k/2 * log(2*pi) - sum(log_sigma) - 1/2 * ||D^(-1/2) L^(-1) (x - mu)||^2\n return log_denom - \\\n tf.reduce_sum(log_sigma + \\\n .5 * tf.square(\n tf.linalg.matvec(\n tf.linalg.lu_matrix_inverse(\n unit_lower_triangular, perm), \n x - mu)) * tf.exp(-2 * log_sigma), -1)\n ```\n \n The above Python function `logp_gaussian` requires the variables `mu`, `log_sigma`, and `unit_lower_triangular`, which represent parametric tensor values computed from a neural network for each data point, and these outputs can also be used to produce a distribution at the same time since they form a valid set of parameters for a Gaussian distribution. This particular parameterization usually has significantly low numerical errors thanks to the inverse function specialized for LU decompositions (`tf.linalg.lu_matrix_inverse`).\n - **Continuous policy (Tsallis)**. Computing the operator $\Psi_\Omega$ for an arbitrary continuous policy is usually intractable when $\Omega$ is a Tsallis entropic regularizer, except when the policy is constrained to specific parametric models (e.g., exponential families). In this work, we assumed a Gaussian policy and the analytic form that was initially discovered by Nielsen and Nock (2011) [1]. The entire portion of Appendix B is dedicated to derivations of $\psi_\phi$ when $\Omega$ is a Tsallis entropic regularizer.\n2. **The main change to prior works.** \n Compared to classical MD, we were able to perform the exact MD in the regularized reward space (Definition 1). Since prior works do not fully carry out the derivation on the space of conditional distributions, our findings contribute to general optimization studies in sequential decision problems as well as imitation learning problems. To the best of our knowledge, our work is one of the first works to derive a practical MD-based reward function in the IRL settings.
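As promised above, here is the short numerical check for the discrete case (an illustrative sketch we add for clarity; the helper names are ours and not part of the released code). With the Shannon regularizer, the operator of Definition 1 reproduces $\log \pi_\phi(a|s)$, consistent with the identity quoted for the continuous case:

```python
import numpy as np

def psi_from_policy(p, omega, grad_omega):
    """Discrete reward operator:
    psi(a) = Omega(p) + dOmega(p)[a] - sum_a' p(a') * dOmega(p)[a']."""
    g = grad_omega(p)
    return omega(p) + g - np.dot(p, g)

# Shannon case: Omega is the negative entropy.
omega = lambda p: np.sum(p * np.log(p))
grad_omega = lambda p: 1.0 + np.log(p)

logits = np.array([0.3, -1.2, 2.0, 0.5])
p = np.exp(logits) / np.exp(logits).sum()   # softmax-parameterized policy at one state

psi = psi_from_policy(p, omega, grad_omega)
assert np.allclose(psi, np.log(p))          # psi(s, a) = log pi(a|s) for Shannon
```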
\n \nThank you for your detailed comments. Following your suggestion, we carefully rewrote this part in the rebuttal revision.\n", "We address other questions and comments:\n\n- **[L23 - L30]** We respectfully point out that this paper considers AIL algorithms as a generalized IRL method that involves reward estimation and RL based on the estimated reward function. Thus, this paragraph is a brief introduction to various IRL algorithms, covering both classical and modern IRL algorithms. Meanwhile, we agree that AIL frameworks do not seek the ground-truth function given by a system. Therefore, we fixed the phrase as follows:\n \n > *estimating the ground-truth reward function “directly,” → learning reward functions of a certain form “directly,”*\n \n- **[Tab. 1]** We have the following answers to the questions.\n 1. **Rewards.** In the scope of this work, we are considering IRL reward functions in a more general sense than exactly recovering the expert's reward function. In this sense, the discriminative signals of GAIL form a reward function since an RL algorithm can learn these signals, and the learning process recovers the expert density function.\n 2. **BC (Bregman divergence).** To the best of our knowledge, BC and its theoretical analyses usually assume KL divergence minimization. While it might be true that other Bregman divergences can be applied as a generalization of the algorithm, we think that the overall approach will be vastly different from the original at that point.\n 3. **BC (Convergence).** We agree with your point. We fixed this issue and changed the label of the table (”Rate of convergence” *→ “Convergence analyses”*). \n- **[L73]** This is a mistake. Our original intention was to address a generalization of the Pythagorean theorem, which is indeed a much weaker condition than the Pythagorean theorem. Since we did not intend to confuse the reader, we fixed this part as follows:\n \n > satisfies metric-like properties such as the Pythagorean theorem [13]. MD is closely related to… *→* satisfies metric-like properties [13, 25]. MD is also closely related to…\n \n- **[Fig. 2]** As suggested, we increased the sizes of fonts and figures and overhauled the arrangement of subfigures.\n- **[Eqs. 14 & 15]** Thank you for the comments. As suggested, we rewrote the intuitive reasoning in our revised manuscript. Eq. (14) is a direct interpretation of the MD updates of Eq. (7), but it has a technical drawback. Therefore, the proposed reward function of Eq. (15) incorporates a discriminative signal regarding state densities, which can be adjusted with the hyperparameter $\lambda$.\n- **[L263 - L265]** We respectfully emphasize that this particular choice of the regularizer $\Omega$ in our experiments can be similarly found in previous RL/IRL works such as the RAC and RAIRL algorithms. We believe that showing consistent performance gains from regularized IRL methods for various $\Omega$ is essential for verifying the generality of the proposed model.\n- **[Sec 7.1]** We would like to point out that the main purpose of this experiment was to measure the performance of IRL for various $\Omega$. Although the randomly generated expert policy is attainable in this particular experiment, we did not expose the expert distribution to the algorithm, so a direct method with the ground-truth distribution is not possible in our setting. We clarified this point in the revision.\n \n > We first considered multiarmed bandit problems *→ To measure the performance of IRL for various $\Omega$, w*e first considered multiarmed bandit problems\n \n- **[Fig. 5]** We used a fixed learning rate for RAIRL and MD-AIRL, as similarly reported in the RAIRL paper. Since the primary goal of this paper is to show the effectiveness of MD-based learning rate scheduling, the performance gain explicitly shows the effectiveness of MD. 
To the best of our knowledge, weight decay is not commonly practiced in AIL. Therefore, applying weight decay requires an exhaustive search and reasoning for a fair comparison. That is, the MD-AIRL algorithm might also have the potential to be more stable by applying such techniques.\n- **[Fig. 6]** We respectfully highlight that the goal entropy serves as an intuitive and reasonable performance metric for measuring how evenly an agent travels to multiple goals. Another point for this particular experiment is that the expert policy is a mixture of four different trained RL agents (see Appendix C.3). Computing the Bregman divergence for a mixture of continuous distributions is usually intractable.", "We appreciate your helpful feedback and concerns. Please check **the rebuttal revision** of our paper, reflecting the majority of the suggestions for improving clarity. Overall, we are working on incorporating all of your helpful comments in the final version. We address your questions below.\n\n**Experiments.**\n\nThank you for the comment. Following your suggestions, the additional experimental results are provided in the general response (**Table B1**). We respectfully emphasize that we focused on comparative studies with RAIRL for the following reasons. First, the performance gains attained from MD-AIRL compared to RAIRL directly reflect the effectiveness of our MD-based reward learning schemes, as we controlled most of the algorithmic considerations. In contrast, comparison with other IRL methods is fundamentally limited due to a lack of generality in terms of the choice of $\Omega$.\n\n**Questions:**\n\n1. To what extent are the results limited to inverse RL, versus any algorithm that performs policy updates? It seems like the idea of doing regularized … and to what extent do the proposed results extend to other settings? \nThis is a good point. Our approach toward the imitation learning problem is not limited to inverse RL, and our theoretical results can be applied to multiple subfields of machine learning. Also, our fundamental argument was to consider the overall imitation learning process as a combined optimization process between policy and reward functions. We strongly believe this novel perspective brings simplicity over the complicated reasoning behind regularized MDPs and the reward learning schemes of IRL algorithms. As a result, we proposed MD-AIRL, a robust and practical adversarial imitation learning algorithm that is based on MD.\n2. While it seems like the theoretical results are correct, I'm unsure if they motivate the use of mirror descent. What would similar convergence results look like for (say) standard/Euclidean gradient descent? \nApparently, the core motivation of our work started from an interest in the geometries derived from regularized MDPs. In machine learning and especially RL, we are familiar with the notion of learning in probability space, through concepts such as the Fisher information matrix, policy gradient algorithms, and information geometry. These theoretical concepts imply that learning parameters in the probability distribution space might differ from convex optimization problems in the standard Euclidean space.
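Since both questions above touch on how MD differs from plain gradient descent, we add a small schematic illustration (our own, textbook-style; not the paper's exact Eq. (7)). On the probability simplex with the Shannon mirror map, the MD update is multiplicative (exponentiated gradient) and stays on the simplex by construction, whereas a Euclidean gradient step needs an explicit projection back:

```python
import numpy as np

def md_step(p, grad, lr):
    """Mirror descent on the simplex with the negative-entropy mirror map
    (exponentiated gradient); feasibility holds by construction."""
    w = p * np.exp(-lr * grad)
    return w / w.sum()

def pgd_step(p, grad, lr):
    """Euclidean gradient step followed by a crude projection
    (clip and renormalize) back onto the simplex."""
    w = np.clip(p - lr * grad, 1e-12, None)
    return w / w.sum()

p = np.full(4, 0.25)
grad = np.array([1.0, 0.0, -1.0, 0.5])   # some loss gradient at p
print(md_step(p, grad, 0.5))
print(pgd_step(p, grad, 0.5))
```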
\n\n**Limitations of the work.** \n\nWe fully agree with your suggestion. We have rewritten **Section 8** to address the following limitations. \n\n- While the Bregman divergence family contains familiar divergences such as the KL divergence, a Bregman divergence might not always be the best choice of cost function for a particular task. Currently, the relationship between the Bregman divergence and other families of statistical divergences is actively being studied.\n- Starting from Eq. (7), we proposed an "impure" form of Eq. (15) and presented an additional hyperparameter $\lambda$. These are introduced due to a technical hurdle of how to efficiently provide on-policy samples covering all expert and agent demonstrations. While the algorithm works well in a wide variety of benchmarks, our design choices for MD-AIRL might have some side effects, so this has been addressed as another limitation.", "We sincerely thank you for your positive feedback and overall support of our submission. We address your comments below.\n\n**Comparison with RAIRL.**\n\nThank you for the comment. Our theoretical and empirical achievements are built upon the novel perspective of considering iterative RL and IRL algorithms as a combined optimization process with dual aspects. Both RAIRL and MD-AIRL are highly generalized algorithms in terms of the multiple options for the regularizer $\Omega$. Compared to RAIRL, our work brings more beneficial results in realistic situations with limited training time and unreliable data. Our work is also more aligned with early theoretical IRL studies providing reward learning schemes and convergence guarantees. Most of this explanation can be found in Sections 1 & 8.\n\n**Comparison with other approaches.**\n\nThank you for the comment. Following your suggestion, we have conducted additional experiments, which are provided in the general response (Table B1). Our primary aim was to demonstrate the excellence of MD-AIRL in various environments, regularizers, and other realistic situations. RAIRL is a competitive AIL algorithm that is highly scalable and general in this sense; thus, it was a good counterpart for our comparative studies, illuminating our claim of two issues in current AIL methods (Section 1). We respectfully emphasize that other IRL algorithms lack compatibility with our experiment settings: most of them only work on a specific regularizer. For example, we can only report with a Shannon entropic regularizer for most of the algorithms. This is because applying regularizers other than the Shannon entropy function is not theoretically pleasing for these methods (e.g., $\pi_E$ might not be the optimal point). Therefore, comparison with methods other than RAIRL is fundamentally limited.", " **Implementation details of BC.**\n\nThank you for the question. For this issue, we would like to point out a quote from the appendix of the paper [2], "Rethinking ValueDice - Does It Really Improve Performance?" ([https://arxiv.org/pdf/2202.02468.pdf](https://arxiv.org/pdf/2202.02468.pdf), pp. 15):\n\n> ***Algorithm Implementation.** Our implementation of ValueDice and DAC follows the public repository https://github.com/google-research/google-research/tree/master/value_dice by Kostrikov et al. [2020]. **Our implementation of BC is different from the one in this repository.** … Instead, we use a simple MLP architecture without the output of the covariance. **The deterministic policy is trained with mean-squared-error (MSE).***\n\nThe appendix implies that the authors' implementation of BC in [2] is different from standard ones for stochastic policies. Our implementation of BC is based on the `OpenAI Baselines` repository, where we applied additional changes to the original code to incorporate the full-covariance Gaussian parameterization and to fix overall minor compatibility issues with MuJoCo 2.1 and TensorFlow 2.0. Upon close inspection of the paper and code, the main reasons for the performance boost compared to our implementation appear to be as follows:\n\n1. **Parameterizations are different.** A deterministic policy is inherently a strict outlier of our arguments and analyses since $\Omega(\pi)$ is usually undefined. We stress that the scope of this work is restricted to sequential decision problems in regularized MDPs with stochastic policies. Studying stochastic policies, in general, has the distinct advantages of considering all the possible pathways to reaching multiple goals and overall stability under small perturbations of environmental configurations.\n2. **Loss functions are different.** Notice that we assumed Gaussians for the MuJoCo benchmarks, so the loss function of maximum likelihood estimation can be widely different from MSE depending on the covariance matrices.\n3. **Regularization for weights.** The authors additionally applied $\ell_2$ regularization to standard BC, and we can observe that unregularized BC performs poorly in Fig. 4 of [2]. In fact, this point is connected to our argument about online updates in Section 2 and the step size considerations in our settings, where the parameters are treated as a point in constrained convex optimization in the Euclidean space. As mentioned in Section 8, the best (stochastic) regularization for IL/IRL for each specific task is another challenging problem in this domain and remains future work.\n\n**Theorem 2.** \n\nThe aim of the theoretical argument in Theorem 2 is to show the boundedness of the cost that corresponds to conditional Bregman divergences over all states $s\in \mathcal{S}$. Therefore, we define $A_t = \sup_{s\in \mathcal{S}} \mathbb{E}_{\tau_{1:t}} [ D_\Omega(\pi^s_E \Vert \pi_t^s ) ]$, which represents such cost in one term. $T$ denotes the end of the (finite) training time. Thank you for this question; we enhanced the presentation of Theorem 2 during the rebuttal period, which can be checked in the rebuttal revision.\n\n[1] Tianwei Ni, Harshit S. Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, and Ben Eysenbach. F-IRL: inverse reinforcement learning via state marginal matching. In Conference on Robot Learning, pages 529–551. PMLR, 2020.\n\n[2] Ziniu, et al., Rethinking ValueDice - Does It Really Improve Performance?, ICLR Blog Track, 2022.\n\n[3] Ng, A. Y., Harada, D., and Russell, S., Policy invariance under reward transformations: Theory and application to reward shaping. In ICML 1999, Vol. 99, pp. 278-287.", "Thank you very much for your insightful feedback and the time spent reviewing our work. We address your comments below.\n\n**Comparison studies.** \n\nHere we note the important reasons why this work focused on comparisons with RAIRL.\n\n1. The reasons for fixing RAIRL as the main comparative algorithm throughout this work are (1) compatibility and (2) significance. A typical IRL algorithm is not designed to work on multiple statistical divergences. To the best of our knowledge, only RAIRL is a suitable algorithm that models the Bregman divergence so that the algorithm can be applied to all sets of our comparative experiments. Second, since RAIRL shares similar reward models and learning schemes with MD-AIRL, consistently showing performance gains over RAIRL was one of the core properties of MD-AIRL necessary to validate our claims.\n2. **F-IRL** [1] works on state densities with asymptotic estimation, typically trained with mini-batches. In contrast, MD-AIRL contains the exact form of MD for probabilistic actions, incorporated with an estimated state density discriminator. F-IRL might be a reasonable algorithm if the problem is suitable for a specific divergence. Validating an IRL method that models an f-divergence is beyond the scope of our paper, as addressed in Section 8.\n3. **ValueDice** [2] focuses on improving offline settings of imitation learning algorithms, such as behavior cloning (BC). The authors assumed (1) environments to be deterministic MDPs, and also (2) the parameterization to be deterministic, which is quite different from our settings. These combined factors make a fair comparison of the ValueDice method with the MD-AIRL approach very difficult in this paper. We respectfully point out that our goal for the experiments was not state-of-the-art performance, even for these vastly different cases.\n\n**Functionality of the learned reward function.** \n\nWe address detailed analyses below.\n\n1. We presented a strict parameterization for the reward representation that is theoretically consistent with both MD and regularized MDPs, rather than with standard dual gradient methods. Compared to classical MD, this work deals with sequential decision problems of regularized MDPs with unknown dynamics. Therefore, our theories are geared toward rationalizing the regularizer $\Psi$ and the convergence of the cumulative costs. \n2. The notion of the ground-truth reward requires elaboration since IRL is an ill-posed problem; there can be numerous solutions to the reward function inducing the same optimal policy. Our approach is grounded on a novel perspective of simultaneously learning policies and rewards in RL/IRL methods, corresponding to a combined, multi-stage optimization process. Therefore, a particular reward function is tightly coupled with the corresponding policy (regardless of optimality). The proposed learning objective provides the reward function for the next iterative steps, subject to the local geometric constraints of the convex regularizer.\n\nWe also conducted the multi-goal experiment in Fig. 6, visualizing the basic characteristics of the learned IRL reward function.\n\n**Downstream tasks.** \n\n1. **Transfer learning.** We would like to point out that the primary purpose of this work is the robustness of imitation learning. While our method can reproduce similar results since it retains the functionality of AIRL variants, we believe that a more critical aspect of the practicality of imitation learning algorithms is preserving the algorithm's robustness even in challenging situations.\n2. **Reusability of reward functions.** Our key postulation about the reward function is rooted in the core argument that a reward function has a one-to-one relationship with a corresponding policy ($\Omega$ determines such correspondence). Based on our theoretical analyses, this claim is a strong argument that is different from other works that directly estimate a reward function of a certain form. From this perspective, we do not see a fixed form of reward function as a reusable object across multiple downstream tasks, due to the accumulation of errors in imitation learning for sequential decision problems and the possibility of negative transfer. Instead, we consider a reward function acquired by MD as an instant representation of the current policy in a different form.\n\n", "We appreciate all the reviewers for their invaluable feedback. We have used this feedback to improve our paper. We also thank the reviewers for their encouraging comments, referring to our work as well-written (**`Ptuv`** & **`4Ddj`**), solid (**`4Ddj`**), sound (**`Ptuv`** & **`4Ddj`**), and novel (**`mFTz`**). Here we present abridged answers to the essential questions. \n\n- **Limitations.** The design choice of the dual discriminative architecture came from the theoretical requirements of Eq. (7) for all possible instances of the on-policy state samples of the environment. Matching state densities with the reward formulation in Eq. (15) has distinct technical benefits in practice. Following the reviewers' suggestions, we rewrote **Section 8** in the revised version of our manuscript.\n- **Experiments.** The main reasons for focusing the comparison studies on RAIRL were (1) compatibility and (2) significance. RAIRL is the essential counterpart for almost all experiments because it inherits the general features of the modern AIL method, with a wide range of choices of Bregman divergence. Therefore, the consistent performance gains of MD-AIRL over RAIRL are important, as they are directly related to the core theoretical claims. As suggested, we clarified the goal of the experiments and the underlying reasoning in the rebuttal revision and included additional experimental results below.\n\n**Additional experiments.** We are pleased to report additional empirical evidence before the author-reviewer discussion period. The following comparison experiment in Table B1 covers several recent IRL methods, many of which were suggested by the reviewers. In particular, GAIL [1] with the DAC-style reward function ($r(s,a) := \log D(s, a) - \log(1 - D(s, a))$) [2], FAIRL [3], and F-IRL [4] were additionally tested with noisy demonstrations. The table shows the performance for five different algorithms; we used our RAC implementation with the Shannon regularizer, except for F-IRL (we tested F-IRL with the official implementation with custom trajectory data).\n\n**Table B1. Imitation learning scores for noisy demonstrations ($\varepsilon = 0.5$; the scores are rescaled by considering the average expert performance as 1).**\n\n| | Hopper-v3 | Walker2d-v3 | HalfCheetah-v3 | Ant-v3 |\n| --- | --- | --- | --- | --- |\n| GAIL (DAC-reward) [1,2] | 0.95 ± 0.11 | 0.66 ± 0.24 | 0.97 ± 0.07 | 0.94 ± 0.08 |\n| FAIRL [3] | 0.73 ± 0.25 | 0.14 ± 0.06 | 0.96 ± 0.03 | -0.11 ± 0.15 |\n| F-IRL [4] | 0.82 ± 0.06 | 0.67 ± 0.2 | 0.94 ± 0.03 | 0.91 ± 0.07 |\n| RAIRL | 0.96 ± 0.38 | 0.55 ± 0.32 | 0.96 ± 0.15 | 0.85 ± 0.11 |\n| MD-AIRL | 0.98 ± 0.07 | 0.73 ± 0.31 | 0.98 ± 0.02 | 0.96 ± 0.07 |\n\nMD-AIRL clearly outperforms modern IRL algorithms when it comes to robustness to unreliable demonstrations. These additional results are in alignment with our analyses and support our claims.\n\n**Rebuttal Revision.** Before the discussion, we posted a current revision of our manuscript. Notably, this version contains new lines in Sections 6 & 8 to reflect the majority of the reviewers' important suggestions, as well as a few minor corrections of typos and grammatical errors. We believe the presentation has become much clearer. We are currently extending this process further, incorporating all of the reviewers' invaluable comments for the final submission.\n\nPlease let us know if there are any remaining questions!\n\nBest Regards,\n\nAuthors of Submission #4477\n\n[1] Jonathan Ho and Stefano Ermon. Generative adversarial imitation learning. In Advances in Neural Information Processing Systems, pages 4565–4573, 2016.\n\n[2] Ilya Kostrikov, Kumar Krishna Agrawal, Debidatta Dwibedi, Sergey Levine, Jonathan Tompson, Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning, In ICLR 2019.\n\n[3] Seyed Kamyar Seyed Ghasemipour, Richard Zemel, and Shixiang Gu. A divergence minimization perspective on imitation learning methods. In Conference on Robot Learning, pages 1259–1277. PMLR, 2020.\n\n[4] Tianwei Ni, Harshit S. Sikchi, Yufei Wang, Tejus Gupta, Lisa Lee, and Ben Eysenbach. F-IRL: inverse reinforcement learning via state marginal matching. In Conference on Robot Learning, pages 529–551. PMLR, 2020.\n", "This paper casts the imitation learning problem as an iterative RL-IRL process and proposes a new adversarial imitation learning (AIL) algorithm based on mirror descent. The authors prove the convergence of the learning policy to an optimal policy in $\mathcal{O}(\frac{1}{T})$. The experiments show that the proposed method, MD-AIRL, consistently outperforms RAIRL. **Strengths**:\n\nMotivated by some practical observations, e.g., the global solution in AIL is hard to obtain by a monolithic estimation process, the authors convert the original AIL formulation to a sequence of policies and associated reward functions, and derive an MD update rule that enjoys a regret bound of $\mathcal{O}(\frac{1}{T})$. This approach is appealing and seems sound, and it also provides a linear convergence rate to the optimal policy. The experiments demonstrate MD-AIRL consistently outperforms RAIRL and is more robust to the noise and the number of trajectories. \n\nOverall, this paper is clearly written and easy to follow.\n\n**Weaknesses**:\n\nWhile the analysis part contributes a lot, the compared baselines only consist of RAIRL and BC. Some recent work, e.g., f-IRL (in terms of IRL) [1] and ValueDice (in terms of imitation learning) [2], can also achieve a more robust learning process than AIL and AIRL, and these are missing in the literature review or as baselines. Besides, MD-AIRL extends the AIRL framework, so the learned reward function may be used in downstream tasks as done in f-IRL. However, the functionality of the learned reward function has not been revealed. \n\n[1] Tianwei Ni, Harshit S. Sikchi, Yufei Wang, et al. "f-IRL: Inverse Reinforcement Learning via State Marginal Matching." CoRL 2020.\n[2] Kostrikov, Ilya, Ofir Nachum, and Jonathan Tompson. "Imitation learning via off-policy distribution matching." ICLR 2020.\n \n(1) The current theoretical analyses show the convergence to the optimal policy, which seems to come from the nature of MD. Is it possible to establish some results on the learned reward function? 
Or is it possible to give some analysis (experimental or theoretical) on the learned reward function?\n\n(2) I noticed the BC results in the MuJoCo tasks are very low, while BC can be a very strong baseline on MuJoCo tasks with only 1 demonstration (please see Figure 4 in [1]). Can you clarify the implementation details of BC?\n\n\n**Minor:**\nIn Thm. 2 (Line 220), what is the difference between $A_t$ and $A_T$?\n\n\n[1] Ziniu, et al., "Rethinking ValueDice - Does It Really Improve Performance?", ICLR Blog Track, 2022. The authors proposed an appealing approach to make the AIL process more robust. The theoretical result on the convergence rate is of great importance, and the experiments show MD-AIRL is consistently better than RAIRL. MD-AIRL learns the reward function, which is also important for the IRL problem, while the authors dig little into the learned reward. Besides, comparisons with other recent IL and IRL algorithms would be helpful.", "Inspired by mirror descent, the paper proposes MD-IRL, which considers the reward function as an iterative sequence in a proximal method. The paper proves that such a method ensures robust minimization of a Bregman divergence under the mirror descent framework. \n Strengths: \n - It is novel to consider using MD in the reward update. A rigorous regret upper bound is also provided to show that the Bregman divergence is minimized to a local optimum. Meanwhile, step size considerations are also provided, which is good. \n - Thorough empirical studies are provided (though limited, see weaknesses). \n\nWeaknesses:\n - The paper is closely related to RAIRL. I think the authors should make a thorough comparison with RAIRL. \n - In the empirical study, the paper only compares with RAIRL, which is slightly limited. \n See weaknesses. Yes. ", "The main idea of this paper is to modify adversarial inverse RL to perform mirror descent policy updates. In practice, this means that the regular RL actor objective is modified by minimizing the divergence between policy iterates. The paper proves that this method converges under reasonable assumptions (e.g., a Robbins–Monro learning rate), and shows that the method works roughly on par with one prior method (RAIRL), for a wide range of Bregman divergences. Strengths\n* The proposed method is applicable to a wide range of Bregman divergences.\n* Experiments study robustness to varying noise in the dataset and choices of hyperparameters.\n\nWeaknesses\n* Poor writing made the paper very difficult to understand. For example, when introducing the theoretical results, it'd be good to explain what question these results are answering before stating the results themselves. Similarly, when introducing the proposed method, it'd be good to explain the problem before introducing the solution (e.g., the dual discriminator is introduced without motivation).\n* I'm unsure about some of the claims in the introduction and related works section (see below).\n* Limited baselines for the continuous control experiments. E.g., for Fig 7/8/9, I'd recommend comparing to a method like DAC [1], or a more recent+competitive method.\n\n**Summary**: Overall, it seems likely that mirror descent is the right way to do imitation learning, and that it can outperform vanilla gradient descent in many settings. However, I'm not convinced that this paper effectively makes this point, partially because the writing is very hard to follow, and partially because the experiments do not convincingly show that other Bregman divergences consistently outperform the L2 distance (which is effectively what gradient descent uses). So, I think the paper is on the right track, and will eventually make for a strong resubmission after significant revisions to the writing and experiments.\n\n\nAdditional, less important questions/comments:\n* I'd recommend running a spelling + grammar checker on the paper.\n* L23 -- L30 -- It seems that this paragraph is really about imitation learning, not inverse RL. Indeed, adversarial imitation learning methods do not actually recover the expert reward function, only the expert policy. So, I'm not sure that the claim that these methods "estimate the ... reward function directly" is true.\n* Table 1 -- The column about "rewards" seems misleading, for two reasons. First, it's unclear if learning the reward function is actually useful, if the user only cares about acquiring the optimal agent. Second, methods like GAIL do not actually learn the expert's reward function. The column about Bregman divergences seems misleading, too, because BC can be implemented with different Bregman divergences (e.g., the standard BC corresponds to a forward KL divergence). BC also enjoys iterative solutions, and also inherits convergence guarantees from standard supervised learning theory.\n* L73 "Pythagorean theorem" -- Is it true that all Bregman divergences obey the Pythagorean theorem? I don't think it's true for the KL divergence. The citation [13] doesn't mention the Pythagorean theorem.\n* Fig 2 is very hard to read. I'd recommend making all subplots a consistent size (e.g., remove the tiny subplots in B) and making the font sizes all larger.\n* Eq 14 -- I'd recommend adding a paragraph of explanation before this, explaining the intuition before stating the final objective.\n* Eq 15 -- Why is the $\lambda$ parameter needed? From a probabilistic perspective, it seems like $\lambda = 1$ should be optimal.\n* L263 -- L265 -- What is the motivation for introducing these different Bregman divergences? The proposed method is general, in that it can use any Bregman divergence. But making the claim that this generality is useful requires evidence that different Bregman divergences are useful in different situations.\n* Sec 7.1 -- I didn't understand the motivation for this experiment. It seems like the optimal policy is given by the empirical action distribution, which is trivial to compute with gradient descent or iterative updates.\n* Fig 5 -- Does the RAIRL baseline also decay the learning rate? If the proposed method uses a learning rate schedule, it'd be good to equip the baselines with this schedule, too.\n* Fig 6, "goal entropy" -- Why is this a reasonable metric? It seems like some measure of divergence from the expert policy would be more meaningful.\n\n[1] https://arxiv.org/abs/1809.02925 1. To what extent are the results limited to inverse RL, versus any algorithm that performs policy updates? It seems like the idea of doing regularized policy improvement could (and has been?) studied in a wide range of contexts, beyond imitation learning. To what extent are prior results *already* applicable to this setting, and to what extent do the proposed results extend to other settings (e.g., are they stronger than the results in [1]?)?\n2. While it seems like the theoretical results are correct, I'm unsure if they motivate the use of mirror descent. What would similar convergence results look like for (say) standard/Euclidean gradient descent?\n\n\n\n[1] https://proceedings.mlr.press/v97/geist19a.html The limitations section is OK. The discussion of extending the analysis to f-divergences is a good point, but not really addressing the limitations of the proposed method. The discussion about "unsafe behavior" and "automation of labor" seems overly broad. I would recommend making more specific limitations about the work (e.g., arbitrary Bregman divergences might not always be useful, and the choice of divergence measure adds an additional hyperparameter for the user to tune).", "The paper derives MD-AIRL, a new algorithm for inverse reinforcement learning as an instance of online mirror descent and proves convergence and bounded regret.\nThe derived algorithm is closely related to existing methods in the field of adversarial imitation learning, by using a structured discriminator akin to AIRL to estimate the expert densities (but additionally training a state-only discriminator to more directly match the state marginals). Furthermore, similar to RAIRL, the algorithm is tested for different Bregman divergences and policy regularizers and uses a density-based model for the discriminator. The main algorithmic difference compared to prior methods is that the discriminator does not directly specify the reward function for the given iteration, but instead is used to estimate the expert policy (which changes in every iteration, motivating the online MD formulation). The update in Eq. 14, hence, corresponds to an additional step that computes the next reward based on the expert policy estimate.\nMD-AIRL is compared to RAIRL on a multi-armed bandit problem, a continuous point-mass environment and MuJoCo environments, and shows slightly improved performance across all settings. \n ## Originality\n__Strength__\n- The mirror descent formulation seems novel and leads to a slightly different algorithm that can make use of the broad theory around mirror descent.\n\n__Weakness__\n- Algorithmically, the changes compared to prior work are not overwhelming. \n\n## Quality\n__Strength__\n+ The derivations seem sound and the main claims are proven with sufficient rigor.\n\n__Weakness__\n- Proofs on convergence do not consider estimation errors (for estimating log-ratios and for policy optimization). \n- I think some aspects of the algorithm are not fully consistent with theory and not sufficiently well motivated:\n1. The second (state-only) discriminator is motivated based on [45, 46], where the GAN setting is considered. I don't see how these papers motivate the particular use of a state discriminator. \n2. As far as I understood, the regularized reward function (Eq. 15) deviates from the theory. While I understand the motivation behind it (biasing the agent to visit expert states), it is not clear how this additional term affects the theoretical results on convergence and the regret bound. I think the algorithm should also be evaluated without the state-reward.\n\n## Clarity\n__Strength__\n- Overall, the paper is well-structured and well-written.\n\n__Weakness__\n- Some parts are a bit hard to understand:\n1. In Eq. 14, it is not clear how $\pi_{\phi}$ is computed from $\psi_{\phi}$. I assume the optimization is actually performed w.r.t. a parameterized policy and then mapped to the reward function (or the policy is directly expressed in terms of $\phi$); however, this should be made more explicit. Also, it is not clear how the reinforcement learning relates to the derivations; the MD update in Eq. 7 does not consider time series data. \n2. I don't understand how the main change to prior work (Eq. 14) follows from the MD formulation. After the estimation of the expert densities, couldn't we directly perform the MD update by optimizing the policy with respect to Eq. 7 (using expectations over trajectories)?\n\n## Significance\n__Strength__\n- The work is a solid contribution to the field by providing a mirror descent formulation for inverse reinforcement learning, along with the corresponding algorithm, which performs reasonably well.\n I only have two questions, but it would be important to answer them thoroughly in the rebuttal:\n\n1. How crucial is the state reward (Eq. 15) for the performance of MD-IRL, and how does it relate to the theory? \n2. In Eq. 14, do I understand it correctly that the objective is minimized with respect to $\pi_{\phi}$, and the optimal policy is then mapped to a reward function using the regularized reward operator? How does this relate to the paths in Fig. 1? If we know $\pi_{t+1}$ already (because it is the policy that we used to compute $\psi$), why do we need to perform reinforcement learning (cf. Eq. 4 and the equation after Eq. 15) if we know the (valid) distribution for the current reward? I understand that we need to mitigate covariate shift and take into account the auxiliary state-reward, but how does this relate to the theory? The paper only discussed the limitation that some divergences are not covered by this formulation.\n\nThe paper would be much stronger if it thoroughly discussed the *mismatch between theory and algorithm* and clearly stated that the *effects of estimation errors* have substantial impact in practice, but are not considered in the current work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 2, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 2, 4 ]
[ "QcWr32wiGG3", "bFOeN0qjuR", "aFcxAnSYa1f", "TvGtDv5kW9", "EZ__lX6cylc", "gAaa1K94fV-", "y2QhDF3YA3w", "pqwWbhvJAId", "kKdmrpsVyPT", "e9JDF8MnZp3", "y2QhDF3YA3w", "BqRHebuJYdH", "GgjNxHw0lqY", "nRy16duSBny", "zqPT3-RfnTGE", "pN2lz8SZ53U", "GgjNxHw0lqY", "wwC2puPOSNZ", "GM5MhZ5GFJ", "Y7Q3c7LIPIB", "tr-TsEEOSl3", "Xr6sWyeWRad", "CidadnfoeNt", "BeVQi2AXyQs", "jqnc-rarpx-", "gJn2URmrtbt", "CidadnfoeNt", "icz7sDf18xZ", "tr-TsEEOSl3", "FJKnNBsE06o", "nips_2022_huT1G2dtSr", "nips_2022_huT1G2dtSr", "nips_2022_huT1G2dtSr", "nips_2022_huT1G2dtSr", "nips_2022_huT1G2dtSr" ]
nips_2022_q2nJyb3cvR9
Near-Optimal Randomized Exploration for Tabular Markov Decision Processes
We study algorithms using randomized value functions for exploration in reinforcement learning. This type of algorithms enjoys appealing empirical performance. We show that when we use 1) a single random seed in each episode, and 2) a Bernstein-type magnitude of noise, we obtain a worst-case $\widetilde{O}\left(H\sqrt{SAT}\right)$ regret bound for episodic time-inhomogeneous Markov Decision Process where $S$ is the size of state space, $A$ is the size of action space, $H$ is the planning horizon and $T$ is the number of interactions. This bound polynomially improves all existing bounds for algorithms based on randomized value functions, and for the first time, matches the $\Omega\left(H\sqrt{SAT}\right)$ lower bound up to logarithmic factors. Our result highlights that randomized exploration can be near-optimal, which was previously achieved only by optimistic algorithms. To achieve the desired result, we develop 1) a new clipping operation to ensure both the probability of being optimistic and the probability of being pessimistic are lower bounded by a constant, and 2) a new recursive formula for the absolute value of estimation errors to analyze the regret.
Accept
We thank the authors for their submission. The paper studies regret minimization in finite-horizon tabular Markov decision processes. It is the first to show an optimal (up to logarithmic factors) regret bound of $\widetilde O(H \sqrt{|S| |A| T})$ for Thompson sampling-type algorithms. This is a good addition to the TS literature, showing another case in which TS algorithms can have the same regret guarantees as optimistic algorithms. The paper is well-written.
train
[ "2gBPAfynl", "hlQfaBo3hff", "YY1Bv_Hmyp", "wo_6Qf7zMW0", "oZhQNNvPMnh", "-Mu4Md_-_b8", "v9xSAjHSmkyN", "-4QwtcZaSTL", "uzsIcSCgKP", "QPBaSHHJDHfI", "k6BF1ho4tLj", "U-qVArdYYAn", "ZmbnBhdkmj1", "K0e7uoBMN7e", "68owXVCI6hv", "HJniB9FQwtJ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your response! Please find our answers to your questions below:\n- **Non-stationary MDP**: In RL theory, a **non-stationary MDP** means that the transition probability $P$ and reward function $r$ change from episode to episode, which is a notion of non-stationarity slightly different from the standard stochastic process literature. In our paper, the model we use is called a **time-inhomogeneous** MDP, which is more general than the **time-homogeneous** MDP (meaning that $P$ and $r$ remain fixed for all time steps), which is usually the standard textbook setting, as in [Sutton et al. (2018)]. Therefore, our bounds still hold for the time-homogeneous MDP setting. In recent years, the RL theory community has been interested in time-inhomogeneous MDPs because of their generality.\n- **Real-world model**: We believe the Atari games, widely used as benchmarks for many practical algorithms, are a family of examples where "the same sequence of transition matrices and reward functions appears in every episode". Every time the game restarts (an episode ends), the agent will face the same game setting (the same transition matrices and reward functions).\n- **Technical novelty:** We are happy to explain our novelties in further detail if you have more specific questions.\n\n```\n[Sutton et al. (2018)] Sutton, Richard S., and Andrew G. Barto. Reinforcement learning: An introduction. MIT press, 2018.\n```
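To make the distinction concrete, here is a minimal illustrative sketch (not from our codebase; the array shapes and names are ours) of backward induction in a time-inhomogeneous tabular MDP; the time-homogeneous case is simply the special case where every step shares the same $P$ and $r$:

```python
import numpy as np

def backward_induction(P, r):
    """Optimal values in a time-inhomogeneous tabular MDP.

    P: shape (H, S, A, S), P[h, s, a, s'] = transition probability at step h.
    r: shape (H, S, A), reward at step h. For a time-homogeneous MDP,
    P[h] and r[h] are identical for every h.
    """
    H, S, A, _ = P.shape
    V = np.zeros((H + 1, S))            # V[H] = 0 at the end of the episode
    for h in range(H - 1, -1, -1):
        Q = r[h] + P[h] @ V[h + 1]      # shape (S, A): r_h + E_{s'~P_h}[V_{h+1}(s')]
        V[h] = Q.max(axis=1)            # greedy over actions
    return V
```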
", "Thank you for the clarifications and taking my suggestions into consideration.\n\nI have a couple of questions.\n1. The MDP model in the paper uses a sequence of transition matrices and rewards that change along an episode, which is why they're indexed by $h\in[H].$ This, by definition, is a non-stationary process.\n* My question was about justifying the use of this model. The same sequence of transition matrices and reward functions appears in _every_ episode. I would like to see some independent justification of this model, i.e., more than the claim that "others use it." Where in practice does one encounter this model (or something that at least approximates it)?\n2. I am not convinced by the explanation about technical and algorithmic novelty.\n ---\nPost rebuttal, the reviewer would like to maintain their rating at 4.", "Thank you very much for your constructive suggestion! We will add an ablation study of our algorithm in the final version.", "Thanks for your reply! Unfortunately, in the experimental section I did not find the proof of your claim about the increase in stability of the algorithm from single-seed randomization. It would be interesting to observe in the final version an ablation study of your algorithm. Now it is not clear what makes the algorithm better: single-seed randomization or just a smaller magnitude of the noise.\nHowever, I still think this paper is a very interesting contribution to the community and would like to keep my score.", " Thanks again for your review. We hope our answers could increase your confidence. As the discussion period is close to the end and we have not yet heard back from you, we would be glad to see if our rebuttal response has addressed your questions/concerns.\nWe are more than happy to discuss further; if you have any further concerns or issues, please kindly let us know your feedback. Thank you for your time and help!", " Thanks again for your review. We hope our answers could increase your confidence. As the discussion period is close to the end and we have not yet heard back from you, we would be glad to see if our rebuttal response has addressed your questions/concerns.\nWe are more than happy to discuss further; if you have any further concerns or issues, please kindly let us know your feedback. Thank you for your time and help!", " Thanks again for your review. We hope our answers could increase your confidence. As the discussion period is close to the end and we have not yet heard back from you, we would be glad to see if our rebuttal response has addressed your questions/concerns.\nWe are more than happy to discuss further; if you have any further concerns or issues, please kindly let us know your feedback. Thank you for your time and help!", " Thank you very much for your careful reading and recognition of our novelty! Please find our response to your questions and concerns below:\n- **Empirical study:** Please refer to the experiments section at the top for all reviewers.\n- **Appendix issues:** Thanks a lot for the careful reading and constructive suggestions for our appendix. We will fix these issues in our final version.", " Thank you very much for your careful reading and recognition of our novelty! Please find our response to your questions and concerns below:\n- **Lower order term:** We agree that our lower order term $\widetilde{O}(H^4S^2A)$ is sub-optimal. However, it is a big open problem in reinforcement learning theory to get rid of the lower order terms, and this can serve as a promising future direction. In particular, our lower order term is the same as that of UCB-VI, and more discussion about this topic can be found in [31].\n- **Thompson sampling algorithms and PSRL [5,6]:** We did not fully address the works in the area of Thompson sampling because we mainly focus on the standard frequentist regret bound, and it would be hard to compare with an algorithm (such as PSRL in [5,6]) that studied different settings and objectives. On the other hand, we did compare with Thompson sampling algorithms under comparable settings and objectives, such as [41] (in Section 4.1).", " Thank you very much for your careful reading and constructive suggestions! Please find our response to your questions and concerns below:\n- **Tabular MDP settings:** The tabular MDP setting studied in this paper is a **benchmark setting** in the RL theory community to evaluate the theoretical properties of new algorithms and techniques. All the relevant papers cited [2,7,8,9,12,14,15,22,23,31,33,41,42,48,50,51,53,54] use the same setting. \nIf the transition matrix and reward are changing, then the setting becomes a **non-stationary MDP** [Wei and Luo, 2021], and how to adapt UCB/TS to non-stationary MDPs remains largely unsolved. However, the non-stationary MDP is beyond the scope of this paper.\n- **Novelty:** We disagree that we lack novelty. We want to emphasize that much more technical novelty lies in our analysis, in addition to the single seed in the algorithm design. \n - In particular, clipping used in the existing literature [2,7] directly truncates the value estimate by the upper bound $H-h+1$, while we are the first to use the new two-sided clipping (with threshold $2(H-h+1)$), and this is crucial to our analysis. \n - In the analysis, we use both optimism and pessimism to derive a novel recursive bound on the absolute value of the policy estimation error. Meanwhile, it is also the first time that pessimism is used for worst-case regret analysis in online reinforcement learning.\n- **Algorithm writing:** Thanks for the suggestion! We have added a hyperlink to equations (5) and (6) for $\sigma^k_{\mathrm{ty}}$. As for trajectory generation, we believe it has been described in lines 9 and 10 of Algorithm 1. Please let us know if you were referring to something else.\n- **Advantage of randomized exploration:** We are sorry for the confusion. However, in the paragraph of line 225, we did not intend to use our regret bounds to justify that the algorithmic component is the main advantage of randomized exploration. Instead, it serves as our motivation to study randomized exploration since the *advantage* here refers to the ease of implementation instead of any superiority in the regret bound.\n- **Expected regret and experiments:** Please refer to the experiments section at the top for all reviewers. As for the expected regret, we believe that high-probability regret is a strictly stronger measure than the expected regret since, by simply taking $\delta=\frac{1}{T}$, the expected regret will have the same leading order as the high-probability regret.\n\n```\nWei, Chen-Yu, and Haipeng Luo. "Non-stationary reinforcement learning without prior knowledge: An optimal black-box approach." Conference on Learning Theory. PMLR, 2021.\n```", " Thank you very much for your careful reading and constructive suggestions! We will fix the typos and address all concerns in more detail in the final version. Please find our response to your questions and concerns below:\n- **Condition for $T$ in Theorem 2:** The requirement of $T\geq \Omega(H^5S^2A)$ in Theorem 2 comes from line 774 in the proof of Lemma 25. From a high-level perspective, this is needed because the Bernstein bonus uses the sum of variances to control the regret, which is more refined than the Hoeffding bonus, but only when $T$ is large enough. That is, if $T\leq H^5S^2A$, then our analysis cannot guarantee the optimal order of $H$ in the leading term. \n- **Lower order term:** Our lower order term is the same as that of UCB-VI, and we will discuss more on this in the final version. Meanwhile, it remains a major open problem in reinforcement learning theory to get rid of the lower order terms, and some detailed discussion of this topic can be found in [31].\n- **Implementation of UCB-type algorithms:** We want to clarify that we believe UCB-type algorithms have difficulty in implementation mainly because of their non-trivial construction of confidence intervals, which is true for both model-based and model-free algorithms. This is discussed in the paragraph starting from line 222, and we will try to make it clearer.\n- **Experiments:** Please refer to the experiments section at the top for all reviewers.", " We have fixed all the typos and added a section for experiments at **the end of the appendix**. In brief, we empirically compare RLSVI, UCBVI and our algorithm SSR in the deep sea environment, which is frequently used to test an algorithm's ability to do efficient exploration [35, 37]. We did not compare them with PSRL [5, 6] because it is designed for different settings and objectives and is thus not comparable to our algorithm.\n\nIn brief, SSR performs comparably to UCBVI and significantly better than RLSVI, as suggested by our theory. 
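For intuition about this benchmark, here is a minimal illustrative sketch of a deep-sea-style environment (our own reconstruction for exposition, not the code used in our experiments):

```python
import numpy as np

class DeepSea:
    """A deep-sea-style exploration task (illustrative reconstruction).

    The agent descends an N x N grid one row per step, choosing left or
    right at each row. Only the bottom-right cell yields reward 1, and
    moving right incurs a small cost, so dithering exploration needs
    roughly 2^N episodes to find the reward.
    """

    def __init__(self, size=10, move_cost=0.01):
        self.size = size
        self.move_cost = move_cost

    def run_episode(self, policy):
        col, total = 0, 0.0
        for row in range(self.size):
            if policy(row, col):                 # 1 = right, 0 = left
                total -= self.move_cost / self.size
                col = min(col + 1, self.size - 1)
            else:
                col = max(col - 1, 0)
        return total + (1.0 if col == self.size - 1 else 0.0)

env = DeepSea(size=8)
always_right = lambda row, col: 1
print(env.run_episode(always_right))             # 0.99: reaches the treasure
```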
More details can be found in our updated supplementary file.", " This paper studies episodic RL in tabular MDPs in the regret minimization setting, and presents an algorithm, called SSR, which follows the principle of exploration via randomized value functions. The paper presents two variants of SSR that differ in the choice of the noise. The variant that generates noise sequences based on Bernstein’s inequality is shown to achieve a worst-case regret bound (with high probability) that matches the lower bound up to logarithmic factors. This regret bound improves the best known regret bounds for randomized exploration by a factor of $\\widetilde O(H\\sqrt{S})$. The paper studies a classic but important problem in RL. Despite all the recent advances in the theoretical aspects of episodic RL, the studied setup is interesting in view of the empirical success of exploration via randomized value function and considering that the theoretical picture is not yet complete. The paper makes a good step in this direction and thus makes a nice addition to the literature on theoretical episodic RL. \n\nThe paper is written well and admits a good organization. The presentation of the various elements (model, algorithm, and results) is clear and precise. All these makes the paper a nice read. \n\nAmongst the technical and algorithmic novelties is the use of a single seed for randomization and the use of a new clipping strategy. The latter is shown to maintain both optimism and pessimism, which turns out to be crucial in achieving the improved regret bounds. \n\nSome weaknesses and questions:\n\n- The regret bound of the Bernstein-type variant in Theorem 2 is stated to hold when $T$ is larger than a (rather large) polynomial of $H$ and $S$, whereas this is not the case for the other variant. Could you elaborate further on this? And what can be said about regret for when $T$ is smaller than this threshold?\n\n- I would like to appreciate the authors to have included the lower order terms in the regret bounds and to have clarified the time after which the desired $\\sqrt{T}$-regime kicks in. Under both variants, the lower order terms scale as $H^4S^2A$, so that the desired $\\sqrt{T}$-regime kicks in after a rather large $T$ (albeit polynomial in $H$ and $S$). I may urge the authors to compare SSR against state-of-the-art in terms of such critical values of $T$ as well.\n\n- The statement ”UCB-type algorithms … suffer from difficult implementation ...” is not entirely correct. While this statement could be valid for model-based UCB-type algorithms, model-free ones do not require complicated planning procedures (such as EVI in UCRL2 or alike). Rather, they just take actions greedily with a Q-function, which is cheaply implementable. I may ask the authors to further elaborate on this, and if necessary, make the statement more precise. \n\n- The paper nicely closes the gap between the regret lower and upper bounds, implying that for large enough time horizons, both UCB-type and randomized approaches could achieve the worst-case optimal regret bound. Then the following question naturally arises: In the considered episodic setting, which approach is empirically superior? Unfortunately, the paper does not provide any experimental result, so it remains inconclusive as to whether any of the two approaches provide a superior empirical performance. \n\nMinor comments:\n\nIn several places, it is stated that optimism and pessimism is guaranteed to hold with constant probability. 
In technical sections (e.g., Section 4 and later), it might be a good idea to explicitly state such a constant probability to enrich the discussion. \n\nSome typos: \n\nLine 89: Algorithms …. has been ==> … have been \n\nLine 130 (and elsewhere): in tabular setting ==> in the tabular setting\n\nLine 158: $h=1, \\ldots H$ ==> $h=1, \\ldots, H$\n\nLine 162: noisy observation ==> noisy observations \n\nLine 178: has to large ==> has to be large\n\nLine 188: in suboptimal regret bound ==> in a suboptimal regret bound\n\nLine 192: number of clipping ==> … clippings \n\nLine 194: is closed to ==> is close to \n\nLine 198: uncertain-based ==> uncertainty-based\n\nLine 212: … still maintain ==> still maintains\n\nLine 219: Full stop missing\n\nLine 221: noise ==> noises \n\n------ Post Rebuttal ------\n\nI would like to thank the authors for the rebuttal. My questions and concerns are adequately addressed. As a result, I maintain my score of 7, but with an increased confidence of 4. \n See the weaknesses above. Also address the minor comments if relevant. There is no negative societal impact associated with this work. The most important limitations of the presented methods are adequately and clearly discussed in the paper.\n", " The paper studies a finite state-action MDP setting with time-inhomogeneous transitions and costs. The authors then propose a single-seed randomization algorithm. The regret analysis then purports to achieve a $\\tilde{O}(H\\sqrt{SAT})$ bound on regret in the finite MDP setting using randomized algorithms (as opposed to variants of the UCB family for MDPs).\n Strengths: \n1. The paper addresses an important lacuna in finite MDP Reinforcement Learning. UCB algorithms have been known to show $\\tilde{O}(H\\sqrt{SAT})$ regret performance for quite some time now, and an extension to Thompson Sampling-type algorithms is quite necessary.\n\nWeaknesses:\n1. I'm not convinced the setting of the problem is entirely natural. Essentially, the model is one where the agent interacts with the environment over H trajectories, and the exact same sequence of transition matrices and reward functions materializes every time.\n- Using a model just because it has been previously used does _not_ constitute sufficient justification.\n\n2. The algorithm and its analysis are lacking in novelty, in my opinion. \n- The concept of clipping is introduced in [2, Algorithm 1] (see line 15). \n- With regards to $\\sigma^k_{ty}$ in Step 6: Moving from Hoeffding- to Bernstein-type randomization is rather dated and can be found in UCB-type algorithms previously proposed in the literature.\n- It appears to me that using a single seed for the trajectory is the only truly novel idea. 1. Steps are missing in Algorithm 1. \n - Trajectory generation must be mentioned somewhere between Steps 5 and 6.\n - Please point to equations 5 and 6 to show how $\\sigma^k_{ty}$ is generated. I was searching around for an expression for $\\sigma^k_{ty}$ for quite some time.\n\n2. In line 225, you claim that \"the main advantage of randomized exploration lies in the algorithm component.\" Yet, the results show that it is the other point, i.e., the Bernstein noise, that really does the $H^{1.5}$ to $H$ reduction. In light of this, how would you justify your claim in Line 225?\n\n3. Can you say anything about expected regret?\n - I cannot find any numerical results in the paper. In the absence of simulations, how does one compare SSR empirically with other randomized (and UCB-based) algorithms? 
Regret analysis only provides upper bounds, which could be rather loose.\n - Moreover, simulations could also shed light on the matter of **expected regret**. The present analysis says nothing about what happens on the complement, i.e., the subset of sample paths with measure $\\approx \\delta.$ The work is theoretical and I cannot find any potential negative societal impact.", " The paper considers the problem of regret minimization in RL using randomized algorithms. For this, the authors provide a single-seed-based algorithm that is able to obtain a $\\tilde{O}(H\\sqrt{SAT})$ regret bound using Bernstein concentration-based variance scaling. Strength: The main result of the paper and the simplicity of the algorithm, which uses a single seed per episode, are its biggest strengths. Further, providing an analysis based on the absolute value of the difference between the optimal policy of the true problem setup and the optimal policy of the sampled value function is interesting and can be used in many future works on sampled MDPs.\n\nWeakness: \n\n1. The additive term of $O(H^4S^2A)$ is quite large. \n2. The paper does not address the work in the area of Thompson Sampling and worst-case regret bounds for posterior-sampling-based algorithms. How does the SSR algorithm compare against the PSRL algorithm by [5,6]? Yes.", " The authors propose a novel algorithm for reinforcement learning with randomized exploration. The algorithm achieves a minimax optimal regret bound up to poly-logarithmic factors, and it is the first such result for RL algorithms not based on the principle of Optimism in the Face of Uncertainty. The most crucial algorithmic features that allow the authors to achieve their result are\n1) the use of a single Gaussian noise, up to scaling, for all transitions;\n2) a specific Bernstein-shaped magnitude of the noise;\n3) a novel clipping procedure, used for the first time for algorithms with randomized exploration. Strengths:\n\n1) The first algorithm based on randomized exploration that achieves minimax optimal regret; it vastly extends the theoretical perspectives of randomized exploration.\n\n2) A novel proof technique that allows one to control the pessimism error for randomized algorithms.\n\nWeaknesses:\n\n1) Lack of an empirical study. The authors claimed that the use of a single seed can significantly increase the stability of the algorithm, and it would be very interesting to observe this effect practically on a tabular environment with a non-trivial exploration problem (N-rooms, for example). - The main suggestion was listed in the section on Strengths and Weaknesses: it would be very nice to have an additional empirical comparison between the SSR algorithm and other baselines such as UCBVI, classical RLSVI, and PSRL.\n- Additionally, there are several small issues with the text in the appendix:\n - In the formula after line 577, $\\alpha_k$ is used without definition except in the table of notations;\n - In Lemma 10, it would be great to make $z$ a parameter of the function $f$;\n - At the beginning of the regret decomposition section, the subscript $k$ is frequently added to the value function corresponding to $V^\\pi$. The authors addressed all the limitations of their paper." ]
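The reviews above repeatedly describe the algorithm's single-seed randomization. To make that idea concrete, here is a minimal sketch (an illustrative assumption, not the authors' implementation) of one noisy backward value-iteration pass; the Hoeffding-style scale `sigma` stands in for the paper's Bernstein-shaped magnitude, and the clipping is a simplified stand-in for the paper's clipping procedure:

```python
# Minimal sketch: randomized value iteration with a single Gaussian seed per
# episode, rescaled per (h, s, a) by an uncertainty-dependent magnitude.
import numpy as np

def randomized_q_update(P_hat, r_hat, counts, rng):
    """One backward pass of noisy value iteration for a tabular episodic MDP.

    P_hat:  (H, S, A, S) estimated transitions; r_hat: (H, S, A) estimated rewards;
    counts: (H, S, A) visit counts. Returns a randomized Q of shape (H, S, A).
    """
    H, S, A, _ = P_hat.shape
    xi = rng.standard_normal()                            # single seed for all (h, s, a)
    Q = np.zeros((H, S, A))
    V = np.zeros(S)                                       # V_{H+1} = 0
    for h in reversed(range(H)):
        sigma = H * np.sqrt(1.0 / np.maximum(counts[h], 1))  # Hoeffding-style scale
        Q[h] = r_hat[h] + P_hat[h] @ V + sigma * xi          # same xi, different scale
        Q[h] = np.clip(Q[h], 0.0, H - h)                     # keep values in a valid range
        V = Q[h].max(axis=1)
    return Q
```

Sharing the single draw `xi` across all $(h, s, a)$, rather than sampling independent noise per entry as in classical RLSVI, is the component that simplifies both implementation and analysis.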
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "hlQfaBo3hff", "QPBaSHHJDHfI", "wo_6Qf7zMW0", "-4QwtcZaSTL", "HJniB9FQwtJ", "K0e7uoBMN7e", "ZmbnBhdkmj1", "HJniB9FQwtJ", "68owXVCI6hv", "K0e7uoBMN7e", "ZmbnBhdkmj1", "nips_2022_q2nJyb3cvR9", "nips_2022_q2nJyb3cvR9", "nips_2022_q2nJyb3cvR9", "nips_2022_q2nJyb3cvR9", "nips_2022_q2nJyb3cvR9" ]
nips_2022_Kf8sfv0RckB
TTOpt: A Maximum Volume Quantized Tensor Train-based Optimization and its Application to Reinforcement Learning
We present a novel procedure for optimization based on the combination of an efficient quantized tensor train representation and a generalized maximum matrix volume principle. We demonstrate the applicability of the new Tensor Train Optimizer (TTOpt) method for various tasks, ranging from minimization of multidimensional functions to reinforcement learning. Our algorithm compares favorably to popular gradient-free methods and outperforms them in the number of function evaluations or execution time, often by a significant margin.
Accept
The basic ideas and contribution of this paper have been positively evaluated by the reviewers. There were a few questions, but many of them have been resolved by the authors' careful replies. There is an opinion that the comparison in the reinforcement learning experiments is inadequate, but since RL is only an application, this is not a major problem.
test
[ "o0ZgIThEsJg", "4NP8YtwYwfE", "R_VdFHwyg2s", "9F06vASsC80", "wpDmnjWrIl", "8j4ZppLpaHh", "K30GlSvoISn", "JfixnKgbJl", "_y3rl5vR9X8", "H6gQmfL3NC", "xdBrKMf4aiC", "OV3IVPn_h6bz", "hkSyXhEox5", "nwrDELUK36x", "1tGJR3TVFMQ", "7wJHCc1_DjM", "14gXs9r50fP", "LydYjFhWZUM", "vm7r4WDPzP6", "RtmNJfB6xa", "AGrzh4NU1Iw", "ZXlcr6g6hTO", "Y6uXA5-zO5", "PYXX_FyUbJf", "Hhzyw5KPOu", "SLsQf3P0tJ", "iZNyx0QtyJjF", "PKMC_7qE-kG", "pZ1VV8iKVT", "z9A4Q5CS9FH5", "V-X-boocn_g", "AJT63LeokaD", "UExIBCEj2e8", "6y3lFbGED2I", "1DHJsYb-bGoQ", "PxkiIF4lg-d", "s-HbkGRSAiZ", "dm3GgoX4r_w", "8hWj1OMSGQD", "Y0Y0gy2pAo6", "B6tHEXKhuFw", "xrM4jNQf70T", "F7693w2o3n4" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed rebuttal which clarifies some of my concern. I decide to increase the score.", " Thanks a lot! We are very pleased with your appreciation of our work. We will describe the discretization process in more detail in the text, if it will be accepted for publication.", " Thank you very much! We are very pleased with your high rating.", " Thank you very much for your comments, which allowed us to improve the quality of our work and clarify ambiguous points! If you have any other remarks, we will provide appropriate clarifications or make changes to the text of the paper. There is a small time for a discussion, we will be glad to fix something.", " I thank the authors for their detailed rebuttals, particularly the new numerical experiments. I agree with other reviewers that the process of discretisation of the state-space could have been better explained, particuarly for opimising weights of NNs. However, I think the authors have addressed my concerns, so I will increase my score.", " Thanks for clearing the remaining bits, I've incremented the score and confidence.", " In accordance with the comment of one of the reviewers, we conducted additional numerical experiments with gradient-based methods applied for all benchmark functions. We add the results to the main text (see Tab. 1 with 7 new methods/rows and explanations on LL. 198-202, 215-216, 223-226). As it turned out, for 6 benchmarks, TTOpt gives a significantly more accurate result for the same number of requests to the objective function. Compared to other methods, TTOpt is consistently accurate and avoids random failures to converge.", " In accordance with the comment of one of the reviewers, we conducted additional numerical experiments with gradient-based methods applied for all benchmark functions. We add the results to the main text (see Tab. 1 with 7 new methods/rows and explanations on LL. 198-202, 215-216, 223-226). As it turned out, for 6 benchmarks, TTOpt gives a significantly more accurate result for the same number of requests to the objective function. Compared to other methods, TTOpt is consistently accurate and avoids random failures to converge.", " In accordance with the comment of one of the reviewers, we conducted additional numerical experiments with gradient-based methods applied for all benchmark functions. We add the results to the main text (see Tab. 1 with 7 new methods/rows and explanations on LL. 198-202, 215-216, 223-226). As it turned out, for 6 benchmarks, TTOpt gives a significantly more accurate result for the same number of requests to the objective function. Compared to other methods, TTOpt is consistently accurate and avoids random failures to converge.", " Thank you for your comments! We conducted additional numerical experiments using gradient-based methods. We add the results to the main text (see Tab. 1 with 7 new methods/rows and explanations on LL. 198-202, 215-216, 223-226). As it turned out, for 6 benchmarks, TTOpt gives a significantly more accurate result for the same number of requests to the objective function. Compared to other methods, TTOpt is consistently accurate and avoids random failures to converge.\n \nIn our comment on the error bound, we meant that the compression trick does not formally change the corresponding estimate (eq. 3), but the rank value (R) in eq. 3 may turn out to be different. However, it should be noted that at the moment there is no formal proof substantiating the applicability of eq. 
3 in the multidimensional case (since we never build any unfolding matrix in full during TTOpt iterations), but this formula is a motivation for the maxvol-like updates, which we perform during the sweeps. The only guarantee is that the result will monotonically improve with iterations. At the same time, our algorithm works very well in practice for a large class of difficult and widely tested high-dimensional benchmarks for global optimization, and in most cases near-optimal solutions are found. Thus, our experiments show that this mathematical bound is not very tight and rather pessimistic.", " Thank you for your clarifications and willingness to run additional experiments.\n\n* Your answer on the error bound when mode compression is applied is still not clear to me.\n* I still think a gradient-descent baseline for the function minima estimation is important (Table 1).\n* The Schwefel function is symmetric with respect to permutations of the different entries of $x_i$; nevertheless, I find it one of the most challenging functions that were tested, as its global minimum is not located at the origin. I believe the high absolute error (in comparison to the other functions) is due to discretization.\n* The Alpine function is indeed not differentiable; however, it has multiple minima with the same value. I wonder how it would behave if the origin is not included in its discretization grid.\n", " Let us consider a concrete example. Assume that we train a neural agent with $N$ weights in a given RL environment. We discretize the weights so that each weight $\\theta_{i}$ ($i \\in [1, 2, \\ldots, N]$) can take only $K$ (let the grid size be $K = 256 = 2^8$) discrete values $[-1, -1+h, \\ldots, 1]$, where $h=\\frac{2}{K-1}$. The target function $J(\\theta)$ is the cumulative reward the agent gets after completing one episode. It follows that we can consider the function $J$ as an unknown $N$-dimensional tensor $\\mathcal{J}$ with modes of size $K$, i.e., this tensor will have $K^{N}=256^N$ elements. We can also treat $\\mathcal{J}$ as an $M$-dimensional tensor with modes of size $2$ if $2^{M} = K^{N}$ (i.e., $M = 8N$ if $K = 256$) by using QTT. At each step of the TTOpt algorithm, the following happens:\n\n- We get a new multi-index $I$ of the tensor $\\mathcal{J}$ that corresponds to weights $\\theta$.\n- We evaluate the cumulative reward $J$ with the given weights $\\theta$.\n- We update the TT-representation (in terms of the TTOpt algorithm) for the tensor $\\mathcal{J}$.\n\n(A short code sketch of this discretization is given at the end of this review thread.)", " Thank you for your inspiring question and comment. This topic needs further deep investigation, which was out of the scope of this paper. We would like to note that our extensive computer simulations indicate that the ability to find the global extremum was not influenced by the choice of initial conditions.\n\nLet us explain this in more detail. According to the proposed TTOpt algorithm, we select some random TT-tensor $\\mathcal{Y}_0$, which we then use to generate an initial set of column multi-indices $I_c$ (by the maxvol algorithm applied to each TT-core of $\\mathcal{Y}_0$). Next, the main iteration (sweep) of the algorithm occurs by updating the set of row multi-indices $I_r$ using the known column multi-indices $I_c$. Here it is important to note the following:\n\n- Instead of the initial tensor $\\mathcal{Y}_0$, we could directly choose some random set of multi-indices $I_c$. For example, we can select the most distant indices from each other. 
As it turned out empirically, this approach does not lead to a significant improvement in the result.\n\n- As with the TT-cross method, the TTOpt algorithm quickly ``forgets'' the initial set of multi-indices if it did not match a really good initial approximation of the tensor. Accordingly, the choice of distribution for the TT-cores of $\\mathcal{Y}_0$ is not particularly important (we tried to use a normal distribution with different variances, as well as a uniform distribution).\n\n- Of particular interest is the choice of some really good approximation of the target tensor as an initial approximation $\\mathcal{Y}_0$. The TT-ALS (or some other) method trained on a small additional data set can be used for this. However, to date, we have not been able to achieve significant improvements by using this more complex approach.", " Thanks for this confirmation. I'm curious to hear the authors' hypotheses about why the distribution does not matter. I would assume that adjusting the variance of the TT-cores such that elements of the full tensor have a specific variance close to that of a modeled function could positively affect the convergence.", " > TTOpt works only with discrete parameters, which are neural network weights in RL experiments\n\nUnfortunately, this bit only added confusion on my side. Can the authors please elaborate on what exactly is discrete and how a neural network with discretized (using the new notation) weights plugs into what would otherwise be replaced with a QTT? In other words, I'm trying to understand whether TTopt is applied to discretized weights of a neural network at any point, or the entire volume J is always treated as a function defined on a lattice.", " Indeed, you are correct about the dependence on the discretization grid. The advantage of TTOpt is that very fine grids can be used. For most experiments with benchmark functions, we used grids of size $2^{25}$. The study of the effect of grid size can be found in Tab. 2 in the Supplementary.", " Thank you, we implemented the proposed corrections. Some of these points were addressed in previous responses.", " Thanks for this observation; indeed, many of the benchmark functions have the mentioned symmetry. Note, however, that the Schwefel function (_F10_) is not symmetric, and the Alpine function (_F2_) is not differentiable. In both of these cases we obtained the lowest errors compared to all baselines (see Tab. 1). For convenience, we added analytic expressions of the benchmark functions to Tab. 1 in the Supplementary.", " Thank you for your question; we will try to clarify this point. Please note that Tab. 1 in the main text and Tab. 2 in the Supplementary report results for completely different choices of grid size. The result in the main text uses a grid of size $2^{25}$ in each mode for the QTTOpt method. In Tab. 2 in the Supplementary, we compare results for grids of sizes from $2^8$ to $2^{20}$. Tab. 2 demonstrates the drastic effects of using QTT (mode compression) compared to the naive application of TT. The last result for the Ackley function in Tab. 2 in the Supplementary ($1.3E-04$, $2^{20}$ grid size) is sufficiently close to the value in Tab. 1 in the main text ($3.9E-06$, $2^{25}$ grid size). Note that all values are absolute errors.", " If we correctly understand the question, you are asking us to explain why we need a mapping function. As an example, consider a matrix with minimal element $-1$ and maximal element $0.5$. 
The TTOpt algorithm will find the maximal-by-modulus element $-1$ (the minimum) even without the mapping function. However, suppose that the minimum is $-0.25$. Then the maximal-by-modulus element in this matrix is $0.5$, which is not the minimum. The mapping function allows the algorithm to find the minimum by transforming it into the largest value. Assume that the current estimate of the minimum is $0$; then $g(0.5, 0) = \\frac{\\pi}{2} - \\mathrm{atan}(0.5-0) < g(-0.25, 0) = \\frac{\\pi}{2} - \\mathrm{atan}(-0.25-0)$. After this mapping, TTOpt can correctly identify $-0.25$ as the minimum. ", " Thank you for raising this point. With the transition from the TT to the QTT format, the value of the rank may increase, but it is not possible to make exact estimates. Our numerical experiments show that the use of QTT (with larger $d$) leads to a significant increase in accuracy (see Tab. 2 in the Supplementary). In most cases, the accuracy of QTT grows significantly with the grid size.", " Thank you for raising this point. Please note that we provided an ablation study for the mode compression in Tab. 2 in the Supplementary for benchmark functions. The use of mode compression (i.e., QTT) leads to many orders of magnitude improvements in the accuracy of the method. As can be inferred from this table, the larger the grid size, the more drastic the effect of mode compression for most functions.\n\nThe use of the mapping function is an integral part of the TTOpt algorithm and it cannot be omitted, hence no ablation study is possible. This function has to transform the minimal element into the maximal-by-modulus element and should be continuous, smooth, and strictly monotone. We experimented with $g(x) = \\frac{\\pi}{2} - \\mathrm{atan}(J(x)-J_{min})$ and $g(x) = e^{\\alpha \\cdot (J(x) - J_{min})}$, but found no influence of this choice on the performance of the method.", " We agree that gradient-based methods are powerful, but as baselines, we use black-box optimization methods since they utilize the same information as TTOpt. A comparison to gradient-type methods has been done in prior works (for instance, see https://arxiv.org/pdf/1803.07055.pdf). An additional property of gradient-free methods is the ability to handle integer/binary variables and quantized weights. For example, in RL experiments we learn a policy with binary weights, which cannot be done by policy gradient-type methods directly.\n\nTo address the reviewer's concerns, we performed preliminary experiments with the Proximal Policy Optimization (PPO) method. Note that this comparison is not completely fair: the PPO method used floating-point weights, while the gradient-free methods use weights with values $\\{-1, 0, 1\\}$. We limit the number of episodes to $10^5$. The results are shown in the table below. In this setup, PPO is less efficient than TTOpt in the Swimmer, Inverted Pendulum, and Half-Cheetah environments, while PPO wins in the Lunar Lander environment. The run time of PPO is longer than that of TTOpt in all cases. Please note that these results are provided for quick checks only.\n\n---\n\nAn additional comparison of the TTOpt method and baselines presented in the paper (see Tab. 3) with the Proximal Policy Optimization method (PPO; Schulman et al. 2017) under the same budget of up to $10^5$ episodes. The top table shows the mean and standard deviation of the final cumulative reward. 
The bottom table presents the calculation time in seconds:\n\n| Method | S | L | I | H |\n|:-------------|:--------------|:---------------|:---------------|:-----------------|\n| PPO | 95.63±40.59 | 303.83±2.75 | 815.38±369.23 | 3196.88±1187.82 |\n| TTopt (ours) | 357.50±6.59 | 290.29±24.40 | 1000.00±0.00 | 4211.02±211.94 |\n| cmaES | 342.31±36.07 | 214.55±93.79 | 721.00±335.37 | 2549.83±501.08 |\n| simpleGA | 349.91±10.04 | 283.05±16.28 | 893.00±283.10 | 2495.37±185.11 |\n| openES | 318.39±44.61 | 114.97±113.48 | 651.86±436.37 | 2423.16±602.43 |\n\n| | PPO | TTOpt |\n|:---|---------:|----------:|\n| S | 28670 | 287 |\n| L | 19398 | 823 |\n| I | 2238 | 44 |\n| H | 96050 | 1008 |\n", " Thank you for carefully checking the text. We implemented the proposed changes.", " There is no substantial reason to consider linear policies. We did it because the only black-box optimized RL policies we found online were linear. We decided to try fine-tuning experiments with these weights.", " We used the standard Gaussian distribution $\\mathcal{N}(0, 1)$ to initialize the TT-cores. We found no influence of this choice on the behavior of the algorithm. We have added a comment to Algorithm A.1.", " Thanks for pointing this out, you are right. The meaning of these lines in Algorithm A.1 is as follows. For all unfoldings of the TT-tensor we want to have \"tall\" submatrices, i.e., $R_{i-1} N_i \\geq R_i$. This condition may not hold for the first several unfoldings if $N_i$ is small and the requested $r_{max}$ is too large. In this case we reduce the rank $R_i$ to the maximal possible value $R_i \\gets R_{i-1} N_i$ in order to have a square submatrix at the $i$-th unfolding. This condition is also enforced during the right-to-left sweep, which was missing from Algorithm A.1. We added a fix.", " Thank you for pointing out this bit. This statement in the text is not entirely correct, and we changed it to \"large precision\". The accuracy of the final answer is determined by the curvature of the optimized function, the resolution of the discretization grid, and also by the degree of algorithm convergence. To reach convergence for very dense grids, a larger number of function evaluations may be needed (please note that we restricted the experimental budget to $10^5$ evaluations). In all experiments, we used NumPy arrays, which are float32. There is no direct influence of float precision on accuracy.", " In the most general case, there are no guarantees of global/local convergence of TTOpt. The only guarantee is that the result will monotonically improve with iterations. We note that for discrete optimization problems, a global optimum in the general case cannot be obtained without an exhaustive search of the parameter space. However, we found that the method works well in practice and that the convergence is almost independent of the starting point if a sufficient number of steps is available. Fig. 2 in the Supplementary shows rates of convergence for 10 different benchmark functions.", " Thanks for this interesting observation. The dependence on the level of weight quantization in Tab. 3 is indeed non-trivial. We can speculate that finer quantization of the agent's weights leads to slower convergence of TTOpt, and hence we need to run more episodes to get the same cumulative reward. Note that we used the same number of episodes in experiments with different levels of quantization.\n\nOn the other hand, neural networks with heavily quantized (low-bit) weights often incur only a small drop in accuracy, see e.g. arxiv:2103.13630. 
The expressive power of heavily quantized policies may be enough to solve simple environments, but the convergence of TTOpt is much faster in these cases.", " The use of the word \"quantization\" indeed leads to confusion. In our experiments with TTOpt we found that treating a tensor of dimension $d$ as a higher-dimensional tensor of dimension $D$, $D \\gg d$, but with a smaller mode size is very beneficial for the accuracy and speed of the method. This modification is referred to as Quantized Tensor Train (QTT), following other works. At the same time, TTOpt works only with discrete parameters, which are neural network weights in RL experiments. Hence you can _only_ train quantized neural networks with TTOpt/QTTOpt. We replaced \"quantized\" with \"discretized\" in the text and emphasized this unfortunate clash of notation where it is not possible to avoid it. ", " We thank the reviewer for engaging thoroughly with the article and asking interesting questions. We agree that some of the points in the text may be unclear, which is an unfortunate consequence of size limitations and the cross-domain nature of the work. We added a paragraph to explain the RL aspects of the method starting at L234 in the main text and L156 in the Supplementary.", " Please find the comparison against Bayesian optimization in Tab. 4 in the Supplementary. Compared to Bayesian methods, TTOpt is significantly faster and finds better solutions in most of our experiments.", " We agree with the reviewer that this claim would be too optimistic. In the text (last paragraph in Sec. 2.4) we just say that we stop the algorithm if the row and column indices do not change during the sweeps, which would correspond to a local optimum. Finding a local optimum does not guarantee the global optimality of the result. In practice, however, this condition is rarely met because of the small budget of iterations. We rephrased the offending paragraph to better reflect this. Also, we found almost no influence of the (random) initialization on the final result if sufficiently many sweeps are made.", " We thank the reviewer for pointing out this unclear detail. In the case of the quantized tensor train with mode size two and maximal rank $R > 2$, we simply select all indices for the first $k$ modes until $2^k > R$. We note, however, that this detail does not change the global behavior of the algorithm. We added an explanation to the main text (footnote 11 in Sec. 2.6).", " The reviewer is correct that there are no theoretical guarantees about the convergence of the algorithm to the global minimum (we added explicit statements to the manuscript; last paragraph in Sec. \"Optimization in the multidimensional case\"). We note, however, that for discrete optimization problems, an optimum in the general case cannot be obtained without an exhaustive search of the parameter space. We found that the method works well in practice: for benchmark functions (including the non-differentiable Alpine function) near-optimal solutions are found (Tab. 1 in the main text, Tab. 2-4 in the Supplementary). The convergence is almost independent of the initial approximation if a sufficient number of steps is available. Fig. 2 in the Supplementary shows rates of convergence for 10 benchmark functions.", " We agree with the Reviewer and added explicit statements in the revised version of the manuscript (last paragraph in Sec. 
\"Optimization in the multidimensional case\") that there are no rigorous guarantees of the convergence to the global minimum, nor the rate of this convergence. The only guarantee is that the result will monotonically improve with iterations.\n \nWe demonstrated that our algorithm works very well in practice for a large class of difficult and widely tested high dimensional benchmarks for global optimization (including non-differentiable Alpine function) and in most cases close to optimum solutions are found (Please see Table 1 in the main text, Tables. 2-4 in Supplementary).\n \nOur computer simulation results clearly indicate that the convergence rate is almost independent of the initial approximation if a sufficient number of steps is used. Moreover, Fig. 2 in the Supplementary shows rates of convergence behavior for 10 benchmark functions.", " We agree with the Reviewer that this is a quite important property of the maximum volume principle, which needs some theoretical justification or at least some intuitive explanation. Our justification for applying the maximum volume principle in our TTOpt algorithm is based on Eq. 3 (taken from rigorously proven Theorem 1 of Goreinov et al. \"How to find a good submatrix\" (2010)), which guarantees that the submatrix of maximal volume will contain quite a close element to the largest element in the full matrix if the rank of a full matrix R is sufficiently low. Our very extensive numerical experiments show that the proven mathematical bound is not very tight and rather pessimistic, and in practice, the maximal-volume submatrix contains the maximum entry or at least the element which is very close to the optimal one.", " Our choice to treat RL problems with TTOpt has multiple reasons:\n- TTOpt can be directly used to optimize policies with quantized (discrete) weights which are ideal for the application in edge devices and microcontrollers. The traditional policy gradient methods are not suitable in this case without modifications.\n- The reward function in RL may not be differentiable and may pose difficulties for policy optimization, for example in the environments with stochastic reward. As the reviewer noted we considered simple deterministic environments in this initial work, but we look forward to applying the method in a harder setting.\n- Direct optimization methods often yield more stable and diverse policies compared to classical policy gradients. The application of direct search methods may be a promising alternative to traditional RL.\n\nHaving noted these arguments, we stress that we tested TTOpt using a significant number of traditional optimization benchmark functions (including non-differentiable function, i.e., Alpine function; see Tab. 1 in Supplementary; for convenience, we added analytic expressions of all benchmark functions to the Table) in addition to RL benchmarks. Other applications of TTOpt in learning-based approaches may include hyperparameter tuning, latent space optimization, etc. We look forward to these perspective applications.", " The authors present a novel approach to multivariate optimisation of general objective functions making use of tensor-train decompositions over discrete state-spaces. The TT Decomposition approximation is based on the cross-approximation method / TT-Cross of Oseledets, etc. 
The main interest in using TT-cross is not in building an approximation, but is based on the observation that maximal elements of the tensor are highly likely to be in the maximum-volume submatrix, which is found using MAXVOL in TT-cross, and that the maximal element of the sub-matrix increases monotonically over the iterations. This result is hinted at, but unless I misunderstand, it is not proved or confirmed. This is employed empirically to define a new optimisation algorithm.\n\nThe algorithm is deployed in the context of Reinforcement learning, as an alternative to evolutionary approaches, as well as on some general optimisation baseline examples. It is demonstrated that the method is competitive, and in many cases outperforms other approaches. Strengths:\n* (Originality) This work presents an interesting approach to optimisation of (possibly non-differentiable) high-dimensional functions, exploiting tensor train decompositions, which is novel.\n* (Significance) This is important as the traditional approaches to high-dimensional gradient-free optimisation (e.g. BO) do not necessarily scale well with dimension, so that this offers a plausible alternative.\n* (Quality) The paper is generally well written.\n\nWeaknesses:\n* (Clarity): I do not understand why the authors felt the need to bring reinforcement learning into this work. The TTOpt algorithm is a general-purpose optimisation algorithm; it is unclear why the authors felt that RL would be a unique use-case for this method, particularly given that the examples studied do not appear to involve non-differentiable loss-functions, and moreover, could be solved using policy gradient approaches. This feels quite odd to me.\n* (Significance) The key assumption in the proposed iterative optimisation method is that the maximum-volume sub-matrix should contain the absolute maximum element, thus justifying the iteration. However, unless I misunderstand, this isn't justified by any argument provided in this work. * Why are the authors specifically considering RL as a test case for this method? This seems to be shoe-horned in. I don't see any necessary need for such non-differentiable optimisation methods, whereas I can think of many other situations involving high-dimensional non-differentiable objective functions. I appreciate the motivation based on Salimans' work on Evolutionary algorithms in RL, but this seems tenuous. Can the authors shed some light on why this is the *right* method for RL and when? It seems that for all of the presented problems, policy gradient would've worked just fine.\n\n* The key assumption in the proposed iterative optimisation method is that the maximum-volume sub-matrix should contain the absolute maximum element, thus justifying the iteration. However, unless I misunderstand, this isn't justified by any argument provided in this work, including the \"intuition behind ttopt\" presented on page 3. I really think this needs to be studied more, because this is a crucial feature that needs to be justified. I appreciate that this can be quite a challenge, but some intuition about why this property should be true is important. I think the authors need to be a bit clearer about what can and cannot be guaranteed with this methodology. It is ok if the method is heuristic but this needs to be made very explicit.", " This paper implements a novel method to find maximum elements of a tensor-shaped objective function by combining a quantized tensor train representation and a generalized maximum matrix volume principle. 
The authors applied this method to the minimization of multidimensional functions and to reinforcement learning tasks.\n Strengths:\nThe originality is good. The algorithm combining the tensor-train and maxvol algorithms is novel and clear. The writing quality is good. The complexity is also analysed. It should be significant work if it works well on a wide range of tensor-shaped objective functions.\n\nWeakness:\nOn the theoretical side, there are no guarantees on the stability or convergence of the algorithm. \n 1. The quantized tensor train seems inconsistent with subsection 2.4. The searched submatrix is $R \\times R$, but when the tensor is quantized to a 2-power tensor, usually $R$ should be larger than 2; thus there should be some initial step to start the algorithm.\n\n2. As stated in lines 129 and 137, each maxvol step works on a reshaped submatrix with randomly chosen column indices. However, the authors declare that the algorithm will converge to optimal row and column indices, which seems strange. A comparison against Bayesian optimization, which is also a branch of black-box optimization methods, is missing from the paper.", " The paper presents a gradient-free optimization method, based on the tensor train approximation of the functional and the maximum volume principle, adapted to navigate the optimized parameter space instead of reconstructing the volume. Experiments on multidimensional analytical functions and reinforcement learning settings demonstrate superior convergence rates and efficiency in terms of calls to the black-box function. Judging by the provided analysis and results, TTopt deserves membership in most gradient-free optimization toolboxes.\nExtending maxvol principles for optimization and mandatory quantization of the continuous case are interesting, novel, and significant (conditioned on the presented results).\nThe study of TTopt on analytical functions is rich and deep, going as far as comparing with Bayesian optimization methods.\n\nHowever, given the impressive results from Fig. 3 in RL environments, too little space is allocated to explaining the setup and discussing what makes the two-orders-of-magnitude improvement in reward possible. \nAdditional explanations in the supplementary are helpful but do not leave the impression of self-contained work. Domain expertise in RL and evolutionary strategies is required to understand the paper. The paper would benefit from a brief recap of ingredients and an explanation of how TTopt plugs into the RL pipeline. The MDP formalism (B.3) does not elaborate on whether the setup is applicable in off-policy/on-policy settings, whether it makes sense to optimize the number of evaluations, and whether all requested evaluations can always be carried out. \nDiscussion of what entails exceeding the target reward is missing.\n\nThe paper is well-written and mainly easy to follow, except for frequent redirections to the supplement and related work for specific and essential parts of the narrative. Perhaps pivoting some text from pages 3 and 4 into the supplement won't hurt and would allow for streamlining the \"story\", since understanding the concepts of the tensor train decomposition does not really help to understand what is under the hood of TTopt without going into the supplement. In a sense, the matrix case (very well-written) is not representative of the multi-dimensional case due to many special handling bits. 1. L47 \"Our algorithm can directly train quantized neural networks\" - can the authors explain this? 
I understand the concept of QTT, but this point is only seen in the introduction and only briefly touched on around the RL part. Also, L235 \"These experiments model the case of quantized neural networks which use q-bits quantization.\" - please elaborate on how NN parameter quantization relates to TT mode quantization.\n2. In relation to Table 3, please discuss why some environments benefit from finer quantization while others don't, especially given the low variance of the curves.\n3. What are the convergence guarantees of TTopt?\n4. The paper mentions \"up to machine precision\" (L40), and by the look of the floating-point exponents of the results in Table 1 onward, single precision was used. Do the results improve further with double precision?\n5. In Algorithm A1, rank selection only accounts for the left-hand side constraint on the ranks but does not enforce the right-hand side constraint. Shouldn't it also be $\\min(R_{i-1} N_i , R_i N_i, r_{max})$? This obviously cannot be done in a single loop due to double-sided recursion; however, two loops would do. Wouldn't it be better than the reduction procedure from LL58-61 of the supplementary?\n6. In Algorithm A1, \"Set $G_i = \\mathrm{random}(R_i N_i , R_{i+1})$\" - random how? Does the distribution matter?\n7. Why is the case of linear policies important to consider TTopt with?\n\nWriting tips:\n- The order of environments in the left and right parts of Table 3 is different for no good reason\n- Consider enabling line numbering built into algorithmicx for ease of referring to line numbers\n- Annotating algorithms with shapes of operands would help in understanding\n- Consider prioritizing a self-contained intro into the experimental domains and a smooth explanation of TTopt plugging into the big picture, over explanations of TT and maxvol, provided that the main paper cannot fit the core algorithms anyway due to limited space. Answers to several questions asked above, in combination with the rather deep analysis of rank and number-of-evaluation dependencies, cover the most interesting limitation scenarios.", " The paper proposes an optimization method for multivariate functions of discrete arguments by considering their low-rank tensor decomposition in terms of a Tensor Train structure. The method is compared to gradient-free methods, and is evaluated on Reinforcement Learning tasks and on functions with continuous arguments. The main text is clear and well written. This work combines various solutions to solve this optimization problem, that is, extending the maxvol function to higher-order tensors, replacing high dimensions by decomposing them into lower ones, and applying an updated mapping function to deal with algorithmic constraints. Additional experiments testing different hyper-parameters appear in the appendix. The paper lacks, however, experiments that demonstrate the contribution of each individual part of the method empirically, such as the mapping function and the mode compression.\n\nIf the landscape of RL consisted only of gradient-free methods, then the presented low execution time of the method, together with its ability to solve the problems successfully, makes this work significant. 
The fact that there is no comparison to gradient-based methods is a major weakness and complicates the assessment of the possible impact of this work.\n\nStrengths:\n* Extends a matrix-based method to higher-order tensors.\n* Low execution time in comparison to other gradient-free approaches.\n* Various experiments regarding the hyperparameters of the method appear in the appendix.\n\nWeaknesses:\n* No comparison with gradient-based methods.\n* The comparison to RL is lacking. There is no comparison to any gradient-based method.\n* There are no ablations of the benefits of using either the mapping functions (Eq. 7) or mode compression. * Questions \n\t- The maximum value in a matrix is bounded by $\\hat{J}_{max} \\geq \\frac{J}{R^2}$; for tensors with $d$ indices, would this bound grow to the power of $d$? Doesn't this bound increase with the compression trick? \n\t- What happens when you have a tensor train of $d=2$, where there are two minimal values of $-1$ in each, and a maximal value of $0.5$ in each? The total maximal value is $1$, isn't it? (The mapping function in equation 7 is applied at the end)\n\t- Why are the values in Table 2 of the appendix orders of magnitude different from those presented in Table 1 of the main text? For instance, the absolute deviation from the minimum of the Ackley function is $1.2$ (the absolute minimum is at $0$), but the relative deviation is $3.9e-6$.\n - It seems all the test functions have the symmetry of $x_i \\rightarrow -x_i$. How would the method work for non-symmetric test functions?\n\n* Suggestions\n\t- Perhaps add the definition of \"Maximum modulus element\" to the main text.\n\t- Perhaps add an evaluation of the method with and without mode compression (line 178).\n\t- It would be interesting to see how the approach handles non-differentiable functions, as it is gradient-free.\n\n* Minor corrections:\n - line 181 \"Let assume\" -> Assume that\n - Appendix Table 2 Grienwank -> Griewank * I believe that when the method is applied to continuous functions, it is sensitive to the chosen discretization grid. Moreover, the complexity of the method increases for finer grids, as the $N_k$ increases." ]
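To make the discretization example and the mapping function $g$ from this thread concrete, here is a minimal sketch (placeholder names; `objective` stands in for the black-box function, e.g. the cumulative episode reward, and this is not the authors' code):

```python
# Minimal sketch: each of N weights takes one of K grid values, so the objective
# J is viewed as an implicit N-dimensional tensor of size K^N, queried entry by entry.
import numpy as np

K = 256                                  # grid size per weight, 2^8
h = 2.0 / (K - 1)
grid = -1.0 + h * np.arange(K)           # [-1, -1+h, ..., 1]

def weights_from_index(idx):
    """Map a multi-index of the implicit tensor J to a concrete weight vector theta."""
    return grid[np.asarray(idx)]

def g(j_value, j_min):
    """Map objective values so that the minimum becomes the largest-modulus entry."""
    return np.pi / 2 - np.arctan(j_value - j_min)

def tensor_entry(idx, objective, j_min):
    """One query of the implicit N-dimensional tensor that TTOpt navigates."""
    theta = weights_from_index(idx)
    return g(objective(theta), j_min)
```

The tensor of size $K^N$ is never materialized; TTOpt only ever requests entries via `tensor_entry` at the multi-indices selected by the maxvol sweeps.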
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4, 3 ]
[ "PxkiIF4lg-d", "wpDmnjWrIl", "8j4ZppLpaHh", "B6tHEXKhuFw", "_y3rl5vR9X8", "OV3IVPn_h6bz", "xrM4jNQf70T", "B6tHEXKhuFw", "Y0Y0gy2pAo6", "xdBrKMf4aiC", "F7693w2o3n4", "1tGJR3TVFMQ", "nwrDELUK36x", "SLsQf3P0tJ", "V-X-boocn_g", "F7693w2o3n4", "F7693w2o3n4", "F7693w2o3n4", "F7693w2o3n4", "F7693w2o3n4", "F7693w2o3n4", "F7693w2o3n4", "F7693w2o3n4", "xrM4jNQf70T", "xrM4jNQf70T", "xrM4jNQf70T", "xrM4jNQf70T", "xrM4jNQf70T", "xrM4jNQf70T", "xrM4jNQf70T", "xrM4jNQf70T", "xrM4jNQf70T", "B6tHEXKhuFw", "B6tHEXKhuFw", "B6tHEXKhuFw", "B6tHEXKhuFw", "Y0Y0gy2pAo6", "Y0Y0gy2pAo6", "Y0Y0gy2pAo6", "nips_2022_Kf8sfv0RckB", "nips_2022_Kf8sfv0RckB", "nips_2022_Kf8sfv0RckB", "nips_2022_Kf8sfv0RckB" ]
nips_2022_gnc2VJHXmsG
RKHS-SHAP: Shapley Values for Kernel Methods
Feature attribution for kernel methods is often heuristic and not individualised for each prediction. To address this, we turn to the concept of Shapley values (SV), a coalition game theoretical framework that has previously been applied to different machine learning model interpretation tasks, such as linear models, tree ensembles and deep networks. By analysing SVs from a functional perspective, we propose RKHS-SHAP, an attribution method for kernel machines that can efficiently compute both Interventional and Observational Shapley values using kernel mean embeddings of distributions. We show theoretically that our method is robust with respect to local perturbations - a key yet often overlooked desideratum for consistent model interpretation. Further, we propose the Shapley regulariser, applicable to a general empirical risk minimisation framework, allowing learning while controlling the level of a specific feature's contribution to the model. We demonstrate that the Shapley regulariser enables learning which is robust to covariate shift of a given feature and fair learning which controls the SVs of sensitive features.
Accept
The authors propose a novel method for calculating Shapley values for kernel-based models. The paper includes both a theoretical analysis and an extensive experimental evaluation. A majority of reviewers are in support of accepting the paper, and the rebuttal/discussion period helped to clear up (most of) the reviewers' concerns.
train
[ "kEUXuxoaOZ", "A0o7UDfIt2I", "tpHPtvg4O7", "EqIrzt4K6zH", "1dIm8ebBlq", "hr8Ik92pGuT", "SMNpQ99P-kN", "XqRRnGR2P4I", "0T40I4ufnUz", "kXS4yqLmpX0", "QbqAtLBdf-", "W6ebe9cAD9", "j_JA91rKjn7", "mxgZzXTKTbFV", "1xfDLTrUICd", "ZNQ77wARFeD", "hDThH9TEbQ5", "8DKggMTVyjs", "F0_qSbb8MAl", "Pja6xWdaSkc", "EFtuInxqJe", "BQkeSBmKUYh" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We will forward the message we recieved from the Program Chairs:\n\n\n- During this decision making phase, the authors are not allowed to interact with the reviewers, area chairs nor senior area chairs. Reviewers, area chairs and senior area chairs will work with what they have at this moment, including your manuscript, supplementary material, reviewers' reviews, your rebuttals and any discussion between reviewers and you, in order to collectively arrive at the recommendation of your submission.\nWith the unprecedented number of submissions as well as unprecedented number of reviewers we have this year, it has not been without hiccups, but we anticipate that we would be able to release decision notification on September 14, as was planned originally.\nIf your submission is accepted, you will have another month to prepare the camera-ready version.\nSincerely,\nProgram Chairs\n\n--- \n\nWe will consider extra experiments that reviewer suggested during the preparation of the camera ready version.", " Hi when is the deadline at your side? At my side, it seems I am able to make changes until Aug 19th. Is it that you will not be allowed to make changes after today? How about updating supplement materials? It is also fine to include some annoymous github link which is to be updated before Aug 19th if you prefer.", " Thank you very much! ", " Sounds good, discussing these points in the paper will resolve all my remaining concerns. I'm going to raise my score.", " Thank you for the suggested further experiments, however we don't have the capacity to run them given the limited time left.\n\nWe sincerely hope the reviewer can consider increasing their score as we have now:\n\n- Clarified the practicality of our method (can scale up to n=1,800,000 and applied to 71 features)\n- Clarified the point on IG vs Shapley values with experiments and theoretical discussions.\n- Explained why designing kernel method specific SHAP approximation is a well-motivated piece of research.\n\nThank you very much.\n", " Hi thanks for the efforts. The experiments do not 100% match what I described above. \"As Riberio et al 2016 showed,\nglobal and local importance do not necessarily imply each other.\" Yes I agree with the above and that is where the mismatch comes from. \n\nI am NOT asking for RETRAINING the model every time a feature is removed. \n\nThe experiment is simpler than it:\n\nInputs: We are presented a trained model to explain. \n\n- For a given example, we apply a method (IG/RKHS Shapley), we get importance scores. \n- We rank features by importance scores. \n- We feed into the SAME model with masked samples. Masks are created by the rank of importance scores. The mask means replacing that feature value with its mean, or some null value. \n- On the test data, we get the accuracy / loss, as the function of the number of masked features.\n\nOutputs: We plot the figure of this function. \n\nThe method serves as a heuristic evaluation of the LOCAL importance. The con here is that replacing is never the best we can do, and the model may not understand that (which means a bias is introduced). A more accurate method is to get the conditional distribution of that feature conditioned on the value of other features of each sample, theoretically. It can be approximated by the unconditional distribution of the feature in-sample if we assume independence between features. Of course it seems to be too complicated to be included here, and often times a simple masking by the mean can give a good estimate. 
", " Thank you for your suggestion. We have included the extra experiment you requested in the supplementary materials, the PDF file is called “RKHS_SHAP_versus_IG_comparison_for_R2”. Please have a look.\n\nWe hope this will close the discussion on the topic Shapley values versus Integrated Gradients because we still believe it to be outside the scope of our contributions.\n\nWe hope the reviewer can now increase their scores as we have fully answered and clarified all your queries and concerns.\n\nThank you.", " **Frye et al similarities:** We thank the reviewer for clarification on Frye's approach. We agree with the reviewer that by promoting smoothness during learning, Frye's approach should not arrive to the overfitting situation Yeh et al. (2022) mentioned. We will make subsequent changes to the camera ready version on Frye's approach and emphasize more on the conceptual similarities between ours and theirs. Thank you again for pointing that out explicitly.\n\n---\n\n**Robustness results:** We believe this confusion is caused by us having a different view on RKHS-SHAP's value function estimator. Our approach certainly utilised properties of RKHS functions to model the value functions using KME/CMEs, where we first propose a population level model in Prop 2, and a finite-data estimator in Prop 3. However, had one choose to model value functions differently, for example, proposing another conditional mean (population level) estimator, the subsequent analysis will be very different. This can be seen e.g. by the bound in theorem 6, for Observational Shapley functionals, the upper bound of the norm of the population CME function appears. \n\nBecause of this, we still believe the robust results is still RKHS-SHAP specific because we made a specific modelling choice of the conditional expectation and the analysis is then built on that choice we made. \n\n---\n\n**Writing:** We thank the reviewer for their suggestion. We agree that more emphasize can be put on highlighting the two main elements of estimating Shapley values. We promise to make changes in the early parts of the paper to strengthen this point to improve reading experience. \n\n---\n\nWe really appreciate the reviewer's effort to improve the clarity of the paper. We promise to make the following changes in the camera ready version:\n1. Restructure the narrative in early parts of the paper to emphasise the aspect we contributed to Shapley value estimation for RKHS functions\n2. We will change our comments on Frye's approach and draw parallels to the conceptual similarities among the two.\n3. Add a few lines when introducing CME explaining the intuition behind.\n\nWe hope we have resolved most of the reviewer's question and hope they could increase their scores. Thank you very much.", " Hi, thanks for the clarification about Propositions 2 and 3. This part of the paper was a bit hard for me to follow, and I suspect that providing a bit more explanation would be helpful to other readers as well.\n\n**About the Frye et al. similarities.** Like you said, I think these comparisons would be helpful to discuss in the paper. There are a couple points I still find myself disagreeing with though:\n\n- Frye et al.'s method doesn't require training a separate neural network for every feature subset $S$. That would be very slow and impractical, so a single model is instead trained with randomly sampled masks (see eq. 17 [here](https://openreview.net/pdf?id=OPyWRrcjVQw)). 
This is more computationally efficient, and it also functions as a form of regularization by making it more difficult for the model to memorize labels.\n- If I understand correctly, the main argument for your CME approach not being susceptible to reproducing the empirical conditional expectation is that the ridge penalty encourages smoothness. Why can't we use comparable regularization tactics in the Frye et al. deep learning approach? I suppose the main point here is that *neither method is actually susceptible to this risk*. The flaw presented in Yeh et al. (2022) seems exaggerated; it's based on a worst-case memorization scenario that should be straightforward to mitigate with standard regularization tactics and early stopping based on a validation loss. Note that a similar argument could be made with any use of empirical risk minimization (i.e., that the model will collapse to the empirical expectation), but DNNs trained via ERM tend to generalize just fine on out-of-sample data. If it's helpful, I can even point to papers that successfully used Frye et al.'s supervised surrogate approach without obtaining constant explanations.\n\n**About whether the robustness result is specific to RKHS-SHAP.** This topic was addressed in your response, but I'm not sure we're on the same page yet. The fact that you used CME/KME as tools to analyze the robustness does not mean that conditional Shapley values' robustness is due to the RKHS-SHAP estimation approach. The estimation strategy does not affect the properties of the quantity being estimated, and these are equivalent to conditional Shapley values, right? The robustness result still seems like an intrinsic property of Shapley values when applied to this type of model; in other words, the property holds regardless of how the Shapley values are calculated (e.g., using your approximation, or using oracle access to the conditional distributions). Can you please clarify if I'm missing something?\n\n**Writing.** There was one more weakness from my review that I would ideally like to see incorporated in the revisions - the one beginning with \"I thought the paper's exposition could greatly benefit from disentangling the two main elements of estimating Shapley values.\" This discussion has reinforced my understanding that the main ideas in this work are related to how held-out features are handled. This was surprisingly hard to grasp from reading the paper due to repeated comparisons with KernelSHAP (which is focused primarily on how to handle the exponential complexity), so it would be helpful to rework early parts of the paper explaining what the contributions are and how they relate to prior work. For context, see this recent review paper that nicely distinguishes these two aspects of Shapley value estimation [1].\n\n[1] Chen et al., \"Algorithms to estimate Shapley value feature attributions\" (2022)\n\nThis is my last set of issues to discuss; I expect to be able to finalize my score after we resolve these. Sorry we're getting close to the deadline for author-reviewer discussion.", " Hi, I more than agree with you about the axiomatic difference between the two algorithms. I am talking about the form of comparison in D.2. Here you have already coded the experiments to find important features with RKHS-SHAP, so such a comparison should be straightforward. \n\nConcretely, suppose one has two algorithms, RKHS-SHAP and IG, and ranks features by the importance provided by each of the algorithms. 
Can you show, by masking these features with their averages in that order, which algorithm leads to a faster decrease in classification accuracy / MSE? 
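For reference, a toy sketch of the randomly-sampled-mask surrogate training discussed earlier in this comment (the 0.0 mask token, the 0.5 masking rate, and all names are illustrative assumptions, not Frye et al.'s exact recipe):

```python
# Toy sketch: one surrogate regressor sees (masked input, mask) pairs with random
# subsets S and regresses onto f(x), approximating E[f(X) | X_S = x_S] for all S at once.
import numpy as np

def surrogate_training_batch(X, f_vals, rng):
    """Build one training batch: inputs = [masked features, mask], targets = f(X)."""
    S = rng.random(X.shape) < 0.5                 # random feature subset per sample
    X_masked = np.where(S, X, 0.0)                # 0.0 as an assumed mask token
    inputs = np.concatenate([X_masked, S.astype(float)], axis=1)
    return inputs, f_vals                         # fit any regressor with MSE
```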
Can you show, by masking these features with their average in that order, which algorithm leads to a faster decrease in classification accuracy / MSE? ", " An empirical comparison between RKHS-SHAP and a kernel-method-specific Integrated Gradients is outside the scope of our contribution and, for the reasons we outline below, would not be particularly meaningful even if it were possible to conduct in this limited time frame. However, as RKHS-SHAP is a kernel-method-specific way to estimate Shapley values, all existing discussions and comparisons of Shapley values vs Integrated Gradients will still be applicable to our setting.\n\nThe original author of Integrated Gradients (IG), Mukund Sundararajan, in his work \"The Many Shapley Values for Model Explanation\", Remark 4.5, constructed an example comparing IG with a variant of Shapley values called BSHAP, and showed that both attributions are \"intuitively reasonable\" but that it is \"not immediately clear which interpretation is obviously superior\".\n\nThere is no consensus in the literature on the preference between Shapley values and Integrated Gradients. Both methods satisfy different sets of axioms and offer attributions with different interpretations. As an example, the author of article [1] gave an example where IG fails to satisfy several of the Shapley value's desirable axioms. In particular, IG can assign different attributions to two features that always have the exact same effect on the model, and can assign positive attributions to features that have no effect on the model. \n\nIn summary, the discussion on Integrated Gradients vs Shapley values, as two different explainability approaches, is interesting, but unrelated to our contribution. We focused on providing a kernel-method-specific Shapley value estimation scheme, following the established line of research on designing model-specific SHAP algorithms such as LinearSHAP, TreeSHAP, and DeepSHAP. One could conduct a study on IG vs Shapley values, but that would be a very different paper from ours.\n\n[1] https://towardsdatascience.com/limitations-of-integrated-gradients-for-feature-attribution-ca2a50e7d269", " Thank you for your reply. We summarise our response as follows:\n\n### CME Part\nThanks for suggesting the amendments, which will improve clarity. Since the original discussion was introducing the general concept of CMEs, we did not include subset notation, to keep the notation simple. \n\nWe delayed the introduction of CMEs with feature subsets until Proposition 2 and provided empirical estimation methods in Proposition 3. In the camera-ready version, with fewer space constraints, we shall add an extra sentence at line 191 to briefly introduce the concept and expand on it later in Section 3.\n\nTo obtain $\mu_{Y\mid X_S}$, one simply computes the kernel matrix for the features of $X$ in $S$, instead of computing it over all features. Please see Proposition 2 for more details. \n\n---\n\n### Comparison with Frye et al.\n\nWe agree with the reviewer that the similarities and differences between Frye's approach and ours are worth emphasizing in the paper because they share a similar regression-like intuition, but, as we highlight below, differ in most other aspects:\n\n- **Difference in regression target:** Frye's approach regresses onto scalar values $f(X)$ for a specific $f$ while our approach regresses onto an infinite dimensional feature map $\psi_{X_{S^c}}$. 
Note that our model is aiming to capture representation of the full conditional distribution (via its RKHS embedding) rather than the conditional expectation for a specific $f$.\n\n- **Difference in dependency on $f$:** CME estimation depends on the function space that $f$ belongs to and not on the specific $f$. This subtle but crucial point allows one to apply Shapley functionals as attribution priors during the learning of $f$ itself, in order to regularize it.\n\n- **Difference in hypothesis space:** Frye's approach uses a scalar-valued parametric neural network model, while our approach uses an RKHS-valued non-parametric kernel ridge regression with a ridge penalty to promote smoothness.\n\n- **Difference in learning procedure:** For each $S$, Frye's approach needs to relearn the parameters of a new neural network. On the other hand, assuming that the function space where $f$ lies is fixed (which is certainly true after learning $f$ itself), our method has a closed form ridge regression solution with no additional parameters to be learned (i.e. all kernel lengthscales for CME estimations are already determined by the function space of $f$).\n\nBecause of all the differences outlined above, we believe that our approach does not suffer from the risk of reverting back to an empirical conditional expectation $\\nu^{CE}_{x, f} = \\frac{\\sum_{i=1}^n\\textbf{1}[{x_i}_S = x_S]f(x_i)}{\\sum_{i=1}^n\\textbf{1}[{x_i}_S=x_S]}$ compared to Frye's deep method, because it promotes smoothness in the conditioning variable due to its regularisation. Therefore, even when we are explaining new feature values that have not been seen before, we would not get constant explanations. This is illustrated by our experiments in Appendix D2 where we do not see clusters of Shapley values even though we are explaining on withheld data points.\n\nWe hope the reviewer could increase their score if their concerns are now clarified. Thank you very much!", " Hi, thanks for response. I was looking for a comparison with IG empirically. ", " Thanks to the authors for their response. I believe I understand the method better now, particularly the approach of casting CME estimation as a regression problem. I have some other comments, but perhaps we can first resolve a couple remaining questions about CME estimation.\n\nIn lines 176-190 of the original work, I was unaware that the authors would use this approach for feature subsets $X_S$ because no feature subset notation is used. Can this be amended in the revision? And what modifications are necessary in the equation for $\\hat \\mu_{Y \\mid X}(x)$ when we instead want $\\hat \\mu_{Y \\mid X_S}(x_S)$?\n\nIt seems like this approach actually ties in quite closely with that of Frye et al., more so than the paper currently reflects or I previously realized. In both cases, there's a random variable $Z$ jointly distributed with $X$, and both papers are interested in the conditional expectation $\\mathbb{E}[Z | X_S = x_S]$; both papers approximate it by minimizing a least-squares-like loss function, or $\\min_g \\mathbb{E}[ ||Z - g(X_S) ||^2]$. The differences seem to be that 1) this work adopts $Z = \\psi_Y$ and uses $||\\cdot||_{\\mathcal{H}_g}^2$ while Frye et al. adopts $Z = f(X)$ and uses MSE, and 2) this work lets $g$ be a kernel method while Frye et al. use a neural network. 
Can the authors comment on this: are these approaches as similar as they seem to me?\n\nNot that I think the similarity poses a novelty issue; I believe there's room for a method designed specifically for kernel methods. However, acknowledging the similarity would be helpful, and I think the authors must reconsider whether their current critiques of Frye et al.'s supervised method also hold for their own approach. Specifically, if Frye et al.'s method risks reverting to the empirical conditional expectation (to use the term from Sundararajan et al.), does this method not risk the same issue? If not, is it saved by the kernel method's relative lack of flexibility compared to a neural network? Or does the regularization term (line 182) play a crucial role that is not currently explained? And do the authors believe that similar regularization tactics are somehow not applicable to Frye et al.'s deep learning-based approach?\n", " Dear reviewers, \n\nWe have addressed all of your constructive comments and questions. We would really appreciate it if the reviewers could go over our responses and update their evaluations accordingly. We'd be happy to address any remaining questions - but we will only be able to do this until Tuesday 4 pm ET, so we would appreciate comments before then.\n\nThank you very much.\n\nBest,\nAuthors", " ### General comments: \nWe want to thank the reviewer for their detailed constructive comments and suggested references, which are valuable and appreciated. We also really appreciate the reviewer's suggestions on both the narrative and the literature to include, especially the ones on attribution priors; we will include them in the camera-ready version.\n\nWe would like to begin by addressing what appears to be a misunderstanding of the framework of Conditional Mean Embeddings. It is possible that several of the reviewer's concerns stem from this misunderstanding and are thus not valid.\n\nWe hope to clarify the comments individually and hope the reviewer will increase the score afterwards.\n\n### Specific comments:\n\n---\n*\"Whether you're estimating interventional or observational Shapley values, each coalition/subset will require a separate evaluation of the value function (coalitional game). And for each evaluation, you'll need to estimate KME or CME, which seems like a potentially expensive operation...\"*\n\nA: Please refer to our general comment on CME estimation and complexity analysis. In short, the Conditional Mean Embedding for each coalition can be estimated with an $\mathcal{O}(n \sqrt{n})$ operation, and it is certainly an improvement over other methods, for example, over conditional density estimation. We have provided detailed complexity comparisons in Appendix A. \n\n---\n*\"CME is not a simple thing to estimate, it would seem to require that you filter for examples in your dataset where $X_S= x_S$. In a high-dimensional dataset, or a dataset with continuous features, wouldn't the number of matching rows be very small, particularly for s with large cardinality?....\"*\n\nA: This is not right. As detailed in the general comments, the CME is not estimated by filtering the dataset as the reviewer has suggested. In fact, the CME is estimated via a vector-valued regression problem, and thus we do not require multiple values observed over the same conditioning value. Instead, we can utilise the smoothness across the conditioning variable. 
This approach has been widely adopted and shown to be effective in a wide range of applications, as detailed in the general comments.\n\n\n---\n*\"In discussing the conditional expectation estimate in Section 3, the authors write that they \"utilise the arsenal of kernel methods to estimate the conditional expectations directly.\" I believe this is very similar (except for the focus on kernels) to the second method introduced by Frye et al. 2020 that has a very similar motivation - the supervised surrogate model...\"*\n\nA: Thank you for this suggestion. In fact, we have mentioned Frye's approach in lines 210-214. It is, however, worth noting that a recent paper by Yeh et al. 2022 raises some significant concerns regarding the validity of Frye et al.'s approach, e.g., that the optimal regression-based learner will yield constant explanations when explaining feature values that are unique in the dataset. \n\n---\n\n*\"Could you comment on whether the constants in the robustness bounds are actually small? If not, these results may be meaningless. E.g., we could establish robustness bounds for Shapley values with arbitrary models whose predictions lie within a bounded range - are these results more useful than that?\"*\n\nA: The constant in the bound for observational SVs represents the (upper bound of the) smoothness of the CME functions $\mu_{X_{S^c} \mid X_S}$ for all S. Therefore, when the features have stronger non-linear dependencies, the norm of their CME function will be larger, and thus the bound will be less tight. We believe this to be quite natural, because features having stronger non-linear dependencies means their conditional density is more difficult to capture, and thus harder to bound: the more difficult the problem gets, the harder it is to get a tight bound. We can include a further discussion on this in the camera-ready version.\n\nWe also wanted to highlight, as mentioned in lines 284-285, that this assumption has also been adopted in Park et al. 2020 for their analysis.\n\n[Park et al. 2020] A measure-theoretic approach to kernel conditional mean embeddings.\n\n---\n\n*\"Similar to my point under \"weaknesses,\" can you elaborate on whether you believe this result is somehow specific to RKHS-SHAP, or just a property of Shapley values applied to kernel models?\"*\n\nA: We believe this result is specific to RKHS-SHAP. The RKHS-SHAP method provides an estimator of the observational/marginal value function using CME/KME tools, which we then use to provide the robustness analysis. When Shapley values are applied to kernel functions more generally, prior to our work, the only way to estimate them in the observational setting was to perform conditional density estimation, which would require different tools for robustness analysis, depending on the particular conditional density estimation method used.\n\n---\n\n*We hope the reviewer could increase their score as the concerns are now clarified. Thank you very much!*", " *\"Weakness: Only applicable to Kernel methods\"*\n\nA: The developed method follows the well-established line of research on explainability tools for specific function models: TreeSHAP for tree-based models, DeepSHAP for deep models, LinearSHAP for linear models, etc. 
By focusing on a specific function model, more efficient and performant approximation algorithms can be developed, as showcased in our paper -- which focuses on kernel methods, for which model-specific tools for the estimation of Shapley values have not previously been considered.\n\nTherefore, our contribution fills an important gap in the literature and we do not believe the focus on kernel methods to be a weakness.\n\n---\n*\"Weakness: The proposed method falls short of computational complexity. It seems not practical.\"*\n\nA: Please refer to the general responses where we addressed and highlighted the computational complexity of our algorithm. In short, our algorithm takes $\mathcal{O}(n\sqrt{n})$ time when computing the conditional mean embeddings. \n\nTo showcase the large-scale capabilities of our algorithm, we have run our experiments on a \"League of Legends win prediction\" task and reported the results in Appendix D2, where we explained a kernel logistic regression with $1,800,000$ instances and $71$ features in under $10$ minutes, demonstrating the practicality and scalability of our algorithm.\n\n---\n*\"The established theorem is relatively straightforward. It is obvious from the definition.\"*\n\nA: Could you please clarify what you mean by straightforward, and which part follows directly from which definition?\n\n---\n*\"Since kernel methods are differentiable, one can use gradient based methods for understanding kernel methods. How does that compare to Shapley based methods?\"*\n\nA: While gradient-based and Shapley value-based attributions both try to provide explanations of an algorithm, the ways they define the notion of \"importance\" are very different. In particular, the popular Integrated Gradients (IG) method [Sundararajan et al. 2017] was shown to theoretically approach the Aumann-Shapley value, as discussed in [Chen et al. 2019], a fundamentally different concept from Shapley values. \n\nThis difference leads to IG failing to satisfy some desirable feature attribution axioms that Shapley values do satisfy. For example, when features i and j contribute equally to the function f at all coalitions S, i.e. $\nu_f(\{i\}\cup S) = \nu_f(\{j\}\cup S)$ with $\nu_f$ the value function defined with respect to $f$, IG does not necessarily return the same attribution score for features i and j, but SVs would. Moreover, when feature i does not contribute to the function f at all, the attribution score from IG will not always be 0, whereas a Shapley value-based approach's would be. See the examples in article [1] for further reference.\n\n- [1] https://towardsdatascience.com/limitations-of-integrated-gradients-for-feature-attribution-ca2a50e7d269\n- [Sundararajan et al. 2017] Axiomatic attribution for deep networks\n- [Chen et al. 2019] Explaining Models by Propagating Shapley values\n\n---\n\nWe hope the reviewer could increase their score as these concerns are now clarified and in light of the other reviewers' positive comments. Thank you very much.", " *\"Weakness: The method provided in this paper is only applicable to the kernel-based method\"*\n\nA: The developed method follows the well-established line of research on explainability tools for specific function models: TreeSHAP for tree-based models, DeepSHAP for deep models, LinearSHAP for linear models, etc. 
By focusing on a specific function model, more efficient and performant approximation algorithms can be developed, as showcased in our paper -- which focuses on kernel methods, for which model-specific tools for the estimation of Shapley values have not previously been considered.\n\nTherefore, our contribution fills an important gap in the literature and we do not believe the focus on kernel methods to be a weakness.\n\n---\n\n*\"Weakness: The paper only considered the conditional/marginal expectation-based Shapley value, while lacking a discussion of the baseline value-based SV (BSHAP)\"*\n\nA: In lines 90-91, we discussed a general formulation of value functions for ML explanation, which is composed of an expectation of f with respect to some reference distribution. In lines 98-99, we gave two other examples that are not marginal/conditional, to showcase the literature, and explained in line 100 why we chose to focus on conditional/marginal SVs -- precisely because they are the two most commonly discussed variants, following the work of Lundberg et al. 2017, Janzing et al. 2019, Chen et al. 2020, Frye et al. 2021, and Yeh et al. 2022. \n\nWe also did not discuss BSHAP for the following practical reason: it imputes pre-defined values to represent missingness in a value function. However, apart from obvious cases such as imaging, where imputing 0 corresponds to blacking out a pixel, it is not obvious how one should pick a baseline to impute in practice. Imputing the mean value simply corresponds back to the marginal expectation in our case.\n\nRef:\n- [Lundberg et al. 2017] A Unified Approach to Interpreting Model Predictions\n- [Janzing et al. 2019] Feature relevance quantification in explainable AI: A causal problem\n- [Chen et al. 2020] True to the Model or True to the Data\n- [Frye et al. 2021] Shapley explainability on the data manifold\n- [Yeh et al. 2022] Threading the Needle of On and Off-Manifold Value Functions for Shapley Explanations \n\n---\n\nWe hope the reviewer could increase their score as these concerns are now clarified. Thank you very much! ", " We thank the reviewers for their comments. We are encouraged to see that all reviewers found our work interesting and important. We first give a general discussion regarding points raised by multiple reviewers, mainly on the CME estimation [R3] and the computational complexity of RKHS-SHAP [R2-R3], and then address each reviewer's comments individually.\n\n---\n\n### Conditional Mean Embeddings [R3]\n\nEstimating the conditional mean embedding (CME) of $p(Y\mid X=x)$, defined as $\mu_{Y\mid X=x} = \mathbb{E}[\ell(Y, \cdot)\mid X=x] \in H_\ell$ for some kernel $\ell$ on $\mathcal{Y}$, does not require a sample of multiple observations from the conditional distribution $p(Y\mid X=x)$. Hence, the CME is not estimated by filtering the dataset as R3 suggested. Instead, we only require a sample from the joint distribution of $(X,Y)$ and use a vector-valued non-parametric regression approach (see lines 176-190). This allows us to utilise smoothness across the conditioning variable. In detail, the general finite-data estimator of the CME $\mu_{Y\mid X=x}$ takes the following form:\n\n$$\hat{\mu}_{Y\mid X}(x) = \Psi_\mathbf{y}(\mathbf{K}_\mathbf{XX} + n\eta \mathbf{I})^{-1}\Psi_\mathbf{X}^\top \psi(x)$$\n\nas shown in line 190 of the main text. Here, $\Psi$ denotes the corresponding feature matrices, which can be different for $X$ and $Y$, and $\mathbf{K}$ the kernel matrices. 
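For illustration only (this is not the paper's actual code), a minimal NumPy sketch of this finite-data estimator, assuming RBF kernels on $X$; all variable names, hyperparameters, and the toy data below are our own hypothetical choices:

```python
import numpy as np

def rbf(A, B, lengthscale=1.0):
    # Gaussian/RBF kernel matrix between the rows of A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * lengthscale ** 2))

def cme_weights(X, x, eta=1e-3):
    # alpha(x) = (K_XX + n*eta*I)^{-1} k_X(X, x), so that
    # mu_{Y|X=x} ~= sum_i alpha_i(x) * l(y_i, .)
    # Naive O(n^3) solve; Nystroem/FALKON-style approximations avoid this.
    n = X.shape[0]
    K = rbf(X, X)
    return np.linalg.solve(K + n * eta * np.eye(n), rbf(X, x[None, :]))

# Toy check: <g, mu_{Y|X=x}> = sum_i alpha_i(x) g(y_i) approximates E[g(Y)|X=x]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
Y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=500)
alpha = cme_weights(X, np.array([0.3, -0.7]))[:, 0]
print(alpha @ Y, "vs ground truth", np.sin(0.3))  # roughly E[Y | X = x]
```

The naive $n \times n$ solve costs $\mathcal{O}(n^3)$; the FALKON/Nyström-style approximations discussed next are what bring this down to $\mathcal{O}(n\sqrt{n})$.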
The theoretical justification of this formulation has been studied rigorously over the past decade in papers such as [Grünewälder et al. 2012], [Klebanov et al. 2019], and [Park et al. 2020]. There are also key results showing that this mapping of distributions to the RKHS is injective when using characteristic kernels (e.g. RBF) [Sriperumbudur et al. 2011].\n\nFrom the practical perspective, CMEs have been widely adopted for various ML tasks that require a rigorous representation of conditional densities, such as the work in [Ton et al. 2021], where CMEs are used for conditional density estimation in a meta-learning setting, or the work in [Chau et al. 2021], where CMEs are used for two-staged regression to handle multiple modalities. [Park et al. 2021] also used CMEs to test for conditional distributional treatment effects. The list of CME applications goes on, and [Muandet et al. 2017] is an excellent reference for this topic.\n\nReferences:\n- [Sriperumbudur et al. 2011] Universality, Characteristic Kernels and RKHS Embedding of Measures\n- [Grünewälder et al. 2012]: Conditional Mean Embeddings as Regressors\n- [Klebanov et al. 2019]: A Rigorous Theory of Conditional Mean Embeddings\n- [Park et al. 2020]: A Measure-Theoretic Approach to Kernel Conditional Mean Embeddings\n- [Ton et al. 2021]: Noise Contrastive Meta-Learning for Conditional Density Estimation using Kernel Mean Embeddings\n- [Chau et al. 2021]: BayesIMP: Uncertainty Quantification for Causal Data Fusion\n- [Park et al. 2021]: Conditional Distributional Treatment Effect with Kernel Conditional Mean Embeddings and U-statistic Regression\n- [Muandet et al. 2017]: Kernel Mean Embedding of Distributions: A Review and Beyond\n\n-----\n\n### Complexity concerns [R2-R3]\n\nWhile naive estimation of the CME requires $\mathcal{O}(n^3)$ time due to the matrix inversion, there are numerous large-scale kernel approximation techniques one could choose from to reduce the complexity, as discussed in lines 187-190. In this work, we use FALKON [Giacomo et al. 2020], a Nyström-based preconditioner for conjugate gradient descent, to compute the heaviest kernel matrix-vector multiplications in the paper. This allows us to reduce the computational complexity from $\mathcal{O}(n^3)$ to $\mathcal{O}(n\sqrt{n})$, since the number of inducing points of the approximation is chosen to scale with $\sqrt{n}$, a choice that is theoretically justified to provide optimal learning rates, as shown in [Marteau-Ferey et al. 2019] and [Rudi et al. 2015].\n\nFor larger datasets, $n\sim 10^6$, we found that, choosing $m=\sqrt{n}$ and setting a batch size of 10, the computation of all Shapley values for all features only took around 1 minute on V100 cards, which are considered good cards but not top of the line.\n\nReference:\n- [Giacomo et al. 2020]: Kernel methods through the roof: handling billions of points efficiently\n- [Marteau-Ferey et al. 2019]: Globally convergent Newton methods for ill-conditioned generalised self-concordant losses.\n- [Rudi et al. 2015]: Less is more: Nyström computational regularisation.\n\n", " This paper proposed a novel method for calculating Shapley values for kernel-based models. The conditional expectation-based kernel method suffers from the error caused by the sampling-based conditional expectation estimation, while this paper utilizes properties of kernel methods to provide a non-parametric estimator of the Shapley value's value function directly. They theoretically showed the robustness of RKHS-SHAP. 
They also propose a Shapley regulariser which could control specific features' contributions to the model. Extensive experiments verified their methods. [Strength]\n1. This paper rigorously derived the non-parametric estimator of the value function for kernel-based methods.\n2. Extensive experiments verified the effectiveness and robustness of the proposed methods.\n\n[Weakness]\n1. The method provided in this paper is only applicable to the kernel-based method.\n2. The paper only considered the conditional/marginal expectation-based Shapley value, while lacking a discussion of the baseline value-based Shapley value (BShap) [cite 1]. \n\n[cite 1] Mukund Sundararajan and Amir Najmi. The many Shapley values for model explanation. In International Conference on Machine Learning, pages 9269–9278. PMLR, 2020. N/A N/A", " - The authors proposed a feature attribution method for explaining kernel methods. \n- Some analysis has been provided to demonstrate the robustness of the proposed methods. \n- Experiments are carried out to show the effectiveness of the method. Strengths:\n- The robustness of the method under local perturbation has been rigorously verified, which is quite important in such explanation methods. \nWeaknesses:\n- The proposed method falls short of computational complexity. It seems not practical. \n- The established theorem is relatively straightforward. It is obvious from the definition. \n- Since kernel methods are differentiable, one can use gradient based methods for understanding kernel methods. How does that compare to Shapley based methods? \n- Computational complexity. \n- Only applicable to kernel methods. ", " This paper considers the calculation of Shapley value explanations in the context of kernel models. The idea is to leverage the structure of such models to generate better Shapley value estimates (e.g., more accurate or faster), similar to what TreeSHAP enables for tree ensembles.\n\nThe main technical insight has to do with how held-out features are handled. Rather than estimating $\mathbb{E}[f(X_{s^c}, x_s) | X_s = x_s]$ by sampling values from $p(X_{s^c} \mid X_S = x_S)$, or estimating $\mathbb{E}[f(X_{s^c}, x_s)]$ by sampling from $p(X_{s^c})$, the authors suggest using the underlying kernels to calculate the conditional mean embedding (CME) or kernel mean embedding (KME) for the held-out features (possibly conditioned on the observed ones). This may lead to more reliable and/or faster estimates of the value function (aka coalitional game), and the authors then suggest using KernelSHAP's weighted least squares algorithm for the final Shapley value approximation. I should mention that I'm not an expert in kernel methods, so apologies in advance if I misunderstood any key parts of the method.\n\n### Strengths:\n\n- This paper is the first to my knowledge that develops a Shapley value approach specifically for kernel methods. The insights appear new and possibly useful in practice. The results are non-trivial and seem to require a strong understanding of kernel methods, which is not currently well represented in the explainable AI literature. 
(On the other hand, the key techniques may be inaccessible to many readers, but this is perhaps inevitable.)\n- The method enables regularization of explanations during training, which is generally difficult to achieve with Shapley values due to the cost of their calculation.\n- The background section is overall very nicely written.\n\n### Weaknesses:\n\n- I may be wrong on this, but I think the authors could say more about how expensive it is to estimate $\mu_{X_{s^c} \mid X_s = x_s}$ repeatedly throughout the KernelSHAP WLS approximation for different coalitions $s$. I've elaborated on this in the \"questions\" section below.\n- Again I may be wrong on this, but I think the paper understates the difficulty of estimating $\mu_{X_{s^c} \mid X_s = x_s}$. I've elaborated on this in the \"questions\" section below.\n- The robustness results do not seem to be a property of RKHS-SHAP, but of Shapley values themselves when applied to kernel models for which the authors' assumptions hold. It's an interesting result and non-trivial to show, but it perhaps shouldn't be claimed to be related to RKHS-SHAP.\n- In discussing the conditional expectation estimate in Section 3, the authors write that they \"utilise the arsenal of kernel methods to estimate the conditional expectations directly.\" I believe this is very similar (except for the focus on kernels) to the second method introduced by Frye et al. 2020 that has a very similar motivation - the supervised surrogate model. That method is easier to train than the conditional generative model, does not require Monte Carlo sampling, and has no risk of off-manifold examples because it bypasses the generative modeling task. Perhaps this deserves some discussion? \n- I thought the paper's exposition could greatly benefit from disentangling the two main elements of estimating Shapley values (and where RKHS-SHAP is helpful): 1) handling predictions with held-out features, and 2) accurately approximating Shapley values. Section 2 made it sound like this work would present a significant improvement over KernelSHAP (see lines 114-130), which led me to believe the improvement had to do with 2). Ultimately, it had to do with 1), which I don't understand as really being part of KernelSHAP. The main idea behind KernelSHAP is approximating Shapley values via weighted least squares, and the authors actually re-use this part. What's modified here is the sampling trick for handling held-out features, which isn't unique to KernelSHAP and also used by other Shapley value methods (e.g., IME). \n\nA couple small things:\n- On line 125, the formula for the closed-form solution to the WLS problem solved by KernelSHAP seems a bit informal due to the presence of $\infty$ entries in $W$\n- It may be helpful to say explicitly that the interventional and observational coalitional games can be estimated via the KME/CME specifically because of the model's linear dependence on the features $\psi_x$. Of course, what we really want is the expectation of the model's output, and it's a unique property of such methods that we can get that via the expectation of the $\psi_x$ component.\n\nThere are a couple related works that seem like they should/could have been discussed here:\n- The various ways of estimating $\mathbb{E}[f(X) \mid X_s = x_s]$ are discussed in a recent review paper [1], including parametric assumptions, generative modeling, and several more. 
The conditional generative model is not, I think, widely understood as a viable approach\n- The idea of \"attribution priors\" [2] has been discussed as a way to regularize models' dependencies during training, although it's mainly used with gradient-based methods rather than Shapley values. Regularizing explanations is a natural idea, but Shapley values are somewhat unique in making this intractable\n- ShapNets [3] also provide a fast Shapley value calculation, albeit for a specific modified DNN architecture, and one of the motivations was to enable explanation regularization. RKHS-SHAP provides something similar, but for kernel models\n\n[1] Covert et al., \"Explaining by removing: a unified approach to model explanation\" (2021)\n\n[2] Erion et al., \"Improving performance of deep learning models with axiomatic attribution priors and expected gradients\" (2021)\n\n[3] Wang et al., \"Shapley explanation networks\" (2021) If I understand correctly, the authors may have understated the computational cost of their approach, as well as the difficulty of achieving reliable CME estimates with realistic datasets. To elaborate:\n\n1. Whether you're estimating interventional or observational Shapley values, each coalition/subset will require a separate evaluation of the value function (coalitional game). And for each evaluation, you'll need to estimate $\mu_{X_{s^c} \mid X_s = x_s}$ or $\mu_{X_{s^c}}$, which seems like a potentially expensive operation. (Well, the KME can be calculated once per feature, but not the CME.) Thus, RKHS-SHAP does not seem like it offers any run-time benefits; in fact, it may make the estimation even slower. Am I missing something? \n2. $\mu_{X_{s^c} \mid X_s = x_s}$ is not a simple thing to estimate; it would seem to require that you filter for examples in your dataset where $X_s = x_s$. In a high-dimensional dataset, or a dataset with continuous features, wouldn't the number of matching rows be very small, particularly for $s$ with large cardinality? Furthermore, if you somehow are able to find matching rows that help you estimate $\mu_{X_{s^c} \mid X_s = x_s}$, couldn't you use the same rows to directly estimate $\mathbb{E}[f(X_{s^c}, x_s) | X_s = x_s]$ even for non-kernel models? It seems like this trick only works in settings where, by assumption, you could reliably estimate the conditional expectation of the model output. (By the way, this approach for estimating the conditional expectation is discussed in Sundararajan et al., 2020.) Is this right, or am I missing something? \n\nAbout the robustness results:\n- Similar to my point under \"weaknesses,\" can you elaborate on whether you believe this result is somehow specific to RKHS-SHAP, or just a property of Shapley values applied to kernel models?\n- Could you comment on whether the constants in the robustness bounds are actually small? If not, these results may be meaningless. E.g., we could establish robustness bounds for Shapley values with arbitrary models whose predictions lie within a bounded range - are these results more useful than that?\n There are no negative societal impacts to this work.\n\nAs for methodological limitations, I believe those are primarily in the speed and reliability of the CME estimation (as discussed above). If my concerns are accurate, then I don't think these points are currently adequately discussed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "A0o7UDfIt2I", "1dIm8ebBlq", "EqIrzt4K6zH", "XqRRnGR2P4I", "hr8Ik92pGuT", "SMNpQ99P-kN", "kXS4yqLmpX0", "0T40I4ufnUz", "W6ebe9cAD9", "QbqAtLBdf-", "j_JA91rKjn7", "mxgZzXTKTbFV", "hDThH9TEbQ5", "ZNQ77wARFeD", "nips_2022_gnc2VJHXmsG", "BQkeSBmKUYh", "EFtuInxqJe", "Pja6xWdaSkc", "nips_2022_gnc2VJHXmsG", "nips_2022_gnc2VJHXmsG", "nips_2022_gnc2VJHXmsG", "nips_2022_gnc2VJHXmsG" ]
nips_2022_3LBxVcnsEkV
GREED: A Neural Framework for Learning Graph Distance Functions
Similarity search in graph databases is one of the most fundamental operations in graph analytics. Among various distance functions, graph and subgraph edit distances (GED and SED respectively) are two of the most popular and expressive measures. Unfortunately, exact computations for both are NP-hard. To overcome this computational bottleneck, neural approaches to learn and predict edit distance in polynomial time have received much interest. While considerable progress has been made, there exist limitations that need to be addressed. First, the efficacy of an approximate distance function lies not only in its approximation accuracy, but also in the preservation of its properties. To elaborate, although GED is a metric, its neural approximations do not provide such a guarantee. This prohibits their usage in higher order tasks that rely on metric distance functions, such as clustering or indexing. Second, several existing frameworks for GED do not extend to SED due to SED being asymmetric. In this work, we design a novel siamese graph neural network called Greed, which through a carefully crafted inductive bias, learns GED and SED in a property-preserving manner. Through extensive experiments across $10$ real graph datasets containing up to $7$ million edges, we establish that Greed is not only more accurate than the state of the art, but also up to $3$ orders of magnitude faster. Even more significantly, due to preserving the triangle inequality, the generated embeddings are indexable and consequently, even in a CPU-only environment, Greed is up to $50$ times faster than GPU-powered computations of the closest baseline.
Accept
The paper studies the important problem of computing distance functions across graphs, which is NP-hard to solve in the worst case. The authors provide a theoretical analysis of certain properties of the algorithm and show its relevance in practice. The reviewers pointed out some weaknesses, but the rebuttal helped resolve some of those. Please address those comments in the camera-ready version. The paper has weak accept votes, but in light of the significance of the topic, I also lean toward accepting the paper.
train
[ "7aFIQMYq6s", "Gs5yx7FwlzT", "WpWCiWF_1WO", "oMnb7bJdSFp", "btf4IOEF5Zg", "AdHGQK3DlMs", "vq6t53578Uz", "xLCfCVzYAC3", "My--wnMAr8", "IwFh1flPYHgC", "PviUGBtsnrv", "tTUwZ99aE4K", "wbGHWop2Ky", "nKdbUmWg7qU", "0ptXGHvK7Nz", "jX8ADOCPjD", "JYiOfhBLo4Z", "-4SHMCrFgu" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Hello dear authors,\nThank you for your comments and addressing the issues pointed out by all reviewers. I am updating my score to weak accept.", " Based on your latest responses, my major concerns are addressed. \n\nI would like to improve my ratings to weak accept.", " Dear Reviewer ap6M,\n\nWe thank you for taking the time to provide critical comments on our work. We had added the comparison to GotSIM, cited IsoNet and clarified that all other issues were already addressed in our initial submission. \n\nWith these additional experiments and clarification, we believe the Reviewer would now find the manuscript acceptable. Since today is the last day for author-reviewer discussions, please let us know if there are other comments for which we can provide any clarifications.\n\nWe are looking forward to your feedback.\n\nThank you,\n\nAuthors", " **In terms of the ablation studies of the siamese architecture, your claim As visible in Fig. 4i-4k, GREED is consistently better than GREED-dual. is not accurate for me. For example, in 4(I), the performance of Greed-Dual and Greed are nearly the same. Besides, can you give any analysis why Greed-NN outperforms Greed-Dual in most cases? Considering GNN, despite not using siamese architecture, should have better modeling ability than MLP due to the inductive bias of approximating GED/SED?**\n\nThe referred statement is an oversight on our part. We have changed it to:\n\n> The RMSE of Greed is *generally* better than GREED-Dual, with the difference being more significant at low volumes.\n\nGreed-NN Vs. Greed-Dual:\n\nWe stress that in these ablations, we wish to demonstrate the efficacy of our inductive biases with low volumes of training data. This ability is important since generating training data itself is NP-hard (due to computing GED/SED). To provide concrete examples, with 1000 training pairs, Greed consistently outperforms both Greed-NN and Greed-Dual. With larger training pairs, the effect of inductive bias becomes less important in some of our datasets, and we find that occasionally Greed-NN or Greed-Dual gives better accuracy. However, even in these cases the difference is only marginal, and taking the runtime and theoretical guarantees of Greed into consideration, it would still be the recommended choice for most use cases. Finally, we note that Greed-NN and Greed-Dual have strictly more parameters than Greed, which works in their favour only when the training data is large.\n\nAs for why Greed-NN might be better than Greed-Dual, we would like to offer the following insight. Greed-NN keeps the inductive bias due to the Siamese architecture but ablates the inductive bias of the custom prediction function. On the other hand, Greed-Dual keeps the inductive bias due to the custom prediction function but ablates the inductive bias due to the Siamese architecture. A priori, it is difficult to say which of the two sources of inductive biases is more important. A posteriori, however, the observed phenomenon of Greed-NN generally out-performing Greed-Dual can be interpreted as evidence for the Siamese architecture providing a stronger inductive bias than the custom prediction function. Note that the modeling ability of GNN vs MLP should not be used as proxy for the modeling ability of Greed-Dual vs Greed-NN as GNN and MLP perform very different functions, namely graph embedding and GED/SED prediction respectively, in these architectures.\n\nWe have added the above discussion to the new revised version. 
\n\nWe once again thank the reviewer for engaging with us in a discussion and helping us improve the paper. ", " Thank you again for your time reviewing our work and your willingness to raise the rating to borderline accept. (The rating still reflects 4, which corresponds to borderline reject. We would be thankful if you raise it to 5, which corresponds to borderline accept).\n\nPlease find below our clarifications on your queries.\n\n**I agree that the efficiency improvement of the proposed framework is quite impressive. However, in my opinion, the novelty of learning graph edit distance by GNN is limited. My concern is that it might not be enough to only have the efficiency improvement as a NeurIPS submission.**\n\nThank you for your positive comment on the efficiency aspect. We also note that our accuracy is better than state-of-the-art algorithms as well, generalizes to both GED and SED, preserves properties from their original distance spaces, and scales better with query graph sizes.\n\n**In terms of the novelty of the architecture, the novelty lies in the use of the siamese architecture, which is two GNNs with shared parameters, while the components of the GNN (including Pre-MLP, GIN, Concatenation, Pool, Post-MLP, and GED/SED prediction) are standard in the literature. This is what I meant by \"the architecture is not novel\" in the review.**\n\nWe agree with the reviewer that the individual components of the GNN used as the embedding layer are not novel and have been used in the literature. In our humble opinion, the novelty lies in our custom prediction functions (Sections 3.1.1 and 3.1.2) that allow us to guarantee preservation of the metric property for GED predictions and the triangle inequality (along with non-negativity) for SED predictions. These properties are required for many higher order applications, such as clustering and indexing, and GREED is therefore the only algorithm that is suitable for such scenarios. While the prediction function is simply the Euclidean distance on the embedding space for GED, a suitable function for SED is significantly less obvious, and along with the corresponding analytical proofs, is one of the novel contributions of our work.\n\nFurthermore, the Siamese architecture is crucial for these guarantees as it constrains the model to learn a single map from the graph space to the embedding space for both the query and target graphs. The noteworthy achievement of our work is to demonstrate that we can satisfy all these desiderata while still outperforming, by large margins in terms of both accuracy and runtime, state-of-the-art models that freely use pair-dependent cross-graph information in expensive computations and do not provide any guarantees. We reflect that this is possible because both the Siamese architecture and the custom prediction functions, apart from enabling the requisite guarantees, serve as strong inductive biases for graph distance learning tasks. As such, we humbly consider the simplicity of our architecture as a salient feature. ", " Hi authors, \n\nThanks for your comprehensive responses, especially regarding the writing of the paper, which address some of my concerns.\n\nHowever, my major concern is that the contributions of this work, including the idea and framework, are not that big, since:\n\n1. I agree that the efficiency improvement of the proposed framework is quite impressive. However, in my opinion, the novelty of learning graph edit distance by GNN is limited. 
My concern is that it might not be enough to only have the efficiency improvement as a NeurIPS submission.\n\n2. In terms of the novelty of the architecture, the novelty lies in the use of the siamese architecture, which is two GNNs with shared parameters, while the components of the GNN (including Pre-MLP, GIN, Concatenation, Pool, Post-MLP, and GED/SED prediction) are standard in the literature. This is what I meant by `the architecture is not novel` in the review.\n\n3. In terms of the ablation studies of the siamese architecture, your claim `As visible in Fig. 4i-4k, GREED is consistently better than GREED-dual.` is not accurate for me. For example, in 4(I), the performance of Greed-Dual and Greed are nearly the same. Besides, can you give any analysis why Greed-NN outperforms Greed-Dual in most cases? Considering GNN, despite not using siamese architecture, should have better modeling ability than MLP due to the inductive bias of approximating GED/SED?\n\nIn summary, due to the above concerns, I think this is a borderline paper. Based on the updated version, especially the writing of the paper, I am inclined to borderline accept.", " Thanks again for your insightful suggestions and comments. As the deadline for discussion is approaching, we are glad to provide any additional clarifications that you may need.\n\nIn our previous response, we carefully studied your comments and addressed the concerns raised. We summarize our responses with regard to the following aspects:\n\n1. We conducted additional ablation studies that (i) show that the trends with respect to the non-siamese (Greed-dual) and MLP (Greed-NN) versions also hold for GED, and (ii) clarify our position on the pre-MLP module, including new empirical data.\n\n2. We have addressed all the issues related to presentation. The changes are highlighted in blue in the revised version.\n\n3. We explained why the siamese architecture is required for the theoretical guarantees we provide with respect to the preservation of distance properties, and its implication on accuracy with actual empirical data (now further boosted with experiments on GED as well).\n\nWe hope that the provided new experiments and additional explanations have convinced you of the merits of our work. Please do not hesitate to contact us if there are other clarifications or experiments we can offer. \n\nOn the whole, we show Greed is more accurate than a host of baselines and orders of magnitude faster. Our codebase is also public for anyone to reproduce the results.\n\nThank you for your time again!\n\nBest,\n\nAuthors", " Thanks again for your insightful suggestions and comments. As the deadline for discussion is approaching, we are glad to provide any additional clarifications that you may need.\n\nIn our previous response, we carefully studied your comments and addressed the concerns raised. We summarize our responses with regard to the following aspects:\n\n1. We have addressed all the issues related to presentation. The changes are highlighted in blue in the revised version.\n\n2. We explained why the siamese architecture is required for the theoretical guarantees we provide with respect to the preservation of distance properties, and its implication on accuracy with actual empirical data. \n\n3. We provided preliminary evidence on the efficacy of our technique on MCSS. 
Nonetheless, we would also be OK with changing our title to mention only edit distance, as suggested.\n\nWe hope that the provided new experiments and additional explanations have convinced you of the merits of our work. Please do not hesitate to contact us if there are other clarifications or experiments we can offer.\n\nThank you for your time again!\n\nBest,\n\nAuthors", " Thanks again for your insightful suggestions and comments. As the deadline for discussion is approaching, we are glad to provide any additional clarifications that you may need.\n\nIn our previous response, we carefully studied your comments and addressed the concerns raised. We summarize our responses with regard to the following aspects:\n\n1. We clarified that all the suggested works, with the exception of IsoNet, were already cited. We also pointed out that NeuroMatch, which does subgraph isomorphism tests, was already one of the baselines in k-NN queries (this is the only setting where it can be used, since it does not output a distance or perform whole-graph matching. NeuroMatch performs approximate subgraph-isomorphism tests, whereas our focus is on learning distances.) IsoNet suffers from the same limitation as NeuroMatch. We have now cited IsoNet.\n\n2. We included GotSIM for GED and showed Greed is more accurate. We also explained why GotSIM does not extend to SED.\n\n3. We explained that several of the requested experiments, such as (i) why Siamese and its benefits, (ii) why GIN, and (iii) why sum-pool, were already included in the submission.\n\nWe hope that the additional explanations and the comparison with GotSIM have convinced you of the merits of our work. On the whole, we show Greed is more accurate than a host of baselines and orders of magnitude faster. Our codebase is also public for anyone to reproduce the results.\n\nPlease do not hesitate to contact us if there are other clarifications or experiments we can offer.\n\nThank you for your time again!\n\nBest,\n\nAuthors", " We thank the reviewers for the critical comments and suggestions. Please find a point-by-point response to the comments raised by the reviewers below. We have also updated the main manuscript and the appendix to address these comments. The changes made in the manuscript are highlighted in **blue color**. *All references to figures, tables, sections, citations, etc. made in our response are based on the updated version.*", " **Appeal to the reviewer:** We have addressed all the presentation issues in our rebuttal. We have also clarified why a Siamese network is necessary and its resulting consequences: better quality than all baselines, a massive scalability advantage, and indexable embeddings. If the reviewer feels the revised version addresses the concerns raised, we humbly request that they raise the rating.", " **There have been many works, as mentioned by the authors, on learning graph-based similarity metrics by GNN, thus in general, the novelty of this idea is limited. It is straightforward to include SED into the framework.**\n\n*Response:* We do not claim novelty on the problem formulation. Our claim of novelty is on:\n* **Massive scalability:** We are 1000 times faster than the closest baseline of H2MN (Sec 4.4).\n* **Novel Architecture:** The proposed architecture has two key innovations. (1) The embeddings are computed in a pair-independent manner. This is possible only because we use a siamese architecture. 
(2) We preserve properties of the original space, i.e., the predicted GED space is metric and the predicted SED space satisfies triangle inequality. None of the existing techniques do that. Owing to these properties, the graph embeddings are indexable and this leads to the massive scalability mentioned above.\n* **Better quality:** The proposed algorithm is consistently better than the baselines across most scenarios and scales better with graph sizes.\n\n**Though useful, the proposed GNN framework is not novel, which have been proposed in previous works. Especially, the motivation to use Siamese GNN is not clear.**\n\n*Response:* The first para of Sec 3.2 discusses why we use a siamese architecture. We reproduce it verbatim below.\n\n> If we do not use a siamese architecture, then the embedding model for the query and the target graphs would be different. Hence, the predicted distance would violate symmetry, and therefore, would not be a metric. Furthermore, if the distance computations are pair-dependent [27, 41, 2], i.e., it jointly learns the embedding of the query and the target, then a single graph may correspond to multiple representations. Hence, it would not be a metric or satisfy the triangle inequality.\n\nIn addition, we also refer to Sec 4.5, where we compare GREED with its non-siamese version, GREED-dual, where the weights are not shared. As visible in Fig. 4i-4k, GREED is consistently better than GREED-dual. \n\nRegarding the statement \"the proposed GNN framework is not novel, which have been proposed in previous works.\", we would appreciate it if the reviewer can point out which prior work has proposed this framework.\n\n**The presentation of this work need improvements:**\n\n**a) The title \"learning graph distance functions\" is kind of overclaimed, since the work only learns for graph edit distance, while there are other distances, like maximum common subgraph.**\n\n*Response:* We acknowledge this feedback.\n\n* We would be happy to change our title to \"GREED: A Neural Framework for Learning Graph Edit Distances\" if the reviewer feels so. The acronym GREED actually expands to only edit distances (GRaph Embeddings for Edit Distances). In the title, we followed the naming convention of the baselines such as H2MN[42] and GENN-A*[37], which used the terms \"graph similarity learning\" or \"graph distance learning\", despite focusing on graph edit distance.\n\n* Nonetheless, to show generalizability to maximum common subgraph similarity (MCSS), we have added this experiment in Appendix K with reference from Sec 5 in main paper. The same results are also produced below. \n\nRMSE on MCSS:\n| Method |Aids'| Linux | IMDB |\n|--| -------- | -------- | ---- |\n|Greed |0.514 |0.085 | 0.293 |\n|H2MN | 0.612 |0.152 | 0.475 |\n\n\n\n**b) The content of Section 2 is a bit too much, since most of them are established knowledge, not proposed by the authors. The authors can shorten Section 2 by moving some materials to appendix. Moreover, it would be better if the authors give the references for GED, SED, and the properties.**\n\n*Response:* We acknowledge this feedback. Section 2 has been modified accordingly.\n\n**c) In Section 4, the ref of some tables are wrong, e.g., the text following 4.2 Prediction Accuracy of SED and GED and 4.3 Efficiency are all Table 2a and 2b, which is confusing.**\n\n*Response:* We apologize for the latex reference error. We have corrected them. 
Now, Tables 1a and 1b present the RMSE errors of the algorithms, and Tables 3a-c present their running-time results.\n\n**d) In Section 4.6, the authors mentioned Table 3a is not found in the paper.**\n\n*Response:* We apologize again for this mistake, which was due to latex reference errors. We have corrected this. Table 3 corresponds to Table 4 in the revised version.\n\n**e) In the Readme of the code, the submission information is still KDD 2022. In fact, I think this is the reason that the table references are messy in the paper. The authors should carefully proofread the paper for NeurIPS submission.**\n\n*Response:* The table references got mixed up due to latex reference errors. We have also updated the README now.", " **q1. Why did you not cite and compare your approach against the following state-of-the-art recent approaches i.e., GOTSim [3], NeuroMatch [4], Graph embedding network (GEN) [5], and IsoNet [6] ?**\n\n*Response:* We would like to humbly state that this comment is partially incorrect. \n\n* We have cited GOTSim ([13]), NeuroMatch ([32]), and GEN ([27]).\n* We have empirically compared against NeuroMatch and shown significantly superior performance (See Table 2b). As we have already illustrated in the Baselines paragraph of Sec 4.1, NeuroMatch is an algorithm for predicting subgraph isomorphism and not edit distance or subgraph edit distance. Hence, it can only be used for k-NN queries. The issue is the same for IsoNet; hence, it was not compared against. We have now cited IsoNet in the revised version (highlighted in blue font in Sec 1).\n\nAdditional baselines:\n\n* We do not compare with GEN (referred to as GMN in our paper), since SimGNN, H2MN and GENN-A* have compared with GEN and have shown better performance on the same datasets that we use. We compare with SimGNN, H2MN and GENN-A* and show better performance. Hence, comparing with GEN is redundant. \n\n* Regarding GotSIM, we did not compare against it since GotSIM does not generalize to SED; there is an explicit assumption of modeling a symmetric distance function because it uses cosine similarity to compare node neighborhoods. In addition, the denominator in Eq. 6 in GotSIM is also based on the assumption of whole-graph matching. On GED, H2MN is more recent and reported lower errors on average than GotSIM. Nonetheless, we have now compared with GotSIM on GED and show that GREED is indeed better. For GotSIM, we use the code provided by the authors. The results are below. This comparison has also been added to Table 1a in the revised version. \n\n\nRMSE of Greed vs GotSIM:\n| Method |Aids'| Linux | IMDB |\n|--| -------- | -------- | -------- |\n|Greed |0.796 | 0.416| 6.734 |\n|GotSIM | 0.996 | 0.574 | 37.831 |\n\n\n**q2. Did you try any other alternatives to siamese graph neural network settings i.e., non-sharing of parameters between query and target graphs to see if it helps?**\n\n*Response:* This empirical analysis was already included in our submission. Specifically, in Sec 4.5 we compare GREED with GREED-dual, where the weights between the GNNs embedding the target and query graphs are not shared. As visible in Fig. 4i-4k, GREED is consistently better than GREED-dual. Furthermore, in Sec. 3.2, we also discuss the advantages that a Siamese architecture provides.\n\n**q3. Did you try any alternatives to the Graph Isomorphism Network (GIN) module?**\n\n*Response:* This experiment was also already included in the paper. Specifically, we refer to this ablation study in the last line of Section 4.5, with details in App G. \n\n**q4. 
Why did you use sum pool specifically?**\n\n*Response:* An ablation study with sum-pool was already included in our submission. Specifically, we refer to this ablation study in the last line of Sec 4.5 with details in App G.\n\nSum-pool can distinguish graph sizes better than other aggregation functions such as mean-pool or max-pool. To elaborate, let us consider a graph $G_1$ that is significantly larger than another graph $G_2$. In this scenario, the individual coordinates of $G_1$’s embedding can potentially be significantly larger than those of $G_2$ since in $G_1$ the summation is being done over a larger set of embeddings. Both mean-pool and max-pool fail to capture the size information as effectively, since the max and the mean operations do not scale with the number of inputs.\n\nIn the literature on graph similarity learning, SimGNN [2], GMN [24], and GENN-A\* [31] use attention-weighted sum-pool. We have not used attention as we observed in our development phase experiments that sum-pool outperforms attention-pool in our setting. Instead, we have utilized multi-granular summarization by utilizing layer-wise concatenation to obtain a rich representation of the graph structure. This approach has been shown to be effective in extracting useful subgraph features (see Xu et al., “Representation Learning on Graphs with Jumping Knowledge Networks”, ICML 2018).\n\n**Appeal to the reviewer:** As we have clarified in our rebuttal, most of the questions asked were already present in the paper. In addition, we have now also included GotSIM. If the reviewer feels satisfied with our work and effort, we humbly appeal to the reviewer to raise our rating. ", " **Does the Pre MLP matter since you use the one-hot encoding? Meanwhile, a bit larger problem is whether all the components in Figure 2(b) are necessary. In other words, can we directly use GIN (or other GCNs) to replace Figure 2(b)?**\n\n* *Pre-MLP:* The dimension of the one-hot vector increases linearly with the number of labels in a graph database and hence can be very large. The primary job of the pre-mlp is to reduce it to a desirable dimension size. Indeed, the same effect may also be obtained by directly feeding it to the first layer of GIN. However, since a GIN constructs embeddings by incorporating both structure and label information, one may desire a different dimensionality in the GIN layers. Hence, we have this separation. To summarize, the pre-mlp allows a clearer conceptual separation of the label-based embeddings from the subsequent structure+label embeddings. In terms of pure results, we do not observe a statistically significant difference in performance. We appreciate this observation and have added the above discussion to Sec 3.1. In addition, we have also conducted the ablation study to observe the RMSE of Greed with and without the pre-mlp layer. The results have been added to Appendix G. The results table is also reproduced below.\n\n|RMSE|With Pre-MLP|Without Pre-MLP|\n|-------|---------------|----------------|\n|AIDS (SED)|0.51|0.51|\n|Amazon|0.50|**0.39**|\n|CiteSeer|0.52|**0.51**|\n|Cora_ML|**0.64**|0.68|\n|AIDS' (GED)|**0.8**|0.85|\n|IMDB (GED)|**6.73**|7.68|\n|LINUX (GED)|0.42|**0.41**|\n|Protein|0.52|0.52|\n|PubMed|0.73|0.73|\n\n* *Directly using GIN:* A GIN produces embeddings for each node. In our problem, we need an embedding for a graph. 
Hence, we need some aggregation function that can take a set of node embeddings and generate a single embedding characterizing the graphs. For that we use a sum-pool followed by an MLP. \n* *Is the GIN important?:* Ablation study on replacing the GIN with other GNNs like a GCN or GAT is provided in Appendix G and Table B. GIN provides the best performance. This result is not surprising since GIN is provably more expressive than GCN or GAT in distinguishing graph structures (essential to SED or GED computation) and is as powerful as the Weisfeiler-Lehman Graph Isomorphism test.\n\n\n**It seems that the second and third strengths above (i.e., preservation of GED's theoretical properties) are mainly from the last layer (i.e., GED and SED prediction functions F). In other words, the paper doesn't have theoretical insights into the choice of siamese architecture and the GNN component. In this case, the ablation study is important to support the claim.**\n\n*Response:* The siamese architecture is also necessary to ensure metric and triangle inequality of GED and SED respectively. The first para of Sec 3.2 discusses this aspect. We reproduce it verbatim below.\n\n> If we do not use a siamese architecture, then the embedding model for the query and the target graphs would be different. Hence, the predicted distance would violate symmetry, and therefore, would not be a metric. Furthermore, if the distance computations are pair-dependent [27, 42, 2], i.e., it jointly learns the embedding of the query and the target, then a single graph may correspond to multiple representations. Hence, it would not be a metric or satisfy the triangle inequality.\n\nIn addition, we also refer to Sec 4.5, where we compare GREED with its non-siamese version, GREED-dual, where the weights are not shared. As visible in Fig. 4i-4k, GREED is consistently better than GREED-dual. \n\n\n**Meanwhile, some writing parts can also be improved. For example, (1) The table's index is unclear. There are two different \"table 1\" on page 7 and 15, respectively. (2) Figure 4d should be \"Dblp\" instead of \"IMDB\". (3) Try to put the figure (tables) and corresponding text on the same page for reading-friendly purposes.**\n\n*Response:* We apologize for these presentation issues. They occurred due to latex reference errors and skipped our notice. All of these issues have now been corrected.", " **It seems that all the ablation studies are related to the SED based on the caption of Fig. 3, do you have any experiments with the GED? Those experiments can also help to test the consistency of the results.**\n\n*Response:* We have added the same experiment on GED now (Fig. F in Appendix, which is referred to from Sec 4.5 in the main paper). The obtained data is produced below in the form of a table. As visible, the trends are similar. As more training data is provided, the gap between Greed and Greed-NN reduces. 
Greed-dual, i.e., the non-siamese version, is consistently inferior to Greed.\n\n\n**RMSE in AIDS':**\n|#Training Pairs|100000|10000|1000|\n|-------------|--------|------|-------|\n|Greed| 0.8 | 0.86 |3.11|\n|Greed-NN| 0.95 | 1.23 |3.22|\n|Greed-Dual|1.23 | 3.47 |4.38|\n\n**RMSE in Linux:**\n|#Training Pairs|100000|10000|1000|\n|-------------|--------|------|-------|\n|Greed|0.42 | 0.40 |1.75|\n|Greed-NN|0.43 |0.55 |1.85|\n|Greed-Dual|0.52 |1.06 |2.21|\n\n\n**RMSE in IMDB:**\n|#Training Pairs|100000|10000|1000|\n|-------------|--------|------|-------|\n|Greed|6.73 | 4.77 |54.24|\n|Greed-NN|7.8 | 15.29 |647.97|\n|Greed-Dual|9.39 | 187.62 |285.36|\n\n**From Fig. 4(i)-(k), we can observe that the blue line (Greed-NN) has a better performance on PubMed and CiteSeer datasets. What is the possible reason for it?**\n\n*Response:* Greed-NN sometimes outperforms Greed *only* when the training data is very large. As visible in Fig. 4i-k, the blue line is below the green one only on the right part (larger training data) of the plot. We produce below the text from Sec 4.5 verbatim that explains why we observe this behavior.\n\n> Compared to Greed, Greed-NN achieves marginally better performance at larger train sizes in Pubmed and Citeseer. However, in DBLP, Greed is consistently better. The number of subgraphs in a dataset grows exponentially with the node set size. Hence, an MLP needs growing training data to accurately model the intricacies of this search space. In DBLP, even 100k pairs is not enough to improve upon $\mathcal{F}$. Furthermore, since computing GED and SED is NP-hard, generating large volumes of training data is not desirable. Overall, these trends indicate that $\mathcal{F}$ enables better generalization and scalability with respect to accuracy. Furthermore, given that its performance is close to an MLP even on large training data, and it enables indexing, the benefits outweigh the marginal reduction in accuracy.\n\n", " Graph and subgraph edit distance (GED and SED) are important mechanisms to measure the distance between graph pairs. However, they are NP-hard to compute and thus hard to scale. This paper proposes a framework based on graph neural networks, named GREED, to learn the graph embedding for the edit distances. \n\nUnlike the previous methods, GREED is able to preserve the essential theoretical properties (e.g., triangle inequality, non-negativity, etc.) of the GED as a metric distance function, and can also easily be adapted to SED. Therefore, the embeddings learned from GREED are indexable, significantly improving query time. \n\nComprehensive experiments are conducted to validate the proposed method. GREED outperforms state-of-the-art methods across ten real-world datasets up to millions of edges and is faster than other methods.\n Strengths:\nFirst of all, the paper illustrates the motivation, problem, and previous works' limitations clearly. The paper's overall organization is good and easy for readers to follow. The authors open the source code, which is good for reproduction and will benefit the entire community. \nSecondly, the proposed method (GREED) can learn a graph embedding that preserves the theoretical properties, and the authors provide the mathematical proof for it.\nThirdly, the learned embedding is indexable and pair-independent, which is essential for scalability and which most of the previous works in the literature fail to achieve. 
In this aspect, the authors not only provide the neural network-based model but also details of the query algorithms.\nLast but not least, lots of comprehensive experiments and discussions are conducted to validate GREED, including prediction accuracy, method efficiency, scalability, and generalizability.\n\nWeaknesses:\n1. The main concern I have is the novelty of the design of the neural network and the corresponding ablation study (section 4.5). It seems that the second and third strengths above (i.e., preservation of GED's theoretical properties) are mainly from the last layer (i.e., GED and SED prediction functions F). In other words, the paper doesn't have theoretical insights into the choice of siamese architecture and the GNN component. In this case, the ablation study is important to support the claim. However, from Fig. 3(i)~3(k), the model component design does not seem convincing (i.e., the green line is not the best). To better understand it, I raised a couple of questions in the \"Questions\" box. \n2. Meanwhile, some writing parts can also be improved. For example, \n(1) The table's index is unclear. There are two different \"table 1\" on page 7 and 15, respectively.\n(2) Figure 4d should be \"Dblp\" instead of \"IMDB\". \n(3) Try to put the figures (tables) and corresponding text on the same page for reading-friendly purposes.\n The following questions are mainly related to section 4.5:\n1. It seems that all the ablation studies are related to the SED based on the caption of Fig. 3; do you have any experiments with the GED? Those experiments can also help to test the consistency of the results. \n2. From Fig. 3(i)-(k), we can observe that the blue line (Greed-NN) has a better performance on PubMed and CiteSeer datasets. What is the possible reason for it?\n3. Does the Pre MLP matter since you use the one-hot encoding? Meanwhile, a bit larger problem is whether all the components in Figure 2(b) are necessary. In other words, can we directly use GIN (or other GCNs) to replace Figure 2(b)?\n\n\n In the paper, the authors have explicitly or implicitly pointed out a couple of limitations: (1) not considering the edge labels; (2) limited generalizability. \nFor limitation (1), I encourage the authors to explore and take advantage of some edge-related GCN models. For limitation (2), I suggest the authors do more analysis of the dataset properties to find the reason for the poor generalizability. For example, is it caused by a structural domain shift (e.g., small-world or scale-free properties, and so on) or a feature difference?\n\nTo the best of my knowledge, there is no potential negative societal impact.\n", " In this work, the authors design a novel siamese graph neural network GREED which can learn graph (GED) and subgraph edit distances (SED) in a property-preserving manner. The authors demonstrate the efficacy of their approach via empirical results. 
Here are the main strengths of the current work:\n1) The authors focus on graph similarity measurement, which is a very important problem in the graph ML community.\n2) The authors demonstrate through empirical results that the proposed approach works well and is significantly faster than the other baseline approaches.\n3) By using a siamese graph neural network that utilizes shared parameters to embed both the query and target graphs, the authors are able to significantly reduce the model parameters as well as the training time.\n\nHere are the main weaknesses of the current work:\n1) The proposed approach has limited novelty. \n2) The authors do not seem to compare their work with state-of-the-art recent approaches that could have been good baselines to compare against. More concretely, the authors do not cite and/or compare against the following state-of-the-art recent approaches, i.e., GOTSim [3], NeuroMatch [4], Graph embedding network (GEN) [5], and IsoNet [6].\n3) Some aspects of the paper lack details/discussion and the authors do not seem to have tried to experiment with different settings more before finalizing their approach. More concretely, the authors could have tried non-sharing of parameters between query and target graphs to see if it helps. Additionally, the authors could have experimented with alternatives for their Graph Isomorphism Network (GIN) module. Also, why did the authors specifically choose sum pool rather than other pooling techniques? The authors could also have tried approximate nearest neighbor (ANN) based approaches in their work. \n\n[3] Khoa D Doan, Saurav Manchanda, Suchismit Mahapatra, and Chandan K Reddy. Interpretable graph similarity computation via differentiable optimal alignment of node embeddings. pages 665–674, 2021.\n[4] Zhaoyu Lou, Jiaxuan You, Chengtao Wen, Arquimedes Canedo, Jure Leskovec, et al. Neural subgraph matching. arXiv preprint arXiv:2007.03092, 2020.\n[5] Yujia Li, Chenjie Gu, Thomas Dullien, Oriol Vinyals, and Pushmeet Kohli. Graph matching networks for learning the similarity of graph structured objects. In International Conference on Machine Learning, pages 3835–3845. PMLR, 2019.\n[6] Indradyumna Roy, Venkata Sai Velugoti, Soumen Chakrabarti, and Abir De. Interpretable neural subgraph matching for graph retrieval. AAAI, 2022. q1) Why did you not cite and compare your approach against the following state-of-the-art recent approaches, i.e., GOTSim [3], NeuroMatch [4], Graph embedding network (GEN) [5], and IsoNet [6]?\n\nq2) Did you try any other alternatives to the siamese graph neural network setting, i.e., non-sharing of parameters between query and target graphs, to see if it helps?\n\nq3) Did you try any alternatives to the Graph Isomorphism Network (GIN) module? \n\nq4) Why did you use sum pool specifically? The authors list just one limitation of their work.", " In this work, the authors propose a neural framework to learn two well-established metrics, GED and SED, for graph similarity computation. By designing a siamese graph neural network to learn GED and SED in a property-preserving manner, the authors also show that the learned functions are guaranteed to be a distance metric. Experiments demonstrate the effectiveness of the proposed framework in terms of accuracy and efficiency. ### Strengths\n1. The idea to unify GED and SED in one learning manner is kind of interesting.\n\n2. The proposed framework is reasonable to approximate GED and SED.\n\n3. 
The experiments are comprehensive.\n\n### Weaknesses\n1. There have been many works, as mentioned by the authors, on learning graph-based similarity metrics by GNNs; thus, in general, the novelty of this idea is limited. It is straightforward to include SED into the framework.\n\n2. Though useful, the proposed GNN framework is not novel, which has been proposed in previous works. Especially, the motivation to use Siamese GNN is not clear.\n\n3. The presentation of this work needs improvements:\n\n a) The title ``learning graph distance functions’’ is kind of overclaimed, since the work only learns graph edit distance, while there are other distances, like maximum common subgraph.\n\n b) The content of Section 2 is a bit too much, since most of it is established knowledge, not proposed by the authors. The authors can shorten Section 2 by moving some materials to the appendix. Moreover, it would be better if the authors give the references for GED, SED, and the properties.\n\n c) In Section 4, the refs of some tables are wrong, e.g., the text following ``4.2 Prediction Accuracy of SED and GED`` and ``4.3 Efficiency`` both refer to Tables 2a and 2b, which is confusing.\n\n d) In Section 4.6, the authors mentioned Table 3a is not found in the paper. \n\n e) In the Readme of the code, the submission information is still KDD 2022. In fact, I think this is the reason that the table references are messy in the paper. The authors should carefully proofread the paper for the NeurIPS submission.\n Besides the above weaknesses, one more question is from Section 4.5:\n\nFrom Figures i-k, we can see that Greed-NN shows better performance compared to Greed in two out of three datasets, i.e., on PubMed and CiteSeer. The authors should give some analysis of this phenomenon, which is very important to justify their claimed inductive bias.\n None" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5 ]
[ "WpWCiWF_1WO", "oMnb7bJdSFp", "My--wnMAr8", "btf4IOEF5Zg", "AdHGQK3DlMs", "tTUwZ99aE4K", "nKdbUmWg7qU", "PviUGBtsnrv", "wbGHWop2Ky", "nips_2022_3LBxVcnsEkV", "tTUwZ99aE4K", "-4SHMCrFgu", "JYiOfhBLo4Z", "0ptXGHvK7Nz", "jX8ADOCPjD", "nips_2022_3LBxVcnsEkV", "nips_2022_3LBxVcnsEkV", "nips_2022_3LBxVcnsEkV" ]
nips_2022_cLx3kbl2AI
Context-Based Dynamic Pricing with Partially Linear Demand Model
In today’s data-rich environment, context-based dynamic pricing has gained much attention. To model the demand as a function of price and context, the existing literature either adopts a parametric model or a non-parametric model. The former is easier to implement but may suffer from model mis-specification, whereas the latter is more robust but does not leverage many structural properties of the underlying problem. This paper combines these two approaches by studying the context-based dynamic pricing with online learning, where the unknown expected demand admits a semi-parametric partially linear structure. Specifically, we consider two demand models, whose expected demand at price $p$ and context $x \in \mathbb{R}^d$ is given by $bp+g(x)$ and $ f(p)+ a^\top x$ respectively. We assume that $g(x)$ is $\beta$-H{\"o}lder continuous in the first model, and $f(p)$ is $k$th-order smooth with an additional parameter $\delta$ in the second model. For both models, we design an efficient online learning algorithm with provable regret upper bounds, and establish matching lower bounds. This enables us to characterize the statistical complexity for the two learning models, whose optimal regret rates are $\widetilde \Theta(\sqrt T \vee T^{\frac{d}{d+2\beta}})$ and $\widetilde \Theta(\sqrt T \vee (\delta T^{k+1})^{\frac{1}{2k+1}})$ respectively. The numerical results demonstrate that our learning algorithms are more effective than benchmark algorithms, and also reveal the effects of parameters $d$, $\beta$ and $\delta$ on the algorithm's empirical regret, which are consistent with our theoretical findings.
Accept
The reviewers found the paper to be novel and interesting. The introduced model was found to be innovative and to lead to cleaner/better regret bounds. The only major concern that was raised and not resolved was a lack of technical novelty. However, it seems that this work provides new and relevant results to the existing literature. And it seems that clean and fundamental techniques are indeed adequate for achieving the results of this paper.
train
[ "XLFiBUKlhq", "WCHf7BnQCk9", "27vMXe6rFuPd", "2Hzdt7q2EAv", "qacRqSGC1eG", "Yr6aIwp4xxE", "f6JxRhvYivU", "3pUhOSUJ-7j", "XhGJ4KUSswu", "4gkKhNYgmgr", "sj6nd52yf78", "vK_7fkv4bs", "D6BGwpXZave" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We really appreciate the reviewer’s insightful comments that help our paper become stronger. We have learned quite a lot from the reviewer’s professionalism and patience. Thank you so much for your valuable time.", " The authors' response clarifies all of the points that I did not understand. As for [30] (now [31]), I tended to say that it covers the binary-feedback setting (as they assume the realized demand \"random\" instead of \"noisy\"), and I'm sorry for the confusing. The authors' explanation are informative, and I'm glad to see that you have already included it in your discussions on related works.\n\nI'm pretty sure that this work should get in. However, I'm not sure whether I should raise my score since there is a recalibration on the grades this year. I'll make necessary changes as soon as this gets clearer.", " 3. Assumptions in main theorems.\n\nWe’d thank the reviewer for this suggestion. We are sorry that we did not make the assumptions clear enough in Theorem 1 and Theorem 3, which might cause some confusion. In our revised version, we will state our assumptions of demand functions directly in Theorem 1 (i.e., $\\beta$-Hölder continuity of $g\\left(x\\right)$ and boundedness of $b$) and Theorem 3 (i.e., $k$-th order smoothness of $f\\left(p\\right)$ and boundedness of $a$). For the lower bounds in Theorem 2 and Theorem 4, all the assumptions have already been explicitly presented in the “sup” environment. \n\nIn the current version, when comparing our results with those in the existing literature, we have explicitly pointed out the differences of their assumptions and ours. Specifically, in the paragraph after Theorem 2, we have carefully discussed the difference in comparison with [23] and [8]. [23] consider almost the same model as ours but have a completely different benchmark when defining the regret. [8] do not assume the separable structure and only assume Lipschitz continuity instead of $\\beta$-Hölder continuity. Thus, when $\\beta=1$, our model is comparable with theirs. For this Lipschitz continuous model, we have shown the improvement of our regret bounds due to the separable structure. For a more detailed discussion, please refer to lines 186-199 in our original version. In the paragraph after Theorem 3, we have also shown the difference in assumptions between ours and that in [30] and discussed how our general model can be reduced to [30]. The details can be seen in lines 251-263. As a summary, we have also listed the key assumptions of demand models in the existing studies and our work in our Table 1. \n\nIn addition, we would like to mention that for DPLCE, even under a much stronger assumption that $a$ is exactly known, our regret bound is not improvable in terms of the dependency of $T$ and $\\delta$. Thank you again for your valuable suggestions, which make the presentation and exposition of our paper clearer. \n\n4. Model misspecification issue.\n\nThank you for raising the important issue of model misspecification, which is very crucial especially in the real-world applications.\n\nWhen the true demand is linear, both algorithms ADPLP and ADPLC can lead to a sublinear regret theoretically. Specifically, ADPLP will have a regret upper bound in the order $\\widetilde{\\mathcal{O}}\\left(\\sqrt T\\vee T^\\frac{d}{d+2\\beta}\\right)$ for any input $\\beta$, since linear function is $\\beta$-Hölder continuous for any $\\beta\\in\\left(0,1\\right]$. 
When $d=1$ and we choose $\beta\geq\frac{1}{2}$, we can even obtain the optimal regret $\widetilde{\Theta}\left(\sqrt T\right)$. Under other settings, though not optimal, ADPLP still eventually converges to the optimal solution. Since ADPLP does not have prior information about the linear structure, the sublinear regret is the best we can expect. For ADPLC, since linear functions are $k$th-order smooth, the regret upper bound can be guaranteed by Theorem 3. If we have more information, choosing $k\geq 3$ and $\delta=0$ or $k=\ln T$, we can again obtain the optimal $\widetilde{\Theta}\left(\sqrt T\right)$ regret. Therefore, when the true demand is linear, both algorithms are robust to model misspecification.\n\nWhen the true demand is purely non-parametric, the current regret notion is no longer a reasonable measure of the algorithm’s performance. Similar to the discussion in [F], we cannot expect a sublinear regret if the benchmark is the true optimal policy. In fact, how to define a reasonable measure in the mis-specified case is a fundamental question that is worthy of a separate study. Therefore, in the pure non-parametric setting, even if one can find some problem instances for which our algorithms ADPLP and ADPLC perform well and achieve a sublinear regret, these problem instances may not be representative enough. Considering that a thorough study of this issue is outside the scope of this paper, we prefer to leave it to future research. We hope that this treatment is fine with you. \n\n5. Limitation of this paper.\n\nThe main limitation we discussed in our paper is that Theorem 3 and Theorem 4 do not match with respect to dimension $d$ (see the paragraph just after Theorem 4). In the revised version, we will discuss more about the limitation due to the need for bounds on $b$, $\beta$ and $k$, as pointed out by the reviewer. We thank the reviewer for giving us a good opportunity to rethink the limitations of our work.\n", " 2. The choice of the unknown parameters.\n\n2.1. The bounds of $b$ and $\|a\|$. We admit that the knowledge of the bounds of $b$ and $\|a\|$ is a strong assumption and we will make this claim in the revised version, although it is a common assumption in the dynamic pricing literature (see, e.g., [D], [E], [F], [J]). There are already several empirical methods to estimate the bound of $b$ from the historical data. [F] provides an efficient data-driven approach based on the two-stage least squares (2SLS) method with a careful choice of the instrumental variable, together with a case study of the company, Oracle Retail. Their estimates of $b$ for different categories of fashion items were found to be of the same order of magnitude, lying in the range $[-1,-0.1]$. [G] also propose some useful tools and they analyze $1851$ price elasticities across different markets. They observe a mean price elasticity of $-2.62$ with $50$ percent of the observations between $-1$ and $-3$. We will add this discussion to our revised version. For the norm of $a$, we can repeat the same process.\n\n2.2. The choices of $\beta$ and $k$. We will make it clear in the revised version that $\beta$ and $k$ need to be known in advance and will point out explicitly that this is one of the limitations of our work. Although most of the existing literature assumes the knowledge of smoothness parameters (see, e.g., [H]), we completely agree that the exact values of these parameters are not available in practice. 
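As a concrete illustration of the data-driven estimation described next, here is a minimal grid-search sketch for $\beta$; the historical context-demand pairs and all helper names are hypothetical, and the analogous polynomial fit for $k$ is omitted:\n\n```python\nimport numpy as np\n\ndef fit_beta_by_grid_search(X, D, betas=np.linspace(0.1, 1.0, 10)):\n    # Fit c1 * ||x||^beta + c2 to historical (context, demand) pairs by least\n    # squares and keep the beta with the smallest in-sample squared error.\n    norms = np.linalg.norm(X, axis=1)\n    best_beta, best_err = None, np.inf\n    for beta in betas:\n        feats = np.stack([norms ** beta, np.ones_like(norms)], axis=1)\n        coef, *_ = np.linalg.lstsq(feats, D, rcond=None)\n        err = np.sum((feats @ coef - D) ** 2)\n        if err < best_err:\n            best_beta, best_err = beta, err\n    return best_beta\n```\n\nA cross-validated variant would score each candidate $\beta$ on held-out data rather than the in-sample error.\n\n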
When there is no such prior knowledge, we can estimate them from the historical data. For example, Section G of [A] provides a cross-validation method to determine the smoothness parameters, which can also be applied to our setting after slight modifications. Also, we can do a grid search for $\beta$ and $k$, and find the ones that fit the historical data best by solving regressions with polynomial functions (i.e., $c_1\|x\|^{\beta}+c_2$ and $c_3p^k+c_4p^{\left\lfloor k\right\rfloor}+\ldots+c_{\left\lfloor k\right\rfloor+1}$, respectively), as in the sketch above. \n\nWe also would like to point out that in a recent paper [I], the authors establish a very important negative result that designing algorithms that adapt to unknown smoothness of payoff functions is generally impossible. This implies that in our setting, if we do not have any knowledge about $\beta$ and $k$, it is virtually impossible to design an efficient algorithm. [I] point out that if an extra crucial self-similarity assumption holds, it is possible to design efficient algorithms for bandits with discrete arms. However, whether the self-similarity condition can hold in our dynamic pricing setting remains unclear, and how to extend [I]'s algorithms to a continuous price space is beyond the scope of this paper. Nevertheless, we believe that this is an important future research direction, and will provide more discussion of this issue in our conclusion section.\n\n[D] Besbes, O., Zeevi, A. (2015). On the (surprising) sufficiency of linear models for dynamic pricing with demand learning. Management Science, 61(4), 723-739.\n\n[E] Miao, S., Chen, X., Chao, X., Liu, J., Zhang, Y. (2022). Context‐based dynamic pricing with online clustering. Production and Operations Management.\n\n[F] Nambiar, M., Simchi-Levi, D., Wang, H. (2019). Dynamic learning and pricing with model misspecification. Management Science, 65(11), 4980-5000.\n\n[G] Bijmolt, T. H., Van Heerde, H. J., Pieters, R. G. (2005). New empirical generalizations on the determinants of price elasticity. Journal of Marketing Research, 42(2), 141-156.\n\n[H] Hu, Y., Kallus, N., Mao, X. (2020). Smooth contextual bandits: Bridging the parametric and non-differentiable regret regimes. In Conference on Learning Theory (pp. 2007-2010). PMLR.\n\n[I] Gur, Y., Momeni, A., Wager, S. (2022). Smoothness-adaptive contextual bandits. Operations Research.\n\n[J] Keskin, N. B., Zeevi, A. (2014). Dynamic pricing with an unknown demand model: Asymptotically optimal semi-myopic policies. Operations Research, 62(5), 1142-1167.\n", " Thank you for your insightful comments and valuable feedback. We next provide our detailed response to your questions and comments.\n\n1. Technical novelty compared with the binary choice model\n\nThank you very much for pointing out the two important papers [A] and [B], and we find that [C] is also closely related in this stream. We are sorry about missing these works and have read them carefully. Below we give a detailed comparison between these works and ours. We will carefully add this discussion to our related work and Table 1 in our revised version.\n\nAs mentioned by the reviewer, [A], [B] and [C] are based on a binary choice model with an unknown noise distribution and a linear customer valuation function. Specifically, every customer buys the product with probability $1-F\left(p_t-\theta^\top x_t\right)$, where $F$ is the CDF of the random noise. [A], [B] and [C] consider a very important question of how to estimate $\theta$ without knowing $F$. 
[A] propose an impressive estimator of $\theta$ in their Eq. (3.1), [B] cleverly integrate the ideas of discretization and the EXP4 algorithm, and [C] reduce the challenge to a logistic regression. Conceptually, one can transform our demand model into the one where each customer buys the product with probability $bp_t+g\left(x_t\right)$ (for DPLPE) or $f\left(p_t\right)+a^\top x_t$ (for DPLCE), assuming $bp_t+g\left(x_t\right)$ and $f\left(p_t\right)+a^\top x_t$ fall in $\left[0,1\right]$. Although the unknown noise distribution is not our challenge as in [A], [B] and [C], we still need to estimate $b$ and the non-parametric function $g\left(\cdot\right)$ at the same time for DPLPE (or $a$ and the non-parametric function $f\left(\cdot\right)$ for DPLCE). Despite the additive structure, it is still challenging to decouple $bp_t$ and $g\left(x_t\right)$ for DPLPE (or $f\left(p_t\right)$ and $a^\top x_t$ for DPLCE) with only a buying/not-buying feedback. In short, [A], [B] and [C] need to estimate $F\left(\cdot\right)$ and $\theta$ with only one feedback, whereas we need to estimate $b$ and $g\left(\cdot\right)$ (or $a$ and $f\left(\cdot\right)$) with only one feedback. We completely agree that [A], [B] and [C] consider an important and challenging question, but would also like to point out that our models are faced with several different challenges. In our revised version, we will point out these three important papers and compare these works with ours carefully.\n\nFurthermore, the binary model and our model do not contradict each other. There are several middle grounds between these two models, which may also be of interest. An example is the binary model in which $F\left(\cdot\right)$ is known but the valuation function is non-parametric (i.e., $g\left(x_t\right)$ as in our model), or even one in which $F\left(\cdot\right)$ is unknown and the valuation function is non-parametric.\n\n[A] Jianqing Fan, Yongyi Guo, Mengxin Yu (2021), Policy Optimization Using Semi-parametric Models for Dynamic Pricing, https://arxiv.org/abs/2109.06368.\n\n[B] Jianyu Xu, Yu-Xiang Wang (2022), Towards Agnostic Feature-based Dynamic Pricing: Linear Policies vs Linear Valuation with Unknown Noise, AISTATS 2022.\n\n[C] Luo, Y., Sun, W. W. (2021). Distribution-free contextual dynamic pricing. arXiv preprint arXiv:2109.07340.\n\n", " 3. One missing stream of related works.\n\nThank you very much for pointing out the three important papers. We are sorry about missing these works and have read them carefully. Below we give a detailed comparison between these works and ours, and we will add these discussions to the related work and Table 1 in our revised version. As mentioned by the reviewer, [F], [G] and [H] are based on a binary choice model with an unknown noise distribution and a linear customer valuation function. Specifically, every customer buys the product with probability $1-F\left(p_t-\theta^\top x_t\right)$, where $F$ is the CDF of the random noise. [F], [G] and [H] consider a very important question of how to estimate $\theta$ without knowing $F$. [F] propose an impressive estimator of $\theta$ in their Eq. (3.1), [G] cleverly integrate the ideas of discretization and the EXP4 algorithm, and [H] reduce the challenge to a logistic regression. 
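To make the two feedback models concrete, here is a toy simulation sketch; the logistic choice of $F$ and all names are illustrative assumptions of ours, not the constructions used in these papers:\n\n```python\nimport numpy as np\n\nrng = np.random.default_rng(0)\n\ndef buy_binary_valuation(p, x, theta):\n    # Binary choice model: purchase probability 1 - F(p - theta^T x),\n    # illustrated with a logistic noise CDF F.\n    F = lambda z: 1.0 / (1.0 + np.exp(-z))\n    return rng.random() < 1.0 - F(p - theta @ x)\n\ndef buy_partially_linear(p, x, b, g):\n    # Our DPLPE demand recast as a purchase probability b*p + g(x),\n    # assuming b*p + g(x) has been scaled to lie in [0, 1].\n    return rng.random() < b * p + g(x)\n```\n\n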
Conceptually, one can transform our demand model into the one where each customer buys the product with probability $bp_t+g\left(x_t\right)$ (for DPLPE) or $f\left(p_t\right)+a^\top x_t$ (for DPLCE), assuming $bp_t+g\left(x_t\right)$ and $f\left(p_t\right)+a^\top x_t$ fall in $\left[0,1\right]$. Although the unknown noise distribution is not our challenge as in [F], [G] and [H], we still need to estimate $b$ and the non-parametric function $g\left(\cdot\right)$ at the same time for DPLPE (or $a$ and the non-parametric function $f\left(\cdot\right)$ for DPLCE). Despite the additive structure, it is still challenging to decouple $bp_t$ and $g\left(x_t\right)$ for DPLPE (or $f\left(p_t\right)$ and $a^\top x_t$ for DPLCE) with only a buying/not-buying feedback. In short, [F], [G] and [H] need to estimate $F\left(\cdot\right)$ and $\theta$ with only one feedback, whereas we need to estimate $b$ and $g\left(\cdot\right)$ (or $a$ and $f\left(\cdot\right)$) with only one feedback. We completely agree that [F], [G] and [H] consider an important and challenging question, but would also like to point out that our models are faced with several different challenges. In our revised version, we will point out these three important papers and compare these works with ours carefully.\n\nFurthermore, the binary model and our model do not contradict each other. There are several middle grounds between these two models, which may also be of interest. An example is the binary model in which $F\left(\cdot\right)$ is known but the valuation function is non-parametric (i.e., $g\left(x_t\right)$ as in our model), or even one in which $F\left(\cdot\right)$ is unknown and the valuation function is non-parametric.\n\n[F] Jianqing Fan, Yongyi Guo, Mengxin Yu (2021), Policy Optimization Using Semi-parametric Models for Dynamic Pricing, https://arxiv.org/abs/2109.06368.\n\n[G] Jianyu Xu, Yu-Xiang Wang (2022), Towards Agnostic Feature-based Dynamic Pricing: Linear Policies vs Linear Valuation with Unknown Noise, AISTATS 2022.\n\n[H] Luo, Y., Sun, W. W. (2021). Distribution-free contextual dynamic pricing. arXiv preprint arXiv:2109.07340.\n\n4. Mismatch of the dimension $d$ in the regret bound.\n\nWe thank the reviewer for this suggestion. In the revised version, we will be more careful when we talk about the matching of the lower and upper bounds. \n", " Thank you for your insightful comments and valuable feedback. We next provide our detailed response to your questions and comments.\n\n1. Complexity of solving the optimization problem in Algorithm 2 (line 10).\n\nThank you for raising this important question. We have carefully looked into the computational issue of our algorithm. We would like to point out that the objective function in lines 10-11 of our Algorithm 2 is generally non-concave. Note that even in the setting where the demand is linear in price and there is no context, such an optimization problem is not concave (see Figure 1 in [C]). Nevertheless, the optimization problem in lines 10-11 of our Algorithm 2 is simply a univariate optimization problem (i.e., the price $p$ is the only decision variable). Since $\langle \hat\theta_{t,i} , \varphi(p)\rangle$ and $\phi(p,x_t)^\top V_{t,i}^{-1}\phi(p,x_t)$ are both polynomial functions of $p$ with explicit expressions, the objective function is easily evaluated. Therefore, we can discretize the $\textbf{I}_i$ to a satisfactory granularity and compare the objective function at each discretized point. 
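A minimal sketch of this discretized search, assuming an optimistic-revenue objective of the form $p\cdot(\langle \hat\theta_{t,i}, \varphi(p)\rangle + \text{confidence width})$ (the exact weights differ in the paper, and all function names below are hypothetical):\n\n```python\nimport numpy as np\n\ndef ucb_price_on_grid(phi, theta_hat, V_inv, x_t, p_lo, p_hi, n_grid=1000):\n    # Evaluate the optimistic revenue on a uniform grid over the price\n    # interval [p_lo, p_hi] and return the maximizing price.\n    best_p, best_val = p_lo, -np.inf\n    for p in np.linspace(p_lo, p_hi, n_grid):\n        feats = phi(p, x_t)                     # polynomial feature vector\n        width = np.sqrt(feats @ V_inv @ feats)  # confidence-interval width\n        val = p * (feats @ theta_hat + width)   # optimistic revenue\n        if val > best_val:\n            best_p, best_val = p, val\n    return best_p\n```\n\nEach evaluation is a small polynomial computation, so the runtime is linear in the grid size.\n\n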
Besides, since $N$ is relatively large, the length of $\textbf{I}_i$ is relatively small, which can empirically benefit the speed of solving the optimization problem.\n\nWe have also investigated the computational issues of the algorithms based on the OFU principle in the existing literature. In fact, for almost all the algorithms based on the OFU principle with a continuous action space, there exist similar computational challenges (see, e.g., [A], [B]). Because it involves a high-dimensional action space, the algorithm in [A] can be even more difficult to compute. [B] need to solve almost exactly the same optimization problem as our algorithm, but do not provide a careful discussion of the complexity. Thus, in general, we are not expecting an efficient algorithm theoretically and believe that the computational complexity of the algorithms based on OFU is a common challenge in the literature. \n\n[A] Abbasi-Yadkori, Y., Pál, D., Szepesvári, C. (2011). Improved algorithms for linear stochastic bandits. Advances in Neural Information Processing Systems, 24.\n\n[B] Wang, Y., Chen, B., Simchi-Levi, D. (2021). Multimodal dynamic pricing. Management Science, 67(10):6136–6152.\n\n[C] Bu, J., Simchi-Levi, D., Xu, Y. (2022). Online pricing with offline data: Phase transition and inverse square law. Forthcoming in Management Science. \n\n2. The knowledge of $\beta$ and $k$.\n\nThank you for reminding us of the unknown parameters $\beta$ and $k$. We will make it clear in the revised version that $\beta$ and $k$ need to be known in advance and will point out explicitly that this is one of the limitations of our work. Although most of the existing literature assumes the knowledge of smoothness parameters (see, e.g., [B] and [D]), we completely agree that the exact values of these parameters are not available in practice. When there is no such prior knowledge, we can estimate them from the historical data. For example, Section G of [F] provides a cross-validation method to determine the smoothness parameters, which can also be applied to our setting after slight modifications. Also, we can do a grid search for $\beta$ and $k$, and find the ones that fit the historical data best by solving regressions with polynomial functions (i.e., $c_1\|x\|^{\beta}+c_2$ and $c_3p^k+c_4p^{\left\lfloor k\right\rfloor}+\ldots+c_{\left\lfloor k\right\rfloor+1}$, respectively). \n\nWe also would like to point out that in a recent paper [E], the authors establish a very important negative result that designing algorithms that adapt to unknown smoothness of payoff functions is generally impossible. This implies that in our setting, if we do not have any knowledge about $\beta$ and $k$, it is virtually impossible to design an efficient algorithm. [E] point out that if an extra crucial self-similarity assumption holds, it is possible to design efficient algorithms for bandits with discrete arms. However, whether the self-similarity condition can hold in our dynamic pricing setting remains unclear, and how to extend [E]'s algorithms to a continuous price space is beyond the scope of this paper. Nevertheless, we believe that this is an important future research direction, and will provide more discussion of this issue in our conclusion section.\n\n[D] Hu, Y., Kallus, N., Mao, X. (2020). Smooth contextual bandits: Bridging the parametric and non-differentiable regret regimes. In Conference on Learning Theory (pp. 2007-2010). PMLR.\n\n[E] Gur, Y., Momeni, A., Wager, S. (2022). Smoothness-adaptive contextual bandits. 
Operations Research.\n\n[F] Jianqing Fan, Yongyi Guo, Mengxin Yu (2021), Policy Optimization Using Semi-parametric Models for Dynamic Pricing, https://arxiv.org/abs/2109.06368.\n\n(To be continued...)\n", " Thank you for your insightful comments and valuable feedback. We next provide our detailed response to your questions and comments.\n\nIn this work, we focus on working on the dependence on $T$ in Theorem 4, since in a real-world application, the dimension of context $d$ is relatively very small comparing with $T$. Admittedly, matching the upper bound with the lower bound in terms of the dependency of dimension $d$ is important, especially when $d$ is very large. This question is indeed one of the important future research directions, and we are working along this line. \n\nFor the numerical study, we want to provide some more insights on the impact of parameters like $\\beta$, $\\delta$ from the empirical side, because they are hardly discussed in the previous literature. We will shorten the numerical part to make it more compact. \n\nWe feel really sorry that we do not provide enough technical details due to the space limit. We will highlight more technical details in our revised version as follows.\n\n1. For Theorem 1, we will modify the paragraph right after the theorem (from line 165 in the original version) as following: \nThe bound in Theorem 1 consists of two terms $\\widetilde{\\mathcal{O}}\\left(\\sqrt T\\right)$ and $\\widetilde{\\mathcal{O}}\\left(T^\\frac{d}{d+2\\beta}\\right)$. The first term $\\widetilde{\\mathcal{O}}\\left(\\sqrt T\\right)$ is caused by the random shocks that we introduce to estimate b, capturing the complexity of learning b. …… The second term $\\widetilde{\\mathcal{O}}\\left(T^\\frac{d}{d+2\\beta}\\right)$ is incurred by our inaccuracy of approximating the non-parametric function $g\\left(\\cdot\\right)$ with a constant in each small bin.\n\n2. For Theorem 2, we have highlighted the main idea and the techniques that we use as following: \nTo prove the second lower bound $\\widetilde{\\Omega}\\left(T^\\frac{d}{d+2\\beta}\\right)$, we construct a series of Hölder continuous functions in $\\left[0,1\\right]^d$ that are difficult to distinguish from each other. We then apply the Bretagnolle–Huber inequality (see [6]) and KL divergence arguments to bound the regret of any algorithm from below.\n\n3. For Theorem 3, we will highlight the technical background as following just after Theorem 3: \nBy the idea of optimism over OFU, we can upper bound the regret by the length of the confidence interval and then applied the well-known concentration inequality from [A] to well control the growth of the confidence interval.\n\n4. For Theorem 4, we have mentioned the main idea as following: The proof of the second lower bound $\\Omega((\\delta T^{k+1})^{\\frac{1}{2k+1}})$ relies on constructing a series of $k$th-order smooth functions with parameter $\\delta$ and applying Pinsker's inequality to bound the total variation of two probability measures through their KL divergence.\n\n[A] Abbasi-Yadkori, Y., Pál, D., Szepesvári, C. (2011). Improved algorithms for linear stochastic bandits. Advances in neural information processing systems, 24.\n\nThanks to the reviewer again for raising these important points and giving us a chance to further improve our paper’s flow.\n", " Thank you for your insightful comments and valuable feedbacks. We next provide our detailed response to your questions and comments.\n\n1. 
Binary Demand structure.\n\nWe really want to thank the reviewer for bringing this important issue up. We first want to mention that the demand in [30] ([31] in the revised version) is not binary, but is a continuous variable in [0,1]. Below we give a detailed comparison between the binary demand structure and ours. We will also discuss this carefully in our related work and Table 1.\n\nIn the binary model, the customer buys the product with probability $1-F\left(p_t-\theta^\top x_t\right)$, where $F$ is the CDF of the random noise of the customer’s valuation. This stream of works starts from the case where $F$ is known in advance (see [A], [B] and [C]). The knowledge of $F\left(\cdot\right)$ reduces the complexity of solving the problem since $\theta$ is the only parameter that needs estimating. Recently, [D], [E] and [F] consider a very important question of how to estimate $\theta$ without knowing $F$. [D] propose an impressive estimator of $\theta$ in their Eq. (3.1), [E] cleverly integrate the ideas of discretization and the EXP4 algorithm, and [F] reduce the challenge to a logistic regression. In our model, the distribution of the random shock makes no difference to the optimal pricing strategy. Thus, whether the distribution is parametric/known or not is not important in our formulation. As pointed out by [G], the assumption about parametric and non-parametric noise is indeed decisive for the binary reward setting.\n\nIn order to compare with the binary choice model more clearly, conceptually, one can transform our demand model into the one where each customer buys the product with probability $bp_t+g(x_t)$ (for DPLPE) or $f(p_t)+a^\top x_t$ (for DPLCE), assuming $bp_t+g(x_t)$ and $f(p_t)+a^\top x_t$ fall in $[0,1]$. Although the unknown $F$ is not our challenge as in [D], [E] and [F], we still need to estimate $b$ and the non-parametric function $g(\cdot)$ (or $a$ and the non-parametric function $f\left(\cdot\right)$) at the same time. Despite the additive structure of the price effect and the context effect, it is still challenging to decouple $bp_t$ and $g\left(x_t\right)$ (or $f\left(p_t\right)$ and $a^\top x_t$) with only a buying/not-buying feedback from the customer. In short, [D], [E] and [F] need to estimate $F\left(\cdot\right)$ and $\theta$ with only one feedback, and we need to estimate $b$ and $g\left(\cdot\right)$ (or $a$ and $f\left(\cdot\right)$) with only one feedback. \n\nFurthermore, the binary model and our model do not contradict each other. There are several middle grounds between these two models, which are of great interest to us and which we also list as future work. An example is the binary model in which $F\left(\cdot\right)$ is known but the valuation function is non-parametric (i.e., $g\left(x_t\right)$ as in our model), or even one in which $F\left(\cdot\right)$ is unknown and the valuation function is non-parametric.\n\n[A] Xu, J., Wang, Y. X. (2021). Logarithmic regret in feature-based dynamic pricing. Advances in Neural Information Processing Systems, 34, 13898-13910.\n\n[B] Javanmard, A., Nazerzadeh, H. (2019). Dynamic pricing in high-dimensions. The Journal of Machine Learning Research, 20(1), 315-363.\n\n[C] Javanmard, A. (2017). Perishability of data: dynamic pricing under varying-coefficient models. 
The Journal of Machine Learning Research, 18(1), 1714-1744.\n\n[D] Jianqing Fan, Yongyi Guo, Mengxin Yu (2021), Policy Optimization Using Semi-parametric Models for Dynamic Pricing, https://arxiv.org/abs/2109.06368.\n\n[E] Jianyu Xu, Yu-Xiang Wang (2022), Towards Agnostic Feature-based Dynamic Pricing: Linear Policies vs Linear Valuation with Unknown Noise, AISTATS 2022.\n\n[F] Luo, Y., Sun, W. W. (2021). Distribution-free contextual dynamic pricing. arXiv preprint arXiv:2109.07340.\n\n[G] Wang, H., Talluri, K., Li, X. (2021). On Dynamic Pricing with Covariates. arXiv preprint arXiv:2112.13254.\n\n2. Knowledge of the proxy variance of the noise ${\sigma}^{2}$\n\nThank you for carefully reading our paper and raising this question. You are completely right that in ADPLC, the variance proxy $\sigma^2$ needs to be known as an input to our algorithm. In ADPLP, we do not need to know $\sigma^2$. In the revised version, we will clearly mention this point. As in our response to the first question, the assumption of a parametric or non-parametric noise distribution is not essential in our model, but is indeed decisive for the binary reward setting. In the revised version, we will make the difference between these two modelling approaches clear and discuss the roles of the noise parameter and distribution in our setting in more detail.\n\n3. Notation and typos.\n\nWe are sorry about our carelessness. We will double-check our paper carefully and fix the typos, including the one mentioned by the reviewer. We will also change the notation for expected demand to $\mu$.\n\n", " The paper studies an online learning and contextual pricing problem with semi-parametric partially linear demand models, where the demand function is the sum of a linear function with unknown coefficient(s) of price (or context, resp.) and an unknown function of the context (or price, resp.). For the above two demand models, the paper develops two corresponding online pricing algorithms with provable regret upper bounds and matching lower bounds with respect to the horizon $T$. Strengths:\nOverall, the paper is well-written. \nFormulation-wise, the paper generalizes the commonly used Lipschitz continuity assumption to Hölder continuity. Importantly, the results show that the additional structure of linearity indeed improves the regret bound. \nAlgorithm-wise, the paper combines several online learning techniques, including binning and approximation, random shocks in pricing, and an upper confidence bound algorithm, so as to estimate the non-parametric part of the demand functions. \nAnalysis-wise, the paper gives a comprehensive analysis of the statistical complexity/regret of the formulated problems.\n\nWeaknesses: \n 1. In Algorithm 2, when solving the UCB optimization problem in line 10, what is its complexity? And in general how to solve it (efficiently)?\n 2. The algorithms seem to require the knowledge of $\beta$ and $k$; the authors should state these explicitly in the assumptions and discuss how to estimate these quantities when there is no such prior knowledge. \n\nMinor comment:\nUpon reviewing the literature, the paper misses one stream of works related to semi-parametric dynamic pricing (see below). The authors should discuss their contribution relative to these works.\n- Policy optimization using semiparametric models for dynamic pricing (Fan et al. 
2021)\n- Distribution-free contextual dynamic pricing (Luo and Sun, 2021)\n- Towards agnostic feature-based dynamic pricing: Linear policies vs linear valuation with unknown noise (Xu and Wang, 2022)\n\nAlso, the authors should mention that the matching of lower bounds refers to the dependency on $T$ but not with respect to $d$. As in the discussion of Theorem 4, one of the lower bounds doesn't meet the corresponding upper bound in the context dimension. I hope the authors can address in the rebuttal the two points mentioned in the weaknesses above, and I may adjust my final rating based on the response. NA.", " This paper studies two online contextual dynamic pricing problem settings: DPLPE, where demand is linear wrt price, and DPLCE, where demand is linear wrt context. For the DPLPE problem, they assume the realized demand $D_t = bp_t+g(x_t) + \epsilon_t$ with $g(x)$ being $\beta$-Hölder continuous and $\epsilon_t$ being sub-Gaussian noise. They propose an ADPLP algorithm that adopts space-binning and random-shock techniques. With ADPLP, they achieve an optimal $\tilde{O}(\sqrt{T}\vee T^{\frac{d}{d+2\beta}})$ regret up to logarithmic factors by proving both the upper and the lower regret bounds, where $d$ is the dimensionality of features. For the DPLCE problem, they assume the realized demand $D_t = f(p_t) +a^{\top}x_t + \epsilon_t$, with $f(p)$ being $k^{\text{th}}$-order smooth with a small parameter $\delta$ that could be $T$-dependent. They present an ADPLC algorithm that adopts a local polynomial approximation and a biased linear contextual bandit with an optimistic OFU strategy. With ADPLC, they achieve an optimal $\tilde{O}_d(\sqrt{T}\vee(\delta T^{k+1})^{\frac1{2k+1}})$ regret up to logarithmic factors, by proving both the upper and the lower regret bounds. Finally, they conduct numerical experiments and show that the simulation results of ADPLP and ADPLC outperform all benchmarks. Strengths:\n\n(1) This work generalizes the problem settings of both linear demand and linear context problems. For DPLPE, they consider the $\beta$-Hölder class instead of only the Lipschitz class that was broadly assumed by previous works, and they improve the comparison by replacing the linear benchmark of [8] and [23] with the true demand function. For DPLCE, they consider $k$-th-order smoothness with not only integer but also non-integer $k$'s, and they also emphasize the role that $\delta$ plays instead of treating it as a constant as previous works did.\n\n(2) For both DPLPE and DPLCE, they design algorithms with provable optimal regret bounds. These results are significant as they not only match the order of $T$ but also those of $\beta$ and $\delta$. Moreover, for DPLCE, their upper and lower bound match those in [30], indicating that a linear context added on the demand curve might not require substantially more information to learn.\n\n(3) Their numerical experiments are comprehensive and the results are well-displayed.\n\nWeaknesses:\n\n(1) Some related literature should be discussed in more detail. For example, the stream of binary demand models. As this work also assumes noisy feedback, the only difference of binary feedback is that the noise distribution is dependent on $p$ and $x$ while this work assumes i.i.d. noise. Since the closely-related work [30] also assumes binary feedback, the results of this work do not actually cover those in [30]. 
Overall, the binary feedback is an important property in many pricing problem settings, and I suggest the authors be aware of this issue and place this paper within the related literature with more precision.\n\n Questions:\n\n(1) See the related work issue I mentioned above. Maybe a good idea is to list these key properties/assumptions in categories (like what they did in [30] Table 1).\n\n(2) Notice that ADPLC requires knowledge of the noise std $\sigma$. Does ADPLP require it as well? It is sometimes decisive to assume a parametric or non-parametric noise in an online learning problem, so I suggest the authors specify these properties in detail and briefly discuss how they contribute to your analysis and results.\n\nMinor suggestions:\n\n(1) Avoid notation repetition: e.g., $d$ for feature dimension and also for expected demand.\n\n(2) A few typos: e.g. line 42: $x_t\in[0,1]^d$.\n Limitations and potential extensions of this work are well discussed. \nThere is no discussion of social impact as it is a work of theory. However, I indeed suggest the authors consider any potential ethical issues that might occur in a pricing problem with these assumptions you have specified.", " This paper studies context-based dynamic pricing, where the unknown expected demand admits a semi-parametric partially linear structure. Two special cases of semi-parametric partially linear models (Linear Pricing Effect and Linear Contextual Effect) are considered. Two new algorithms (ADPLP and ADPLC) are proposed and their regret upper bounds and matching lower bounds are established. \n Strengths\n\n(1) Despite being a theoretical paper, it is well written and easy to follow. \n\n(2) The proved matching lower bound is useful for understanding the limit of the considered problem. \n\n\nWeaknesses\n\n(1) Technical novelty: The proposed partially linear demand model is a natural extension of the existing linear demand model and nonparametric demand model. On the other hand, a similar partially linear demand model has been proposed in the dynamic pricing literature; see below. None of them was mentioned in the paper. In fact, these papers consider a binary choice model which is arguably more challenging than the one considered in this paper. \n\nJianqing Fan, Yongyi Guo, Mengxin Yu (2021), Policy Optimization Using Semi-parametric Models for Dynamic Pricing, https://arxiv.org/abs/2109.06368. \n\nJianyu Xu, Yu-Xiang Wang (2022), Towards Agnostic Feature-based Dynamic Pricing: Linear Policies vs Linear Valuation with Unknown Noise, AISTATS 2022. \n\n(2) The proposed algorithm and theoretical analysis require the knowledge of some true parameters, e.g., the upper and lower bounds of the price, the upper and lower bounds of the true price coefficient, and the continuity parameter $\beta$ of the unknown function $g$. In practical pricing applications, knowing upper and lower bounds on the price is arguably a mild assumption. But it is less justifiable to know the bound of $b$ and $\beta$ in practice. \n\n(3) It is unclear what assumptions were made in the main Theorems (Theorems 1-4). In addition, when the authors compare the proved regret bounds with those in the literature, it is also important to discuss if their model assumptions are comparable. It would be more convincing if the faster regret bound is not obtained under much stronger assumptions.\n\n(4) The experiments of this paper only study the performance when the true model is the proposed partially linear demand model. No model misspecification was studied. 
It is unclear if it is always safe to use the proposed algorithm in practical pricing applications. \n\n~~~~~~~~~~~~~\nAfter rebuttal: Thanks to the authors for carefully addressing all my concerns. My last three comments have been nicely addressed. I have raised my rating to Borderline Accept. On the other hand, while I understand the difference (e.g., nonparametric on noise vs. nonparametric on covariates; binary feedback vs. continuous feedback) of this paper compared to existing pricing literature with partially linear demand models, I am not fully convinced by the "technical novelty" beyond them. So I choose to only increase the rating to Borderline Accept.\n\n\n (1) It is important to discuss the technical novelty of this paper beyond existing semi-parametric dynamic pricing papers. \n\n(2) It is important to provide some guidance on how to choose the unknown parameters (bounds of $b$, $\beta$, etc) in a data-driven way. It is also helpful to study the sensitivity of the choice of these parameters in experiments. \n\n(3) It is helpful to list all assumptions used for the theorems. When comparing with regret bounds in the literature, it is important to also mention the difference in the assumptions. \n\n(4) It is important to study the model misspecification case, e.g., when the true model is linear or purely nonparametric, in the experiments. \n\n\n No potential negative societal impact was discussed in the checklist. No limitation was discussed in the paper. ", " This paper studies the contextual dynamic pricing problem under partially linear structural assumptions. In particular, the authors consider two demand models **DPLPE** and **DPLCE**. The former assumes a linear term in the price with an additive $\beta$-Holder continuous function of the context. The latter assumes a linear term in the context with an additive $k$-th order smooth function of the price.\nThe authors present two online algorithms and provide regret upper bound as well as lower bound guarantees for each of the models:\n1. **DPLPE**:\n - upper bound of $\mathcal{O}\left(\sqrt{T} + \ln T\cdot T^{\frac{d}{d+2\beta}}\right)$.\n - lower bound of $\Omega\left(\max(\sqrt{T}, T^{\frac{d}{d+2\beta}})\right)$.\n2. **DPLCE**:\n - upper bound of $\mathcal{O}\left(d\ln T(\sqrt{T} + (\delta T^{k+1})^{\frac{1}{2k+1}})\right)$.\n - lower bound of $\Omega\left(\max(\sqrt{T},(\delta T^{k+1})^{\frac{1}{2k+1}})\right)$. **Strengths**:\n- The paper in general is well written and its contribution over previous works is presented clearly by the authors.\n- The models considered in the paper are more general compared to known works and they provide new results on the problem of dynamic pricing.\n- I find the algorithm design of both algorithms to be interesting and novel as it combines multiple ideas from previous works.\n- The authors also provide lower bound guarantees which enhance the main results of the paper.\n\n**Weaknesses**:\n- As stated by the authors the lower bound in Theorem 4 is missing the dependence on $d$ which makes this lower bound less significant compared to the lower bound in Theorem 2.\n- I believe that the problem setup can be presented more clearly for the reader with better formatting.\n- I feel like the main text could benefit from a more comprehensive technical overview. As this paper presents theoretical results I find the Numerical Study section redundant and would suggest providing more background on the proof techniques used in the paper. 
I don't see any potential negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "WCHf7BnQCk9", "XhGJ4KUSswu", "vK_7fkv4bs", "vK_7fkv4bs", "vK_7fkv4bs", "4gkKhNYgmgr", "4gkKhNYgmgr", "D6BGwpXZave", "sj6nd52yf78", "nips_2022_cLx3kbl2AI", "nips_2022_cLx3kbl2AI", "nips_2022_cLx3kbl2AI", "nips_2022_cLx3kbl2AI" ]
nips_2022_SPiQQu2NmO9
Target alignment in truncated kernel ridge regression
Kernel ridge regression (KRR) has recently attracted renewed interest due to its potential for explaining the transient effects, such as double descent, that emerge during neural network training. In this work, we study how the alignment between the target function and the kernel affects the performance of the KRR. We focus on the truncated KRR (TKRR) which utilizes an additional parameter that controls the spectral truncation of the kernel matrix. We show that for polynomial alignment, there is an over-aligned regime, in which TKRR can achieve a faster rate than what is achievable by full KRR. The rate of TKRR can improve all the way to the parametric rate, while that of full KRR is capped at a sub-optimal value. This shows that target alignment can be better leveraged by utilizing spectral truncation in kernel methods. We also consider the bandlimited alignment setting and show that the regularization surface of TKRR can exhibit transient effects including multiple descent and non-monotonic behavior. Our results show that there is a strong and quantifiable relation between the shape of the alignment spectrum and the generalization performance of kernel methods, both in terms of rates and in finite samples.
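To make the abstract's notion of spectral truncation concrete, here is a minimal NumPy sketch of a truncated kernel ridge fit on the training points. It follows the generic construction (keep the top-$r$ eigenpairs of the normalized kernel matrix and shrink each by a ridge filter); the paper's exact estimator and normalization may differ, so treat this as an assumption-laden illustration.

```python
import numpy as np

def tkrr_fitted_values(K, y, r, lam):
    # With K / n = sum_k mu_k u_k u_k^T, keep only the top-r components
    # and shrink each coordinate of y by the ridge factor mu_k / (mu_k + lam).
    n = len(y)
    mu, U = np.linalg.eigh(K / n)            # eigenpairs, ascending order
    mu, U = mu[::-1][:r], U[:, ::-1][:, :r]  # top-r eigenpairs
    coords = U.T @ y                         # alignment of y with eigenvectors
    return U @ (mu / (mu + lam) * coords)    # truncated, shrunk fit
```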
Accept
The reviewers all found the paper interesting and agreed that it contributes valuable new results. The reviewers have made several useful constructive comments that the authors are strongly encouraged to take into account.
train
[ "hkPv73gKmGP", "jsC6hlSCx8", "Xl434e-o_tH", "NOFXFiYc0ZK", "JKSMQfxZ3Yb", "hOUiiul7DS", "3CU0fDCyu7t", "kmkZ2yojcJ9", "7gphLjflgPu", "ZPnP0C07US", "35rU0hPgGK1", "12-wvZ6iVLH", "4V9UQLrP4IX", "B2swk8lUO8Z", "Hl0l0xPwHaN" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you! We are happy that our comments clarified your questions. Yes, we will add the clarifying statements to the revised paper.\n", " > thank you for your reply, it clarified my confusions and if you include some of these points in the manuscript, I believe it will help the readers as well.\n\nWe are glad that it cleared the issue. We will include these points in the revised manuscript.\n\n> ... if that's not possible, I would ask you to re-phrase the conclusion statement from \"... it is possible to extend ...\" to something softer, as otherwise it feels a bit awkward that something that seems surely possible to do is not in fact done.\n\nYes, this is very reasonable. We will follow your advice and tone down the claim if we did not manage to state a formal statement. \n\n> Otherwise, I'm happy to keep my positive rating of your submission.\n\nThank you!", " Thank you for your positive vote! Yes, we will incorporate the corrections in the revised paper.", " Dear authors, \n\nThank you for your response! Assuming the corrections would be incorporated in the revised paper, I will change the rating to a positive one. \n", " I would like to thank the authors for providing the response. I found the discussion useful to clarify my questions to understand the paper. I would keep my score. Please add clarifying statements along those comments in the revised paper to make it more readable. ", " Dear authors,\n\nthank you for your reply, it clarified my confusions and if you include some of these points in the manuscript, I believe it will help the readers as well. \n\nJust one last comment on the random design discussion - it of course would be great to have a formal result. However, if that's not possible, I would ask you to re-phrase the conclusion statement from \"... it is possible to extend ...\" to something softer, as otherwise it feels a bit awkward that something that seems surely possible to do is not in fact done. \n\nOtherwise, I'm happy to keep my positive rating of your submission. ", " We thank all the reviewers for their valuable comments which has helped us improve the presentation of the paper. We address specific comments in response to reviewers individually. A summary of the main changes that we plan in the revision are as follows:\n\n- We will address and correct the uniqueness issue raised by Reviewer 6bFj.\n\n- We will add a discussion of connections with the work of Tsigler \\& Bartlett (2020) as detailed in response to Reviewer 6bFj.\n\n- We will elaborate on how the results can be extended to the random design (response to Reviewer UAiV) and to the generalization error (response to Reviewer Qioj). The two issues are related.\n\n- We will discuss how target alignment spectrum can be estimated in practice (response to Reviewer Qioj).\n\n- We will clarify the role of the noise level in choosing the optimal truncation level (response to Reviewer UAiV).\n\n- We will include plots showing the interaction of $\\lambda$ and $r$ (response to Reviewer Qioj).\n\nWe would be happy to address any outstanding concerns that you might have.", " Thanks for your careful reading! We will address the uniqueness issue below, but please note that the uniqueness does not affect the main results of the paper. The material in Section 3.1 is more of an interesting side note and somewhat tangential to the central point of the paper which is discussed in Section 4. 
We couldn't find any criticism of the main points of the paper in your review and hence are somewhat puzzled by your rating.\nWe would be happy to address any shortcomings you find regarding the main message of the paper.\n\n> I am not sure I understand section 3.1, see the questions below... Is the statement about the solution of (6) being not unique correct? ...\n\nYour point is valid and the solution is indeed unique. Thanks for pointing this out! There is a subtlety in how one defines the functional version of the TKRR estimate which perhaps led us not to notice this. The common way of defining the TKRR estimate is to pass $\widetilde \omega$ through the adjoint operator associated with the original RKHS, that is to form $S\_x\^\*(\widetilde \omega)$ where $\widetilde \omega$ is the solution of (17) in the supplementary material. This form will inherit the non-uniqueness of $\widetilde \omega$. The way we stated Prop.~1 is more appropriate for this form.\n\nThe way we defined the TKRR estimate in (6) though leads to passing $\widetilde \omega$ through the adjoint associated with the smaller RKHS, that is $\widetilde S\_x\^\*(\widetilde \omega)$, which as you point out will be the same for all the solutions of (17). This is even better than we anticipated! It reinforces the idea that (6) is the canonical way of defining the TKRR functional form. Proposition 1 as stated is still technically correct in this case (the solution set in part (b) will just be a singleton), but can be simplified greatly given the uniqueness.\n\nWe plan to revise Prop. 1 and point out the uniqueness. On the other hand, no matter which of the two versions of the functional estimate of TKRR one wants to use, the fact that they all map to the same point under the sampling operator $S_x$, remains true. We intend to point this out and perhaps elaborate in the supplement. This makes the subsequent results valid for either version.\n\n> The evaluation is illustrative enough, but only uses the artificial dataset.\n\nSince our paper is a theoretical one, we believe simulations are more suited for demonstrating the results. Real datasets have many moving parts which are hard to control, hence not that effective in demonstrating our desired points. It is a common practice in theoretical work to either not include any experiments at all (like the Tsigler \& Bartlett paper below) or to provide experiments in simulated settings.\n\n\n> It would be worth commenting the results of the submission in comparison with Tsigler, A., \& Bartlett, P. L. (2020). \n\nThanks for the pointer. We will discuss the connections in the revision. There are interesting parallels but also notable differences between our work and theirs. They consider the usual ridge regression while we work with the kernel ridge, although the results can be translated back and forth after some transformation. \n\nInterestingly, they also have a spectral truncation parameter $k$, although in Tsigler \& Bartlett, the truncation level is more of a device in the proof that can be optimized to obtain the tightest upper bound, whereas the truncation level $r$ in our case is a regularization parameter in TKRR that can be tuned in practice. So the bound in Tsigler \& Bartlett will essentially correspond to that of the Full KRR in our paper. In fact, we also use a similar proof-device $k$ in the proof of Theorem 2, separate from the $r$-truncation level of TKRR (see line 272 for the definition of $k$). 
As an important corollary, our results about TKRR cannot be deduced from those in Tsigler \\& Bartlett.\n\nAnother point of difference is the focus in Tsigler \\& Bartlett which is on how the eigendecay of the kernel (or equivalently the covariance matrix) affects the MSE bound, while we focus on how the interaction of the target alignment decay and the kernel eigendecay together affect the bounds. There are indications of the effect of the target alignment in Tsigler \\& Bartlett, in the form of the tail energy of their $\\theta\\^\\*$ parameter, but the implications of this decay seems to have not been explored in detail compared to our work.\n\n> w is used for both noise in (1) and coefficient vector in (3)\n\nThey are different symbols. The one in (1) is $w$ and the one in (3) is omega $\\omega$. We will change the noise vector in (1) to $\\varepsilon$ to avoid confusion.\n\n> is $\\sqrt N$ needed in the formula between the lines 117-118?\n\nThanks for pointing this out. You are correct that it is not needed. It will be fixed in the revision. (This doesn't affect the subsequent developments).\n\n> misprint: -1 in the power of in 192\n\nThanks again! We will fix it.", " > it’s not clear how useful the findings about the alignment and concentration would be in practice...\n\nHaving theory is still useful in practice. For example, if you plot the error as a function of $r$, and see the non-monotonic behavior, you can guess that perhaps there are multiple disjoint bands in the alignment spectrum.\n\nThat being said, there is a very good plugin estimator for the spectrum of $f\\^\\*$ in practice: Replacing $S\\_x(f\\^\\*)$ with the noisy observations $y/\\sqrt n$ in the definition of the spectrum gives a very good estimate, that is, $U\\^T y / \\sqrt n$ is a very good estimate of $U\\^T S\\_x(f\\^\\*)$. This is because each $u\\_k\\^T y / \\sqrt n$ will effectively average over the noise in $y$. This can be made more precise using concentration inequalities and we plan to elaborate on that in the revision.\n\n> writing needs improvement: while the overall story is clear, this paper involves a lot of notations ...\n\nWe have tried to move as much technical details as possible to the appendix. Please note that this is a theory paper. Without proper notation, it will be hard for people to follow. In the revision, we will try to discuss the results more in plain English, so hopefully that will help. Please also note that two other reviewers found the paper, to quote \"well-written and the ideas and the results are presented clearly,\" and \"easy to follow and well-written.\"\n\n> The connection to generalization is unclear...\n\nThanks for your comment. We will make the connection more clear in the revision. Here is a brief summary: The generalization error really makes sense in the random design setting. Let $(x,y)$ be a random test point, and let $(x\\_1,y\\_1),\\dots,(x\\_n,y\\_n)$ be i.i.d. training data, all from the same joint distribution $\\mathbb P$ on $(x,y)$. Let $\\mathbb P\\_X$ be the marginal distribution of $x$ under $\\mathbb P$. The generalization error for a fixed function $f$ is \n\n$\\mathbb E(y - f(x))\\^2 = \\mathbb E (f\\^\\*(x) - f(x) + w)\\^2 = \\mathbb E(f\\^\\*(x) - f(x))\\^2 +\\sigma\\^2$, \n\nwhere the expectation is taken w.r.t. the randomness in both $x$ and $y$. This can further be written as $\\\\| f - f\\^\\* \\\\|\\_{\\mathbb P\\_X}\\^2 + \\sigma^2$, that is, the population $L\\^2$ norm-squared of $f - f\\^\\*$ plus the variance of the noise. 
The variance of the noise is the unimprovable part of the generalization error, i.e., the minimum Bayes risk. So the excess generalization error is $\\| f - f\^\* \\|\_{\mathbb P\_X}\^2$. For large $n$, since the $L\^2$ norm is an integral, this can be well-approximated by the empirical norm based on the training $x\_i$ data, that is, $\frac1n\sum\_{i=1}\^n (f(x\_i) - f\^\*(x\_i))^2$ which is the empirical norm that we have considered in the paper. This is why we call it the empirical excess generalization error in line 61. This approximation can also be made more precise; we have elaborated on this in response to Reviewer UAiV and plan to include those details in the revision. \n\n> The relation between $\lambda$ and r is not clear. It seems that both of them are playing as regularization parameters and not clear if in certain regimes they overlap. A contour plot similar to Fig. 1 \& 2 would be useful to clarify this issue for the readership. \n\nYes, the relation is in general complicated. Our Theorem 2 shows that in the case of polynomial alignment, one needs both to achieve the best performance.\nThanks for the suggestion about the $r$-$\lambda$ contour plot; that is a great way to show the complicated nature of their joint effect on the performance. We have made the plot and will add it to the revision.\n\n> It would be interesting to discuss connections to singular value thresholding ...\n\nThanks for the suggestion. We can make the following connection: Our results show that spectral truncation reduces the variance, the third term in Eqn. (9), and this is in line with what singular value thresholding does by reducing the noisy directions. We will add a few sentences about this to the “Conclusion” section.\n\n> In Fig. 2 (left) for noise intensity 0.18 and higher, it seems that the best choice for truncation is r=0. What does that mean?\n\nThanks for the careful observation. The plot is a bit misleading. The minimum truncation level is $r = 1$, and that is what those plots should show. We will make the x-axis range on these plots more clear. What the plots show is that for very large noise levels, the best performance is achieved if we truncate right away, that is, only keep the first component from the alignment spectrum. This is in line with our theory developed in Proposition 2(a), although perhaps not clearly discussed in the present manuscript. Your comment here is very much related to that of Reviewer UAiV on the relation between the optimal truncation level and $\sigma^2$. We plan to make this much more clear in the revision, as elaborated in response to their comment.
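The plugin estimator mentioned earlier in this response translates directly into code. The sketch below assumes `K` is the kernel matrix on the training inputs and `y` the noisy observations; it computes $U^T y / \sqrt{n}$ as the estimate of the target alignment spectrum.

```python
import numpy as np

def empirical_alignment_spectrum(K, y):
    # Project y / sqrt(n) onto the eigenvectors of the normalized kernel
    # matrix; each coordinate u_k^T y / sqrt(n) averages out the noise.
    n = len(y)
    _, U = np.linalg.eigh(K / n)
    return (U.T @ y) / np.sqrt(n)
```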
Then, Theorem 14.12 in [26] applies and we have $\\frac12 \\\\|f\\\\|\\_{L^2}^2 \\le \\\\|f\\\\|\\_n^2$ uniformly over $\\mathbb H$, with probability $1 - c\\_1 e^{-c\\_2 n \\delta\\_n^2}$ where $\\delta_n$ is a critical radius that goes to zero with $n \\delta\\_n^2 \\to \\infty$. We can then apply this result to get $\\\\|\\widehat f\\_{r,\\lambda} - f\\^\\*\\\\|\\_{L^2}^2 \\le 2 \\\\|\\widehat f\\_{r,\\lambda} - f\\^\\*\\\\|\\_{n}^2$ with high probability. The LHS of this inequality is the excess generalization error and the RHS is what we control. Combined with Markov inequality, Theorem 1 then shows that the same data-dependent bound on the LHS of (10) is also a high probability bound on the generalization error in the random design setting.\n\nThe only caveat in the above argument is the zero-mean assumption on the function class in Theorem 14.12 of [26]. We do not think it is needed and we would like to avoid it. If we manage to prove a result, we will make a rigorous statement in the revision. Otherwise, we will point out the sketch above and leave the details to future work. \n\n> Theoretical results presented in the paper indicate importance of the noise level for selection of a suitable truncation parameter...\n\nThis is a great suggestion! There is already an indication of how the noise level plays a role in deciding the optimal truncation level, in the definition of $j\\^\\*$ in Proposition 2(a). For sufficiently small levels of the noise, $j\\^\\*$ will be equal to $\\ell+1$ (as discussed on line 169). Then, for $r < \\ell+1$, the MSE is increasing, but the moment we enter the signal band ($r = \\ell+1$), the MSE starts to decrease and keeps decreasing till we get to the end of the band ($r = \\ell+b)$, at which point it starts to increase again. So, in this case (small enough noise level), the optimal truncation level is at the end of the band, i.e., $r = \\ell+b$. If we increase the noise level, $j\\^\\*$ will be in the middle of the band. In this case, depending on where in the band $j\\^\\*$ lies, the optimal truncation level could still be $r = \\ell+b$. But at some point, the noise level is so high that $j\\^\\*$ is very close to the end of the band, in which case, the dip in the MSE over the band is so small that the best truncation is just at $r=1$ level. This includes the case where there is no $j\\^\\*$ in the band, hence the MSE is monotonically increasing (very high noise levels). \n\nThe situation in Figure 2a is a bit more complicated since we have two bands, and formally we have not stated a result for this case. But a similar pattern to the above holds. We will try to state a formal result in the two-banded case in the revision if possible, and map out the implications. In any case, hopefully, the discussion above is enough to address your point, and we plan to add it to the revision. \n\n> I found reading of the manuscript rather strenuous ... partially incorporating discussion currently presented in Appendix D2 and D3 into the main text ...\n\nWe debated your suggestion but we prefer to keep the current format. The discussion in Appendix D2 and D3 is a fairly technical translation of the results of other papers to our notation. It is more of an expository note on existing papers and including them in the main text will detract attention from our own contributions. We believe the main message is adequately carried in the main text in the current format. 
Perhaps with additional clarifications that we will add in the revision in response to reviewers, it will become easier to read. Also, please note that two other reviewers found the paper, to quote \"well-written and the ideas and the results are presented clearly,\" and \"easy to follow and well-written.\"\n\n> In line 240, I believe the reference should be to Proposition 2a instead of Theorem 2.\n\nThanks! We will fix this in the revision.", " Thank you for your positive feedback and encouragement!", " The paper studies a truncated KRR method and shows how the alignment of the target function with the kernel improves the performance of the TKRR. The paper also sheds light on a situation when the TKRR can achieve parametric rates of convergence - something which is not possible for KRR. Both a theoretical analysis and a simulation study is included in the paper. The paper is sufficiently novel and studies a very interesting variant of KRR, namely, TKRR and demonstrates both theoretically and empirically the strength of TKRR, in particular, when the target function is aligned with the kernel function. The paper is well-written and the ideas and the results are presented clearly. None. No potential negative societal impact.", " The paper considers kernel ridge regression (KRR) and its truncated version for estimating a target function contaminated with the i.i.d. noise. Two situations are considered depending on the target alignment (TA) scores, which are the coefficients of projection of the underlying target vector on the eigenvalues of normalized empirical kernel matrix K. 1)TA scores are non-zero in a few consecutive components, in which they sampled as i.i.d. with non-zero mean, 2) TA scores are decaying polynomially as well as eigenvalues of $K$. In the first example, the authors demonstrate possible non-monotonous behavior of the MSE depending on truncation parameter r, and other parameters fixed. In the second one, four performance regimes are considered for truncated and full KRR based on the strength of alignment of the target function with the eigenvectors of $K$. If the target alignment coefficients decay fast, then with the proper truncation and smoothing parameter the MSE achieves a better rate than the full KRR with the parametric rate as the limit in the decay parameter.\n Strengths:\nThe paper is easy to follow and well written. The examples considered serve as a good illustration of the double-descent phenomenon in RKHS and the dependence of generalization performance of the (T)KRR on properties of RKHS and on how well the underlying function is aligned with the eigenvectors of $K$.\n \nWeaknesses:\n- I am not sure I understand section 3.1, see the questions below. \n- The evaluation is illustrative enough, but only uses the artificial dataset. \n\nMinor details:\n- w is used for both noise in (1) and coefficient vector in (3)\n- is $\\sqrt{N}$ needed in the formula between the lines 117-118? \n- misprint: $-1$ in the power of $\\sigma/n$ in 192 \n Is the statement about the solution of (6) being not unique correct? \n(6) is a ridge regression problem with a design matrix with $r$ orthogonal columns, $r<n$ (see line 128)? 
\nIn the supplementary material, (17) has a non-unique solution, at the same time \n$$\tilde{S}^*_{\mathbf{x}}(w)(x) = \frac{1}{\sqrt{n}}\sum_{j=1}^n w_j \tilde{\mathbb{K}}(x,x_j) = \frac{1}{\sqrt{n}} \sum_{k=1}^r \mu_k \phi_k(x) \sum_{j=1}^n \phi_k(x_j) w_j = \sum_{k=1}^r \mu_k \phi_k(x) u_k^{\top} w.$$\n Multiple solutions in terms of $w$ in (17) come from the fact that $w$ contains an arbitrary vector that lies in the subspace orthogonal to the one spanned by $\{u_1,\dots,u_r\}$, but taking $u_k^{\top} w$ leaves only one solution?\n\nIt would be worth commenting the results of the submission in comparison with Tsigler, A., & Bartlett, P. L. (2020). Benign overfitting in ridge regression. arXiv preprint arXiv:2009.14286.\n\n----------------\nI increased the rating after the authors commented on the questions. The limitations are well addressed in the work. No negative societal impact is anticipated.", " The manuscript provides an analysis of truncated kernel ridge regression. In particular, the authors show how alignment of the target function influences convergence rates, and identify scenarios in which TKRR exhibits stronger guarantees than ridge regression. In addition, they show examples in which generalisation performance of TKRR demonstrates non-monotonicity and phase transition behaviour in regularisation and truncation parameters. Strengths:\n\nThe main result of the paper is the exact expression for mean squared error of TKRR from which the authors derive two interesting conclusions:\n1. They demonstrate how a non-monotonous behaviour of the generalisation performance with respect to the truncation parameter takes place, which provides a new insight into double-descent phenomena \n2. They derive stronger convergence results for TKRR (compared to canonical KRR) in case of strong alignment between the target and kernel functions, which are new to the best of my knowledge\n\nWeaknesses:\n\n1. All analysis in the paper is performed in the fixed design setting. In the conclusion the authors state that it can be extended to the random design model. However it’s outside of my area of expertise, so I cannot judge how straightforward/possible that extension is, taking into account also that TKRR itself is data-dependent\n2. Theoretical results presented in the paper indicate the importance of the noise level for selection of a suitable truncation parameter. This is also supported by simulation results, where figure 2a demonstrates that whether choosing a larger r (already being in decreasing error regime) is a beneficial approach depends on the noise level. With this in mind, I believe the manuscript would benefit from a more thorough discussion of this aspect of the problem, in particular what can be deduced from the theoretical results and what only from the simulations/experiments\n3. I found reading of the manuscript rather strenuous. While in parts it can be attributed to its level of technicality, I would suggest the authors consider some re-organisation of the material. In particular, I would recommend at least partially incorporating discussion currently presented in Appendix D2 and D3 into the main text, potentially at the expense of the proofs in Section 7.\n4. In line 240, I believe the reference should be to Proposition 2a instead of Theorem 2. 
Please, see weaknesses section above and correct/comment on any misunderstandings from my side Yes, they have", " this submission studies how the alignment between target function and the kernel in kernel ridge regression (KRR) can improve the regression fidelity. Kernel spectral truncation is used to adjust the alignment. The results indicate that: 1) for polynomial alignment, there is a regime that kernel truncation (TKRR) achieves the parametric rate O(\sigma^2/n) which is faster than full KRR; 2) for bandlimited alignment, TKRR exhibits multiple descents. \n Strength\ntheoretical novelty: this is the first work to study the impact of target alignment for KRR\n\nWeakness\npractical significance: it’s not clear how useful the findings about the alignment and concentration would be in practice. It heavily relies on the spectrum of f^* which is unknown, and no side information is known about it in this setting. \nwriting needs improvement: while the overall story is clear, this paper involves a lot of notation, where some details can be deferred to the appendix. \n The connection to generalization is unclear. It’s mentioned in lines 59-61 that the results can be extended to generalization error. The setting however seems to be a pure inference task, with no training involved and no training/test data defined separately. This needs to be clarified.\nThe relation between \lambda and r is not clear. It seems that both of them are playing as regularization parameters and not clear if in certain regimes they overlap. A contour plot similar to Fig. 1 & 2 would be useful to clarify this issue for the readership.\nIt would be interesting to discuss connections to singular value thresholding in the context of (e.g., nuclear norm based regularization) which truncates certain singular value directions that are less aligned with the signal of interest. \nIn Fig. 2 (left) for noise intensity 0.18 and higher, it seems that the best choice for truncation is r=0. What does that mean?\n They are properly discussed. \n" ]
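As a side note, the uniqueness point debated in the exchange above admits a one-screen numerical check: components of a solution lying orthogonal to the span of the top-$r$ eigenvectors do not change the coordinates $u_k^\top w$, so all solutions yield the same truncated functional estimate. The dimensions below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 20, 5
U, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal basis
Ur = U[:, :r]                                      # top-r "eigenvectors"
w = rng.standard_normal(n)
w_alt = w + U[:, r:] @ rng.standard_normal(n - r)  # differs off the span
print(np.allclose(Ur.T @ w, Ur.T @ w_alt))         # True: same estimate
```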
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 2 ]
[ "JKSMQfxZ3Yb", "hOUiiul7DS", "NOFXFiYc0ZK", "kmkZ2yojcJ9", "7gphLjflgPu", "ZPnP0C07US", "nips_2022_SPiQQu2NmO9", "4V9UQLrP4IX", "Hl0l0xPwHaN", "B2swk8lUO8Z", "12-wvZ6iVLH", "nips_2022_SPiQQu2NmO9", "nips_2022_SPiQQu2NmO9", "nips_2022_SPiQQu2NmO9", "nips_2022_SPiQQu2NmO9" ]
nips_2022_eUAw7dwaOg8
Bridging the Gap: Unifying the Training and Evaluation of Neural Network Binary Classifiers
While neural network binary classifiers are often evaluated on metrics such as Accuracy and $F_1$-Score, they are commonly trained with a cross-entropy objective. How can this training-evaluation gap be addressed? While specific techniques have been adopted to optimize certain confusion matrix based metrics, it is challenging or impossible in some cases to generalize the techniques to other metrics. Adversarial learning approaches have also been proposed to optimize networks via confusion matrix based metrics, but they tend to be much slower than common training methods. In this work, we propose a unifying approach to training neural network binary classifiers that combines a differentiable approximation of the Heaviside function with a probabilistic view of the typical confusion matrix values using soft sets. Our theoretical analysis shows the benefit of using our method to optimize for a given evaluation metric, such as $F_1$-Score, with soft sets, and our extensive experiments show the effectiveness of our approach in several domains.
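As a rough illustration of the abstract's recipe (a differentiable Heaviside surrogate feeding soft-set confusion-matrix values), the sketch below implements a soft F1 loss in PyTorch. The sigmoid surrogate, the temperature `tau`, and the epsilon guard are illustrative choices and not necessarily the paper's exact formulation.

```python
import torch

def soft_f1_loss(logits, labels, tau=1.0):
    # Sigmoid with temperature tau stands in for the Heaviside step 1[logit > 0].
    y_hat = torch.sigmoid(logits / tau)      # soft predictions in (0, 1)
    tp = (y_hat * labels).sum()              # soft true positives
    fp = (y_hat * (1 - labels)).sum()        # soft false positives
    fn = ((1 - y_hat) * labels).sum()        # soft false negatives
    f1 = 2 * tp / (2 * tp + fp + fn + 1e-8)  # soft F1-score
    return 1 - f1                            # minimize 1 - F1*
```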
Accept
When training binary classifiers, one usually minimizes the (surrogate) binary cross-entropy loss (BCE), but evaluates on metrics such as F1, AUROC, or other confusion-matrix-based scores. The authors propose to instead combine a differentiable approximation of the Heaviside function with a probabilistic view of the typical confusion matrix values using soft sets to directly optimize F1 and AUROC at training time. The authors show that under certain assumptions the metrics computed over the soft-set confusion matrix values are asymptotically similar to the underlying true metric. Finally, the authors evaluate the proposed approximations on several unbalanced datasets and show competitive performance with respect to optimizing BCE. While the reviewers outlined some weaknesses, they agreed that this work is relevant to the larger research community and presents a novel approach with potential to be used in practice. During the rebuttal phase the authors addressed the main remaining issues and I will recommend acceptance. Please incorporate all the information presented during the rebuttal phase into the manuscript.
train
[ "6JbhRiK7B7", "_P2xqELLNZ0", "774qly9buWH", "4n_e_1MDzqs", "8kr3uokaeAP", "j9oeWCjwo8D", "5YQQD33Tf6f", "u_hKUtEUs3", "GeR3HJw0fCZ", "Vocby5Ynvyl" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your detailed response. I also read your response to other reviewers. I decide to maintain my positive rating. I hope this paper could include an analysis of generalization errors, studies on multiclass classification, and a comparison with other surrogate losses in the main content of the paper.", " Thank you for your informative responses, I have read it, along with other reviewers' opinions, throughs. I would like to maintain my positive ratings.\n\nI would also like to hear the opinion from reviewer MHi1.\n", " Thank you for taking the time to provide a thoughtful review. We are also excited that the proposed method provides numerous advantages over existing methods. Our answers to the questions in the review are provided below along with our proposed changes to incorporate the feedback into the paper.\n\n__Do the results about F1* and Accuracy* contradict what the theory predicts? (Question 1, Weakness Sentence 1)__\n\nWe believe the experimental results do not contradict the theoretical results. Our theoretical analysis says that our proposed loss should converge to the evaluation metric, but nothing prevents binary cross entropy (BCE) to result in comparable performance in some cases. In general, though, our theoretical analyses do not lead us to believe that BCE would converge to any desired metric based on confusion matrix values in the limit. In particular, binary cross entropy loss is expressed as:\n$$-\\\\frac{1}{n}\\\\sum_{i=1}^n \\\\left(y_i \\\\log p + (1-y_i)\\\\log (1-p) \\\\right)$$\n\nwhile Accuracy is:\n$$\\\\frac{|TP| + |TN|}{n} = \\\\frac{\\\\sum_{i=1}^n \\\\left (y_i \\\\hat y_i^{\\\\mathcal{H}} + (1-y_i)(1-\\\\hat y_i^{\\\\mathcal{H}})\\\\right)}{n}$$\nThese are different expressions that don’t generally converge to each other as $n \\to \\infty$. We will clarify this important point in our theoretical analysis (Section 4).\n\nIt is also worth noting that the Accuracy metric is particularly challenging, as it may overestimate classifier performance for imbalanced datasets [14] (lines 208-209). We believe that for this reason, $F_1$-Score is often a better indicator of a classifier’s true performance and, therefore, optimizing $F_1$-Score using our method can result in better performance than optimizing for other confusion-matrix based metrics with our approach. This highlights an interesting challenge in the evaluation and training of neural networks: how does one identify the best evaluation metric and thereby choose a suitable training loss using our method? This is an important question that warrants further research in the future.\n\n\n**Is this approach less directly applicable to multi-class classification? (Weakness Sentence 3)**\n\nWe agree that the problem of multi-class classification presents an interesting opportunity for future work. As noted in our response to Reviewer 7RmE under the heading titled “Multiclass classification”:\n\nThere is promise in applying our approach to multi-class classification, however this requires choosing a metric to apply our method to. In multi-class classification, common metrics (such as F1) can be computed using an aggregation strategy like macro or micro averaging. In these cases of simple differentiable aggregations, it is straightforward to apply our method by approximating the Heaviside function, computing soft sets, and then computing the final score. In other cases, further research is required for more complex aggregations. 
For example, Top-N accuracy considers a response correct if it is included in one of the top N predictions. In computer vision, another complex example is the mean average precision at a number of intersection-over-union (IoU) thresholds (as proposed by Everingham et al.). We will briefly discuss these ideas in the related work section of our paper to inspire future extensions of our work.\n\nEveringham et al. (2015). The pascal visual object classes challenge: A retrospective. IJCV.\n", " Thank you for your review – we are glad you found our investigation thorough. Below we address the weaknesses of the paper that are pointed out in the review and answer the questions that are major concerns. \n\n**Why is the proposed optimization objective useful? (Weakness 1)**\n \nOur new optimization objective is important for three reasons. First, in many cases, classifier performance is improved by bridging the training-evaluation gap using our method, as you noted. Second, as discussed in Section 6, our method is different from much prior work in that our approach is flexible and can be used with a variety of metrics composed of confusion matrix values. Third, our approach improves on the runtime of related work and does not limit batch size, which facilitates shorter training times (Section 6). These improvements make our approach more practical than existing alternatives. \n \n**Clarifications (Weakness 2)**\n \nIn response to your comment that the paper was hard to follow in some places (Weakness #2), we address your specific questions below. We'd be happy to discuss any other areas where the paper was hard to follow in the author-reviewer discussion period.\n \n**How computationally expensive is the method? (Question 1)**\n \nOur approach has the worst-case runtime complexity of the confusion-matrix based metric being approximated as a loss (as mentioned briefly on lines 315-316 of the paper). Computation of such metrics is linear with regard to the number of samples, which is discussed in Section 3 of the Supplementary Material. In contrast, the adversarial method by Fathony and Kolter has cubic runtime complexity [11].\n \n**How robust is the model to slight OOD perturbations? (Question 2)**\n \nOur work makes no claims regarding out of distribution data. However, our theoretical results show asymptotic equivalence (for large $n$) between the proposed soft-set confusion-matrix based loss (such as $F_1$*) and the true metric ($F_1$-Score).\n \nAs noted in our response to Reviewer 7RmE under the heading titled "Approximation error and generalization error":\n \nAnalyzing the approximation error and generalization error is an interesting direction for future work. However, please note that such analyses typically consider the loss used to train the classifier — which is the focus of our paper — along with the neural network architecture (Cao & Gu, 2019) and even properties of the data (Nakada & Imaizumi, 2020). We will mention analyzing these errors as valuable future research directions in the conclusion section.\n\nCao & Gu (2019). Generalization bounds of stochastic gradient descent for wide and deep neural networks. NeurIPS.\n\nNakada & Imaizumi (2020). Adaptive Approximation and Generalization of Deep Neural Network with Intrinsic Dimensionality. J. Mach. Learn. Res.\n\n \n**Is this method scalable to large datasets? (Question 3)**\n \nYes, our proposed method is scalable to large datasets. 
The runtime complexity of our approach is typically linear with respect to the number of samples, as mentioned earlier in this response. Also, in contrast to the work by Fathony and Kolter [11], we place no constraints on batch size during training, as noted on line 313 of our paper. This means that one can train with as big of a batch size as GPU memory allows, potentially shortening training time.\n", " Thank you for your detailed review. We agree that this is an impactful area of research and we are excited about the simplicity and effectiveness of our approach. Below we address the questions about our approach and describe the changes that we will make to the paper to improve its clarity and clarify our contributions.\n\n**What is the novelty of the proposed approach? (Weakness 1)**\n\nOur work is novel in that:\n1. To the best of our knowledge, we are the first to combine a differentiable approximation of the Heaviside function with a probabilistic view of the typical confusion-matrix values using soft sets in order to train binary neural network classifiers using losses that approximate typical classification metrics composed of confusion matrix values.\n2. Our approach is not only simple to implement, but it is flexible in that it can be applied in a straightforward manner to any classification metric composed of confusion-matrix values.\n3. Our approach makes improvements in runtime performance with respect to related work and without limiting batch sizes in practice (Section 6). \n\n**Does the sigmoid approximation of the Heaviside function change performance of the approach? Could you conduct an ablation study for the two components of the proposed approach? (Question 1)**\n\nOur proposed approach uses a differentiable approximation of the Heaviside step function to build a probabilistic confusion matrix using soft sets. We can use a piecewise linear approximation for the approximation of the Heaviside function or a sigmoid approximation (see results in Section 2 and Section 5 of the Supplementary Material). The important idea here is to approximate the Heaviside function in a way that satisfies the properties explained in Section 3.1 of the paper. Unfortunately, it is not possible to ablate (to remove) the use of either soft sets or a differentiable approximation of the Heaviside function in our implementation because both are required for a differentiable objective.\n\n**Should consistency of BCE be considered? Should we consider statistical (sample) efficiency? (Question 2)**\n\nIn Section 4.2, we consider a classifier trained under our setup optimizing over $F_1^s$, which is the $F_1$-Score computed with soft sets. We prove that $F_1^s$ and $F_1$, which is the true $F_1$-Score, both converge almost surely to the same expression as $n \to \infty$. This similarly holds for other metrics such as Accuracy and AUROC. However, if a network is optimized using binary cross entropy loss and then evaluated using a confusion-matrix based metric, the convergence result no longer holds. 
For example, binary cross entropy loss is expressed as:\n$$-\\\\frac{1}{n}\\\\sum_{i=1}^n \\\\left(y_i \\\\log p + (1-y_i)\\\\log (1-p) \\\\right)$$\n\nwhile Accuracy is:\n$$\\\\frac{|TP| + |TN|}{n} = \\\\frac{\\\\sum_{i=1}^n \\\\left (y_i \\\\hat y_i^{\\\\mathcal{H}} + (1-y_i)(1-\\\\hat y_i^{\\\\mathcal{H}})\\\\right)}{n}$$\n\nWhile training using binary cross entropy loss generally improves evaluation accuracy, these are different expressions that don’t generally converge to each other even as $n \\to \\infty$. \n\nWhile our experimental results show promising results, we agree that future work could investigate more theoretical aspects of our approach, such as statistical sample efficiency. We will briefly mention this idea in the conclusion section of the paper as another interesting future research direction.\n", " Thank you for the thoughtful review. \n\n**Comparisons with other differentiable surrogate losses (Weakness 1)**\n\nTo the best of our knowledge, our approach is the only published method with the flexibility to train neural network binary classifiers using losses that approximate *any* classification metric composed of confusion-matrix values. Therefore, we compared against specific surrogate approaches for well-established classification metrics, including the adversarial approach for F1 proposed by Fathony and Kolter [11] and the WMW statistic [33] (a differentiable method for optimizing the AUROC score). Additionally, Sec. 7 of the Supp. Material includes results against other baselines, including the differentiable surrogate for DICE ([8] in the Supp. Material).\n\n**Approximation error and generalization error (Weaknesses 2 & 4)**\n\nAnalyzing the approximation error and generalization error is an interesting direction for future work. However, please note that such analyses typically consider the loss used to train the classifier — which is the focus of our paper — along with the neural network architecture (Cao & Gu, 2019) and even properties of the data (Nakada & Imaizumi, 2020). We will mention analyzing these errors as valuable future research directions in the conclusion section.\n\nCao & Gu (2019). Generalization bounds of stochastic gradient descent for wide and deep neural networks. NeurIPS.\n\nNakada & Imaizumi (2020). Adaptive Approximation and Generalization of Deep Neural Network with Intrinsic Dimensionality. J. Mach. Learn. Res.\n\n**Gap between the training loss and test indicators (Weakness 3)**\n\nEmpirical results (e.g., [10]) show that the training-evaluation gap hinders evaluation performance. The gap means that in many applications, the metric differs significantly from the surrogate loss [30] and there may not be a strong correlation between minimizing a surrogate loss and improving an evaluation metric [26]. Also, it could mean that classifiers are solving “the wrong problem” when optimizing a surrogate loss, leading to suboptimal evaluation performance [15]. We will clarify this point in the introduction of the paper.\n\n**Multiclass classification (Question 3)**\n\nThere is promise in applying our approach to multi-class classification, however this requires choosing a metric to apply our method to. In multi-class classification, common metrics (such as F1) can be computed using an aggregation strategy like macro or micro averaging. In these cases of simple differentiable aggregations, it is straightforward to apply our method by approximating the Heaviside function, computing soft sets, and then computing the final score. 
In other cases, further research is required for more complex aggregations. For example, Top-N accuracy considers a response correct if it is included in one of the top N predictions. In computer vision, another complex example is the mean average precision at a number of intersection-over-union (IoU) thresholds (as proposed by Everingham et al.). We will briefly discuss these ideas in the related work section of our paper to inspire future extensions of our work.\n\nEveringham et al. (2015). The pascal visual object classes challenge: A retrospective. IJCV.\n\n**Experiments on graph data (Question 4)**\n\nWe performed an additional experiment that will be added to our Supp. Material Sec. 6. Following Thompson et al. and using the CocktailParty dataset ([14] in the Supp. Material), we constructed a fully-connected graph in which each person corresponded to a node in the graph. Edges corresponded to metric distance between people. We then used a Graph Neural Network to predict ​​pairwise affinities that encoded whether or not two individuals were part of the same conversational group. This task was effectively the same as described in Section 4.1.2 of our Supp. Material, but instead of predicting interactions among pairs of potential interactants independently, we made predictions for all the pairs seen in a frame of the dataset simultaneously. The results of this additional experiment, which used the hyperparameters proposed in Thompson et al., are in line with the existing results reported in our paper:\n\n||Loss|$F_1$-Score|Accuracy|\n|---|---|---|---|\n|(1)|$F_1$*|$0.83\\pm0.01$|$0.77\\pm0.01$|\n|(2)|Accuracy*|$0.77\\pm0.02$|$0.72\\pm0.02$|\n|(3)|BCE|$0.81\\pm0.03$|$0.75\\pm0.03$|\n\nThompson et al. (2021). Conversational group detection with graph neural networks. ICMI.\n\n**Other improvements**\n\nDue to space constraints, we will highlight the difference between the confusion matrix and soft confusion matrix formulation in the text (**Question 1**). Immediately after Eq. 6, we will add: “Whereas the values of the confusion matrix (tp, fp, tn, fn), are the sum of zeros and ones, the values of the soft confusion matrix (tp_s, fp_s, tn_s, fn_s) are the sum of continuous values in $[0,1]$ (rather than integers).” \n\nWe will avoid making numbers the subject of a quote (**Question 2**). Thanks.\n\n", " This paper proposes to use a differentiable approximation of the Heaviside step function (which determines where the prediction should be positive according to the input and threshold) to build a loss function that theoretically approximates Accuracy or F-score. The motivation is to bridge the gap between training loss and evaluation metrics in binary classification.\n\nIn terms of the method, it has two parts. The first is to approximate the Heaviside function with a sigmoid or piecewise linear function with hyperparameter tau. Then this paper introduces the soft set membership to measure the degree of belonging to a certain confusion matrix set. In terms of theoretical support, the author proves that the new approximation of the Heaviside function is Lipschitz continuous and the loss constructed by their combination also makes the variation of each step in SGD optimization small. Then the paper proves that the loss that approximates the F1 metric converges to the F1 metric when the number of samples is infinite. Finally, in the experimental part, the paper experiments on tabular and image data and also explores the balance of precision and recall. 
The approximated loss of F1 score achieves higher performance than BCE loss and other losses.\n \nStrengths:\n1. The paper is novel and compares some related methods in related work and experiments.\n2. This is a completed work, with both theoretical support and experimental results, and the authors honestly point out various deficiencies.\n3. The presentation of this article is very good; the overall structure and specific details are relatively clear.\n4. This paper provides a training loss that approximates the evaluation metric. The difference between training and evaluation is a problem of great interest.\n\nWeaknesses:\n\n1. Although this paper has compared adversarial-based methods in experiments, comparisons with other differentiable surrogate losses are lacking.\n\n2. The paper could compare previous methods and the proposed method in terms of approximation error. Previous methods include the BCE loss and earlier methods that address the difference between training loss and test metrics.\n\n3. What kind of theoretical problems does the gap between the binary training loss and test metrics bring about? This paper lacks analysis in this regard.\n\n4. There are some differences between the test sample and the training distribution. The closeness of the loss at training time to the metric at test time does not seem to solve this problem, and this article does not consider generalization error.\n 1. I suggest that this paper give a figure that pairs and compares the original version and the soft version and highlights the difference. Formula 2 and Formula 6 look identical (the difference is the small s).\n\n2. Line 124, “By [11]”, I recommend not making numbers the subject of a quote.\n\n3. What are the challenges of extending this research to multiclass classification?\n\n4. I suggest that this article also experiment on more kinds of data, such as binary classification on graph data.\n The paper discusses limitations scattered throughout the paper. Besides the limitation summarized in checklist 1 (d), they also acknowledged other limitations.\n\nIn Line 119, the paper acknowledges that the mini-batch stochastic gradient descent (SGD) optimization method does not provide an unbiased estimator on non-decomposable metrics.\n\nIn Line 202, the paper acknowledges that the F1-Score computed from soft sets is a biased estimator for the expected true F1-Score for finite n.\n\nIn Line 249-254, they find that the loss that approximates AUROC does not work.\n", " The paper proposes a method to directly optimize evaluation metrics, such as F1 and AUROC, at training time. The method includes a piecewise linear approximation of the 0-1 loss (Heaviside function), and soft versions of F1 and AUROC metrics based on the differentiable approximation. Experiments show that the proposed method can achieve superior F1 and AUROC compared to models trained with binary cross entropy loss. Strengths\n- Binary classification is an important and frequent problem. \n- The proposed method can have wide application potential.\n- The proposed method is simple and effective.\n\nWeaknesses\n- The novelty might be somewhat thin: approximating non-differentiable functions with differentiable ones is common. The theoretical grounding section might not be too surprising (continuity, consistency). Proposing to directly optimize evaluation metrics at training time might be novel, but I think it is also studied, as with those mentioned in related work and compared against. 1. 
Based on my understanding, I think the technical part of the proposed method includes a piecewise linear approximation of the Heaviside function, and approximations of confusion-matrix metrics based on soft sets. I'd like to see an ablation study where these two metrics are evaluated separately. For example, if the Heaviside function is approximated with a sigmoid function, does the proposed "accuracy" metric coincide with BCE? \n\n2. Sec. 4.2 presents a consistency result. Isn't that also true for the BCE loss, at least for accuracy? I think to firmly demonstrate the advantage of the proposed method, consistency alone might not be enough, and one might want to consider the statistical (sample) efficiency? Limitations and societal impact are sufficiently addressed.", " In binary NN classifiers, we train the models on the BCE objective but at test time, we evaluate the model's accuracy or AUROC instead. The authors propose a differentiable approximation to the test time metrics (like Accuracy/AUROC/F1 score) so that we can use them for training the model as well. The authors evaluate the new loss function on 5 datasets with varying levels of class imbalance and show that the models trained with the proposed loss have better generalization. Strengths:\n- The introduction of a new optimization objective.\n- The investigations seem thorough.\n\nWeaknesses:\n- Did not justify enough why this new optimization objective is useful. I understand this improves the metrics we are interested in measuring, but are there any other benefits?\n- The paper is hard to follow in some places. The following are my major concerns; it would be great to discuss these: \n-\tHow expensive is the method? In terms of compute and time efficiency.\n-\tHow robust is the model to slight OOD perturbations? \n-\tIs this method scalable to large datasets? \n n/a", " This work focuses on the loss function of the binary classification problem and proposes the use of the differentiable Heaviside function approximation $\mathcal{H}$ and soft set membership as a loss to directly optimize desired confusion matrix-based metrics such as accuracy, F1-score, etc., instead of the commonly used surrogate objective BCE. The study focuses on the analysis of a sigmoid function and a five-point linearly interpolated function as an approximation, with some assumptions providing theoretical underpinning. Experiments are conducted on several datasets with varying levels of imbalance, and the results are consistently good or comparable in some scenarios. Strengths: The strategy suggested in this article provides a number of advantages over earlier attempts to directly optimize metrics. It alleviates the batch size limitation and is faster than the adversarial approach [11] (cubic runtime complexity), and it supports a wider variety of metrics as opposed to WMW [33]. \n\nSpecifically, the method can optimize metrics such as $F_1$, $F_\beta$, Accuracy, AUROC, etc. directly and balance the tradeoff of Precision and Recall, without increasing the training time complexity (linear). \n\nThe authors provide a theoretical foundation for this method and empirically validate it using a variety of datasets with varying levels of imbalance. The linear approximation function was introduced in [27] as an activation function, but this work seems to be the first to use it in the training objective.\n\n\nWeaknesses: The experimental results show that only some of the metrics have a significant improvement, while others do not. 
For example, AUROC has no unbiased estimator, which leads to bad performance. This approach seems less directly applicable to multiclass classification tasks.\n Results are observed when the evaluation measure is F1 and the loss is also F1*; however, practically all accuracy results are comparable to BCE (tabular datasets), or the same improvement is reached using F1* and Accuracy* (image datasets). Also, it appears that F1* performs better on numerous evaluation metrics; does this contradict what the theory predicts? Did not conduct research with human subjects." ]
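As a concrete illustration of the recipe these reviews describe (a differentiable Heaviside surrogate feeding soft-set confusion-matrix counts), here is a minimal sketch. It is not the paper's exact construction: the reviews mention a five-point piecewise-linear approximation, whereas this sketch substitutes a steep sigmoid, and all names and constants are illustrative.

```python
import torch

def soft_f1_loss(logits: torch.Tensor, labels: torch.Tensor, tau: float = 10.0) -> torch.Tensor:
    """Differentiable surrogate for (1 - F1) on a batch of binary predictions.

    The hard step H(p - 0.5) is replaced by a steep sigmoid, so the soft
    TP/FP/FN counts (soft-set memberships) admit gradients.
    """
    probs = torch.sigmoid(logits)                   # predicted P(y = 1)
    soft_pred = torch.sigmoid(tau * (probs - 0.5))  # ~ Heaviside(p - 0.5) for large tau
    labels = labels.float()
    tp = (soft_pred * labels).sum()
    fp = (soft_pred * (1.0 - labels)).sum()
    fn = ((1.0 - soft_pred) * labels).sum()
    soft_f1 = 2.0 * tp / (2.0 * tp + fp + fn + 1e-8)
    return 1.0 - soft_f1  # minimizing this maximizes the soft F1

# Example: imbalanced batch; gradients flow through the soft counts.
logits = torch.randn(16, requires_grad=True)
labels = torch.tensor([1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0])
soft_f1_loss(logits, labels).backward()
```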
[ -1, -1, -1, -1, -1, -1, 6, 6, 4, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 3, 2, 4 ]
[ "j9oeWCjwo8D", "774qly9buWH", "Vocby5Ynvyl", "GeR3HJw0fCZ", "u_hKUtEUs3", "5YQQD33Tf6f", "nips_2022_eUAw7dwaOg8", "nips_2022_eUAw7dwaOg8", "nips_2022_eUAw7dwaOg8", "nips_2022_eUAw7dwaOg8" ]
nips_2022_pGLFkjgVvVe
Uncertainty Estimation Using Riemannian Model Dynamics for Offline Reinforcement Learning
Model-based offline reinforcement learning approaches generally rely on bounds of model error. Estimating these bounds is usually achieved through uncertainty estimation methods. In this work, we combine parametric and nonparametric methods for uncertainty estimation through a novel latent space based metric. In particular, we build upon recent advances in Riemannian geometry of generative models to construct a pullback metric of an encoder-decoder based forward model. Our proposed metric measures both the quality of out-of-distribution samples as well as the discrepancy of examples in the data. We leverage our combined method for uncertainty estimation in a pessimistic model-based framework, showing a significant improvement upon contemporary model-based offline approaches on continuous control and autonomous driving benchmarks.
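To make the abstract's pullback metric concrete: for a decoder g mapping latent codes to observations, the induced latent metric is G(z) = J_g(z)^T J_g(z). The sketch below computes this with autograd. The decoder, its dimensions, and the omission of the paper's ensemble and variance terms are our simplifications for illustration, not the authors' implementation.

```python
import torch
from torch.autograd.functional import jacobian

latent_dim, ambient_dim = 2, 10
decoder = torch.nn.Sequential(
    torch.nn.Linear(latent_dim, 64),
    torch.nn.Tanh(),
    torch.nn.Linear(64, ambient_dim),
)

def pullback_metric(z: torch.Tensor) -> torch.Tensor:
    """G(z) = J(z)^T J(z): the metric the decoder pulls back onto latent space."""
    J = jacobian(lambda z_: decoder(z_), z)  # shape (ambient_dim, latent_dim)
    return J.T @ J                           # shape (latent_dim, latent_dim)

z = torch.zeros(latent_dim)
G = pullback_metric(z)
dz = 0.01 * torch.randn(latent_dim)
squared_length = dz @ G @ dz                 # length element of a small latent move
```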
Accept
Unanimous accept from 3 reviewers. I'm uncertain about "accept" given that reviewers XJeV's and Jba5's reviews are on the short and vague side. Reviewer vqqM never responded, even though they would have been a great reviewer for this work (I reminded them once and they confirmed, but I forgot to follow up again). Reviewer TKTA's review was the most useful, a borderline accept. There is reviewer consensus on novelty, in addition to the paper being well written, with convincing results on MuJoCo and a highway environment. I myself am unfamiliar with Riemannian metrics/manifolds; however, after reading up on the subject, I worried this paper might have been too close to "Latent Space Oddity: on the Curvature of Deep Generative Models", which also learns (or computes) latent space metrics. However, this work differs by (1) using a variational *forwards* model to consider dynamics, (2) using an ensemble of models to consider epistemic uncertainty, and (3) tying both these aleatoric and epistemic forms of uncertainty into an offline-RL setting, where rewards are pessimistically estimated under uncertainty. While it seems a little ad hoc to suggest this particular method _for_ a particular application (offline RL), which confuses the narrative and motivation a bit, it does seem to give better RL performance in these settings than L2 and ensembling/bootstrapping in Figure 4. This is the most borderline paper I've seen as AC this NeurIPS, but if forced to make a decision, I lean accept.
train
[ "RXlHkvM-Wqk", "fnx0M2So99v", "TthC12cx8D", "_8MT67_zqkO", "-SlAbh2D8ci", "p36z4tfMEgn", "3JXdgqEgqoJ", "6baQMHgPFD_", "Haq7LVvG7mv" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your time and effort in revising the paper. Regarding the geodesic solve: My apologies for not having seen the supplementary material; yes, it is clearly explained there. Thank you for the detailed description. I'm not quite certain if Corollary 1 is of enough importance in the main paper, but thank you for addressing all my comments, it helped me understand the work better.", " References: \\\n[1] Nutan Chen, Alexej Klushyn, Richard Kurle, Xueyan Jiang, Justin Bayer, and Patrick van der Smagt. Metrics for deep generative models. In International Conference on Artificial Intelligence and Statistics, pages 1540–1550. PMLR, 2018. \\\n[2] Georgios Arvanitidis, Lars Kai Hansen, and Søren Hauberg. Latent space oddity: On the curvature of deep generative models. In 6th International Conference on Learning Representations, ICLR 2018, 2018.", " Thank you for your thorough and positive review and for your helpful comments and suggestions. We were encouraged that you found our paper to be very well written, our results novel, and our contributions clear. Please find a response to all of your comments and questions below.\n\nRegarding your suggestions on clarifications, we will add these to our paper. Specifically, your comment on lines 27 and 29, Section 2.1, the equation in line 78, and your comments on the related work and discussion.\n\nRe placement of tables and figures: Thank you for your comment. We will rearrange these so that they are referenced before they appear in the paper.\n\nRe title figure: This is a great idea. We agree with you that an additional figure explaining the algorithm could really benefit the clarity of our proposed approach. We will add one to the paper.\n\nRe online / offline: As you suggested, we will add further discussion about the difference between online and offline, and how uncertainty is used in both settings. Particularly, we note that in online RL uncertainty is generally used for optimism, while in offline RL it is generally used for pessimism, to avoid overestimation of the value.\n\nRe metric capturing inherent characteristics: We understand that this sentence can be a bit vague, and we'll therefore clarify what we mean in our paper. Specifically, in the four rooms example, we see that the walls act as a barrier in latent space, which is captured by our metric. Regarding the autonomous driving t-SNE plots, we agree that these are only qualitative, and as you mentioned, we only consider their shape to suggest a distinction between our metric and the Euclidean distance.\n\nRe how the geodesic curves are found: Much of this information was pushed to the appendix, but we agree that it is important for it to be explained in the front matter. We will add this information to the paper to make it clearer. Briefly, we defined a parametric curve in latent space and used gradient descent to minimize the curve's energy.\n\nAnswers to Questions:\n\n1. As we pointed out earlier, we will move information from the appendix to the paper on how the geodesics are computed. We do a numerical estimate of the geodesic, and describe it in Algorithm 2 in the appendix (see supplementary material). We agree this should be explained better in the paper and will add further explanation for clarity. Regarding the metric, it has been used in previous work on generative latent models (see [1], [2]). Intuitively, this metric attempts to find curves of minimum energy in latent space. 
Since our metric is induced by the ensemble uncertainty (see Theorems 1 and 2), it will search for curves that don't fall in areas of high uncertainty. \n\n2. If we understand your question correctly, what we mean is that our metric can capture the discrepancy between examples, while at the same time measuring the amount of uncertainty in OOD areas. The discrepancy between examples can also occur for data that is in-distribution.\n\n3. Yes, you are correct, and thank you for your suggestion. We will add a label to make this clear.\n\n4. This is an important comment that we will clarify in the paper. The models themselves are neural networks, which are parametric. The nonparametric uncertainty comes from using the nearest neighbor method. This makes the method semiparametric (line 327). We will emphasize this point in the paper.\n\n5. Yes, you are correct. We will add a reference to the equation as you suggested.\n\n6. In practice we use the results of Theorem 2 for uncertainty. Corollary 1 was placed to show a clear decomposition, similar to that of Theorem 1. We will clarify this in the paper. \nRe ambient space: this is the observation space, i.e., the state space. We will clarify this as well.\n\n7. Difference with only using an ensemble: Unlike the pure ensemble method, which uses the variation of the ensemble in observation space, we use the Riemannian metric, induced by the ensemble, to calculate an uncertainty measure based on Equation 3. This is also what makes this metric semiparametric.\n\n8. Re four rooms example: In this example we used a 2D latent space to help visualize what the latent space is learning, without the need for dimensionality reduction. In general, it would be best to choose a higher dimension. Notice that in the figure we show a mapping of all the points in the environment to their respective latent codes. We believe that with a “denser” environment (i.e., with more states), we would achieve the same structure, though we would need more data.\n\n9. Yes, the data score is the mean score achieved by the policy in the data. We will add an explanation to the paper.\n\n10. Yes, we used t-SNE to make visualization easier. This is a qualitative figure, to give a sense of what the latent space looks like. The actual dimension is 32. We will clarify this in the paper.\n", " Thank you for your positive review! Your comments on the quality of our writing, the contribution of our work, and the significance of the results were encouraging. Also, thank you for pointing out the reference and the typo. Regarding your comment on Section 4.3, we will go over this section a few more times to make sure it is clear.", " Thank you for your positive review! We were encouraged that you found the clarity and writing of our paper to be excellent, and that you found the paper to be of high significance. Regarding your question, we can definitely derive similar results for more complex distributions, but with the caveat that the pullback metric may not decompose into $G_{\mu} + G_{\sigma}$. There is a benefit to this decomposition since we can capture $\mu$ and $\sigma$ separately, as $\sigma$ captures aleatoric uncertainty. Recall that the pullback metric is defined by a stochastic generator function from the latent space to the ambient (observation) space. We can therefore use any model that induces such a generator. We can also enrich this metric by utilizing prior information, as done in [1]. \n\nReferences: \\\n[1] Arvanitidis, Georgios, Soren Hauberg, and Bernhard Schölkopf. 
\"Geometrically Enriched Latent Spaces.\" In International Conference on Artificial Intelligence and Statistics, pp. 631-639. PMLR, 2021.", " We thank the reviewers for spending the time and effort to carefully evaluate our work. We were encouraged by the reviewers' positive reviews, which found our paper to be of “high quality and high significance” [XJeV], “delightfully well written” [Jba5], where the “novelty is clear” [TKTA]. Beyond these encouraging descriptions, the reviewers made valuable comments that we answer in the following.", " For reinforcement learning agents learned in an offline setting, it is especially important to be able to quantify uncertainty, as the agent may encounter out-of-distribution experiences when deployed in the online environment. Learning to quantify uncertainty then enables the agent to avoid these regions, by operating under a penalized MDP. In this work, uncertainty is estimated via a k-nearest-neighbor distance between the current state-action pair and the state-action pairs in the offline dataset, according to a distance metric. The distance metric is formulated by the authors as the geodesic in a learned latent space, modeled as a VAE. The latent encoder handles aleatoric uncertainty, whereas an ensemble of decoder functions is used to handle epistemic uncertainty.\n To my knowledge, the paper is quite original. It builds off of theoretical findings from past work on deep generative models and applies it well to the offline RL domain.\n\nThe quality of the paper is high - the idea is straightforward, expressed clearly, and demonstrated cleanly. The clarity of the writing is excellent, and it was easy to follow what the authors proposed.\n\nI believe this paper has high significance - enabling RL agents trained on offline datasets to quantify their uncertainty is critical for ensuring good continued performance when deployed in an online environment. I am wondering how flexible the proposed method is to changes in the encoder and decoder beyond the standard Gaussian formulation. Would the derived pullback metric work for more complex VAEs? Could similar pullback metrics be derived for more general generative modeling techniques (my understanding of the work of Arvanitidis et al., 2018 is that it is a study on the geometry of generative models in general, not limited to VAEs)?\n (Social Impact Limitations N/A)", " To address the uncertainty quantification (UQ) problem in offline model-based RL, the authors propose a Riemannian metric that captures both uncertainty in dynamics (aleatoric) and OOD data (epistemic). Their offline RL algorithm GELATO (cute name) learns an uncertainty-penalized reward function by learning the geodesic distance through a 'pullback' metric. The geodesic distance is combined with KNN evaluation and bootstrapping to generate improved UQ performance. In control and autonomous driving datasets GELATO achieves good performance. Strength:\n\n- Delightfully well written paper. The exposition of motivation, theory, and experimental results is clear and convincing. \n- Introduces a new representation for estimating distance of model dynamics in latent space that is more geometrically salient than Euclidean distance.\n- Detailed analysis of success and failure cases in multiple domains in the experiment section.\n- Broad applicability of the contribution. As the authors mentioned, their method of capturing geodesics can be applied to many RL estimates such as Q values, rewards, etc. 
The authors provide a clear and effective way of doing so.\n\nminor details:\n- missing reference in line 19, typo of \"Riemannian\" in line 29. Also minor: I found section 4.3 to be a little bit hard to follow at first. $M$ is used to refer to the manifold everywhere in the paper except in this section. N/A", " Using Riemannian distance metrics instead of Euclidean metrics, the authors better estimate uncertainties for out-of-distribution (OOD) data and apply this to model-based Reinforcement Learning (RL) settings. The paper introduces a new pullback metric for such dynamic models, and verifies its performance on several RL benchmarks. A very well written paper, where the novelty is clear. The results are quite nice, though showing the improvement in more scenarios would be great, since the authors noticed that the Riemannian manifold metric works better in some specific cases. However, it seems a bit limiting to define the metric for only dynamical systems. The authors could also elaborate more on their choice of this specific setting for model-based RL uncertainty estimation, as opposed to general uncertainty measurement.\n\nWhile the whole paper is focused around offline RL, it would be great if online RL could be added into the introduction a bit more so the contrast between the two is clearer. It also isn't clear to the reader why parametric and nonparametric methods are combined for uncertainty estimation; this could be motivated better.\n\nIn line 27, the \"metric\" used for kNN could be associated with how the distance is defined. That way the reader can understand why a \"proper metric\" is important. Similarly, line 29 could have just a quick explanation of what epistemic and aleatoric uncertainties are. Just model/data uncertainty would help the understanding for the reader. \n\nSection 2.1 is mainly about (online) RL, so I am not sure the title necessarily fits. Alternatively, the third paragraph on the offline setting could be expanded and explained in more detail, since it's the focus of this paper. The equation in lines 78-79 (which seems quite important) could be elaborated more: in what space the curve length is defined, and why we care about using a curve length for $f(\gamma)$ instead of $\gamma$. Also, it seems section 3 would fit into section 2, since it is also a preliminary/background. Is there a reason it is separate?\n\nThough Algorithm 1 is presented quite nicely, it could be even further improved if it were a visual figure, I believe. It would make a very strong titlepage figure as well, outlining the novel framework the authors introduce in this paper.\n\nA major issue that can be easily fixed is that many of the figures, tables, and algorithms are located pages before they are actually referenced in the text. \n\nThe whole idea of the metric capturing inherent characteristics of the model dynamics is still a bit vague to me. If it is, for instance, the discrete bottlenecks present in the four rooms example, more samples would be needed to see this clearly. If it is the clustering in the intersection example, more explanation is needed for why the clusters under the Euclidean distance are not enough. t-SNE visualization is only qualitative, and distances do not necessarily mean much, though maybe some conclusion can be drawn from the shape of the clusters.\n\nA lot of explanation of how the geodesics are found is missing, for instance how the parametric curve is defined. This could be made a much more important part of the paper. 
The computation time is also mentioned at some point, and this could be described in more detail; exact computation times would be appreciated.\n\nThe related works are divided quite nicely, but the last paragraph on disentangling latent space for RL feels a bit out of place and irrelevant. \n\nLastly, a lot of the discussion was already in the results (Section 5.3), so the section 7 title could just be Conclusion. \n\nAll-in-all a well-written paper with plenty of preliminary information for readers of all backgrounds. The proposed framework could be shown using a titlepage figure describing the pipeline/block diagram for more impact. The overall contributions are clear, and the results agree with the conclusions from the authors. Although many of the core ideas follow from previous work, this work enables the application of geometry-aware learning methods in the RL context, providing consistently better results than baselines.\n\n\nSmall details: \n- There is no appendix, even though it was referenced many times in the paper.\n- Missing figure reference in line 19\n- Typo line 29 \"Riemmannian\"\n- Line 231 \"would low values.\" misses some word 1) In the pullback metric for model dynamics, how did the authors come up with this choice of metric? How is the time derivative computed? And what about the geodesic computation: are you using gradient descent to solve an ODE?\n\n2) In the abstract, the authors mention how the \"proposed metric measures both the quality of OOD samples as well as the discrepancy of examples in the data\" (line 7). Don't they both in the end mean the same thing?\n\n3) In Figure 1, \"high model error\" is mentioned twice, but never do we see \"low model error\". This would be on the manifold, correct? Would it make sense to add it somewhere in the figure/caption?\n\n4) Line 121 mentions how the encoder and decoders are parametric functions; are there any details on this? If normal neural networks are used, would that not be nonparametric?\n\n5) The equation in Definition 2 defines the curve length from Eq. 1, correct? It could be helpful to link these two for better understanding if that's the case.\n\n6) The authors introduce an ensemble of decoders in Line 161, which I assume is then used for all experiments that follow? How about the forward model: do we continue with the assumption in Corollary 1 and practically neglect it? What is the \"meaning\" of this ambient space X? This could be made clearer. The need for this forward model in the first place is unclear; perhaps Corollary 1 could be entirely removed.\n\n7) The results are compared with a classical bootstrap ensemble method, but isn't that also used in the proposed method for the epistemic uncertainty? What is the difference there with the baseline?\n\n8) Is the four room experiment discrete or continuous? From the actions, it seems to be discrete, but since the latent space is continuous, I was wondering what happens if we sample more points in the latent space, and where in the rooms they would correspond to, i.e. how likely it is they will end up in the walls/obstacles. Also, would the input space be 3D? Isn't a 2D latent space a bit of an easy mapping in this case? \n\n9) Is \"Data Score\" in Table 1 the score that the collected data achieved? A quick explanation here would help.\n\n10) Why is t-SNE used for the intersection example latent space visualization? Is it because it is more than 2D? How many dimensions is the latent space here?\n\n The limitations are properly addressed by the authors." ]
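The author responses above describe the geodesic solve as gradient descent on the energy of a parametric latent curve (their Algorithm 2 in the appendix). Below is a self-contained toy sketch of that discrete curve-energy minimization; the closed-form metric merely stands in for the decoder-induced pullback metric, and all parameter choices are illustrative, not the authors' settings.

```python
import torch

def metric(z: torch.Tensor) -> torch.Tensor:
    """Toy Riemannian metric on R^2 whose cost grows away from the origin.

    Stands in for a decoder-induced pullback metric G(z) = J(z)^T J(z).
    """
    return torch.eye(2) * (1.0 + (z ** 2).sum())

def curve_energy(points: torch.Tensor) -> torch.Tensor:
    """Discrete energy sum_t dz_t^T G(z_t) dz_t of a polyline in latent space."""
    deltas = points[1:] - points[:-1]
    midpoints = 0.5 * (points[1:] + points[:-1])
    return sum(d @ metric(m) @ d for d, m in zip(deltas, midpoints))

z0, z1 = torch.tensor([-2.0, 0.0]), torch.tensor([2.0, 0.0])
# Interior points are the optimization variables; the endpoints stay fixed.
interior = torch.stack([z0 + (t / 9.0) * (z1 - z0) for t in range(1, 9)]).requires_grad_(True)
optimizer = torch.optim.Adam([interior], lr=1e-2)
for _ in range(500):
    optimizer.zero_grad()
    energy = curve_energy(torch.cat([z0[None], interior, z1[None]]))
    energy.backward()
    optimizer.step()
# `interior` now traces an approximate geodesic between z0 and z1.
```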
[ -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "fnx0M2So99v", "TthC12cx8D", "Haq7LVvG7mv", "6baQMHgPFD_", "3JXdgqEgqoJ", "nips_2022_pGLFkjgVvVe", "nips_2022_pGLFkjgVvVe", "nips_2022_pGLFkjgVvVe", "nips_2022_pGLFkjgVvVe" ]
nips_2022_cJ006qBE8Uv
Adversarial Unlearning: Reducing Confidence Along Adversarial Directions
Supervised learning methods trained with maximum likelihood objectives often overfit on training data. Most regularizers that prevent overfitting look to increase confidence on additional examples (e.g., data augmentation, adversarial training), or reduce it on training data (e.g., label smoothing). In this work we propose a complementary regularization strategy that reduces confidence on self-generated examples. The method, which we call RCAD (Reducing Confidence along Adversarial Directions), aims to reduce confidence on out-of-distribution examples lying along directions adversarially chosen to increase training loss. In contrast to adversarial training, RCAD does not try to robustify the model to output the original label, but rather regularizes it to have reduced confidence on points generated using much larger perturbations than in conventional adversarial training. RCAD can be easily integrated into training pipelines with a few lines of code. Despite its simplicity, we find on many classification benchmarks that RCAD can be added to existing techniques (e.g., label smoothing, MixUp training) to increase test accuracy by 1-3% in absolute value, with more significant gains in the low data regime. We also provide a theoretical analysis that helps to explain these benefits in simplified settings, showing that RCAD can provably help the model unlearn spurious features in the training data.
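The abstract notes that RCAD "can be easily integrated into training pipelines with a few lines of code." As a hedged sketch of what such a loss could look like, following the form of Eqns. 1-2 quoted later in the author responses (the toy model, batch shapes, and default values of alpha and lambda are ours, not the paper's):

```python
import torch
import torch.nn.functional as F

def rcad_loss(model, x, y, alpha: float = 1.0, lam: float = 0.1) -> torch.Tensor:
    """Cross-entropy on (x, y) plus entropy maximization at large-step adversarial points."""
    ce = F.cross_entropy(model(x), y)

    # Self-generate out-of-distribution points with one large gradient step
    # along the direction that increases the training loss.
    x_adv = x.detach().clone().requires_grad_(True)
    nll = F.cross_entropy(model(x_adv), y)   # -log p_w(y|x)
    grad = torch.autograd.grad(nll, x_adv)[0]
    x_ood = (x_adv + alpha * grad).detach()  # alpha far larger than in adversarial training

    # Reduce confidence: maximize predictive entropy on the generated points.
    log_p = F.log_softmax(model(x_ood), dim=-1)
    entropy = -(log_p.exp() * log_p).sum(dim=-1).mean()
    return ce - lam * entropy                # minimizing this maximizes the entropy term

# Illustrative usage with a toy classifier.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
rcad_loss(model, x, y).backward()
```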
Accept
All reviewers have expressed a clear opinion in favour of acceptance, one improving their score after the rebuttal and discussion. I’m happy to recommend acceptance.
train
[ "LZcCVKCWf9j", "foX3bQsYJNF", "87-7efj3Lty", "qA_eRffTCKDI", "19T43H5Xd_e", "DTNo9rXE_nF", "aliucaa53Lq", "bO3cWpPad5I", "-jBnMNFNrtc", "UFUX8uuN2I", "krC2iE9Ro4D", "2qejYrxrXtA", "iGD2qWm9S5k", "Xq1hI-Nxb8T", "mebmQRVJUx8", "xD9cONEUN_f" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The additional results on VAT partly address my concern and I have increased my score. But the effectiveness of the proposed method still worries me given the marginal improvement.", " Dear Reviewer,\n\nThank you for the suggestions for improving the paper. We have additional experiments comparing RCAD with the baseline VAT in a semi-supervised setting. As mentioned in our response, we have edited our submission with these experiments that we believe further strengthen the paper. Together with the discussion on Imagenet results and comparisons with adversarial data augmentation methods, **have all the concerns been addressed?** If not, we would be happy to continue the discussion and/or revise the paper.", " Dear Reviewer,\n\nThank you for the suggestions for improving the paper. As we mention in our response, we already have in our submission experimental results on the adversarial robustness of models trained with our objective (RCAD). We also point to a section of our paper that justifies RCAD's method of generating OOD examples. We hope that **this addressed your concerns**. If not, we would be happy to continue the discussion and/or revise the paper.", " We hope that our answers in the author response and the running time comparisons we added have addressed the concerns raised in the review. **We would be happy to continue the discussion if the reviewer has additional questions or concerns.**", " We hope that the Imagenet results, additional experiments on VAT, and comparisons with adversarial data augmentation methods have addressed the concerns raised in the review. **We would be happy to continue the discussion if the reviewer has additional questions or concerns.**", " Thanks for the additional feedback! We have revised the related work section (L83) to highlight how Besnier successfully uses adversarial examples for OOD detection. **Please let us know if there are additional questions or concerns that we can address**.", " I thank the authors for their response and revisions. After reading the authors response and other reviews, I maintain my rating. I agree with the authors that the way the generated OOD points are _used_ is different in Besnier et al.. At the same time, I encourage the authors to give credit to this earlier work for introducing the adversarial method for _generating_ OOD points. Standard adversarial training methods, on the hand, aim to generate _in-distribution_ points, effectively augmenting the training set.", " Thanks for your response. It has solved my question. I would like to keep rating 7.", " We thank the reviewer for their detailed feedback and comments.\n\n> Influence of hyper-parameter $\\lambda$\n\nRCAD is generally quite robust to the choice of the hyperparameter $\\lambda$ (compared to $\\alpha$), with values in [0.07, 0.12] all producing test performance significantly better than ERM. The table below shows that for CIFAR-100-2k the performance varies from 28.5% to 30.6%, which is a shorter range compared to the performance range (27.0% to 30.5%) observed when ablating on $\\alpha$ in Fig 3b. When the dataset size is large we find that a much smaller value of $\\lambda=0.02$ is sufficient since not much regularization is needed (Appendix B, L961). 
\n\n| $\\lambda$ | Test Accuracy on CIFAR-100-2k |\n|----------:|:-----------------------------:|\n| 0.00 | $28.5 \\pm 0.12$% |\n| 0.01 | $28.5 \\pm 0.09$% |\n| 0.02 | $28.7 \\pm 0.10$% |\n| 0.03 | $28.6 \\pm 0.11$% |\n| 0.04 | $29.2 \\pm 0.12$% |\n| 0.05 | $29.3 \\pm 0.11$% |\n| 0.06 | $29.8 \\pm 0.08$% |\n| 0.07 | $29.6 \\pm 0.09$% |\n| 0.08 | $30.2 \\pm 0.10$% |\n| 0.09 | $30.5 \\pm 0.08$% |\n| 0.10 | $30.4 \\pm 0.09$% |\n| 0.11 | $30.6 \\pm 0.07$% |\n| 0.12 | $30.3 \\pm 0.08$% |\n| 0.13 | $30.0 \\pm 0.10$% |\n| 0.14 | $29.9 \\pm 0.08$% |\n| 0.15 | $29.7 \\pm 0.11$% |\n\nWe report the mean and 95% confidence intervals over 10 independent runs and for this controlled experiment, we fix $\\alpha=1.0$.", " We thank the reviewer for the detailed feedback and comments. We respond to their two questions individually, in addition to the comment on using adversarial examples as OOD points. As suggested by the reviewer, to improve the readability of the theory section we will move some of the intermediate lemmas in Sec 5 to the Appendix. \n\n> “Sweet spot” for the value of $\\alpha$\n\nWe find that the optimal choice of $\\alpha$ depends more on the size of the training data than the choice of the dataset. For low-data regimes (like CIFAR-100-2k), a higher value of $\\alpha=1.0$ is needed to exacerbate the noisy features. When the training data size is large enough (e.g., CIFAR-100, Tiny Imagenet) a smaller value of $\\alpha=0.5$ performs better. Note that $\\alpha=0.5$ gives good results on all large datasets (Appendix B, L962). To study this further, we furnish results in Fig 3b over two additional datasets in the high-data regime: CIFAR-100 and Tiny Imagenet, where we find that the optimal $\\alpha$ is close to $0.5$. We show the mean and 95% confidence intervals over 5 independent runs. \n\n| $\\alpha$ | Test Accuracy on CIFAR-100 | Test Accuracy on Tiny Imagenet |\n|---------:|:--------------------------:|:------------------------------:|\n| 0.1 | $72.4 \\pm 0.07 $% | $63.8 \\pm 0.08 $% |\n| 0.2 | $75.2 \\pm 0.08 %$% | $64.1 \\pm 0.10 $% |\n| 0.3 | $74.9 \\pm 0.07 %$% | $64.0 \\pm 0.11 $% |\n| 0.4 | $77.4 \\pm 0.09 %$% | $67.0 \\pm 0.07 $% |\n| 0.5 | $77.3 \\pm 0.07 %$% | $67.3 \\pm 0.09 $% |\n| 0.6 | $76.9 \\pm 0.09 %$% | $67.5 \\pm 0.11 $% |\n| 0.7 | $77.0 \\pm 0.08 %$% | $67.2 \\pm 0.10 $% |\n| 0.8 | $76.8 \\pm 0.07 %$% | $67.0 \\pm 0.10 $% |\n| 0.9 | $76.6 \\pm 0.08 %$% | $66.7 \\pm 0.08 $% |\n\n> Relation to max-margin classifiers\n\nWhile in some settings the classifier learned by RCAD is well aligned with the max-margin classifier, in other settings like our analysis in Sec 5 the ERM solution converges to the max-margin classifier that depends on the spurious feature while RCAD recovers a classifier with a smaller dependence on the spurious feature (see Fig 5a).\n\nSoudry et al. [46] show that homogenous linear predictors trained with SGD converge to the direction of the max-margin classifiers on linearly separable datasets. In our toy setting, the training data is indeed linearly separable (Fig 5a). Additionally, we train ERM using a margin based loss (L282). Thus, ERM itself recovers the max-margin classifier on the 9 dimensional input data (same as the one returned by SVM solver). However, note that here the max-margin solution relies on the spurious features $x_2, \\ldots, x_9$ to maximize the margin. So the SVM solver would fail to recover the true generalizing solution of $\\mathbf{w}^* = [1, 0, \\ldots, 0]^\\top$, which has a smaller margin. 
RCAD uses the self-generated examples to identify the spurious features that lie along uncorrelated directions; maximizing entropy on these examples biases the solution of RCAD away from the poorly generalizing max-margin solution, towards the optimal $\mathbf{w}^*$.\n\n\n> Using adversarial examples as OOD points (Besnier et al.)\n\nWhile adversarial examples have indeed been applied in many applications [11, 18], we believe that ours is the first work to maximize a model’s uncertainty on heavily perturbed adversarial examples, in a way that improves in-distribution test performance. For comparison, Besnier et al. use adversarial examples in a different way, as proxy OOD examples for the OOD detection module. While we generate the adversarial examples with large perturbations, making them out-of-distribution, we instead use them to amplify spurious correlations that need to be unlearnt, as opposed to improving uncertainty quantification or detection under test-time distribution shifts. ", " We thank the reviewer for the detailed feedback. As we discuss below, results on the adversarial robustness of RCAD and a comparison with existing regularizers are already present in our submission, and we are adding clarifications on running time comparisons. Please let us know if our revisions and clarifications address all of the major issues, or if there are any other concerns. We look forward to continuing the discussion.\n\n> Justify the way to generate out-of-distribution (OOD) examples.\n\nWe generate out-of-distribution examples via large adversarial perturbations because: i) empirically we found it to work better than alternative approaches; and ii) it is supported by our theoretical findings.\n\n[Alternative approaches] In our initial experiments, we tried to maximize entropy on OOD examples generated in two different ways: i) random samples from a uniform prior over 3-channel images $[-1, 1]^3$; and ii) interpolating examples from same/different classes. The first approach made no difference in test accuracy. The second is similar to Mix-MaxEnt [36] (see our discussion in Sec 2, L100). While their self-generation logic is geared towards improving *out-of-distribution* uncertainty quantification, ours is focused on improving *in-distribution* test performance. The reviewer’s suggestion of using random samples from other classes is similar to a special case of RCAD when we set $\alpha=0$, and as we see in Fig 4b this choice of $\alpha$ is sub-optimal.\n\n[Theoretical findings] We generate OOD examples by taking *large adversarial* steps for two reasons: i) adversarial examples amplify spuriously learnt noisy features [18]; and ii) maximizing entropy on all examples that contain a specific feature would prevent the model from using that feature for prediction. We detail this logic in Sec 3 (L153) and verify it theoretically in Sec 5. With regards to the need for the large step-size, note that in Theorem 5.4 we show that unlearning happens when $\alpha$ is large enough, tracking our empirical findings in Fig 4b (L247).\n\n\n> Check how vulnerable the trained model is to adversarial attacks.\n\nOur original submission already had experiments verifying the adversarial robustness of RCAD to FGSM attacks in Appendix C.1 (Tab 6). We find that RCAD improves test performance without decreasing adversarial robustness. We compare RCAD’s adversarial robustness with ADA, ME-ADA and standard adversarial training, all of which explicitly robustify the model’s predictions to FGSM attacks. 
\n\nOn CIFAR-100-2k (low data regime), we find that RCAD not only improves over all baselines by $\geq 1$% on the unperturbed (clean) test set, it also has the best performance, by $\geq 0.5$%, on test examples adversarially perturbed with FGSM attacks. Indeed, on CIFAR-10 and CIFAR-100 (high data regime) we find that while RCAD still has the best test accuracy on clean data, ME-ADA has better performance on adversarial test inputs. But RCAD certainly improves over ERM’s test performance on adversarial examples by about $4-6$% on both these datasets. \n\nAs suggested, we will make it more clear in the introduction that RCAD’s main objective is to improve test performance, not adversarial robustness, even though it employs adversarial examples for it.\n\n\n> Discuss the difference from contrastive learning with data augmentation.\n\nWe thank the reviewer for the interesting connection and suggestion on comparing our method with ideas from the self-supervised literature like contrastive learning on image augmentations. We believe that while both RCAD and contrastive learning with data augmentation aim to improve the quality of the learned features, RCAD does so without requiring data augmentation, which often requires domain knowledge to perform. This is especially highlighted in our results on regression tasks (Fig 4c, L252) where we seamlessly apply our RCAD objective without any specific knowledge of the underlying task.\n\n> Running time comparison.\n\nRCAD increases the running time by 30% compared to ERM (L367), which is much faster than the alternative adversarial data augmentation baselines ADA and ME-ADA, which increase running time by 160% and 185% (over ERM), respectively. We add discussion on this in Appendix B.\n\n> Generalization performance comparison with existing regularizers.\n\nOur main paper already studies how RCAD improves test accuracy in comparison with and in addition to other regularization methods (Tab 2). We compare RCAD with standard data augmentation, label smoothing, advanced augmentation methods like CutOut/CutMix, and MixUp training. Please let us know if you feel there is any particularly effective regularizer that would further improve our study.\n", " We thank the reviewer for the detailed feedback and comments. The two main concerns seem to be: i) experimental results on more realistic benchmarks like ImageNet; and ii) comparison with semi-supervised methods like VAT in low-data regimes. As we discuss below, we already included experiments on ImageNet in the submission, and we are also adding new comparisons to VAT. Please let us know if our revisions and clarifications address all of the major issues, or if there are any other concerns. We look forward to continuing the discussion.\n\n> Imagenet Results\n\nThe original submission had results on Imagenet (Appendix B, Tab 5), finding that RCAD improves the top-1 accuracy by 0.31%. We are not surprised that the gains are smaller in this high-data regime since the relative improvement from RCAD is more pronounced when training data is limited (Sec 4.3, Fig 3b). Nevertheless, because the Imagenet benchmark is competitive, even small, statistically significant benefits have been recognized as important contributions [AR1, AR2, AR3]. \n\n[AR1] Xie, Qizhe, et al. \"Unsupervised data augmentation for consistency training.\" Advances in Neural Information Processing Systems 33 (2020): 6256-6268.\n\n[AR2] Tsipras, Dimitris, et al. 
\"From Imagenet to image classification: Contextualizing progress on benchmarks.\" International Conference on Machine Learning. PMLR, 2020.\n\n[AR3] List of pre-trained models on ImageNet along with validation accuracies: https://github.com/Cadene/pretrained-models.pytorch\n\n> Comparisons with semi-supervised baseline: Virtual Adversarial Training (VAT)\n\nAs suggested by the reviewer, we adapted RCAD to the semi-supervised setting and compared it to VAT, finding that RCAD improves over VAT by 0.29% and 0.17% on CIFAR-100-2k and CIFAR-100-10k, respectively (see table below). We have added this result and corresponding discussion to Appendix E. \n \nWe run VAT on the subsampled CIFAR training sets: CIFAR-100-2k and CIFAR-100-10k, with 2k and 10k training samples, respectively. The rest of the CIFAR-100 training data is treated as unlabeled data, on which VAT performs adversarial training treating the trained model’s predicted label as the true one. Analogous to this, the semi-supervised version of our RCAD objective optimizes the objective in Eqn. 2 (L144) on the labeled set $\hat{\mathcal{D}}_l$, but for each unlabeled example (in $\hat{\mathcal{D}}_u$) we simply maximize entropy on its corresponding OOD example generated using Eqn. 1 (L141), with the main difference being that we treat the model’s predicted label $\hat y$ as the true one (similar to VAT). Thus, the semi-supervised RCAD objective is given by:\n\n\n$\n\arg\max_{\mathbf{w} \in \mathcal{W}} \mathbb{E}_{\hat{\mathcal{D}}_l} [ \log p_\mathbf{w} (y \mid \mathbf{x}) + \lambda \cdot \mathcal{H}_\mathbf{w} ( \mathbf{x} - \alpha \cdot \nabla_\mathbf{x} \log p_\mathbf{w} (y \mid \mathbf{x}) ) ] + \lambda \cdot \mathbb{E}_{\hat{\mathcal{D}}_u} [ \mathcal{H}_\mathbf{w} ( \mathbf{x} - \alpha \cdot \nabla_\mathbf{x} \log p_\mathbf{w} (\hat y \mid \mathbf{x}) ) ] \n$\n\nwhere $\hat y = \arg\max_{y \in \mathcal{Y}} p_{\mathbf{w}}(y \mid \mathbf{x})$.\n\n\n| | CIFAR-100-2k | CIFAR-100-10k |\n|---------------------:|:------------------:|:------------------:|\n| VAT | 32.74 $\pm$ 0.06% | 62.81 $\pm$ 0.06% |\n| semi-supervised RCAD | 33.03 $\pm$ 0.05% | 62.98 $\pm$ 0.05% |\n\n\n\nThe 95% confidence intervals are obtained by evaluating the test performance of models on 10 independent runs for both methods. We trained VAT using the original paper authors’ implementation and set hyperparameters $\alpha, \epsilon$ (see Eqns. 5, 8 in Miyato et al.) to $\alpha=1.0, \epsilon=8.0$ for both datasets. RCAD hyperparameters were set to $\alpha=0.7, \lambda=0.1$ for CIFAR-100-2k and $\alpha=0.5, \lambda=0.02$ for CIFAR-100-10k. For both methods, the respective hyperparameters were tuned using a held-out validation set.\n\n> Proposed method resembles the existing methods in adversarial data augmentation\n\nWe are unaware of any method that improves test performance by maximizing uncertainty on generated examples, whether those examples are generated randomly, through data augmentation, or via adversarial perturbations (adversarial data augmentation). Adversarial data augmentation methods [1,2,3] *minimize* uncertainty (entropy) on the self-generated examples, whereas RCAD *maximizes* uncertainty. Our experimental results confirm that RCAD outperforms these prior methods that minimize uncertainty (Fig 3a).\n\n\n", " This paper proposes a regularization method for improving generalization in maximum likelihood learning. 
The main idea is to reduce confidence (or increase entropy) on inputs generated by large adversarial perturbations. The authors conduct extensive experiments to demonstrate the effectiveness of their method. They also analyze the effectiveness of their method by showing it can unlearn noisy features in a toy setting. This paper is overall well-organized and clearly written. As the main contribution of this paper is to propose a new regularization method, the novelty might be minor, since the proposed method resembles the existing methods in adversarial data augmentation [1,2,3], albeit with the difference that this paper proposes to \"unlearn\" examples under large adversarial perturbation instead of \"learn\" examples under small adversarial perturbation. \n\nMy biggest concern in this paper is that the experimental results may not be sufficient to demonstrate the effectiveness of the proposed method. Since the authors aim to improve generalization in a very general sense instead of robustness against perturbation or domain shift, experiments on large datasets such as ImageNet would be necessary. I am not quite certain about the effectiveness of the proposed method in such a more realistic setting, as the improvements on full datasets such as CIFAR-10 and CIFAR-100 are already marginal. The authors also emphasize that the proposed method is most effective in low-data regimes. In this regard, I believe it would be necessary to showcase the proposed method in a semi-supervised setting with unlabeled data available. \n\n[1] Virtual adversarial training: a regularization method for supervised and semi-supervised learning. Miyato et al., 2018.\n\n[2] Generalizing to unseen domains via adversarial data augmentation. Volpi et al., 2018.\n\n[3] Maximum-entropy adversarial data augmentation for improved generalization and robustness. Zhao et al., 2020. The authors compare their method with baselines including those from adversarial training such as FGSM, and those from domain adaptation such as ADA and ME-ADA. But as far as I am concerned, none of these methods is specifically designed for standard in-domain generalization. FGSM is for improving adversarial robustness, while ADA and ME-ADA are for domain generalization. Why didn't the authors compare their method to VAT [1], which also proposes to perform data augmentation with adversarial perturbation, and aims for generalization in standard supervised and semi-supervised settings?\n\n[1] Virtual adversarial training: a regularization method for supervised and semi-supervised learning. Miyato et al., 2018. The limitations have been properly discussed in Section 6.
The approach of generating out-of-distribution examples from large-step-size adversarial examples needs to be justified. Why is it a proper way to generate such examples? How does it compare with other generation methods, e.g., random samples in other classes?\n\n2. It would be better to point out that the objective is not to improve adversarial robustness, although adversarial examples are employed for training. This could help avoid confusion. Although the proposed method does not try to robustify the model, it is unclear if the proposed method improves the test accuracy at the cost of adversarial robustness. Corresponding experiments could be designed to address this concern.\n\n3. The connection of the proposed method to some other ideas such as data augmentation and contrastive learning could also be discussed and compared if relevant.\n\n4. Running time comparison could be added to see how much cost is paid for the performance gains.\n\n5. It would also be interesting to see how the proposed regularizer compares with the existing ones with respect to generalization performance.\n 1. Justify the way to generate out-of-distribution examples.\n2. Check how vulnerable the trained model is to adversarial attacks.\n3. Discuss the difference from contrastive learning with data augmentation.\n4. Running time comparison.\n5. Comparison with existing regularizers. Yes.", " The paper proposes RCAD, a simple regularization scheme for predictive models that aims to increase the predictive entropy for OOD points. OOD points are generated adversarially using FGSM with a large step size. The authors demonstrate that the method reliably improves the test accuracy on a set of image classification tasks, as well as tabular regression tasks. The improvement is more significant when less data is available. To understand why the method works, the authors study it on a toy linear classification task. Strengths:\n- The method is clearly introduced, and carefully compared to prior work.\n- Statistically significant improvement in test accuracy (up to 3 percentage points) across the benchmarks (image classification and regression on UCI data) when RCAD is employed, compared to multiple strong baselines.\n- Ablations to understand the impact of the adversarial step size and model architecture.\n- Theoretical analysis for the linear case that motivates the method.\n\nWeaknesses:\n- The idea to use adversarial optimization to find OOD points is of limited novelty (see \[a\]). To the best of my knowledge, however, it has not been previously studied theoretically, or evaluated on such a broad set of benchmarks.\n- The analysis on the linear case is sometimes hard to follow. Consider moving more detail into the appendix, to give the narrative more breathing space.\n\nReferences:\n- \[a\]: Besnier, V., Bursuc, A., Picard, D., & Briot, A. (2021). Triggering Failures: Out-Of-Distribution detection by learning from local adversarial attacks in Semantic Segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 15701-15710).\n - Does the \"sweet spot\" for the $\alpha$ value that we see in Figure 4b change significantly with the dataset? It would be useful to understand how tricky it is to get this parameter \"right\" for a novel task.\n- Have the authors considered the relation of the method to max-margin classifiers like the SVM? I wonder if the two approaches recover similar decision boundaries. A comparison on the toy dataset would be interesting. 
Authors discuss the impact of the additional regularization term on training time, as well as the assumptions made by the analysis. I consider the potential negative societal impact of this work to be limited.", " This paper proposes to increase the test accuracy by maximizing the entropy of out-of-distribution examples that are adversarially generated, which is named reducing confidence along adversarial directions (RCAD). Empirical results support the claim that RCAD can improve test accuracy. In addition, the authors theoretically analyze the effectiveness of RCAD in a linear case. \n Strengths\n+ This paper is well written and organized. I can follow it easily.\n+ The proposed method is simple but effective in improving test accuracy. The proposed method is an interesting case that leverages adversarial attacks for good. \n+ Experiments are comprehensive and sound. The results validate that RCAD can be compatible with various methods and indeed enhance the test accuracy. \n+ The theoretical results explain to some extent why RCAD helps test accuracy (although I have not carefully checked the correctness of the theoretical results): RCAD can regularize the model to unlearn the spurious features. + Could I know the influence of the hyperparameter $\lambda$, since it seems to be an important hyperparameter in RCAD?\n The authors have pointed out the limitation of their proposed method. I have no idea how to accelerate RCAD so far.\n" ]
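To make the semi-supervised comparison with VAT from the author responses above concrete, here is a hedged sketch of the displayed semi-supervised RCAD objective: entropy is maximized at large-step adversarial points for both labeled data and pseudo-labeled unlabeled data. The model, shapes, and default values are illustrative, not the authors' configuration.

```python
import torch
import torch.nn.functional as F

def entropy_on_ood(model, x, y, alpha: float) -> torch.Tensor:
    """H_w(x - alpha * grad_x log p_w(y|x)): entropy at a large-step adversarial point."""
    x_adv = x.detach().clone().requires_grad_(True)
    nll = F.cross_entropy(model(x_adv), y)        # -log p_w(y|x), so +grad ascends the loss
    grad = torch.autograd.grad(nll, x_adv)[0]
    log_p = F.log_softmax(model((x_adv + alpha * grad).detach()), dim=-1)
    return -(log_p.exp() * log_p).sum(dim=-1).mean()

def semi_supervised_rcad_loss(model, x_l, y_l, x_u, alpha: float = 0.7, lam: float = 0.1):
    ce = F.cross_entropy(model(x_l), y_l)
    h_labeled = entropy_on_ood(model, x_l, y_l, alpha)
    y_hat = model(x_u).argmax(dim=-1).detach()    # pseudo-labels on unlabeled data, as in VAT
    h_unlabeled = entropy_on_ood(model, x_u, y_hat, alpha)
    return ce - lam * (h_labeled + h_unlabeled)   # minimizing maximizes both entropy terms

# Toy usage.
model = torch.nn.Linear(20, 5)
x_l, y_l = torch.randn(4, 20), torch.randint(0, 5, (4,))
x_u = torch.randn(8, 20)
semi_supervised_rcad_loss(model, x_l, y_l, x_u).backward()
```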
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 3 ]
[ "19T43H5Xd_e", "19T43H5Xd_e", "qA_eRffTCKDI", "krC2iE9Ro4D", "2qejYrxrXtA", "aliucaa53Lq", "UFUX8uuN2I", "-jBnMNFNrtc", "xD9cONEUN_f", "mebmQRVJUx8", "Xq1hI-Nxb8T", "iGD2qWm9S5k", "nips_2022_cJ006qBE8Uv", "nips_2022_cJ006qBE8Uv", "nips_2022_cJ006qBE8Uv", "nips_2022_cJ006qBE8Uv" ]