paper_id: stringlengths (19 to 21)
paper_title: stringlengths (8 to 170)
paper_abstract: stringlengths (8 to 5.01k)
paper_acceptance: stringclasses (18 values)
meta_review: stringlengths (29 to 10k)
label: stringclasses (3 values)
review_ids: sequence
review_writers: sequence
review_contents: sequence
review_ratings: sequence
review_confidences: sequence
review_reply_tos: sequence
nips_2022_X8mmH03wFlD
Understanding the Failure of Batch Normalization for Transformers in NLP
Batch Normalization (BN) is a core and prevalent technique for accelerating the training of deep neural networks and improving generalization on Computer Vision (CV) tasks. However, it fails to defend its position in Natural Language Processing (NLP), which is dominated by Layer Normalization (LN). In this paper, we try to answer why BN usually performs worse than LN in NLP tasks with Transformer models. We find that the inconsistency between training and inference of BN is the leading cause of the failure of BN in NLP. We define Training Inference Discrepancy (TID) to quantitatively measure this inconsistency and reveal that TID can indicate BN's performance, supported by extensive experiments, including image classification, neural machine translation, language modeling, sequence labeling, and text classification tasks. We find that BN can obtain much better test performance than LN when TID remains small throughout training. To suppress the explosion of TID, we propose Regularized BN (RBN) that adds a simple regularization term to narrow the gap between batch statistics and population statistics of BN. RBN improves the performance of BN consistently and outperforms or is on par with LN on 17 out of 20 settings, including ten datasets and two common variants of Transformer.
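To make the regularizer described in the abstract concrete, here is a minimal PyTorch-style sketch of a BN layer that penalizes the gap between batch statistics and the running (population) statistics. The squared-distance penalty and the weight `lam` are illustrative assumptions, not the exact RBN formulation, which is defined in the paper.

```python
import torch
import torch.nn as nn

class RegularizedBN1d(nn.Module):
    """Sketch of BN plus a penalty pulling batch statistics toward the
    running (population) statistics. The penalty form is an assumption
    for illustration, not the paper's exact regularizer."""
    def __init__(self, num_features, lam=0.1):
        super().__init__()
        self.bn = nn.BatchNorm1d(num_features)
        self.lam = lam
        self.reg = torch.tensor(0.0)  # refreshed on every training forward pass

    def forward(self, x):  # x: (batch, num_features)
        if self.training:
            mu_b = x.mean(dim=0)
            var_b = x.var(dim=0, unbiased=False)
            # Narrow the gap between batch and population statistics.
            # running_mean/running_var are buffers, so gradients flow
            # only through the batch statistics.
            self.reg = self.lam * ((mu_b - self.bn.running_mean).pow(2).mean()
                                   + (var_b - self.bn.running_var).pow(2).mean())
        return self.bn(x)
```

The per-layer penalties would then be summed into the training objective, e.g. `loss = task_loss + sum(m.reg for m in model.modules() if isinstance(m, RegularizedBN1d))`.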
Accept
The paper studies the reason why Batch Normalization is not effective in NLP tasks. The authors find that the inconsistency between training and inference leads to the failure. They define Training Inference Discrepancy (TID) to measure the inconsistency and show that BN can obtain better performance when TID is small. The authors propose Regularized BN with an additional regularization term. Experiments show RBN is better than plain BN and comparable to Layer Normalization. The authors may want to incorporate the additional analysis from the feedback.
train
[ "LY8v6fUCnUi", "QKnk09ZZnv", "2L3dNc_8gD", "1GGRNny_7lx", "Y0jiNoVtkgu", "HrwmgEeXjEz", "9SdFLPP-PGE", "-XkpWWrYByv" ]
[ "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Sure, we will add the additional results to the revised version. Thanks!", " The additional experimental results should be included in the revised version to make this work more convincing.", " We thank the reviewer for the encouraging and insightful comments. ", " We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific questions below. \n\n**Question 1**: Can you explain why the performance of BN on language modeling tasks is already better than LN? \n\n**Response:** Language modeling is essentially an classification task with large number of classes (number of words in vocabulary). The data, especially wikitext 103 dataset, is relatively hard to be fitted. Thus, optimization plays a more important role in language modeling task. BN offers better optimization performance and properties compared to LN, but has a lack of training inference consistency. With our quantitative definition of training inference inconsistency, we find that BN has very small inconsistency on language modeling tasks. Thus, BN exert its advantage on optimization and performs better than LN on language modeling tasks. \n\n \n\n**Question 2:** Can you explain why Transformer_BN has a much bigger mean and variance deviation than ResNet18? \n\n**Response:** We think both data and architecture contribute to the large deviation. If we use BN in Vision Transformer (ViT-BN), the mean and variance deviation of ViT-BN fall between Transformer_BN and ResNet18. \n\n\n**Question 3:** Can you compare your RBN with Powernorm?\n\n**Response:** We compare the performance of RBN with Batch Renormalization (BRN) [14], Moving Averaing Batch Normaliazation (MABN) [44], and PowerNorm [36]. \nBRN and MABN are helpful to decrease training inference inconsistency in small batch setting in CV tasks. We highlight that PowerNorm incorporates a scaling layer (the root mean square layer normalization mehtod [48]), which is important for stabilizing training, as shown in the supplementary materials and official code of PowerNorm. We thus also compare PowerNorm-only and PowerNorm+layerscaing. \nFrom the results shown in following table, we can see that RBN performs the best in most settings. PN is not stable without layer scaling. 
\n\n| norm\\dataset | IWSLT14 | WMT16 | PTB | WT103 |\n| :------------------------: | :------: | :------: | :------: | :------: |\n| Post-RBN | 35.5 | **26.5** | **44.6** | **17.1** |\n| Post-PowerNorm-only | 0 | 0 | 254.6 | inf |\n| Post-PowerNorm+layerscaling | **35.6** | 0 | 49.8 | 21.0 |\n| Post-BRN | 35.3 | 24.8 | 45.1 | 17.3 |\n| Post-MABN | 0 | 0 | 47.4 | 33.6 |\n| | | | | |\n| Pre-RBN | **35.6** | 26.2 | **43.2** | **17.1** |\n| Pre-PowerNorm-only | 34.5 | 26.0 | 48.6 | inf |\n| Pre-PowerNorm+layerscaling | **35.6** | **27.2** | 59.8 | 20.9 |\n| Pre-BRN | 35.2 | 25.3 | 45.7 | 17.5 |\n| Pre-MABN | 35.0 | 25.8 | 48.7 | inf |\n\n| norm\\dataset | Resume | CoNLL | IMDB | Sogou | DBPedia | Yelp |\n| :------------------------: | :------: | :------: | :------: | :------: | :------: | :------: |\n| Post-RBN | **94.8** | **91.4** | **84.5** | **94.7** | **97.6** | **93.6** |\n| Post-PowerNorm-only | 94.4 | 67.1 | 84.2 | 90.6 | 97.1 | 89.6 |\n| Post-PowerNorm+layerscaling | 94.3 | 90.9 | 84.0 | 94.6 | 97.4 | 93.2 |\n| Post-BRN | 93.6 | 89.9 | 83.6 | 94.5 | 97.5 | 93.3 |\n| Post-MABN | 94.4 | 90.8 | 84.1 | 94.5 | 97.5 | 93.5 |\n| | | | | | | |\n| Pre-RBN | 94.0 | 90.6 | **84.4** | **94.7** | **97.5** | **93.5** |\n| Pre-PowerNorm-only | 5.0 | 11.1 | 84.2 | 94.4 | 97.4 | 93.3 |\n| Pre-PowerNorm+layerscaling | 93.3 | 54.1 | 83.3 | 94.4 | 97.3 | 93.4 |\n| Pre-BRN | 94.1 | **91.1** | 84.3 | 94.5 | 97.4 | 93.4 |\n| Pre-MABN | **94.8** | 90.9 | **84.4** | 94.6 | **97.5** | 93.3 |\n\nWe only run one seed for PowerNorm, BRN, and MABN. \nFor PowerNorm, we follow their source code. We use 4000 warm-up steps, and set the forward and backward momentum to $0.9$. \nFor BRN, we use one epoch of BN as warmup and linearly increase $r$ to $3$ and $d$ to $5$, which is the same as in the BRN paper. $r$ and $d$ are renormalizing factors. \nFor MABN, we use $16$ mini-batches to compute simple moving average statistics and momentum $\\alpha=0.98$ to compute exponential moving average statistics, which is the same as in the MABN paper. \n \nThank you for suggesting the comparison to other methods. We will include the results in our final manuscript.", " We thank the reviewer for the encouraging and insightful comments. Please find our responses to specific questions and concerns below. \n\n**Question 1:** Does this finding invalidate the PowerNorm findings? Or is it simply an added observation? \n\n**Response:** PowerNorm observes that the forward and backward statistics of BN are more diverse (unstable) in NLP than in CV. We further find that the diversity of the statistics of BN varies across different NLP tasks. \nOur definition of training inference discrepancy is scale invariant, while the deviation defined in PowerNorm depends on the scale of the input. PowerNorm defines the mean and variance deviation as $\\frac{1}{d}\\Vert \\mu -\\mu_{B}\\Vert$ and $\\frac{1}{d}\\Vert \\sigma^2 -\\sigma^2_{B}\\Vert$ in Figure 2 of the PowerNorm paper. BN is typically added after a linear transformation, i.e., $y=BN(Wx)$. The mean and variance deviation defined in PowerNorm will increase by $k$ and $k^2$ if we multiply the weight $W$ or feature $x$ by a factor of $k$ (a small numerical sketch of this scale dependence appears after this review block). Furthermore, we find that our defined TID can serve well as a performance indicator of BN. Even though our finding does not invalidate the PowerNorm finding, our work moves a non-trivial step beyond their finding. \n\n \n\n**Question 2:** TID seems to be clearly correlated with performance, however do you have any intuition on why this discrepancy occurs?
\n\n**Response:** When the empirical distribution of the batch data is distant from the population distribution, the batch statistics can be far from the population statistics and the TID will be large. In NLP, we think the word distribution of sentences can vary significantly. For instance, different sentences come from different topics, and each topic has its own word distribution. Large sentence variation makes it hard for the batch distribution to approximate the population distribution, leading to large TID. We also conjecture that the Transformer architecture contributes to the large TID of BN since LN, rather than BN, is widely used in various Vision Transformers for CV tasks. We leave this exploration as future work. \n\n \n\n**Question 3:** Furthermore, why is this discrepancy smaller on some tasks versus others? Is this something to do with distributional shift on the dataset or the nature of the task? \n\n**Response:** This discrepancy (TID) is defined on the training set and is irrelevant to the testing distribution; thus, it is not caused by the distributional shift between training and testing. We think the distributional shift is a factor that is orthogonal to TID. Testing BN with batch statistics could mitigate the distributional shift to some extent. We show the results in the supplementary material (Part E, Table 4). Testing BN with batch statistics can improve BN in certain settings, but it still performs worse than LN and RBN. We think different discrepancies stem from the nature of the tasks, e.g., data diversity and network architectures. \n\n \n\n**Concern 1:** It is unclear whether RBN is superior to LayerNorm. One claim related to its use in practice is that RBN tends to converge faster, however this is not supported directly. It would be nice to have. \n\n**Response:** One advantage of RBN over LayerNorm is that RBN inherits the merit of BN in optimization. The following table shows the training NLL loss of Post-Norm Transformer RBN and Transformer LN on IWSLT14 (top) and WMT16 (bottom). We can see that RBN trains faster than LN. Another advantage of RBN is that it does not introduce additional computation cost during inference (we can merge the population statistics into the linear transformation after training), while LN has to perform normalization again during inference for each input. \n\n| norm\\epoch | 5 | 10 | 30 | 50 |\n| :--------: | :--: | :--: | :--: | :--: |\n| LN | 5.33 | 2.97 | 2.67 | 2.38 |\n| RBN | 4.77 | 2.95 | 2.65 | 2.37 |\n\n\n| norm\\epoch | 5 | 10 | 15 | 20 |\n| :--------: | :--: | :--: | :--: | :--: |\n| LN | 3.41 | 3.04 | 2.87 | 2.83 |\n| RBN | 3.36 | 3.00 | 2.84 | 2.80 |\n\n \n\n**Missing citation:** We thank the reviewer and will add the missing citation in the revised manuscript. ", " This paper explores potential explanations for the lack of effectiveness of batch norm compared to LayerNorm. They find a phenomenon that correlates with this (TID) and propose a method to tackle this (RBN).\n\n---\n\nAfter response: Thank you for your response, I appreciate the additional clarifications and experiments!\n\n--- Pros:\n- Solid finding about the correlation between TID and BN\n- Good presentation\n\nCons:\n- Not much discussion on **why** this happens...which I feel would be a greater contribution for future work to build upon.\n- Furthermore, it is unclear whether RBN is superior to LayerNorm. One claim related to its use in practice is that RBN tends to converge faster, however this is not supported directly.
It would be nice to have.\n\n### Questions:\n- Does this finding invalidate the PowerNorm findings? Or is it simply an added observation? \n- TID seems to be clearly correlated with performance, however do you have any intuition on why this discrepancy occurs?\n- Furthermore, why is this discrepancy smaller on some tasks versus others? Is this something to do with distributional shift on the dataset or the nature of the task?\n\n### Missing Citations:\n[Understanding and Improving Layer Normalization](https://arxiv.org/pdf/1911.07013) They address the main limitation in their paper, I appreciate this.", " This paper attempts to explain the failure of Batch Normalization (BN) in NLP tasks. The authors find that the inconsistency between training and inference leads to the failure. They define Training Inference Discrepancy (TID) to quantitatively measure this inconsistency and show that BN can obtain better performance when TID remains small during training. Based on this observation, the authors propose Regularized BN that adds a regularization term of TID during training. Experiments show that RBN can improve the performance of BN and outperforms or is on par with layer normalization. The authors also conduct some analyses to show the advantages of their method. Strengths\n1.\tThis paper focuses on a well-known learning problem in the NLP field.\n2.\tThis paper finds an interesting observation from extensive experiments across different CV and NLP tasks.\n3.\tBased on their observation, the authors proposed a simple yet effective method.\n4.\tThis paper is well-written and easy to follow\n\nWeaknesses\n1.\tLack of theoretical explanation and guarantee. \n2.\tNo comparison with other related normalization methods, for example, Powernorm.\n\n\n\nNovelty:\nThis paper is novel. The failure of Batch Normalization in NLP models is a well-known learning problem in the NLP field. This paper is the first one to find that the inconsistency between training and inference and the performance gap between batch normalization and layer normalization are highly correlated. Based on this observation, the authors proposed a novel and simple method that can consistently improve the performance of batch normalization.\n\nSoundness:\nThe proposed method is empirically validated on different tasks and datasets. But there is no theoretical guarantee about the soundness of their method.\n\nSignificance:\nThe proposed RBN method can consistently improve the performance of Batch Normalization to be on par with Layer Normalization on different tasks and datasets.\nHowever, the authors didn't compare their method with other batch normalization methods, for example, Powernorm. They only discuss the difference in Section 2.\n\nPresentation:\nThis paper is well-organized. Starting from an observation, the authors raise a hypothesis and conduct extensive experiments to validate their hypothesis. Based on that, the authors propose a method that can significantly improve the performance. Finally, the authors conduct analyses on the model trained with their method to validate their understanding.\n This paper can be further improved if the authors could provide more explanation about their observation.
Here are some questions that I hope the authors can answer:\n1.\tCan you explain why the performance of BN on language modeling tasks is already better than LN?\n2.\tCan you explain why Transformer_BN has a much bigger mean and variance deviation than ResNet18?\n3.\tCan you compare your RBN with Powernorm?\n The authors mention that they don't have a theoretical guarantee about their method in the Limitation section. I agree with it. But I think this work contains sufficient empirical findings that can benefit the community. \n\nThere is no potential negative social impact of this work.\n", " The authors begin by relating the failure of BN in NLP models. They define Training Inference Discrepancy (TID) to quantitatively measure this inconsistency and show that TID can serve as an indicator of BN's performance. \nIn particular, BN reaches much better test performance than LN when TID remains small throughout training (e.g., in the CV case). \nThey propose Regularized BN (RBN) that adds a regularization term in BN to penalize and reduce the TID when the TID of BN is large.\nIn experiments, RBN can outperform BN in a lot of settings, and outperform LN in some settings. Strength:\nThe proposed approach is novel and the point of view is convincing.\nIt's good to address the BN mystery in NLP.\n\nWeakness:\nNo theory advancement.\nEmpirical improvements are not large. I don't have any question at this point. Yes." ]
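The author response in this review thread argues that PowerNorm's deviation grows with the scale of the features (by $k$ for the mean deviation when features are scaled by $k$), while a deviation normalized by the standard deviation does not. A minimal NumPy sketch of that scaling argument follows; the sigma-normalized quantity is only an illustration of scale invariance, not the paper's exact TID definition.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64
mu_pop, mu_batch = rng.normal(size=d), rng.normal(size=d)
sigma_pop = rng.uniform(0.5, 2.0, size=d)

def powernorm_mean_dev(mu, mu_b):
    # Mean deviation as in Figure 2 of the PowerNorm paper: (1/d) * ||mu - mu_b||
    return np.linalg.norm(mu - mu_b) / d

def normalized_mean_dev(mu, mu_b, sigma):
    # A sigma-normalized (hence scale-invariant) variant, for illustration only.
    return np.linalg.norm((mu - mu_b) / sigma) / d

for k in [1.0, 10.0]:
    dev = powernorm_mean_dev(k * mu_pop, k * mu_batch)                   # grows by k
    inv = normalized_mean_dev(k * mu_pop, k * mu_batch, k * sigma_pop)   # unchanged
    print(f"k={k}: powernorm={dev:.4f}, normalized={inv:.4f}")
```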
[ -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, 3, 3, 3 ]
[ "QKnk09ZZnv", "1GGRNny_7lx", "-XkpWWrYByv", "9SdFLPP-PGE", "HrwmgEeXjEz", "nips_2022_X8mmH03wFlD", "nips_2022_X8mmH03wFlD", "nips_2022_X8mmH03wFlD" ]
nips_2022_W-xJXrDB8ik
Unsupervised Visual Representation Learning via Mutual Information Regularized Assignment
This paper proposes Mutual Information Regularized Assignment (MIRA), a pseudo-labeling algorithm for unsupervised representation learning inspired by information maximization. We formulate online pseudo-labeling as an optimization problem to find pseudo-labels that maximize the mutual information between the label and data while being close to a given model probability. We derive a fixed-point iteration method and prove its convergence to the optimal solution. In contrast to baselines, MIRA combined with pseudo-label prediction enables a simple yet effective clustering-based representation learning without incorporating extra training techniques or artificial constraints such as sampling strategy, equipartition constraints, etc. With relatively few training epochs, the representation learned by MIRA achieves state-of-the-art performance on various downstream tasks, including linear/${\it k}$-NN evaluation and transfer learning. In particular, with only 400 epochs, our method applied to the ImageNet dataset with the ResNet-50 architecture achieves 75.6% linear evaluation accuracy.
Accept
This paper proposes a pseudo-labelling algorithm for unsupervised representation learning inspired by information maximization. The reviewers found that the proposed method is theoretically well grounded and that the authors provide extensive experimentation to demonstrate the validity of their approach. I agree with those conclusions after reading the paper. I therefore support acceptance.
train
[ "GS-J0q8RzV4", "DI35LWbYtY", "s-IBBepzZME", "oAtkE--MEw9", "oGdYxeQnfvJV", "Zft6wEhMaC", "ji8_QAG4Iz3", "8OgWIwgoxhK", "mdBrnSMOit", "Qhp_i9xk_6U", "zX87lv2njFZ", "oMOKezYed-Z", "IrnaP0iZdVb", "Xfsy5Hmk6C6", "yyOp9cT-ES", "MibC0T8e3I_", "O2icJxbjfBs", "aavw41RbZTD" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We are happy to hear that our responses have been helpful for the understanding of our work. We will incorporate the additional results and feedback, including the limitations, into the next version. For the recommending papers [a, b], we will add discussion on the papers in the revised version.\n\nThank you!\n\nAuthors.", " (Additional response about I3)\n\nThe reviewer questioned the tuning of MIRA. To address more on this, we further follow BYOL's linear evaluation protocol and report the results in detail. Based on the practices of BYOL, we use SGD with a Nesterov momentum of 0.9 using a batch size of 1024 and sweep over 5 learning rates {0.1, 0.2, 0.3, 0.4, 0.5} (without linear scaling) on a local validation set in the ImageNet train set. Since the authors of BYOL do not specify the local validation set, we simply use the 1% training split from SimCLR in the ImageNet train set as our local validation set. We do not use regularization methods such as weight decay, gradient clipping, etc.; we adapt widely used the 100 epochs training and cosine scheduling to adjust learning rates as in SimSiam, SwAV, DINO, BarlowTwins, etc. The table below shows the linear evaluation result with 400 epochs trained model on the local validation set w.r.t. 5 learning rates.\n\n| lr | 0.1 | 0.2 | 0.3 (Best) | 0.4 | 0.5 |\n|:---:|:---:|:---:|:---:|:---:|:---:|\n| Top-1 Acc | 78.08 | 78.54 | 78.86 | 78.71 | 78.57 |\n\nFinally, we check the performance of the chosen hyperparameter (lr=0.3) by evaluating it on the test dataset. In the below table, we show the original result from the paper denoted as LARS, and the result by the chosen learning rate denoted as SGD. In both Top-1 and-5 accuracies, SGD shows a slight improvement of +0.1%.\n\n| | Top-1 Acc | Top-5 Acc |\n|---|:---:|:---:|\n| SGD (new) | 75.6 | 92.6 |\n| LARS (Paper) | 75.5 | 92.5 |\n\nWe will contain the results in the paper and clarify the evaluation protocol in more detail.\n", " I thank the authors for answering my comments and providing additional experimental results, which have been useful for improving my overall understanding of the work. I would expect the authors to include the results on the detection and segmentation downstream tasks in the paper and discuss the limitation of the work clearly. Furthermore, I think for considering representations encouraging to encode semantic structure of data, following few works [a,b] could be discussed and compared in the related works section. Anyway, I remain absolutely positive about this paper and have updated my review rating to \"Accept\".\n\n[a] Wang et al., Unsupervised Representation Learning by Invariance Propagation, NeurIPS, 2020.\n\n[b] Dwibedi et al., With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations, ICCV, 2021.", " We updated the paper with the following modifications:\n* We briefly explain our solving strategy in the fourth paragraph of the introduction.\n* We add more details on the 3.2 solving strategy, especially about the derivation of the necessary and sufficient conditions and the fixed point iteration.\n* We fix some typos.", " (Additional response about Q3,4.)\n> Q3,4. What is the performance of MIRA with only 3 iterations? What are the practical reasons we should care about the convergence of the pseudo-labeling?\n\nWe further experiment if the number of fixed point iterations matters when the model is trained with longer epochs (400). 
The linear evaluation results are as follows:\n\n| # of iterations | 1it | 3it | 30it |\n|:---:|:---:|:---:|:---:|\n| MIRA, 100ep | 69.5 | 69.5 | 69.3 |\n| MIRA, 400ep | 72.7 | 72.6 | 72.9 |\n\nIn the 100 epochs training, the models with a smaller number of fixed point iterations, 1it and 3it, perform slightly better (+0.2%); while in 400 epochs training, the model with more iterations, 30it, performs better (+0.2~0.3%). As a result, the convergence to the fixed point is not the major factor for learning; however, it seems reasonable to choose a sufficiently large number of fixed point iterations in longer epoch training.\n\nWe further respond to \"*Why should we care about the speed of convergence?*\". In the experiments, we want our computed pseudo-labels to be close enough to the theoretically motivated ones in the method (Section 3). By Prop. 3 in the Appendix, we prove the convergence of the fixed point iteration (Eq. 5), but the convergence speed is unknown. Hence, we verify in Fig. 2 that our fixed point iteration converges fast enough to use in practice to solve the optimization problem (Eq. 3). Besides that, we want to show that our fixed point iteration is at least more effective than Sinkhorn's method, on which SwAV is based. Furthermore, our experimental results suggest that more converged pseudo-labels (with a larger number of iterations) yield better results in longer-epoch training.", " We have completed the experiments for Q1 and Q2. Here are our responses.\n\n> Q1. 800ep training results with MIRA.\n\n→ We report MIRA's 800-epoch training results on the linear and k-NN evaluations.\n\n| Epochs | Top-1 | k-NN (100%) | k-NN (10%) | k-NN (1%) |\n|:---:|:---:|:---:|:---:|:---:|\n| 400ep | 75.5 | 68.7 | 60.7 | 47.8 |\n| 800ep | 75.4 | 68.8 | 61.1 | 48.2 |\n\nMIRA trained for 800 epochs shows a similar performance to the one trained for 400 epochs; it shows +0.1~+0.4% improved k-NN evaluation accuracy and -0.1% linear evaluation accuracy. While the performance change depends on the evaluation protocols, it seems hard to claim either that MIRA is further improved or that it suffers from a performance drop. We just note that the models trained for 400 and 800 epochs with MIRA achieve strong performance on both linear and k-NN evaluation.\n\n\n> Q2. Multi-crop augmentation applied to a smaller batch size in Tab. 8?\n\n→ In Table 8, we report the linear evaluation performance of MIRA for 100 epochs of training with a smaller batch size of 512, showing that MIRA performs relatively well with a reduced batch size. We further evaluate the performance of MIRA with multi-crop and smaller batch sizes below.\n\n| Method | Multi-crop | Batch size | Top-1 | Top-5 |\n|:---:|:---:|:---:|:---:|:---:|\n| SwAV | O | 4096 | 72.1 | - |\n| MIRA | O | 512 | 72.9 | 91.3 |\n| MIRA | O | 4096 | 73.5 | 91.5 |\n\nWith multi-crops, MIRA with a batch size of 512 achieves 72.9% accuracy, which outperforms the 72.1% accuracy of SwAV with a batch size of 4096. The result is consistent with Table 8 (where no multi-crops are applied), showing that MIRA is robust to smaller batch sizes.", " We thank the reviewers for their time and effort in providing constructive reviews.
We appreciate the encouraging remarks on the paper, including novelty (UDR3, E9n2, y6Mg), well-defined formulation (mBbA), neat organization (UDR3), and extensive experimental evaluations (UDR3, E9n2).\n\nDue to limited time and resources, we are not able to address all the comments in the Author Rebuttal period; some experiments are still in progress. For these experiments, we are going to update our comments as soon as they are finished; we will further revise the paper after updating the comments.", " ---\nThirdly, we clarify the comments as follows.\n\n> C1. The connection of \"KKT conditions and iterative method to get the feasible solution\" is important and should be presented in the main paper.\n\n→ Indeed, we skip the derivation from the KKT condition (Eq. 4) to the iterative method (Eq. 5) in the main paper. We will incorporate the derivation below into the main paper.\n\n*By substituting the necessary and sufficient condition (Eq. 4) into* $\\overline{w}\\_j=\\frac{1}{B}\\sum\\_{i=1}^B w\\_{ij}$, *we get the necessary and sufficient condition with* $\\overline{w^*}$.\n\n$$ \\overline{w^\\*}\\_j = \\frac{1}{B}\\sum\\_{i=1}^B w^\\*_{ij}\n = \\frac{1}{B}\\sum\\_{i=1}^B \\frac{\\overline{w^\\*}\\_j^{-\\frac{\\beta}{1-\\beta}} p\\_{ij}^{\\frac{1}{1-\\beta}}}{\\sum\\_{k=1}^K \\overline{w^\\*}\\_k^{-\\frac{\\beta}{1-\\beta}} p\\_{ik}^{\\frac{1}{1-\\beta}}} = \\overline{w^\\*}\\_j^{-\\frac{\\beta}{1-\\beta}} \\frac{1}{B}\\sum\\_{i=1}^B \\frac{ p\\_{ij}^{\\frac{1}{1-\\beta}}}{\\sum\\_{k=1}^K \\overline{w^\\*}\\_k^{-\\frac{\\beta}{1-\\beta}} p\\_{ik}^{\\frac{1}{1-\\beta}}} \\Leftrightarrow \\overline{w^\\*}\\_j = \\Bigg[ \\frac{1}{B}\\sum\\_{i=1}^B \\frac{ p\\_{ij}^{\\frac{1}{1-\\beta}}}{\\sum\\_{k=1}^K \\overline{w^\\*}\\_k^{-\\frac{\\beta}{1-\\beta}} p\\_{ik}^{\\frac{1}{1-\\beta}}} \\Bigg]^{1 - \\beta}. $$\n \n*And the RHS of the last equation becomes the update rule (Eq. 5).*\n\n> C2. Authors should explain how they obtain their necessary and sufficient condition of Equation 4 in the main paper.\n\n→ We will add an explanation around line 132: \"*The proposition is derived by proving the strict convexity and then applying the KKT condition.*\"\n\n> C3. The abstract and the introduction lack clarity.\n\n→ We will add further explanations on the flow of the ideas in the main paper and clarify our method in the introduction and abstract in the revised version. \n\nEspecially for line 45, we will change it to \"*We formulate the problem as a strictly convex optimization problem and derive the necessary and sufficient condition of the solution with the Karush-Kuhn-Tucker (KKT) condition. The solution can be achieved by fixed-point iteration.*\".\n\n> C4. About \"*a more in depth discussion comparing MIRA with SwAV other than just saying that MIRA also minimizes the conditional entropy (while SwAV only maximizes the entropy)*\".\n\n→ We further clarify the connection and difference between MIRA and the popular Sinkhorn algorithm. In fact, as the reviewer pointed out, the derivation of MIRA shares similar practices with Sinkhorn's algorithm. Both methods formulate the necessary and sufficient condition of the convex optimization problem via the KKT condition and use the condition to design an iterative method that converges to a feasible solution. However, differently from our problem (Eq.
3), the optimal transport problems are defined on the set of couplings; hence, it seems that our problem is not directly solvable via Sinkhorn's algorithm unless we find a way to reformulate the problem on the set of couplings. We agree that it will be an interesting direction to study.\n\n> Minor: The learning rate is missing at L:427.\n\n→ In line 427, we set the learning rate to 0. We again clarify that this is just 0, not a typo.", " ---\n\nSecond, we respond to the questions as follows:\n\n> Q1. Please see the response in I2.\n\n> Q2. What are the test performances of MIRA, without re-training the linear probe, on ImageNet-v2? The author can compare with [2]\n\n→ The test performance of our method, without re-training the linear classification head, on ImageNet-v2 is as follows:\n| | ImageNet val | MatchedFrequency | Threshold0.7 | TopImages |\n|:---:|:---:|:---:|:---:|:---:|\n| MIRA | 75.5 | 62.9 | 71.6 | 76.9 |\n\nWe note to the reviewer that the reference [2] is omitted in the review. We will make a comparison and do an analysis as soon as the reference [2] is provided.\n\n> Q3,4. What is the performance of MIRA with only 3 iterations? What are the practical reasons we should care about the convergence of the pseudo-labeling?\n\n→ We report the linear evaluation results of 100-epoch pre-training with 1 and 3 fixed point iterations.\n\n| # of iterations | 1it | 3it | 30it |\n|:---:|:---:|:---:|:---:|\n| MIRA 100ep | 69.5 | 69.5 | 69.3 |\n\nIn the table, the convergence to the fixed point doesn't seem really important for downstream performance; hence, it seems possible to choose a small number of iterations in practice. However, in this case, it is ambiguous what we are using as a pseudo-label; in the experiment, we want our method to follow the theoretical motivation. ~~Furthermore, it is uncertain if such results hold when loss becomes sufficiently small as it approaches convergence~~. ~~We will later update the results on a longer epoch (when close to convergence) during the discussion period~~. We update the results with longer (400-epoch) training in \"*Response to reviewer y6Mg - 2.1*\".\n\n> Q5. In Table 6, are the hyper-parameters of SwAV and MIRA exactly the same except for the pseudo-labeling? If not, what are the results if all of the hyper-parameters are exactly the same.\n\n→ They share most of the hyper-parameters, e.g., # of clusters, projection head, optimizer, weight decay, etc., but they are not exactly the same. In particular, MIRA and SwAV use different augmentation recipes; we follow the augmentation scheme of DINO because it is more widely adopted. We report SwAV's linear evaluation performance when all of the hyper-parameters, including the augmentation recipe, are exactly the same except for the pseudo-labeling.\n\n| | MIRA (wo EMA) | SwAV | SwAV (SimSiam) | \n|:---:|:---:|:---:|:---:|\n| 100ep, Top-1 | 68.7 | 66.9 | 66.5 |\n\nWhile SwAV's performance increases, our method still performs better, with a +1.8% margin.", " We appreciate the reviewer for the extensive feedback and the acknowledgment of the novelty of our proposed algorithm, MIRA. We hope our response will address the concerns.\n\n---\n\nFirst, we present our detailed response to the issues raised about our experiments. We will continuously clarify and address these issues in the revision.\n\n> I1.
The authors select different hyper-parameters for the different training setups, e.g., they use different learning rates for 100/200 epochs in comparison to 400 and 800 epochs.\n\n→ We respectfully clarify that we do not use different learning rates for 100/200 epochs in comparison to 400/800 epochs. We fix the base learning rate at 0.3 across all epochs and only reduce the warm-up epochs from 10 to 5 for 100/200-epoch training. To address the reviewer's concern, we perform an ablation study by training MIRA for 100/200 epochs with 10 epochs of warm-up. The linear evaluation performances with a 10-epoch warm-up match the reported performance in the main paper.\n\n| Warm-up | 100ep | 200ep |\n|:---------:|:-----:|:-----:|\n| 5 epochs | 69.4 | 72.1 |\n| 10 epochs | 69.3 | 72.1 |\n\n> I2 (Q1). MIRA performance with the online encoder for all epochs?\n\n→ We report the linear evaluation performance with online encoders from pre-trained models for all epochs. The table below summarizes the result. While using the representation from momentum encoders shows improvement over using online encoders, the gain is limited to at most 0.2%. We believe this is not a dominant factor in MIRA's improvement. Our method still achieves 75.4% accuracy with only 400 epochs of training while using the online encoder.\n\n| Method / Epochs | 100 | 200 | 400 | 800 |\n|:----------------:|:----:|:----:|:----:|:----:|\n| MIRA | 69.4 | 72.1 | 72.9 | 73.8 |\n| MIRA-online | 69.4 | 72 | 72.9 | 73.6 |\n| MIRA (multi-crop) | 73.5 | 74.8 | 75.5 | |\n| MIRA-online (multi-crop) | 73.2 | 74.6 | 75.4 | |\n\n> I3. It is not clear that the authors have not tuned their method on the validation/test set.\n\n→ We do not tune our method on the validation/test sets. For semi-supervised evaluation, we conduct a hyper-parameter search over learning rates on a partial training set while following the other settings of SwAV. For linear evaluation, we do report the performance on various epochs and configurations; hence, we do not conduct any hyper-parameter search but adopt the optimizer (LARS) and base learning rate (0.1) settings from the official SimSiam implementation. Therefore, we speculate that a further hyper-parameter search would strengthen our result on linear evaluation.", " > W2. More explanation and references will be helpful to follow the update rule in (Eq.5) from optimality condition (Eq.4).\n\n→ Indeed, we skip the derivation from (Eq. 4) to (Eq. 5) in the main paper; the detailed derivation is only described in line 377 of the Appendix. We again note the derivation of (Eq. 5) from (Eq. 4) here:\n\n*By substituting the necessary and sufficient condition (Eq.
4) into* $\\overline{w}\\_j=\\frac{1}{B}\\sum\\_{i=1}^B w\\_{ij}$, *we get the necessary and sufficient condition with* $\\overline{w^*}$.\n\n$$ \\overline{w^\\*}\\_j = \\frac{1}{B}\\sum\\_{i=1}^B w^\\*_{ij}\n = \\frac{1}{B}\\sum\\_{i=1}^B \\frac{\\overline{w^\\*}\\_j^{-\\frac{\\beta}{1-\\beta}} p\\_{ij}^{\\frac{1}{1-\\beta}}}{\\sum\\_{k=1}^K \\overline{w^\\*}\\_k^{-\\frac{\\beta}{1-\\beta}} p\\_{ik}^{\\frac{1}{1-\\beta}}} = \\overline{w^\\*}\\_j^{-\\frac{\\beta}{1-\\beta}} \\frac{1}{B}\\sum\\_{i=1}^B \\frac{ p\\_{ij}^{\\frac{1}{1-\\beta}}}{\\sum\\_{k=1}^K \\overline{w^\\*}\\_k^{-\\frac{\\beta}{1-\\beta}} p\\_{ik}^{\\frac{1}{1-\\beta}}} \\Leftrightarrow \\overline{w^\\*}\\_j = \\Bigg[ \\frac{1}{B}\\sum\\_{i=1}^B \\frac{ p\\_{ij}^{\\frac{1}{1-\\beta}}}{\\sum\\_{k=1}^K \\overline{w^\\*}\\_k^{-\\frac{\\beta}{1-\\beta}} p\\_{ik}^{\\frac{1}{1-\\beta}}} \\Bigg]^{1 - \\beta}. $$\n \n*And the RHS of the last equation becomes the update rule (Eq. 5).* (A small numerical sketch of this fixed-point iteration is included after the reviews below.)\n\nWe will incorporate this derivation into the main paper in the revised version.\n\n> W3. More diverse downstream tasks (detection/segmentation).\n\n→ We test our method on detection/segmentation of the COCO 2017 dataset with Mask R-CNN, R50-C4, on a 2x schedule. We use the configuration from the MoCo official implementation. MIRA performs better than the supervised baseline and is comparable to MoCo, though not as dominant as in the classification tasks. As the reviewer pointed out, methods that consider local or pixel-wise information show superior performance in these tasks. Incorporating such formulations into clustering-based approaches or MIRA seems to be an important future direction.\n\n| | $\\text{AP}^\\text{bb}$ | $\\text{AP}^\\text{bb}_\\text{50}$ | $\\text{AP}^\\text{bb}_\\text{75}$ | $\\text{AP}^\\text{mk}$ | $\\text{AP}^\\text{mk}_{50}$ | $\\text{AP}^\\text{mk}_{75}$ |\n|------|:-----:|:--------:|:--------:|:-----:|:--------:|:--------:|\n| Sup | 40 | 59.9 | 43.1 | 34.7 | 56.5 | 36.9 |\n| MoCo | 40.7 | 60.5 | 44.1 | 35.4 | 57.3 | 37.6 |\n| MIRA | 40.6 | 61 | 44.1 | 35.3 | 57.2 | 37.3 |\n\n\n[A] L. Ericsson, H. Gouk, and T. M. Hospedales. How well do self-supervised models transfer? In CVPR, 2021.", " Thank you for your extensive review. We are highly grateful that you praised our proposed method and empirical experiments. We are happy to give the responses to questions (Q) and weaknesses (W) below and hope they resolve your concerns.\n\n> Q1. What will happen if the predicted pseudo labels become unbalanced in an iteration? What is the guarantee that the predicted pseudo labels will be reasonably balanced in the update step?\n\n→ Our method, MIRA, finds mutual information (MI) maximized pseudo-labels through the optimization problem (Eq. 3). When model predictions (= predicted pseudo-labels) are extremely unbalanced (close to collapsing), the MI term of (Eq. 3) will be low. Through MI regularization, MIRA finds MI-maximizing points around the model predictions. Hence, MIRA will not choose collapsed (or extremely unbalanced) pseudo-labels; it is more likely to choose reasonably balanced ones to maximize the MI.\n \nMeanwhile, we can't guarantee that MIRA will escape from any collapsed point. There are some collapsed states that MIRA can't escape. For example, when the model outputs a constant representation regardless of the input, the effect of mutual information will be diminished; the output of MIRA will be fixed in the collapsed representation.
However, this rarely happens in a conventional training scheme that starts from a non-collapsed state.\n\n> Q2/W1-2. What is the motivation of using MI for having pseudo-labels and then using the pseudo-labels for representation learning?\n\n→ This work deals with clustering-based (or pseudo-labeling-based) representation learning. The advantage of these approaches is that they can account for inter-data similarity; representations are encouraged to encode the semantic structure of data. For instance, data representations that are assigned to the same cluster pull each other; in contrast, conventional NCE-based approaches perform instance-wise discrimination, pulling together only the augmented views of the same instance. Finding the similarity between the data seems important, especially for classification downstream tasks; the empirical results in [A] show the strength of clustering-based approaches on classification tasks, e.g., transfer learning.\n\n> W1-1. Limited ablation study about pseudo-labeling by MI maximization.\n\n→ To address the reviewer's concern, we report more ablation studies about pseudo-labeling with MIRA. Reviewers E9n2 and y6Mg also suggest more ablation studies on pseudo-labeling, especially about (1) the number of clusters and (2) the different steps of iterations, respectively. We describe the results below:\n* The number of clusters\n\n| # of clusters | 300 | 1000 | 3000 | 10000 | 30000 |\n|:---:|:---:|:---:|:---:|:---:|:---:|\n| Linear Top-1 | 67.7 | 69 | 69.3 | 69.5 | 69.5 |\n| k-NN (100%) | 58.9 | 60.5 | 61.6 | 61.7 | 61.7 |\n| k-NN (10%) | 49.5 | 51.9 | 53.3 | 53.3 | 53.3 |\n| k-NN (1%) | 36.6 | 39.7 | 41 | 41 | 41 |\n\nWe train MIRA on ImageNet-1k for 100 epochs without multi-crop augmentations while varying the number (#) of clusters. When the number of clusters is sufficiently large (>=3000), we observe no particular gain w.r.t. the number of the clusters.\n\n* The number of the fixed point iterations (*modified*)\n\n| # of iterations | 1it | 3it | 30it |\n|---|:---:|:---:|:---:|\n| MIRA 100ep | 69.5 | 69.5 | 69.3 |\n\nIn the table, the convergence to the fixed point doesn't seem really important for downstream performance; it seems possible to choose a small number of iterations in practice. ~~However it is uncertain if such results hold when loss becomes sufficiently small as it approaches convergence. We will later update the results on a longer epoch during the discussion period~~. We update the linear evaluation results with longer (400-epoch) training below:\n\n| # of iterations | 1it | 3it | 30it |\n|:---:|:---:|:---:|:---:|\n| MIRA, 100ep | 69.5 | 69.5 | 69.3 |\n| MIRA, 400ep | 72.7 | 72.6 | 72.9 |\n\nIn the 100 epochs training, the models with a smaller number of fixed point iterations, 1it and 3it, perform slightly better (+0.2%); while in 400 epochs training, the model with more iterations, 30it, performs better (+0.2~0.3%). As a result, the convergence to the fixed point is not the major factor for learning; however, it seems reasonable to choose a sufficiently large number of fixed point iterations in longer epoch training.\n", " We appreciate the reviewer for emphasizing our proposed method as \"well-motivated, simple, and neat\" and highlighting the extensive experiments featured in this paper.\n\n> Q1. 800ep training results with MIRA.\n>\n> Q2. Multi-crop augmentation applied to a smaller batch size in Tab. 8?\n\n→ Unfortunately, the experiments for Q1/Q2 cannot be finished in the *Author Rebuttal* phase due to limited time.
We will answer Q1/Q2 with experimental results in the discussion period as soon as the experiments are done, in less than three days.\n\n> Q3. Will MIRA achieve desirable performance on downstream detection/segmentation tasks?\n\n→ We test our method on detection/segmentation of the COCO 2017 dataset with Mask R-CNN, R50-C4, on a 2x schedule. We use the configuration from the MoCo official implementation. MIRA performs better than the supervised baseline and is comparable to MoCo, though the gains are not as pronounced as in the classification tasks. As reviewer mBbA pointed out, methods that consider local or pixel-wise information show superior performance in these tasks. Incorporating such formulations into clustering-based approaches or MIRA seems to be an important future direction.\n\n| | $\\text{AP}^\\text{bb}$ | $\\text{AP}^\\text{bb}_\\text{50}$ | $\\text{AP}^\\text{bb}_\\text{75}$ | $\\text{AP}^\\text{mk}$ | $\\text{AP}^\\text{mk}_{50}$ | $\\text{AP}^\\text{mk}_{75}$ |\n|------|:-----:|:--------:|:--------:|:-----:|:--------:|:--------:|\n| Sup | 40 | 59.9 | 43.1 | 34.7 | 56.5 | 36.9 |\n| MoCo | 40.7 | 60.5 | 44.1 | 35.4 | 57.3 | 37.6 |\n| MIRA | 40.6 | 61 | 44.1 | 35.3 | 57.2 | 37.3 |", " We are glad that you not only acknowledged our methodology and experiments but also praised our additional experiments with small batch sizes. We address your question as follows:\n\n\n> Q1. It would be interesting to study the effect of this hyper-parameter on the performance on downstream tasks.\n> \n→ Following the reviewer's comment, we study the effect of the number of clusters on the performance of linear and k-NN evaluation (with 1%/10%/100% labels). We train MIRA on ImageNet-1k for 100 epochs without multi-crop augmentations while varying the number (#) of clusters. When the number of clusters is sufficiently large (>=3000), we observe no particular gain w.r.t. the number of the clusters. This is consistent with the observation in SwAV.\n\n| # of clusters | 300 | 1000 | 3000 | 10000 | 30000 |\n|:---:|:---:|:---:|:---:|:---:|:---:|\n| Linear Top-1 | 67.7 | 69 | 69.3 | 69.5 | 69.5 |\n| k-NN (100%) | 58.9 | 60.5 | 61.6 | 61.7 | 61.7 |\n| k-NN (10%) | 49.5 | 51.9 | 53.3 | 53.3 | 53.3 |\n| k-NN (1%) | 36.6 | 39.7 | 41.0 | 41.0 | 41.0 |", " This paper proposes Mutual Information Regularized Assignment (MIRA), which is a pseudo-labelling algorithm for unsupervised representation learning inspired by information maximization. The pseudo-labelling procedure is formulated as an optimization problem solved via a fixed-point iteration method that maximizes the mutual information between the label and data while being close to a given model probability. The proposed methodology has been claimed to converge relatively fast and to achieve state-of-the-art performance on various downstream tasks such as linear evaluation and transfer learning. In general, the paper is well written and easily understandable, and I haven't found any typos or grammatical mistakes. **Strengths**\n\n(1) The paper is theoretically well grounded. The method can obtain an approximate solution of equation (3) by utilizing a fixed point iteration technique which has a theoretical guarantee of convergence. In a nutshell, the proposed technique uses optimal transport based pseudo labelling by maximizing the mutual information, which avoids hard artificial constraints.
Altogether the idea looks very cool.\n\n(2) On different downstream tasks, the proposed model has consistently outperformed the existing methodologies. The method has been thoroughly ablated and different components of the method are well justified.\n\n**Weaknesses**\n\n(1) In this method the optimal transport based pseudo labelling by maximizing the mutual information is the main novel component, which I find is less ablated. Also, the justification behind a two-step pipeline (1. pseudo label prediction, 2. classification using those pseudo labels) is not clear. Representations can be learnt by directly maximizing MI.\n\n(2) It is a little difficult to follow the update rule in equation (5) from the unique optimal point found in equation (4). To this end, more explanation and references will be helpful to the reader.\n\n(3) The paper has shown experiments on linear evaluation, semi-supervised learning, k-NN evaluation, transfer learning, etc., which are basically image classification experiments. More diverse experiments (such as object detection, semantic segmentation, etc.) are missing, which does not completely validate the success of the proposed MIRA model. Note that most of the existing self-supervised learning models do consider diverse tasks. (1) What will happen if the predicted pseudo labels become unbalanced in an iteration? What is the guarantee that the predicted pseudo labels will be reasonably balanced in the update step?\n\n(2) Mutual information (MI) maximization has been widely used for representation learning, but what is the motivation of using MI for having pseudo labels and then using those pseudo labels for representation learning whereas one can directly use MI maximization for representation learning?\n The authors have mentioned that the time complexity of the proposed MIRA model can be a limitation of the work, which I think is normal for self-supervised learning models. Another limitation could be the consideration of only the global information and not considering any kind of local information, which will make this method only applicable to global tasks, such as classification.\n", " The paper proposed a novel clustering-based self-supervised learning (SSL) method, MIRA. Motivated by the infomax principle, the paper constructs the SSL problem as an optimization problem that maximizes the mutual information between pseudo labels and data while taking the predictions of the model into consideration. To solve the online-clustering problem, the paper proposed a fixed-point iteration that takes only a few iterations to produce desirable pseudo-labels. Linear evaluation, KNN, and semi-supervised results on ImageNet verify the effectiveness of the method. Strengths:\n1. The paper is well-motivated, and the loss design is simple and neat. \n2. The paper is well-organized.\n3. The authors provide plenty of experiments to demonstrate the effectiveness of the method. The method achieves state-of-the-art on several evaluations. \n\nWeakness:\n1. The paper did not mention the performance of MIRA on downstream detection/segmentation tasks. It would be better if the method can also achieve better performance on detection benchmarks, as detection is one of the main downstream tasks for computer vision. \n2. The paper did not provide the linear evaluation results with 800 epochs of training to clarify the effectiveness of introducing mutual information regularization.
As Proposition 1 provides faster convergence, an 800-epoch result can further verify the improvement of pseudo-labels using the MI regularized cluster assignment in Eq. 3. \n 1. As mentioned in the paper, MIRA converges faster than SwAV, and introduces MI regularization for better pseudo labeling. I guess MIRA will achieve better performance with 800 epochs of training. Or, will the method suffer a performance drop like TWIST from 400 epochs to 800 epochs?\n2. In Tab. 8, it seems the reported results are trained without multi-crops; will the phenomenon still be the same when multi-crops are applied?\n3. Will MIRA achieve desirable performance on downstream detection/segmentation tasks? Yes. ", " This paper proposes a new self-supervised method for representation learning from unlabeled images. The core representation learning algorithm, called MIRA, relies on predicting pseudo labels from the data, which are chosen, actually learned, based on maximizing the mutual information between themselves and the data distribution. To make this proposed algorithm tractable, they derive a fixed-point iteration solution, for which they provide a proof of its ability to find the optimal solution. To achieve this, they prove that the proposed optimization problem for finding the optimal set of pseudo labels is a convex problem. \nThe proposed method is evaluated by assessing the quality of the learned data representations on multiple downstream tasks, obtained from several datasets. The experimental results highlight the richness of the learned semantic representations. Strengths:\n1- The proposed method is novel, elegantly formulated, and encourages learning rich semantic representations with shorter training schedules compared to the baselines. \n2- The writing is clear, and the used language is understandable.\n3- The experimental evaluation is extensive, and is performed on several benchmarks and datasets. The results illustrate the merits of the method in improving the quality of the learned representations.\n4- Essential to me is that this work also provides experimental results using smaller batch sizes, shown in Table 8. This set of results, while it can be a little more comprehensive, is necessary, from my point of view. It encourages a wider adoption of self-supervised learning for budget resources.\n\nWeaknesses:\n- I believe the number of clusters K, which is the number of pseudo label assignments, is used as a fixed number (3000) across the experimental results section. It would be interesting to study the effect of this hyper-parameter on the performance on downstream tasks. Please see weaknesses above. The authors provide these limitations adequately. ", " The paper proposes a method of pseudo-labeling such that the pseudo-labels have maximal mutual information with the samples. This is in contrast to SwAV, where the pseudo-labels have maximum entropy. The paper proposes an iterative algorithm to recover the pseudo-labels, which is used inside an SSL method, as done in SwAV. The method proposes an interesting perspective on pseudo-labeling that was not previously discussed in the literature, as far as I know. Their method is based on the KKT optimality conditions and the pseudo-labels are found\nby iterating on this condition. I appreciated the connection, but I believe that this connection is important and should be presented in the main paper. There is a clear connection with SwAV and I believe that they can be connected through the framework of Optimal Transport.
I would have appreciated a more in-depth discussion comparing\nMIRA (the proposed method) with SwAV other than just saying that MIRA also minimizes the conditional entropy (while SwAV only maximizes the entropy). For example, could it be possible to define the updates of MIRA as a Sinkhorn algorithm\nas is done in the lecture notes [0]? Other than this, I don't have many issues with the method section, other than that I believe the authors should explain how they obtain their necessary and sufficient condition of Equation 4 in the main paper.\n\nMy biggest issues with this paper are with the experiments. The authors claim state-of-the-art for their method for several setups. But there are several issues with this claim:\n* The authors select different hyper-parameters for the different training setups. E.g. they use different learning rates for 100/200 epochs in comparison to 400 and 800 epochs. This is not done for most of their baselines. E.g. BYOL [1] results are taken from the SimSiam paper and they used the same hyper-parameters for every setup.\n* The results are reported on the momentum encoder, while the results of the other methods are generally reported on the online network. The results should be reported on the same network, or it should be clear where the results are probed for all the methods.\n* Looking at the code, it is not clear that the authors have not tuned their method on the validation/test set. This is not standard practice, see e.g. BYOL [1].\n* The above is true especially for the parameters chosen for semi-supervised learning for ImageNet-1k.\n* It is not clear what and how the linear classification hyper-parameters were chosen for ImageNet-1k.\n\nThe above concerns make it hard to be satisfied that the reported improvements are solely due to the proposed method, MIRA. Perhaps, if the authors made it clear that they used exactly the same parameters as SwAV and the only intervention is the MIRA algorithm replacing the Sinkhorn, then it would be more satisfying.\n\nAnother concern that I have regards Figure 2. SwAV uses only 3 Sinkhorn iterations.\nHowever, from Figure 2, it seems that, at that point, SwAV is far from converging, and the authors of SwAV claim that this amount of iterations is sufficient for obtaining their better numbers. Then, is convergence to the fixed point really important for downstream performance?\nIf so, it is not clear to me why we should care about such speed of convergence. And I am also wondering how MIRA compares to SwAV with a much smaller number of iterations. Are 30 iterations necessary?\n\nFinally, I found the abstract and the introduction lacking clarity. For example, the phrase \"We derive a fixed-point iteration method and prove its convergence to the optimal solution.\" does not tell much\nabout the algorithm. I believe that the authors should be more explicit about the algorithm and the idea presented in the paper in the introduction so that the reader has a general idea of the paper after reading the introduction.\nIn particular, I believe that the fourth paragraph of the introduction should be clearer about the method.\n\nMinor: The learning rate is missing at L:427.\n\n[0] https://bicmr.pku.edu.cn/~wenzw/bigdata/lect-ot.pdf\n[1] https://arxiv.org/pdf/2006.07733.pdf\n * What are the MIRA numbers when using the online network for probing the representation for all epochs?\n* What are the test performances of MIRA, without re-training the linear probe, on ImageNet-v2?
The authors can compare with [2].\n* What is the performance of MIRA with only 3 iterations?\n* What are the practical reasons we should care about the convergence of the pseudo-labeling?\n* In Table 6, are the hyper-parameters of SwAV and MIRA *exactly* the same except for the pseudo-labeling? If not, what are the results if all of the hyper-parameters are exactly the same (including the number of iterations of the pseudo-labeling algorithm)? The authors have discussed generic limitations of SSL methods, but they have not discussed limitations related to their method." ]
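As a companion to the derivation restated twice in the author responses above (and referenced there), a minimal NumPy sketch of the fixed-point iteration (Eq. 5) followed by the recovery of the pseudo-labels from the optimality condition (Eq. 4). The uniform initialization, the value of beta, and the iteration count are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def mira_pseudo_labels(p, beta=0.5, n_iter=30):
    """p: (B, K) matrix of model probabilities. Iterates Eq. 5 on the
    class marginal w_bar, then recovers w* via Eq. 4."""
    B, K = p.shape
    a = 1.0 / (1.0 - beta)       # exponent on p in Eqs. 4-5
    b = -beta / (1.0 - beta)     # exponent on w_bar in Eqs. 4-5
    p_pow = p ** a
    w_bar = np.full(K, 1.0 / K)  # uniform initialization of the marginal (assumption)
    for _ in range(n_iter):
        denom = p_pow @ (w_bar ** b)                                     # (B,)
        w_bar = ((p_pow / denom[:, None]).mean(axis=0)) ** (1.0 - beta)  # Eq. 5 update
    num = (w_bar ** b) * p_pow                                           # Eq. 4 numerator
    return num / num.sum(axis=1, keepdims=True)                          # rows sum to 1

# Example: random model probabilities over K=10 clusters for a batch of B=32.
p = np.random.default_rng(0).dirichlet(np.ones(10), size=32)
w = mira_pseudo_labels(p)
assert np.allclose(w.sum(axis=1), 1.0)
```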
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "s-IBBepzZME", "Qhp_i9xk_6U", "zX87lv2njFZ", "ji8_QAG4Iz3", "mdBrnSMOit", "IrnaP0iZdVb", "nips_2022_W-xJXrDB8ik", "mdBrnSMOit", "Qhp_i9xk_6U", "aavw41RbZTD", "oMOKezYed-Z", "yyOp9cT-ES", "MibC0T8e3I_", "O2icJxbjfBs", "nips_2022_W-xJXrDB8ik", "nips_2022_W-xJXrDB8ik", "nips_2022_W-xJXrDB8ik", "nips_2022_W-xJXrDB8ik" ]
nips_2022_um2BxfgkT2_
Pure Transformers are Powerful Graph Learners
We show that standard Transformers without graph-specific modifications can lead to promising results in graph learning both in theory and practice. Given a graph, we simply treat all nodes and edges as independent tokens, augment them with token embeddings, and feed them to a Transformer. With an appropriate choice of token embeddings, we prove that this approach is theoretically at least as expressive as an invariant graph network (2-IGN) composed of equivariant linear layers, which is already more expressive than all message-passing Graph Neural Networks (GNN). When trained on a large-scale graph dataset (PCQM4Mv2), our method, coined Tokenized Graph Transformer (TokenGT), achieves significantly better results compared to GNN baselines and competitive results compared to Transformer variants with sophisticated graph-specific inductive bias. With only 400 epochs, our method applied to the PCQM4Mv2 dataset shows strong results. Our implementation is available at https://github.com/jw9730/tokengt.
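The tokenization described in this abstract is simple enough to sketch directly: every node and every edge becomes a token augmented with node-identifier embeddings, and the sequence goes into a completely standard Transformer. A minimal PyTorch sketch, assuming orthonormal random node identifiers (one choice discussed in the review thread below; Laplacian eigenvectors are another) and assuming the number of nodes does not exceed the identifier dimension (the $n > d_p$ case raised in that thread needs approximations). The type embeddings distinguishing node tokens from edge tokens are omitted for brevity; all names and dimensions are illustrative, not the exact implementation.

```python
import torch
import torch.nn as nn

def tokenize_graph(x_node, x_edge, edge_index, id_dim=32):
    """One token per node and per edge; each token carries the identifiers
    of its two endpoints (a node acts as both of its own endpoints)."""
    n = x_node.size(0)
    assert n <= id_dim, "sketch assumes n <= id_dim for orthonormal identifiers"
    # Orthonormal node identifiers from a QR decomposition of a random matrix.
    q, _ = torch.linalg.qr(torch.randn(id_dim, n))
    node_id = q.t()                                  # (n, id_dim), orthonormal rows
    u, v = edge_index                                # endpoint indices, shape (E,) each
    node_tok = torch.cat([x_node, node_id, node_id], dim=-1)
    edge_tok = torch.cat([x_edge, node_id[u], node_id[v]], dim=-1)
    return torch.cat([node_tok, edge_tok], dim=0)    # (n + E, feat_dim + 2 * id_dim)

# A 3-node triangle with 4-dim node/edge features, through a plain Transformer encoder.
x_node, x_edge = torch.randn(3, 4), torch.randn(3, 4)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 0]])
tokens = tokenize_graph(x_node, x_edge, edge_index)  # (6, 68)
proj = nn.Linear(tokens.size(-1), 64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
out = encoder(proj(tokens).unsqueeze(0))             # (1, 6, 64); no graph-specific ops
```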
Accept
The paper applies transformers directly to a graph by treating all nodes and edges in the graph as tokens. The authors prove this approach is theoretically at least as expressive as an invariant graph network, which is already more expressive than all message-passing graph neural networks. The approach is simple and interesting. Reviewers had concerns about the empirical studies, as only one dataset was used. In the response, the authors have partially addressed the concerns by offering more results. More discussion/analysis of the empirical results is expected. Considering that the paper is mainly on the theoretical side and does provide interesting new insights, I'd recommend acceptance.
test
[ "_wSUrLD4RcM", "cTlAYsgfQVQ", "7XIA1MTSbHG", "kriAHTonRH", "Ny2RLkE3yAd", "QXPw4PGudKI", "AaeKY3Pnu2", "_nj11zepgp1", "2mhIykArnz1", "aZzieG9hMql", "oFnD20SZxT8", "jJq7UJDrv0R", "L5P1o82MDDY", "lylut5jBVny", "L5-55B_Xzrh", "_QNSiesND3_5", "2SUPNaEpJoH", "7RgfdJEhsTG", "rvf1wltR6lZ", "3nHqNBSQhud", "BOCq5loDczS", "vNi7da5Aa1EF", "bC8D4WS0PBHO", "SLEUsI1pVNq", "EByE1osL5uL", "lVmW25iPI5Y", "Sz-hv6eoMtl", "5eDd0-MG5X" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the author for the detailed reply. Most of my concerns have been addressed. I keep my original rating.\n", " Thank you for the detailed response. Most of my concerns are addressed. Hence I raised my score to 6 (I am actually a 5.5 now). I strongly suggest the authors add all the discussions to the final version, especially on how the model needs modification and approximations when $n > d_p$. Following that, it would be helpful if the authors can evaluate the model on three datasets, each corresponding to one setting: (1) small $n$, (2) moderately large $n$ and (3) very large $n$. I would assume PCQM4M corresponds to the first setting.\n\nBesides, I am also not entirely satisfied with the discussion on why the model performs worse than other Graph Transformers. The authors mentioned several conjectures including the loss of structural information of Laplacian eigenvectors and sign ambiguity. What exactly do you refer to as \"structural information\"? Can you give examples and ground this conjecture on PCQM4M dataset? What type of structural information will be lossy for Laplacian eigenvectors?\n\nThe authors propose one potential solution with the requirement of relaxing the theoretical assumptions, especially on orthonormality. I think more experiments on that would be helpful to gain more insights. I'm curious to see whether such expressiveness is useful in real world application, which I highly doubt. And actually the result of the paper is exactly saying such expressiveness is not necessary. Real world graphs often have rich node/edge/graph features, rendering the expressiveness less useful. I wonder why the authors do not conduct more expriments on graphs without any features, just demonstrating a setting where the model can actually shine.\n\nGenerally, I think it's a solid paper on method and theoretical analysis. However, I maintain my opinion that significantly more experiments are needed for the paper to have more impact.", " Dear reviewer YAnt,\n\nWe have just obtained the test scores of SGT (Lap) (best one in Table 2) on PCQM4Mv2. The result is given in the following table (please see **test-dev MAE**). Overall, we find a tendency consistent with our claims in the paper based on validation MAE scores (Table 2 in the main text).\n\n| method | test-dev MAE | validate MAE |\n|---|---|---|\n| ***Message-passing GNNs*** |\n| GCN | 0.1398 | 0.1379 |\n| GIN | 0.1218 | 0.1195 |\n| GCN-VN | 0.1152 | 0.1153 |\n| GIN-VN | 0.1084 | 0.1083 |\n| ***Transformers with strong graph-specific modifications*** |\n| Graphormer | N/A | 0.0864 |\n| EGT | 0.0872 | 0.0869 |\n| GRPE | 0.0898 | 0.0890 |\n| ***Pure Transformer*** |\n| SGT (Lap) | 0.0919 | 0.0910 |", " Thanks for your detailed response, my concerns have been addressed. Thus I remain my tendency to accept this paper.", " Dear reviewer YAnt,\n\nIn the following link, we provide the code for reproducing our experiments in the paper (synthetic second-order basis approximation and PCQM4Mv2 large-scale graph regression). The data is either generated on the fly (synthetic) or automatically downloaded and processed (PCQM4Mv2).\n\nhttps://anonfiles.com/RdJ6Na2fye/code-20220808T134603Z-001_zip", " Dear reviewer, we managed to train Graphormer on Photo, Computers, and Crocodile (previously OOM) through a bit of optimization and hyperparameter search. 
We updated the scores in the table and updated the relevant discussions.", " Dear reviewer, we managed to train Graphormer on Photo, Computers, and Crocodile (previously OOM) through a bit of optimization and hyperparameter search. We updated the scores in Table 1 and updated the relevant discussions.", " Dear reviewer, we managed to train Graphormer on Photo, Computers, and Crocodile (previously OOM) through a bit of optimization and hyperparameter search. We updated the scores in Table 1 and updated the relevant discussions.", " Dear reviewer, we managed to train Graphormer on Photo, Computers, and Crocodile (previously OOM) through a bit of optimization and hyperparameter search. We updated the scores in Table 1 and updated the relevant discussions.", " We thank the reviewer for the positive comments. Below we respond to the questions.\n\n> However, I think their empirical evaluations are not enough and the advantages of their proposed SGT are not clear compared with other Graph Transformers.\n\nA1. While we acknowledge that our empirical evaluation is conducted only on PCQM4Mv2, we would like to note that PCQM4Mv2 is one of the largest-scale graph dataset up to date containing 3.8 million graphs [1]. This makes it one of the few suitable benchmark to test our model as Transformers are generally designed to work with extremely large-scale data [2, 3].\n\nTo further demonstrate the effectiveness of our method in a more broad class of graph understanding tasks, we conducted additional experiments on transductive node classification datasets including co-authorship (CS, Physics) [4], co-purchase (Photo, Computers) [4], and Wikipedia page networks (Chameleon, Crocodile) [5], which generally involve large-scale graphs. The statistics of the datasets are outlined below:\n\n| Dataset | CS | Physics | Photo | Computers | Chameleon | Crocodile |\n| --- | --- | --- | --- | --- | --- | --- |\n| # nodes ($n$) | 18,333 | 34,493 | 7,650 | 13,752 | 2,277 | 11,631 |\n| # edges ($m$) | 81,894 | 247,962 | 119,081 | 245,861 | 36,101 | 180,020 |\n| # classes | 15 | 5 | 8 | 10 | 6 | 6 |\n\nWe randomly split the dataset into train, validation, and test sets by randomly reserving 30 random nodes per class for validation and test respectively, and using the rest for training. We experiment with simple variants of SGT equipped with Performer kernel attention (*details can be found at the end of the response*), and compare them against strong GNN baselines including the following:\n* GCN (message-passing that works well on large graphs [4])\n* GAT (message-passing based on attention)\n* GIN (message-passing with 2-WL expressiveness guarantee similar to ours [6])\n* Graphormer (graph transformer equipped with shortest path-based edge encoding and spatial encoding [9])\n\nTable 1. Results of transductive node classification experiment. OOM denotes out-of-memory error on a 24GB RTX 3080 GPU. 
We report aggregated test accuracy at best validation accuracy over 7 randomized runs.\n| | CS | Physics | Photo | Computers | Chameleon | Crocodile |\n| --- | --- | --- | --- | --- | --- | --- |\n| GCN | 0.895 +- 0.004 | 0.932 +- 0.004 | 0.926 +- 0.008 | 0.873 +- 0.004 | 0.593 +- 0.01 | 0.660 +- 0.01 |\n| GAT | 0.893 +- 0.005 | 0.937 +- 0.01 | 0.947 +- 0.006 | **0.914 +- 0.002** | 0.632 +- 0.011 | 0.692 +- 0.017 |\n| GIN | 0.895 +- 0.005 | 0.886 +- 0.046 | 0.886 +- 0.017 | 0.362 +- 0.051 | 0.479 +- 0.027 | 0.515 +- 0.041 |\n| Graphormer | 0.791 +- 0.015 | *OOM* | 0.894 +- 0.004 | 0.814 +- 0.013 | 0.457 +- 0.011 | 0.489 +- 0.014 |\n| SGT (Near-ORF) + Performer | 0.882 +- 0.007 | 0.931 +- 0.009 | 0.872 +- 0.011 | 0.82 +- 0.019 | 0.568 +- 0.019 | 0.583 +- 0.024 |\n| SGT (Lap) + Performer | 0.902 +- 0.004 | 0.941 +- 0.007 | 0.919 +- 0.009 | 0.86 +- 0.012 | 0.637 +- 0.032 | 0.638 +- 0.025 |\n| SGT (Lap) + Performer + Sp. Equiv. Basis | **0.903 +- 0.004** | **0.950 +- 0.003** | **0.949 +- 0.007** | 0.912 +- 0.006 | **0.653 +- 0.029** | **0.718 +- 0.012** |\n\nThe results are outlined in Table 1. Graphormer [8] results in out-of-memory in the Physics dataset, mainly due to the spatial encoding that requires $\\mathcal{O}(n^2)$ memory. By constraining the model capacity appropriately, we were able to run Graphormer on other datasets. However, we observe a low performance, presumably due to the memory complexity that prevents depth and head scaling. As the spatial encoding is incorporated into the model via attention bias, the model strictly requires $\\mathcal{O}(n^2)$ memory and cannot be easily made more efficient. On the other hand, SGT variants are able to utilize Performer kernel attention with $\\mathcal{O}(n+m)$ cost, which allows using larger models to achieve the best performance in all but one dataset (Computers, where the performance is on par with the best model).\n\n(continued to 2/5)", " (continued from 1/5)\n\nLet us finish by explaining how we applied SGT for this task. Considering large $n$, an immediate challenge for our model is dealing with the orthonormality assumption on the node identifiers (Lemma 1), as the maximal number of orthonormal node identifiers is upper bounded by their dimension $d_p$. In this case, it is reasonable to introduce *near-orthonormal vectors* as node identifiers, as it is theoretically guaranteed that we can draw an exponential number $2^{\\Omega(d_p)}$ of $d_p$-dimensional near-orthonormal vectors [7]. For *SGT (Near-ORF)*, we used $d_p = 64$-dimensional random node identifiers where each entry is sampled from $\\{-1/d_p, +1/d_p\\}$ under equal probability [7]. For *SGT (Lap)*, we used a subset of the Laplacian eigenvectors as node identifiers, more specifically the $d_p/2$ eigenvectors with lowest eigenvalues and the $d_p/2$ eigenvectors with highest eigenvalues, and chose $d_p$ in the range of $64$ to $100$ based on validation performance.\n\nWhile *Near-ORF* and *Lap* can theoretically serve as an efficient low-rank approximation of orthonormal node identifiers, their approximation can affect the quality of the modeled equivariant basis. In particular, equivariant bases ($\\mu$) represented as ***sparse*** basis tensors ($\\mathbf{B}^\\mu$; Definition 2) are expected to be affected more, as they require most entries to be zero. To remedy this, we take a simple approach of residually adding one such sparse equivariant operator $\\mathbf{X}\\_{ii} \\mapsto \\mathbf{X}\\_{ii} + \\sum\\_{j\\neq i}\\mathbf{X}\\_{ij}$ explicitly after each Transformer layer; a minimal illustrative sketch is given below. 
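To make the residual sparse-basis update concrete, here is a minimal PyTorch sketch of one way it could be implemented; the function name and the `(node_tokens, edge_tokens, edge_index)` layout are our illustration rather than the authors' actual code:

```python
import torch

def sparse_equivariant_basis(node_tokens, edge_tokens, edge_index):
    # node_tokens: (n, d) features of node tokens X_ii
    # edge_tokens: (m, d) features of edge tokens X_ij
    # edge_index: (2, m) index pairs (i, j), assumed to exclude self-loops
    n, d = node_tokens.shape
    # One sparse reduction realizes X_ii <- X_ii + sum_{j != i} X_ij;
    # coalesce() merges duplicate (i, j) entries by summation.
    agg = torch.sparse_coo_tensor(edge_index, edge_tokens, size=(n, n, d)).coalesce()
    incident_sum = torch.sparse.sum(agg, dim=1).to_dense()  # (n, d)
    return node_tokens + incident_sum
```

An equivalent formulation is `node_tokens.index_add_(0, edge_index[0], edge_tokens)`, which avoids building the sparse tensor; either way the cost is linear in the number of tokens. 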
We denote this variant as *SGT (Lap) + Performer + Sp. Equiv. Basis*. This fix is minimal, easy to implement, and highly efficient as it only requires a single $\\texttt{torch.coalesce()}$ call, and also empirically effective, as shown in Table 1.\n\n[1] Hu et al., OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs (2021)\n\n[2] Brown et al., Language Models are Few-Shot Learners (2020)\n\n[3] Dosovitsky et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2020)\n\n[4] Shchur et al., Pitfalls of Graph Neural Network Evaluation (2019)\n\n[5] Rozemberczki et al., Multi-scale Attributed Node Embedding (2019)\n\n[6] Xu et al., How Powerful are Graph Neural Networks? (2019)\n\n[7] Gorban et al., Approximation with random bases: Pro et Contra (2016)\n\n[8] Ying et al., Do Transformers Really Perform Badly for Graph Representation? (2021)", " > It is still not clear to me why Transformers for graph need to be the same as other transformers. I think it is natural that Graph Transformers need to have something special for graph tasks. According to \"No free lunch\" theorem, there can not be an algorithm that works the best for all tasks. Therefore, considering task-specific models are necessary. For example, Swin-Transformers takes the inductive bias for vision tasks into Transformers for vision tasks.\n\nA2. While we agree with the reviewer's opinion, we think graph Transformers with a minimal graph-specific inductive bias is an exciting direction to explore by itself. This is mainly due to its potential to open a new chapter for incorporating graph-structured data such as scene graphs to generalist multi-modal agents similar to Perceiver [1] and Perceiver IO [2] . The design of such models are focused on reducing the data-specific inductive bias and instead scaling the data and model; to incorporate graphs to such pipeline, a design that adheres better to modality-agnostic architecture tech stack is needed. Our work provides a rigorous theoretical base on such design, by showing that pure Transformers can work well on graphs.\n\n[1] Jaegle et al., Perceiver: General Perception with Iterative Attention (2021)\n\n[2] Jaegle et al., Perceiver IO: A General Architecture for Structured Inputs & Outputs (2021)\n\n> The empirical results also show no advantages compared with other Transformers for Graphs with larger resources cost (...) I'd like to know why SGT performs worse than other GraphFormers. Do you have any possible causes?\n\nA3. While our model is currently underperformed by Graphormer and its successors, we think the low performance is, in part, because we intentionally kept its components simple to faithfully adhere to the equivariance theory. Indeed, direct comparison of our method to the graph transformers is unfair since these methods rely on various graph-specific features and inductive biases (e.g., centrality encoding, minimum shortest path distance, etc.) while our method intentionally eliminated them from the model and is based on pure attention mechanism of Transformers. We think it has a lot of room for performance improvement if we focus on engineering. For example, the model currently uses Laplacian eigenvectors [1] as node identifiers, which has been criticized for issues such as loss of structural information [2] and sign ambiguity [3]. 
We could, e.g., try to relax the theoretical requirement for orthonormality of node identifiers and incorporate more powerful node positional encodings as node identifiers, which could potentially yield better performance in practice. We think engineering our model to match or outperform more sophisticated graph Transformers is a promising and important next research direction.\n\n[1] Dwivedi et al., Benchmarking Graph Neural Networks (2020)\n\n[2] Kreuzer et al., Rethinking Graph Transformers with Spectral Attention (2021)\n\n[3] Lim et al., Sign and Basis Invariant Networks for Spectral Graph Representation Learning (2022)", " > As for SGT+Performer, I'm not sure whether such a comparison is fair since other Transformers for Graphs may also use the linearized acceleration by modifying their models like Linformer's attention?\n\nA4. To address the concern, let us explain why Graphormer [1], which applies the fully-connected self-attention operator to nodes, cannot utilize many efficient attention methods to reduce the memory complexity from $\\mathcal{O}(n^2)$ to $\\mathcal{O}(n)$. Prior graph transformers, including EGT [2], GRPE [3], and SAN [4], can be analyzed analogously. Let us first recall self-attention with query, key, and value $\\mathbf{Q}, \\mathbf{K}, \\mathbf{V}\\in\\mathbb{R}^{n\\times d}$ and the attention matrix $\\boldsymbol{\\alpha}\\in\\mathbb{R}^{n\\times n}$:\n\n$\\text{Att}(\\mathbf{Q}, \\mathbf{K}, \\mathbf{V})\\_i = \\sum\\_j\\boldsymbol{\\alpha}\\_{ij}\\mathbf{V}\\_j \\text{ where }\\boldsymbol{\\alpha}\\_{ij} = \\frac{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_j/\\sqrt{d}\\right)}{\\sum\\_{k}{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_k/\\sqrt{d}\\right)}}.$\n\nFor graphs, as the self-attention on nodes alone cannot incorporate the edge connectivity structure, Graphormer incorporates the graph information into the self-attention matrix $\\boldsymbol{\\alpha}\\in\\mathbb{R}^{n\\times n}$ utilizing the attention bias matrix $\\textbf{b}\\in\\mathbb{R}^{n\\times n}$ (referred to as the edge/spatial encoding) as follows:\n\n$\\boldsymbol{\\alpha}\\_{ij} = \\frac{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_j/\\sqrt{d} + \\mathbf{b}\\_{ij}\\right)}{\\sum\\_{k}{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_k/\\sqrt{d} + \\mathbf{b}\\_{ik}\\right)}}.$\n\nHere, the attention bias matrix $\\textbf{b}\\in\\mathbb{R}^{n\\times n}$ is the essential component to incorporate graph structure into the computation. Unfortunately, this modification immediately precludes the adaptation of many efficient attention techniques developed for pure self-attention. As representative examples, let us take Performer [5] (which we mainly used in our work), linear Transformer [6], efficient Transformer [7], and Random Feature Attention [8]. 
The methods are based on kernelization of the $\\text{Att}()$ operator as following [5]:\n\n$\\text{Att}\\_\\phi(\\mathbf{Q}, \\mathbf{K}, \\mathbf{V})\\_i = \\sum\\_j\\frac{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_j/\\sqrt{d}\\right)}{\\sum\\_{k}{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_k/\\sqrt{d}\\right)}}\\mathbf{V}\\_j$\n$= \\sum\\_j\\frac{\\phi(\\mathbf{Q}\\_i)^\\top \\phi(\\mathbf{K}\\_j)}{\\sum\\_{k}{\\phi(\\mathbf{Q}\\_i)^\\top \\phi(\\mathbf{K}\\_k)}}\\mathbf{V}\\_j = \\frac{\\phi(\\mathbf{Q}\\_i)^\\top\\left(\\sum\\_j\\phi(\\mathbf{K}\\_j)\\mathbf{V}\\_j^\\top\\right)}{{\\phi(\\mathbf{Q}\\_i)^\\top\\left(\\sum\\_{k}\\phi(\\mathbf{K}\\_k)\\right)}}.$\n\nThe above factorization of the exponential into a pairwise dot product, in turn, eliminates the need to explicitly compute the attention matrix for computing $\\text{Att}\\_\\phi()$ and consequently reduces both time and memory cost to $\\mathcal{O}(n)$. Unfortunately, Graphormer and related variations are fundamentally unable to utilize the method. The bias term $\\mathbf{b}\\_{ij}$ is added to the dot product before the exponential, requiring that the full pairwise self-attention matrix $\\boldsymbol{\\alpha}\\in\\mathbb{R}^{n\\times n}$ is always explicitly computed to obtain the output and leading to $\\mathcal{O}(n^2)$.\n\nWhile our above explanation mainly regards kernelization methods, a number of other efficient Transformers, including Set Transformer [9], LUNA [10], Linformer [11], Nyströmformer[12], Perceiver [13], and Perceiver-IO [14] are not applicable to Graphormer due to similar reasons. We appreciate the comment and will add relevant discussions to the main text.\n\n[1] Ying et al., Do Transformers Really Perform Bad for Graph Representation? (2022)\n\n[2] Hussain et al., Global Self-Attention as a Replacement for Graph Convolution (2022)\n\n[3] Park et al., GRPE: Relative Positional Encoding for Graph Transformer (2022)\n\n[4] Kreuzer et al., Rethinking Graph Transformers with Spectral Attention (2022)\n\n[5] Choromanski et al., Rethinking Attention with Performers (2020)\n\n[6] Katharopoulos et al., Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (2020)\n\n[7] Shen et al., Efficient Attention: Attention with Linear Complexities (2018)\n\n[8] Peng et al., Random Feature Attention (2021)\n\n[9] Lee et al., Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks (2018)\n\n[10] Ma et al., Luna: Linear Unified Nested Attention (2021)\n\n[11] Wang et al., Linformer: Self-Attention with Linear Complexity (2020)\n\n[12] Xiong et al., Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention (2021)\n\n[13] Jaegle et al., Perceiver: General Perception with Iterative Attention (2021)\n\n[14] Jaegle et al., Perceiver IO: A General Architecture for Structured Inputs & Outputs (2021)", " > Typo: line 286 Figure 1 -> Figure 2\n\nA5. We appreciate the comment and will fix the typo.\n\n> I'd like to know whether other baselines like GraphFormers use the Laplacian positional embedding or not. Since the Laplacian positional embeddings are useful tricks and have been used in GraphFormers, SAN and other works. (...) I think they should clearly tell the readers that they are not the first who uses the Laplacian's eigenvectors in Graph Transformers since this trick is a simple but effective one.\n\nA6. We appreciate the comment. 
Other baselines in Table 2, including Graphormer that uses shortest path-based spatial encoding, do not utilize Laplacian eigenvectors. An exception is EGT [1] that uses singular value decomposition on the adjacency matrix to obtain node features, which can be interpreted similarly to Laplacian eigenvectors. Still, we note that their framework is based on the edge encodings of Graphormer [2], which is largely different from our approach that does not require attention bias. We will revise the main text so that the relation to prior work [3, 4, 5] that introduce Laplacian eigenvectors as graph positional embedding is more clear. \n\n[1] Hussain et al., Global Self-Attention as a Replacement for Graph Convolution (2022)\n\n[2] Ying et al., Do Transformers Really Perform Bad for Graph Representation? (2022)\n\n[3] Dwivedi et al., Benchmarking Graph Neural Networks (2020)\n\n[4] Kreuzer et al., Rethinking Graph Transformers with Spectral Attention (2021)\n\n[5] Lim et al., Sign and Basis Invariant Networks for Spectral Graph Representation Learning (2022)", " We sincerely thank all the reviewers for the constructive comments. We are encouraged by the positive feedback on the novelty (E51P, YAnt) and clarity (To3r) of our work, in particular the theoretical contributions (E51P, To3r, YAnt) and their validation with the synthetic experiment (YAnt). Our responses to the specific questions, mainly regarding additional experiments, can be found in respective comments.", " We thank the reviewer for the positive comments. Below we respond to the questions.\n\n> Are node and type identifiers learned?\n\nA1. For the type identifiers (Section 2), we initialize them as learnable vectors and jointly train them with the model. For the node identifiers (Section 2), our theory only requires them to be orthonormal (Section 3.3); therefore, we choose them as simple randomized vectors (ORF) or matrix factorization-based vectors (Lap) that are not learned but algorithmically obtained at each forward pass.\n\n> Evaluated on only one graph dataset.\n\nA2. While we acknowledge that our empirical evaluation is conducted only on PCQM4Mv2, we would like to note that PCQM4Mv2 is one of the largest-scale graph datasets up to date containing 3.8 million graphs [1]. This makes it one of the few suitable benchmarks to test our model, as Transformers are generally designed to work with extremely large-scale data [2, 3].\n\nTo further demonstrate the effectiveness of our method in a more broad class of graph understanding tasks, we conducted additional experiments on transductive node classification datasets, including co-authorship (CS, Physics) [4], co-purchase (Photo, Computers) [4], and Wikipedia page networks (Chameleon, Crocodile) [5], which generally involve large-scale graphs. The statistics of the datasets are outlined below:\n\n| Dataset | CS | Physics | Photo | Computers | Chameleon | Crocodile |\n| --- | --- | --- | --- | --- | --- | --- |\n| # nodes ($n$) | 18,333 | 34,493 | 7,650 | 13,752 | 2,277 | 11,631 |\n| # edges ($m$) | 81,894 | 247,962 | 119,081 | 245,861 | 36,101 | 180,020 |\n| # classes | 15 | 5 | 8 | 10 | 6 | 6 |\n\nWe randomly split the dataset into train, validation, and test sets by randomly reserving 30 random nodes per class for validation and test, respectively, and using the rest for training. 
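For clarity, a minimal sketch of this per-class split protocol could look as follows (the function name and seed handling are our illustration, not the exact code used); the results table comes right after.

```python
import torch

def per_class_split(labels, n_reserved=30, seed=0):
    # labels: (num_nodes,) integer class labels. Reserves `n_reserved` random
    # nodes per class for validation and another `n_reserved` for test;
    # all remaining nodes go to training.
    g = torch.Generator().manual_seed(seed)
    train, val, test = [], [], []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        perm = idx[torch.randperm(idx.numel(), generator=g)]
        val.append(perm[:n_reserved])
        test.append(perm[n_reserved:2 * n_reserved])
        train.append(perm[2 * n_reserved:])
    return torch.cat(train), torch.cat(val), torch.cat(test)
```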
\n| | CS | Physics | Photo | Computers | Chameleon | Crocodile |\n| --- | --- | --- | --- | --- | --- | --- |\nGCN | 0.895 +- 0.004 | 0.932 +- 0.004 | 0.926 +- 0.008 | 0.873 +- 0.004 | 0.593 +- 0.01 | 0.660 +- 0.01 |\nGAT | 0.893 +- 0.005 | 0.937 +- 0.01 | 0.947 +- 0.006 | **0.914 +- 0.002** | 0.632 +- 0.011 | 0.692 +- 0.017 |\nGIN | 0.895 +- 0.005 | 0.886 +- 0.046 | 0.886 +- 0.017 | 0.362 +- 0.051 | 0.479 +- 0.027 | 0.515 +- 0.041 |\n| Graphormer | 0.791 +- 0.015 | OOM | 0.894 +- 0.004 | 0.814 +- 0.013 | 0.457 +- 0.011 | 0.489 +- 0.014 |\nSGT (Near-ORF) + Performer | 0.882 +- 0.007 | 0.931 +- 0.009 | 0.872 +- 0.011 | 0.82 +- 0.019 | 0.568 +- 0.019 | 0.583 +- 0.024 |\nSGT (Lap) + Performer | 0.902 +- 0.004 | 0.941 +- 0.007 | 0.919 +- 0.009 | 0.86 +- 0.012 | 0.637 +- 0.032 | 0.638 +- 0.025 |\nSGT (Lap) + Performer + Sp. Equiv. Basis | **0.903 +- 0.004** | **0.950 +- 0.003** | **0.949 +- 0.007** | 0.912 +- 0.006 | **0.653 +- 0.029** | **0.718 +- 0.012** |\n\nThe results are outlined in the above Table. Graphormer [8] results in out-of-memory in the Physics dataset mainly due to the spatial encoding that requires $\\mathcal{O}(n^2)$ memory. By constraining the model capacity appropriately, we were able to run Graphormer on other datasets. However, we observe a relatively low performance, presumably due to the memory complexity that prevents depth and head scaling. As the spatial encoding is incorporated into the model via attention bias, the model strictly requires $\\mathcal{O}(n^2)$ memory and cannot be easily made more efficient. On the other hand, SGT variants are able to utilize Performer kernel attention with $\\mathcal{O}(n+m)$ cost, which allows using larger models to achieve the best performance in all but one dataset (Computers, where the performance is on par with the best model).\n\n(continued to 2/2)", " (continued from 1/2)\n\nLet us finish by explaining how we applied SGT for this task. Considering large $n$, an immediate challenge for our model is dealing with the orthonormality assumption on the node identifiers (Lemma 1), as the maximal number of orthonormal node identifers is upper bounded by its dimension $d_p$. In this case, it is reasonable to introduce *near-orthonormal vectors* as node identifiers, as it is theoretically guaranteed that we can draw an exponential number $\\mathcal{O}^{\\Omega(d_P)}$ of $d_p$-dimensional near-orthonormal vectors [7]. For *SGT (Near-ORF)*, we used $d_p = 64$-dimensional random node identifiers where each entry is sampled from $\\{-1/d_p, +1/d_p\\}$ under equal probability [7]. For *SGT (Lap)*, we used a subset of the Laplacian eigenvectors as node identifiers, more specifically $d_p/2$ eigenvectors with lowest eigenvalues and $d_p/2$ eigenvectors with highest eigenvalues, and choose $d_p$ in the range of $64$ to $100$ based on validation performance.\n\nWhile *Near-ORF* and *Lap* can theoretically serve for an efficient low-rank approximation for orthonormal node identifiers, their approximation can affect the quality of modeled equivariant basis. In particular, equivariant basis ($\\mu$) represented as ***sparse*** basis tensor ($\\mathbf{B}^\\mu$; Definition 2) are expected to be affected more, as they require most entries to be zero. To remedy this, we take a simple approach of residually adding one of such sparse equivariant operators $\\mathbf{X}\\_{ii}\\mapsto \\mathbf{X}\\_{ii} + \\sum\\_{j\\neq i} \\mathbf{X}\\_{ij}$ explicitly after each Transformer layer. We denote this variant as *SGT (Lap) + Performer + Sp. Equiv. Basis*. 
This fix is minimal, easy to implement, and highly efficient as it only requires a single $\\texttt{torch.coalesce()}$ call, and also empirically effective as shown in Table 1.\n\n[1] Hu et al., OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs (2021)\n\n[2] Brown et al., Language Models are Few-Shot Learners (2020)\n\n[3] Dosovitsky et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2020)\n\n[4] Shchur et al., Pitfalls of Graph Neural Network Evaluation (2019)\n\n[5] Rozemberczki et al., Multi-scale Attributed Node Embedding (2019)\n\n[6] Xu et al., How Powerful are Graph Neural Networks? (2019)\n\n[7] Gorban et al., Approximation with random bases: Pro et Contra (2016)\n\n[8] Ying et al., Do Transformers Really Perform Badly for Graph Representation? (2021)", " We thank the reviewer for the positive comments. Below we respond to the questions.\n\n> The experiments are not very convincing. Only one graph dataset, PCQM4Mv2, is considered in the paper. If one to two new datasets are added, that would be great. (...) The results seem not strong enough, and more experiments can make this paper stronger.\n\nA1. While we acknowledge that our empirical evaluation is conducted only on PCQM4Mv2, we would like to first appeal that PCQM4Mv2 is one of the largest-scale graph datasets up to date containing 3.8 million graphs [1]. This makes it one of the few suitable benchmarks to test our model, as Transformers are generally designed to work with extremely large-scale data [2, 3].\n\nTo further demonstrate the effectiveness of our method in a more broad class of graph understanding tasks, we conducted additional experiments on transductive node classification datasets, including co-authorship (CS, Physics) [4], co-purchase (Photo, Computers) [4], and Wikipedia page networks (Chameleon, Crocodile) [5], which generally involve large-scale graphs. The statistics of the datasets are outlined below:\n\n| Dataset | CS | Physics | Photo | Computers | Chameleon | Crocodile |\n| --- | --- | --- | --- | --- | --- | --- |\n| # nodes ($n$) | 18,333 | 34,493 | 7,650 | 13,752 | 2,277 | 11,631 |\n| # edges ($m$) | 81,894 | 247,962 | 119,081 | 245,861 | 36,101 | 180,020 |\n| # classes | 15 | 5 | 8 | 10 | 6 | 6 |\n\nWe randomly split the dataset into the train, validation, and test sets by reserving 30 random nodes per class for validation and test, respectively, and using the rest for training. We experiment with simple variants of SGT equipped with Performer kernel attention (*details can be found at the end of the response*), and compare them against strong GNN baselines, including the following:\n* GCN (message-passing that works well on large graphs [4])\n* GAT (message-passing based on attention)\n* GIN (message-passing with 2-WL expressiveness guarantee similar to ours [6])\n* Graphormer (graph transformer equipped with shortest path-based edge encoding and spatial encoding [8])\n\nTable 1. Results of transductive node classification experiment. OOM denotes out-of-memory error on a 24GB RTX 3080 GPU. 
We report aggregated test accuracy at best validation accuracy over 7 randomized runs.\n| | CS | Physics | Photo | Computers | Chameleon | Crocodile |\n| --- | --- | --- | --- | --- | --- | --- |\nGCN | 0.895 +- 0.004 | 0.932 +- 0.004 | 0.926 +- 0.008 | 0.873 +- 0.004 | 0.593 +- 0.01 | 0.660 +- 0.01 |\nGAT | 0.893 +- 0.005 | 0.937 +- 0.01 | 0.947 +- 0.006 | **0.914 +- 0.002** | 0.632 +- 0.011 | 0.692 +- 0.017 |\nGIN | 0.895 +- 0.005 | 0.886 +- 0.046 | 0.886 +- 0.017 | 0.362 +- 0.051 | 0.479 +- 0.027 | 0.515 +- 0.041 |\n| Graphormer | 0.791 +- 0.015 | *OOM* | 0.894 +- 0.004 | 0.814 +- 0.013 | 0.457 +- 0.011 | 0.489 +- 0.014 |\nSGT (Near-ORF) + Performer | 0.882 +- 0.007 | 0.931 +- 0.009 | 0.872 +- 0.011 | 0.82 +- 0.019 | 0.568 +- 0.019 | 0.583 +- 0.024 |\nSGT (Lap) + Performer | 0.902 +- 0.004 | 0.941 +- 0.007 | 0.919 +- 0.009 | 0.86 +- 0.012 | 0.637 +- 0.032 | 0.638 +- 0.025 |\nSGT (Lap) + Performer + Sp. Equiv. Basis | **0.903 +- 0.004** | **0.950 +- 0.003** | **0.949 +- 0.007** | 0.912 +- 0.006 | **0.653 +- 0.029** | **0.718 +- 0.012** |\n\nThe results are outlined in Table 1. Graphormer [8] results in out-of-memory in the Physics dataset mainly due to the spatial encoding that requires $\\mathcal{O}(n^2)$ memory. By constraining the model capacity appropriately, we were able to run Graphormer on other datasets. However, we observe a low performance, presumably due to the memory complexity that prevents depth and head scaling. As the spatial encoding is incorporated into the model via attention bias, the model strictly requires $\\mathcal{O}(n^2)$ memory and cannot be easily made more efficient. On the other hand, SGT variants are able to utilize Performer kernel attention with $\\mathcal{O}(n+m)$ cost, which allows using larger models to achieve the best performance in all but one dataset (Computers, where the performance is on par with the best model).\n\n(continued to 2/2)", " (continued from 1/2)\n\nLet us finish by explaining how we applied SGT for this task. Considering large $n$, an immediate challenge for our model is dealing with the orthonormality assumption on the node identifiers (Lemma 1), as the maximal number of orthonormal node identifers is upper bounded by its dimension $d_p$. In this case, it is reasonable to introduce *near-orthonormal vectors* as node identifiers, as it is theoretically guaranteed that we can draw an exponential number $\\mathcal{O}^{\\Omega(d_P)}$ of $d_p$-dimensional near-orthonormal vectors [7]. For *SGT (Near-ORF)*, we used $d_p = 64$-dimensional random node identifiers where each entry is sampled from $\\{-1/d_p, +1/d_p\\}$ under equal probability [7]. For *SGT (Lap)*, we used a subset of the Laplacian eigenvectors as node identifiers, more specifically $d_p/2$ eigenvectors with lowest eigenvalues and $d_p/2$ eigenvectors with highest eigenvalues, and choose $d_p$ in the range of $64$ to $100$ based on validation performance.\n\nWhile *Near-ORF* and *Lap* can theoretically serve as an efficient low-rank approximation for orthonormal node identifiers, their approximation can affect the quality of modeled equivariant basis. In particular, equivariant basis ($\\mu$) represented as ***sparse*** basis tensor ($\\mathbf{B}^\\mu$; Definition 2) are expected to be affected more, as they require most entries to be zero. To remedy this, we take a simple approach of residually adding one of such sparse equivariant basis operators $\\mathbf{X}\\_{ii} \\mapsto \\mathbf{X}\\_{ii} + \\sum\\_{j\\neq i}\\mathbf{X}\\_{ij}$ explicitly after each Transformer layer. 
We denote this variant as *SGT (Lap) + Performer + Sp. Equiv. Basis*. This fix is minimal, easy to implement, and highly efficient as it only requires a single $\\texttt{torch.coalesce()}$ call, and also empirically effective, as shown in Table 1.\n\n[1] Hu et al., OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs (2021)\n\n[2] Brown et al., Language Models are Few-Shot Learners (2020)\n\n[3] Dosovitsky et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2020)\n\n[4] Shchur et al., Pitfalls of Graph Neural Network Evaluation (2019)\n\n[5] Rozemberczki et al., Multi-scale Attributed Node Embedding (2019)\n\n[6] Xu et al., How Powerful are Graph Neural Networks? (2019)\n\n[7] Gorban et al., Approximation with random bases: Pro et Contra (2016)\n\n[8] Ying et al., Do Transformers Really Perform Badly for Graph Representation? (2021)\n\n\n> It will be better to conduct more experimental ablation studies, such as the effects of type embedding.\n\nA2. We appreciate the comment. We would like to gently remind the reviewer that Table 1 (Section 4.1) provides a thorough analysis on the role of node and type identifiers in a close proximity to our theory in Section 3. Specifically, we have shown that both the node and type identifiers contribute to the approximation ability for equivariant basis, while node identifiers play a more critical role relative to type identifiers. Furthermore, please note that we conducted a comprehensive ablation study on the choice of node identifiers on the PCQM4Mv2 dataset (Table 2). Specifically, we have shown that, while node identifier based on orthonormal random embedding (ORF) works to some degree compared to absence of node identifier, the one based on the Laplacian eigenvector (Lap) provides additional information on the structure of a graph and empirically performs better. If the reviewer has additional concerns regarding the ablation study, please feel free to suggest us. We are happy to elaborate more in the rolling discussion period. ", " We thank the reviewer for the constructive comments. Below we respond to the questions.\n\n> I’m not entirely sure that the prior graph transformers cannot leverage different attention/transformer architecture. (...) 2. why can’t the prior works use the same efficient transformer implementations?\n\nA1. To address the concern, let us explain why Graphormer [1] that utilizes the fully-connected self-attention operator on nodes cannot utilize many efficient attention methods to reduce the memory complexity from $\\mathcal{O}(n^2)$ to $\\mathcal{O}(n)$. Prior graph transformers, including EGT [2], GRPE [3], and SAN [4], can be analyzed analogously. 
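Before the formal derivation below, a minimal sketch may help contrast the two attention forms. Shapes are illustrative, and `phi` is only a stand-in for Performer's positive random-feature map (the real map is randomized; `torch.exp` here merely exhibits the factorization structure):

```python
import torch

def biased_softmax_attention(Q, K, V, b):
    # Graphormer-style: the (n, n) bias b lives inside the softmax, so the
    # full attention matrix must be materialized -> O(n^2) memory.
    alpha = torch.softmax(Q @ K.T / Q.size(-1) ** 0.5 + b, dim=-1)
    return alpha @ V

def kernelized_attention(Q, K, V, phi=torch.exp):
    # Pure self-attention factorizes: phi(Q) @ (phi(K)^T V) never forms an
    # (n, n) matrix -> O(n) memory. Adding a bias inside the exponential
    # breaks this factorization, which is the point of the argument below.
    Qp, Kp = phi(Q), phi(K)
    num = Qp @ (Kp.T @ V)                      # (n, d)
    den = Qp @ Kp.sum(dim=0, keepdim=True).T   # (n, 1)
    return num / den
```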
Let us first recall self-attention with the query, key, and value $\\mathbf{Q}, \\mathbf{K}, \\mathbf{V}\\in\\mathbb{R}^{n\\times d}$ and the attention matrix $\\boldsymbol{\\alpha}\\in\\mathbb{R}^{n\\times n}$:\n\n$\\text{Att}(\\mathbf{Q}, \\mathbf{K}, \\mathbf{V})\\_i = \\sum\\_j\\boldsymbol{\\alpha}\\_{ij}\\mathbf{V}\\_j \\text{ where }\\boldsymbol{\\alpha}\\_{ij} = \\frac{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_j/\\sqrt{d}\\right)}{\\sum\\_{k}{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_k/\\sqrt{d}\\right)}}.$\n\nFor graphs, as the self-attention on nodes alone cannot incorporate the edge connectivity structure, Graphormer incorporates the graph information into the self-attention matrix $\\boldsymbol{\\alpha}\\in\\mathbb{R}^{n\\times n}$ utilizing the attention bias matrix $\\textbf{b}\\in\\mathbb{R}^{n\\times n}$ (referred to as the edge/spatial encoding) as follows:\n\n$\\boldsymbol{\\alpha}\\_{ij} = \\frac{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_j/\\sqrt{d} + \\mathbf{b}\\_{ij}\\right)}{\\sum\\_{k}{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_k/\\sqrt{d} + \\mathbf{b}\\_{ik}\\right)}}.$\n\nHere, the attention bias matrix $\\textbf{b}\\in\\mathbb{R}^{n\\times n}$ is the essential component to incorporate graph structure into the computation. Unfortunately, this modification immediately precludes the adaptation of many efficient attention techniques developed for pure self-attention. As representative examples, let us take Performer [5] (which we mainly used in our work), linear Transformer [6], efficient Transformer [7], and Random Feature Attention [8]. The methods are based on kernelization of the $\\text{Att}()$ operator as follows [5]:\n\n$\\text{Att}\\_\\phi(\\mathbf{Q}, \\mathbf{K}, \\mathbf{V})\\_i = \\sum\\_j\\frac{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_j/\\sqrt{d}\\right)}{\\sum\\_{k}{\\exp\\left(\\mathbf{Q}\\_i^\\top \\mathbf{K}\\_k/\\sqrt{d}\\right)}}\\mathbf{V}\\_j$\n$= \\sum\\_j\\frac{\\phi(\\mathbf{Q}\\_i)^\\top \\phi(\\mathbf{K}\\_j)}{\\sum\\_{k}{\\phi(\\mathbf{Q}\\_i)^\\top \\phi(\\mathbf{K}\\_k)}}\\mathbf{V}\\_j = \\frac{\\phi(\\mathbf{Q}\\_i)^\\top\\left(\\sum\\_j\\phi(\\mathbf{K}\\_j)\\mathbf{V}\\_j^\\top\\right)}{{\\phi(\\mathbf{Q}\\_i)^\\top\\left(\\sum\\_{k}\\phi(\\mathbf{K}\\_k)\\right)}}.$\n\nThe above factorization of the exponential into a pairwise dot product, in turn, eliminates the need to explicitly compute the attention matrix for computing $\\text{Att}\\_\\phi()$ and consequently reduces both time and memory cost to $\\mathcal{O}(n)$. Unfortunately, Graphormer and related variations are fundamentally unable to utilize the method. The bias term $\\mathbf{b}\\_{ij}$ is added to the dot product before the exponential, requiring that the full pairwise self-attention matrix $\\boldsymbol{\\alpha}\\in\\mathbb{R}^{n\\times n}$ is always explicitly computed to obtain the output, leading to $\\mathcal{O}(n^2)$.\n\nWhile our above explanation mainly regards kernelization methods, a number of other efficient Transformers, including Set Transformer [9], LUNA [10], Linformer [11], Nyströmformer [12], Perceiver [13], and Perceiver-IO [14], are not applicable to Graphormer for similar reasons. We appreciate the comment and will add relevant discussions to the main text.\n\n[1] Ying et al., Do Transformers Really Perform Bad for Graph Representation? 
(2022)\n\n[2] Hussain et al., Global Self-Attention as a Replacement for Graph Convolution (2022)\n\n[3] Park et al., GRPE: Relative Positional Encoding for Graph Transformer (2022)\n\n[4] Kreuzer et al., Rethinking Graph Transformers with Spectral Attention (2022)\n\n[5] Choromanski et al., Rethinking Attention with Performers (2020)\n\n[6] Katharopoulos et al., Transformers are RNNs: Fast Autoregressive Transformers with Linear Attention (2020)\n\n[7] Shen et al., Efficient Attention: Attention with Linear Complexities (2018)\n\n[8] Peng et al., Random Feature Attention (2021)\n\n[9] Lee et al., Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks (2018)\n\n[10] Ma et al., Luna: Linear Unified Nested Attention (2021)\n\n[11] Wang et al., Linformer: Self-Attention with Linear Complexity (2020)\n\n[12] Xiong et al., Nyströmformer: A Nyström-Based Algorithm for Approximating Self-Attention (2021)\n\n[13] Jaegle et al., Perceiver: General Perception with Iterative Attention (2021)\n\n[14] Jaegle et al., Perceiver IO: A General Architecture for Structured Inputs & Outputs (2021)", " > One weakness/question in mind is on scalability and how to adapt to graphs with various sizes. The node identifier matrix $\\mathbf{P}\\in\\mathbb{R}^{n\\times d_p}$, where $d_p$ should be at least $n$. How can this scale to large graphs? Also, the position encoding dimensions (node + type identifiers) may be much larger than the original node/edge features ($\\mathbf{X}_v$). Laplacian eigenvectors as node identifiers are still different from sinusoidal positional embeddings in NLP, since the sinusoidal position embeddings can be of arbitrary dimension, but for the Laplacian eigenvectors, the dimension is fixed (which is the number of nodes on the graph). (...) How do you set the dimension of node identifiers? Is the dimension the number of nodes in the graph? How do you deal with a batch of graphs with different sizes?\n\nA2. For small $n$, we can set $d_p$ as an integer larger than the maximum number of nodes in the training set. Then, for an input graph with $n$ nodes, we zero-pad the $n\\times n$ node identifier matrix to $n\\times d_p$. This allows batching the node identifiers of $B$ graphs with at most $N$ nodes into a single tensor of size $B\\times N\\times d_p$.\n\nFor moderately large $n$ (often in an inductive learning setting), the number of nodes can be a problem, as we can maximally draw $d_p$ orthonormal node identifiers for node identifier dimension $d_p$. If we need $n > d_p$, it is reasonable to introduce *near-orthonormal vectors* as node identifiers, as it is theoretically guaranteed that we can draw exponentially many ($2^{\\Omega(d_p)}$) $d_p$-dimensional near-orthonormal vectors [1]. Such near-orthonormal vectors include random vectors where each entry is a binary random variable with support $\\{-d_p^{-1/2}, +d_p^{-1/2}\\}$ [1], and more practically, a subset of Laplacian eigenvectors containing $d_p<n$ vectors, as often suggested in the GNN literature [2]. While such vectors serve as an efficient low-rank approximation of orthonormal node identifiers, we observe that they do not necessarily harm the performance in practice. 
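As an illustration of the two regimes just described, one way the node identifiers could be constructed and zero-padded for batching is sketched below (a hypothetical helper following the $\pm d_p^{-1/2}$ random variant, not the authors' code):

```python
import torch

def node_identifiers(n, d_p):
    # Small graphs (n <= d_p): exactly orthonormal rows; zero-padding to d_p
    # columns lets graphs of different sizes batch into one (B, N, d_p) tensor.
    if n <= d_p:
        return torch.eye(n, d_p)
    # Larger graphs (n > d_p): near-orthonormal random identifiers with
    # entries +-1/sqrt(d_p); exponentially many such vectors exist.
    signs = torch.randint(0, 2, (n, d_p), dtype=torch.float32) * 2 - 1
    return signs / d_p ** 0.5
```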
For example, in PCQM4Mv2 (Section 4.2), our preliminary observations suggest that using $d_p=16$ eigenvectors with the smallest eigenvalues for SGT (Lap) leads to the best performance even though the number of nodes can exceed $20$.\n\nHowever, for very large $n$ (often in a transductive learning setting), as noted by the reviewer, the approximation error of near-orthonormal node identifiers can bring a challenge to our model. More specifically, their approximation error can affect the quality of the learned equivariant basis (Lemma 1 and Definition 2), affecting the performance. In particular, we speculate that the equivariant bases ($\\mu$) represented as ***sparse*** basis tensors ($\\mathbf{B}^\\mu$ with $\\mathcal{O}(n)$ nonzero entries) can be affected; they require most of the self-attention coefficients to be zero, which particularly requires low-error orthonormality and can be challenging to model for $n \\gg d_p$. Therefore, in our added experiments involving very large graphs, we additionally introduce a very simple remedy of manually adding one such sparse equivariant basis $\\mathbf{X}\\_{ii} \\mapsto \\mathbf{X}\\_{ii} + \\sum\\_{j\\neq i}\\mathbf{X}\\_{ij}$ after each Transformer layer. As we show in a later response, this fix is minimal, easy to implement, cost-efficient, and empirically fixes the performance issue of our model when applied to very large graphs.\n\n[1] Gorban et al., Approximation with random bases: Pro et Contra (2016)\n\n[2] Dwivedi et al., Benchmarking Graph Neural Networks (2020)\n\n> From a practitioner point of view, the experimental results are not sufficient. The authors only evaluate their model on one real-world dataset, PCQM4M. There are multiple graph classification/regression datasets on Open Graph Benchmark and Benchmarking GNNs.\n\nA3. While we acknowledge that our empirical evaluation is conducted only on PCQM4Mv2, we would like to note that PCQM4Mv2 is one of the largest-scale graph datasets to date, containing 3.8 million graphs [1]. This makes it one of the few suitable benchmarks to test our model, as Transformers are generally designed to work with extremely large-scale data [2, 3].\n\nTo further demonstrate the effectiveness of our method in a broader class of graph understanding tasks, we conducted additional experiments on transductive node classification datasets, which typically involve much larger graphs than PCQM4M. For this experiment, please refer to A4.\n\n[1] Hu et al., OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs (2021)\n\n[2] Brown et al., Language Models are Few-Shot Learners (2020)\n\n[3] Dosovitsky et al., An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (2020)\n\n> Even with the claim that the proposed method can leverage efficient transformer implementations, it does not really address my concern. 1. You never show that the model actually works in a setting where we really need an efficient implementation, i.e., other quadratic implementations do not run or OOM. (...) The only limitation in my mind is that the proposed method may not necessarily generalize to large graphs as well as message passing networks. Although the paper mentioned that they can use all the efficient transformer architectures, yet there are no experiments, no evidence showing this. The full transformer architecture with quadratic complexity still runs.\n\nA4. 
To further demonstrate the effectiveness of our method in a broader class of graph understanding tasks, we conducted additional experiments on transductive node classification datasets, including co-authorship (CS, Physics) [1], co-purchase (Photo, Computers) [1], and Wikipedia page networks (Chameleon, Crocodile) [2], which generally involve large-scale graphs. The statistics of the datasets are outlined below:\n\n| Dataset | CS | Physics | Photo | Computers | Chameleon | Crocodile |\n| --- | --- | --- | --- | --- | --- | --- |\n| # nodes ($n$) | 18,333 | 34,493 | 7,650 | 13,752 | 2,277 | 11,631 |\n| # edges ($m$) | 81,894 | 247,962 | 119,081 | 245,861 | 36,101 | 180,020 |\n| # classes | 15 | 5 | 8 | 10 | 6 | 6 |\n\nWe randomly split the dataset into the train, validation, and test sets by randomly reserving 30 nodes per target class for validation and test, respectively, and using the rest for training. Our models are SGT equipped with Performer kernel attention. For the node identifiers, we either use $64$-dimensional near-orthonormal random vectors (denoted *SGT (Near-ORF)*) or a subset of Laplacian eigenvectors with the half lowest and half highest eigenvalues (denoted *SGT (Lap)*), with the number of eigenvectors chosen in the range of $64$ to $100$ based on validation performance. For *SGT (Lap)*, we additionally introduce a very simple sparse equivariant basis $\\mathbf{X}\\_{ii} \\mapsto \\mathbf{X}\\_{ii} + \\sum\\_{j\\neq i}\\mathbf{X}\\_{ij}$ after each Transformer layer as suggested in A2 (denoted *SGT (Lap) + Performer + Sp. Equiv. Basis*). We compare our method against strong GNN baselines, including the following:\n* GCN (message-passing that works well on large graphs [1])\n* GAT (message-passing based on attention)\n* GIN (message-passing with 2-WL expressiveness similar to ours [3])\n* Graphormer (graph transformer equipped with shortest path-based edge encoding and spatial encoding [4])\n\nTable 1. Results of transductive node classification experiment. OOM denotes out-of-memory error on a 24GB RTX 3080 GPU. We report aggregated test accuracy at best validation accuracy over 7 randomized runs.\n| | CS | Physics | Photo | Computers | Chameleon | Crocodile |\n| --- | --- | --- | --- | --- | --- | --- |\n| GCN | 0.895 +- 0.004 | 0.932 +- 0.004 | 0.926 +- 0.008 | 0.873 +- 0.004 | 0.593 +- 0.01 | 0.660 +- 0.01 |\n| GAT | 0.893 +- 0.005 | 0.937 +- 0.01 | 0.947 +- 0.006 | **0.914 +- 0.002** | 0.632 +- 0.011 | 0.692 +- 0.017 |\n| GIN | 0.895 +- 0.005 | 0.886 +- 0.046 | 0.886 +- 0.017 | 0.362 +- 0.051 | 0.479 +- 0.027 | 0.515 +- 0.041 |\n| Graphormer | 0.791 +- 0.015 | *OOM* | 0.894 +- 0.004 | 0.814 +- 0.013 | 0.457 +- 0.011 | 0.489 +- 0.014 |\n| SGT (Near-ORF) + Performer | 0.882 +- 0.007 | 0.931 +- 0.009 | 0.872 +- 0.011 | 0.82 +- 0.019 | 0.568 +- 0.019 | 0.583 +- 0.024 |\n| SGT (Lap) + Performer | 0.902 +- 0.004 | 0.941 +- 0.007 | 0.919 +- 0.009 | 0.86 +- 0.012 | 0.637 +- 0.032 | 0.638 +- 0.025 |\n| SGT (Lap) + Performer + Sp. Equiv. Basis | **0.903 +- 0.004** | **0.950 +- 0.003** | **0.949 +- 0.007** | 0.912 +- 0.006 | **0.653 +- 0.029** | **0.718 +- 0.012** |\n\nThe results are outlined in Table 1. Graphormer [4] results in out-of-memory in the Physics dataset, mainly due to the spatial encoding that requires $\\mathcal{O}(n^2)$ memory. By constraining the model capacity appropriately, we were able to run Graphormer on other datasets. However, we observe a low performance, presumably due to the memory complexity that prevents depth and head scaling. 
As the spatial encoding is incorporated into the model via attention bias, the model strictly requires $\\mathcal{O}(n^2)$ memory and cannot be easily made more efficient. On the other hand, SGT variants are able to utilize Performer kernel attention with $\\mathcal{O}(n+m)$ cost, which allows using larger models to achieve the best performance in all but one dataset (Computers, where the performance is on par with the best model).\n\n[1] Shchur et al., Pitfalls of Graph Neural Network Evaluation (2019)\n\n[2] Rozemberczki et al., Multi-scale Attributed Node Embedding (2019)\n\n[3] Xu et al., How Powerful are Graph Neural Networks? (2019)\n\n[4] Ying et al., Do Transformers Really Perform Badly for Graph Representation? (2021)", " > Is there any difference between modeling directed and undirected edges? If one edge (u,v) is undirected, I think the modeling of (u,v) and (v,u) should be identical. However, it seems that the edge embedding will be $[\\mathbf{X}\\_{(u,v)}, \\mathbf{P}\\_u, \\mathbf{P}\\_v]$ for $(u,v)$ and $[\\mathbf{X}\\_{(u,v)}, \\mathbf{P}\\_v, \\mathbf{P}\\_u]$ for $(v,u)$. Also, how does the above issue about directedness and undirectedness generalize to higher-order hypergraphs?\n\nA5. As the reviewer noted, we treat an undirected input edge $(u, v)$ as if both directions $(u,v)$ and $(v,u)$ are present. This leads to a pair of edge tokens $[\\mathbf{X}\\_{(u,v)}, \\mathbf{P}\\_u, \\mathbf{P}\\_v]$ and $[\\mathbf{X}\\_{(v, u)}, \\mathbf{P}\\_v, \\mathbf{P}\\_u]$. This is a common characteristic of tensor-based permutation equivariant neural networks, namely $k$-IGN [1, 2, 3, 4] and its successors such as PPGN [5]. Similar to the second-order case, an undirected order-$k$ input hyperedge $(v\\_1, ..., v\\_k)$ of a higher-order hypergraph is parsed into all possible orderings of node identifiers. While this can be easily avoided in practice by using a single token for each undirected edge and pooling the node identifiers as $\\sum\\_{i=1}^k\\rho(\\mathbf{P}\\_{v\\_i})$, we refrained from doing it to adhere more faithfully to the theory of $k$-IGN, and considering that having an additional edge token introduces a tolerable overhead in the second-order case (graphs).\n\n[1] Maron et al., Invariant and Equivariant Graph Networks (2019)\n\n[2] Maron et al., On the Universality of Invariant Networks (2019)\n\n[3] Keriven et al., Universal Invariant and Equivariant Graph Neural Networks (2019)\n\n[4] Serviansky et al., Set2Graph: Learning Graphs From Sets (2020)\n\n[5] Maron et al., Provably Powerful Graph Networks (2019)\n\n> It’s interesting that using Laplacian eigenvectors as node identifiers and not using type id leads to comparable performance with using type identifiers on sparse inputs (the 7th row vs the final three rows in Table 1). Any insights?\n\nA6. We think that it might be possible that the network learned the sparse structures of the graphs encoded in the Laplacian eigenvectors, and partially utilized them to produce approximate patterns even in the absence of type identifiers. We still note that the difference coming from the presence of type identifiers is significant (please see Table 4).\n\n> Have you submitted your final results to get the numbers on the test set of PCQM4M? The authors do not seem to include a discussion on how they tune the hyperparameters on PCQM4M, especially since you only have the validation set. The validation performance alone is not convincing enough, let alone that the proposed method achieves higher MAE than prior transformer-based methods such as Graphormer.\n\nA7. 
We appreciate the comment. We will shortly submit our best-performing model (SGT (Lap) in Table 2) during the rebuttal (it requires a bit of time as the leaderboard submission requires a technical report), and will report the test score once we have it. Most of the hyperparameters of our model, including the depth, width, batch size, and learning schedule, were kept identical to the $\\texttt{Graphormer-base}$ model [1] without extensive tuning for a controlled comparison. Thus, we (cautiously) anticipate that the performance gap between the validation and the test set would not be large enough to affect our arguments in Section 4.\n\n[1] Ying et al., Do Transformers Really Perform Badly for Graph Representation? (2021)", " > Even on the PCQM4M dataset, it seems that all variants of the proposed methods are worse than prior graph transformers. (...) The model achieves higher MAE than all the three graph transformer baselines in Table 2. I did not find sufficient discussion on this result. There is basically a one-liner, which still emphasizes that the proposed method can be optimized computationally while not discussing the reasons for not matching the same empirical performance. State-of-the-art results are not necessary for publication but detailed discussion and analysis on why not is necessary.\n\nA8. We appreciate the comment. While we could not include the following in-depth discussion in the initial draft due to page restrictions, we will add it to the revised version. While our model is currently underperformed by Graphormer and its successors, we think the low performance is, in part, because we intentionally kept its components simple to faithfully adhere to the equivariance theory. We think it has a lot of room for performance improvement if we focus on engineering. For example, the model currently uses Laplacian eigenvectors [1] as node identifiers, which have been criticized for issues such as loss of structural information [2] and sign ambiguity [3]. We could, e.g., try to relax the theoretical requirement for orthonormality of node identifiers and incorporate more powerful node positional encodings [2, 3] as node identifiers, which could potentially yield better performance in practice. We consider engineering our model to match or outperform more sophisticated graph Transformers as a promising and important next research direction.\n\n[1] Dwivedi et al., Benchmarking Graph Neural Networks (2020)\n\n[2] Kreuzer et al., Rethinking Graph Transformers with Spectral Attention (2021)\n\n[3] Lim et al., Sign and Basis Invariant Networks for Spectral Graph Representation Learning (2022)\n\n> Can the proposed method be applied to other node-level or edge-level tasks like link prediction? (...) Besides, it would be nice to add more discussion on how the proposed method may be applied to other node and edge level tasks.\n\nWe appreciate the comment and will add the following discussion to the main text. As our method produces a representation of each node and edge token, in principle, any node-level or edge-level prediction can be made by putting a prediction head on the appropriate output token. For link prediction, one could obtain the node tokens and use a pairwise logistic regression head following standard practice [1], or more interestingly, could \"query\" the model with the concatenated pair of node identifiers (which is just an edge token) and feed the output token to a logistic regression head; a minimal sketch is given below. 
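One way this edge-token query for link prediction could look is sketched here; `model.encode`, `model.link_head`, and the token layout are hypothetical stand-ins, not the paper's actual API:

```python
import torch

def link_probability(model, node_ids, edge_type_emb, u, v):
    # Build a candidate edge token from the concatenated node identifiers of
    # (u, v) plus the edge-type embedding (a candidate link has no input edge
    # features), encode it, and score it with a logistic head.
    query = torch.cat([node_ids[u], node_ids[v], edge_type_emb])  # one token
    h = model.encode(query.unsqueeze(0))           # (1, d) output token
    return torch.sigmoid(model.link_head(h)).squeeze()
```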
While we demonstrate the use of our model for the node-level classification task in Table 1, we consider extending our framework to more diverse tasks including link prediction as an important next direction.\n\n[1] Sankar et al., Dynamic Graph Representation Learning via Self-Attention Networks (2018)\n\n> The authors mentioned in the checklist that they included the code and data. However, their supplementary is only a PDF, I do not see the code to reproduce the concrete numbers listed in the experiment section.\n\nWe apologize for the confusion, and will shortly provide the anonymized code for reproducing the results in the experiment section.", " The authors of the paper claim that Transformers can be as expressive as graph-invariant networks by applying them directly to a graph and treating all nodes and edges in the graph as tokens. Using appropriate node and type identifiers, self-attention can approximate any permutation equivariant linear operator in the graph. Orthogonal random features and Laplacian eigenvectors for node identifiers are proposed. The final proposed model, Soft Graph Transformer (SGT), achieves performance improvements over the GNN baseline on the PCQM4Mv2 dataset. Strengths: \n1. It is novel to treat nodes and edges as input tokens to Transformers and add node and type identifiers.\n2. It is shown that SGT is as expressive as 2-IGN with equivariant linear layers.\n\nWeakness: \n1. Evaluate on only one graph dataset.\n\nQuestions: \n1. Are node and type identifiers learned?\n\nLimitations: \n1. Evaluate on only one graph dataset.", " This paper proposes a variant of graph transformers, the Soft Graph Transformer (SGT). In particular, all nodes and edges are simply treated as independent tokens, and then they are augmented with token embeddings. The authors prove this approach is theoretically at least as expressive as an invariant graph network, which is already more expressive than all message-passing graph neural networks. SGT performs significantly better than all GNNs and is competitive with Transformer variants with strong graph-specific architectural components. Strengths:\nThe motivation is clear. This paper proposes an approach of applying a standard Transformer directly to graphs. Although the algorithm is simple, it is shown that this simple approach yields a powerful graph learner in terms of both theory and experiment. The theoretical analysis is comprehensive and proves the effectiveness of the proposed method.\n\nWeaknesses:\nThe experiments are not very convincing. Only one graph dataset, PCQM4Mv2, is considered in the paper. If one to two new datasets are added, that would be great. It will be better to conduct more experimental ablation studies, such as the effects of type embedding. The results do not seem strong enough, and more experiments can make this paper stronger. I have no questions. The authors have addressed the limitations.", " The paper proposes a new way to encode nodes and edges on the graph so that one can directly transform the input graph into a sequence and directly use the sequence transformers. With carefully designed position and type encodings, the paper proves that it is theoretically as expressive as a second-order invariant graph network. The authors validate several of their claims through a synthetic experiment and an experiment on real-world dataset PCQM4M. 
It achieves comparable performance with prior models and has the potential to scale to large graphs since the proposed method can just do a drop-in replacement of the transformer with all efficient transformer architectures. The paper aims to serialize graphs so that we can leverage the benefits of all those expressive transformer architectures which have been investigated for the past few years. Although I’m not entirely sure that the prior graph transformers cannot leverage different attention/transformer architectures. I think the challenge of applying transformers to graphs is always how to differentiate nodes (i.e., designing different position encodings) while preserving the permutation invariance and equivariance. I like the second part of the paper where the model with the proposed position encodings is proved to be as expressive as a 2-IGN, and I also enjoy the first experiment which validates the theoretical claim.\n\nOne weakness/question in my mind is on scalability and how to adapt to graphs with various sizes. The node identifier matrix $\mathbf{P} \in \mathbb{R}^{n \times d_p}$, where $d_p$ should be at least $n$. How can this scale to large graphs? Also the position encoding dimensions (node + type identifiers) may be much larger than the original node/edge features ($\mathbf{X}_v$).\nLaplacian eigenvectors as node identifiers are still different from sinusoidal positional embeddings in NLP since the sinusoidal position embeddings can be of arbitrary dimension, but for the Laplacian eigenvectors, the dimension is fixed (which is the number of nodes on the graph).\n\nBesides, from a practitioner point of view, the experimental results are not sufficient. The authors only evaluate their model on one real-world dataset, PCQM4M. There are multiple graph classification/regression datasets on Open Graph Benchmark and Benchmarking GNNs. Even on the PCQM4M dataset, it seems that all variants of the proposed methods are worse than prior graph transformers. Even with the claim that the proposed method can leverage efficient transformer implementations, it does not really address my concern. 1. You never show that the model actually works in a setting where we really need an efficient implementation, i.e., other quadratic implementations do not run or OOM. 2. Why can’t the prior works use the same efficient transformer implementations?\n - How do you set the dimension of node identifiers? Is the dimension the number of nodes in the graph? How do you deal with a batch of graphs with different sizes?\n- Is there any difference between modeling directed and undirected edges? If one edge (u,v) is undirected, I think the modeling of (u,v) and (v,u) should be identical. However, it seems that the edge embedding will be $[\mathbf{X}_{(u,v)}, \mathbf{P}_u, \mathbf{P}_v]$ for (u,v) and $[\mathbf{X}_{(u,v)}, \mathbf{P}_v, \mathbf{P}_u]$ for (v,u). \n- Also wonder how does the above issue about directedness and undirectedness generalize to higher-order hypergraphs?\n- It’s interesting that using Laplacian eigenvectors as node identifiers and not using type id leads to comparable performance with using type identifiers on sparse inputs (the 7th row vs the final three rows in Table 1). Any insights?\n- Have you submitted your final results to get the numbers on the test set of PCQM4M? The authors do not seem to include a discussion on how they tune the hyperparameters on PCQM4M especially since you only have the validation set. 
The validation performance alone is not convincing enough, let alone the proposed method achieves higher MAE than prior transformer-based methods such as Graphormer. \n- The model achieves higher MAE than all the three graph transformer baselines in Table 2. I did not find sufficient discussion on this result. There is basically a one-liner, which still emphasizes that the proposed method can be optimized computationally while not discussing the reasons for not matching the same empirical performance. State-of-the-art results are not necessary for publication but detailed discussion and analysis on why not is necessary.\n- Can the proposed method be applied to other node-level or edge-level tasks like link prediction?\n- The authors mentioned in the checklist that they included the code and data. However, their supplementary is only a PDF, I do not see the code to reproduce the concrete numbers listed in the experiment section.\n The only limitation in my mind is that the proposed method may not necessarily generalize to large graphs as well as message passing networks. Although the paper mentions that they can use all the efficient transformer architectures, there are no experiments and no evidence showing this. The full transformer architecture with quadratic complexity still runs. Besides, it would be nice to add more discussion on how the proposed method may be applied to other node and edge level tasks (as raised in the above section).", " The authors show that using the vanilla Transformers for language and ViT without any graph-specific modifications can lead to promising results in graph learning. The interesting view is that the proposed model Soft Graph Transformer (SGT) uses edge features as tokens, compared to other graph learners that only use node features as input tokens. And identifiers are designed to encode the graph's local information. Therefore, they can use the techniques for Transformers without any other modifications. Furthermore, they also theoretically proved their equivalence to the k-IGN models and the k-WL algorithms. However, I think their empirical evaluations are not sufficient and the advantages of their proposed SGT are not clear compared with other Graph Transformers. Strengths:\n1. Proposed a new view on the tokens instead of only using node features.\n2. They also give a theoretical analysis of SGT's representational ability, which is as powerful as k-WL and more expressive than GCN. Their analysis also gives us a direct reason why the node identifiers are orthogonal.\n\nWeaknesses:\n1. It is still not clear to me why Transformers for graphs need to be the same as other transformers. I think it is natural that Graph Transformers need to have something special for graph tasks. According to the \"no free lunch\" theorem, there cannot be an algorithm that works the best for all tasks. Therefore, considering task-specific models is necessary. For example, Swin Transformer incorporates the inductive bias of vision tasks into Transformers for vision tasks.\n\n2. The empirical results also show no advantages compared with other Transformers for Graphs at a larger resource cost. As for SGT+Performer, I'm not sure whether such a comparison is fair since other Transformers for Graphs may also use linearized acceleration by modifying their models, like Linformer's attention?\n\n\nTypo:\nline 286 Figure 1 -> Figure 2\n\nQuestions:\n\n1. I'd like to know whether other baselines like GraphFormers use the Laplacian positional embedding or not, 
since Laplacian positional embeddings are a useful trick and have been used in GraphFormers, SAN and other works.\n\n2. I'd like to know why SGT performs worse than other GraphFormers. Do you have any possible causes in mind? I think they should clearly tell the readers that they are not the first to use the Laplacian eigenvectors in Graph Transformers, since this trick is a simple but effective one." ]
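Since Laplacian-eigenvector node identifiers recur throughout the exchange above, the following minimal NumPy sketch (all names are illustrative, not from the paper's code) shows one standard way to compute them and verifies the orthonormality property the method relies on:

```python
import numpy as np

def laplacian_node_identifiers(adj: np.ndarray, d_p: int) -> np.ndarray:
    """First d_p eigenvectors of the symmetric normalized Laplacian as node identifiers."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.zeros(n)
    d_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
    lap = np.eye(n) - (d_inv_sqrt[:, None] * adj) * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(lap)   # ascending eigenvalues; orthonormal eigenvectors
    return eigvecs[:, :d_p]            # keep the low-frequency components

# Tiny example on a 4-cycle: the identifier columns are orthonormal, as required.
a = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
p = laplacian_node_identifiers(a, d_p=3)
assert np.allclose(p.T @ p, np.eye(3), atol=1e-8)
```

Note that the sign of each eigenvector is arbitrary, which is the sign-ambiguity caveat raised in the discussion.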
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 5, 3 ]
[ "QXPw4PGudKI", "Sz-hv6eoMtl", "bC8D4WS0PBHO", "5eDd0-MG5X", "SLEUsI1pVNq", "2SUPNaEpJoH", "aZzieG9hMql", "vNi7da5Aa1EF", "rvf1wltR6lZ", "5eDd0-MG5X", "5eDd0-MG5X", "5eDd0-MG5X", "5eDd0-MG5X", "5eDd0-MG5X", "nips_2022_um2BxfgkT2_", "EByE1osL5uL", "EByE1osL5uL", "lVmW25iPI5Y", "lVmW25iPI5Y", "Sz-hv6eoMtl", "Sz-hv6eoMtl", "Sz-hv6eoMtl", "Sz-hv6eoMtl", "Sz-hv6eoMtl", "nips_2022_um2BxfgkT2_", "nips_2022_um2BxfgkT2_", "nips_2022_um2BxfgkT2_", "nips_2022_um2BxfgkT2_" ]
nips_2022_Lr2Z85cdvB
Differentiable hierarchical and surrogate gradient search for spiking neural networks
Spiking neural network (SNN) has been viewed as a potential candidate for the next generation of artificial intelligence with appealing characteristics such as sparse computation and inherent temporal dynamics. By adopting architectures of deep artificial neural networks (ANNs), SNNs are achieving competitive performances in benchmark tasks such as image classification. However, successful architectures of ANNs are not necessarily ideal for SNNs, and when tasks become more diverse, effective architectural variations could be critical. To this end, we develop a spike-based differentiable hierarchical search (SpikeDHS) framework, where spike-based computation is realized on both the cell and the layer level search space. Based on this framework, we find effective SNN architectures under limited computation cost. During the training of SNN, a suboptimal surrogate gradient function could lead to poor approximations of true gradients, making the network enter certain local minima. To address this problem, we extend the differentiable approach to surrogate gradient search where the SG function is efficiently optimized locally. Our models achieve state-of-the-art performance on classification of CIFAR10/100 and ImageNet with accuracies of 95.50%, 76.25% and 68.64%. On event-based deep stereo, our method finds optimal layer variation and surpasses the accuracy of specially designed ANNs, meanwhile with 26$\times$ lower energy cost ($6.7\mathrm{mJ}$), demonstrating the advantage of SNN in processing highly sparse and dynamic signals. Codes are available at \url{https://github.com/Huawei-BIC/SpikeDHS}.
Accept
This paper proposes a new architecture search algorithm for spiking neural networks (SNNs). The key insight is to optimize both the cell and the architecture level of the SNN. Convincing numerical results are provided on image classification tasks (CIFAR10, CIFAR100) and an event-based stereo task. One concern raised by the reviewers regards the comparison to existing work (some of which appears to be very recent). This point is raised by all four reviewers (although it has led to a rather large variance in their initial assessments). After an in-depth discussion between authors and reviewers and a discussion between AC and reviewers as well, it appears that this concern has been addressed in a satisfactory way. Other concerns (e.g., training pipeline and versatility by reviewer cjsQ) have also been resolved, and the remaining ones (measuring energy accurately as mentioned by reviewer LhUf, and computational overhead on neuromorphic hardware as mentioned by reviewer hUzC) have been regarded as out of scope. In summary, the reviewers have found the authors’ response convincing and have reached a consensus towards accepting the paper. After my own reading of the manuscript, I agree with this assessment and I am happy to recommend acceptance. As a final note, I would like to encourage the authors to include in the camera ready the discussions related to the feedback from the reviewers.
train
[ "8A5Zx1db9M-", "ZRpPkr5bn5o", "HJ6tN9KC5q8", "XBHjb5P3Bty", "RfBM5ydvewX", "u4AwF9ufDyl", "xLaCRXI0KA", "AvK3nlWU6G4", "EQZxGej2SuS", "6jRJ90u9RYi", "6m0SU06XPa", "zdLed46sH1", "wkJs38wHYJW", "dx7KkE_yqXQ", "zGLcJgDMgfV", "0RzQrCiHb1i", "WW5HtbQ0i4", "CVvSInpg_hh", "GxGNgybmjru", "YxxibRhimpE", "xINKCDvGgDo" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your reply. We understand your concerns for search time cost and power consumption measurement problems in the field of SNN. However, we believe these issues should not influence the main contribution of this work stated in the general response, the reasons are following:\n\n\n**For the issue of search time cost**\n\n\nIn terms of \"The authors just use a previous NAS technique and show accuracy improvement without actually looking at the bottomline problems of search time cost, SNN computation overload during search etc\": Actually, our work has considered the search time cost problem. One of our motivation of using continuous differentiable architecture search (DARTS) methods rather than traditional discrete \nNAS methods is to reduce search time cost (explained in line 66-71 of the manuscript). Experimentally, by reducing the number of nodes in cell and using limited candidate operations, our approach **demonstrates limited computation cost** in both classification (1.4 GPU day, line 210-211, our new experiments with 3 nodes (n=3 experiments in general response) cost 0.5 day) and event-stereo task (0.4 GPU day, line 246-247) **meanwhile achieving high accuracy**.\n\n\nIn terms of \"Thus, SNN based NAS will be more expensive in terms of training complexity than ANN NAS.\": A main reason for the larger training time cost of SNN compared to ANN is that SNN needs multiple time steps (T) for evolution. In event-based stereo, **we avoid this problem by using streaming training with T=1** (line 234-236), thus largely reducing the searching and retraining time cost of SNN, demonstrating the efficiency and low latency of SNN in processing temporal data streams.\n\n\n**For the issue of power consumption**\n\n\nIn terms of \"However, analytical estimations using the methodology of [1,2] on 45nm CMOS is a gross estimation and really does not qualify as efficiency improvement. For e.g. you might be sparse, but if your hardware cannot take advantage of sparsity, you will in fact have more computations.\": For the energy estimation, we followed the approach [1] as applied in Diet-SNN (IEEE TNNLS 2021) [2] and Dspike (NeuraIPS 2021) [3], which is a commonly recognized way for power consumption estimation of SNN. There could exist energy difference between analytical estimation and real measurement on neuromorphic hardware. However, **the argument of spike sparsity is not sufficient for power reduction does not mean it is not necessary**. A more accurate estimation of the power consumption of SNN depends on specific neuromorphic hardware and is beyond the scope of this work, so it should not influence our main contribution.\n\n\n**For other comments:**\n\n\nIn terms of \"I still think that with respect to previous works as the authors is marginal\": **both AutoSNN and SNASNet are constrained to fixed network backbone and are limited to demonstrations on image classification**. In contrast, as stated in the general response, our approach **enables network level optimization of SNN and is effective in both classification and dense prediction tasks**. In addition, **we develop the DGS method which has been demonstrated of general usage for different SNN structures and robust to different SG functions** (in supplement experiments). We believe these contributions are not marginal.\n\n\nIn terms of \"The authors make an argument that their method does well on event-stereo tasks compared to ANNs. 
Again, there have been some recent works on neuromorphic datasets that show SOTA performance with SNNs.\": To the best of our knowledge, our work is the current SNN SOTA on the benchmark event-based stereo task of the MVSEC dataset. We would appreciate it if the reviewer could provide more details of these mentioned recent works so we can compare.\n\n\n[1] M. Horowitz, “1.1 Computing’s energy problem (and what we can do about it),” in IEEE Int. Solid-State Circuits Conf. (ISSCC) Dig. Tech. Papers, Feb. 2014, pp. 10–14.\n\n[2] Nitin Rathi and Kaushik Roy. Diet-snn: A low-latency spiking neural network with direct input encoding and leakage and threshold optimization. IEEE Transactions on Neural Networks and Learning Systems, 2021.\n\n[3] Yuhang Li, Yufei Guo, Shanghang Zhang, Shikuang Deng, Yongqing Hai, and Shi Gu. Differentiable spike: Rethinking gradient-descent for training spiking neural networks. Advances in Neural Information Processing Systems, 34, 2021.", " Dear authors,\n\nThank you for doing a revised analysis to showcase the accuracy performance improvement of your work over previous NAS studies on SNNs. You have shown the number of spikes as a comparison for energy efficiency. However, analytical estimations using the methodology of [1,2] on 45nm CMOS is a gross estimation and really does not qualify as efficiency improvement. For e.g. you might be sparse, but if your hardware cannot take advantage of sparsity, you will in fact have more computations. I think more than inference time sparsity, one major bottleneck of using NAS which previous works like SNASNet and AutoSNN have acknowledged is the search time complexity. This is a huge bottleneck, and just throwing compute for doing DARTS may not be the most reasonable for SNNs given that there will be exploding memory usage with SNNs irrespective of sparsity if you are running on a GPU platform on Tensorflow/Pytorch. Thus, SNN based NAS will be more expensive in terms of training complexity than ANN NAS. So, I am not sure if inference time sparsity is a good and strong argument to make as a major contribution for the paper. The authors make an argument that their method does well on event-stereo tasks compared to ANNs. Again, there have been some recent works on neuromorphic datasets that show SOTA performance with SNNs.\n\nI still think that the contribution with respect to previous works is marginal. The authors just use a previous NAS technique and show accuracy improvement without actually looking at the bottomline problems of search time cost, SNN computation overload during search etc. The inference time sparsity by just counting the number of spikes is a very gross approximation that cannot be used to justify the novelty of this work. And hence I still recommend rejection.\n\n[1] M. Horowitz, “1.1 Computing’s energy problem (and what we can do about it),” in IEEE Int. Solid-State Circuits Conf. (ISSCC) Dig. Tech. Papers, Feb. 2014, pp. 10–14. \n\n[2] Nitin Rathi and Kaushik Roy. Diet-snn: A low-latency spiking neural network with direct input encoding and leakage and threshold optimization. IEEE Transactions on Neural Networks and Learning Systems, 2021.\n\n", " Thank you for your reevaluation and recognition of our work. We will illustrate these supplement experiments in the main text of our future revision. Thanks again.\n\n", " The authors have well addressed most of my concerns. The newly added comparison in the general response shows DGS has outperformed previous studies. 
The supplement experiments on a variety of SG functions substantiate the versatility and make their work more solid.\n\nFor the above reason, I will raise my score to 6. I suggest these supplement trials be illustrated in the main text in their future revisions.", " Thank you for your reply and thoughtful question. Stem layers are common in NAS methods and we inherited this structure. They are normal convolution layers for channel variation and feature extraction. For the classification task, we didn't choose it specifically and used one stem layer as the first layer, like the original DARTS [1] framework. It maps the input image with dimension [3, H, W] to a feature map of size [108, H, W]. For the event-stereo task, two stem layers were used as front layers for each subnetwork, and the choice is empirical. Other reasonable choices of the number of stem layers can be explored; however, it would be less related to the core contribution of this work. \n\n\nIn terms of why stem layers are treated separately from other SNN layers, as we have stated before, the stem layers were applied with Relu activation during search and in retraining they were switched to spiking activation for full SNN training. The reason why we chose Relu rather than spiking activation for the stem is that, different from cell layers where SG functions can be optimized for spiking activation, here the SG function is fixed (only during search; in retraining DGS can be applied), so if the SG is not chosen appropriately it can result in unstable training or vanishing gradients for the supernet. However, when the SG function is appropriately chosen (this may need trial and error since it could vary for different tasks and network structures), this phenomenon can also be avoided. This has been demonstrated in experiments and we can add it in revision.\n\n\nIn brief summary, the use of Relu rather than spiking activation for the stem is to **ensure an efficient search of the SNN cell structure, avoiding potential influence from an ill-chosen SG function of the stem**. However, this does not mean we have to use Relu, i.e., separate the stem from other SNN layers; appropriate SG functions for the stem can be obtained with sufficient empirical exploration. We hope this answers your question; please contact us if you have further questions.\n\n\n[1] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018. \n", " Thanks for the authors' clarification and effort in the supplement experiments. \n\nHowever, I still have questions about how the choice of stem layers is decided and why they are treated separately from the other SNN layers.", " Thank you for your reply and recognition of our efforts on extended experiments. We respect your reevaluation; however, we really hope that you can reconsider your rating. Our reasons are as follows:\n\n\n1. For the energy estimation, we followed the approach [1] as applied in Diet-SNN (IEEE TNNLS 2021) [2] and Dspike (NeurIPS 2021) [3], which is a commonly recognized way of estimating the power consumption of SNN. We acknowledge the significance of a power measurement on real neuromorphic hardware; however, this would be another work (considering the versatility of neuromorphic chips and their constraints on SNN implementation) and is not the main contribution of this work. If the reviewer prefers, we can consider adding the estimated energy cost on neuromorphic hardware based on their energy cost per spike from the published literature.\n\n\n2. 
On the algorithm level, as strengthened in the general response, using the same GPU (NVIDIA Tesla V100) our approach surpassed sophisticated ANNs in the benchmark event-based stereo task in terms of accuracy and inference speed, with a much smaller network size. We believe this is already an achievement and a breakthrough in the field of SNN, since only a few SNN works are applied to hard dense prediction problems and the current SOTA is significantly behind ANN performance ([4, 5, 6]).\n\n\n[1] M. Horowitz, “1.1 Computing’s energy problem (and what we can do about it),” in IEEE Int. Solid-State Circuits Conf. (ISSCC) Dig. Tech. Papers, Feb. 2014, pp. 10–14.\n\n\n[2] Nitin Rathi and Kaushik Roy. Diet-snn: A low-latency spiking neural network with direct input encoding and leakage and threshold optimization. IEEE Transactions on Neural Networks and Learning Systems, 2021.\n\n\n[3] Yuhang Li, Yufei Guo, Shanghang Zhang, Shikuang Deng, Yongqing Hai, and Shi Gu. Differentiable spike: Rethinking gradient-descent for training spiking neural networks. Advances in Neural Information Processing Systems, 34, 2021.\n\n\n[4] Jesse Hagenaars, Federico Paredes-Vallés, and Guido De Croon. Self-supervised learning of event-based optical flow with spiking neural networks. NeurIPS 2021. \n\n\n[5] Youngeun Kim, Joshua Chough, and Priyadarshini Panda. Beyond classification: Directly training spiking neural networks for semantic segmentation. arXiv preprint arXiv:2110.07742, 2021. \n\n\n[6] Ulysse Rançon, Javier Cuadrado-Anibarro, Benoit R Cottereau, and Timothée Masquelier. Stereospike: Depth learning with a spiking neural network. arXiv preprint arXiv:2109.13751, 2021. \n", " Thank you very much for the author's reply and efforts to supplement the experiment.\n\nAccording to the answer to Q5, the exploration of SNN on neuromorphic hardware has not been carried out. Computational efficiency is only analyzed from the perspective of addition and multiplication operations. \n\nThus, I maintain my rating based on the responses.", " Thank you for your reevaluation and recognition of our work. We will add ImageNet results, corresponding comparisons and energy estimation in our future revision. Thanks again.", " The authors have honestly addressed my concerns; I raise the score to 8 and hope to see this paper in the NeurIPS 2022 conference. \n\nIn their future revision, I hope they can add ImageNet results, comparisons to the literature and architecture details. If possible, the energy estimation of the searched architectures should also be added to the comparison. ", " Thank you for your reminder and quick response; we have updated the general response (turns out we didn't make it visible to all Reviewers in the last version). Please contact us for potential further questions.", " I'd like to thank the authors for their effortful, detailed response. However, there is no general response yet. Can the authors provide it so that I can re-evaluate the score?", " Thank you for your recognition of the contribution of our work and detailed review. Here we provide replies to your concerns and really appreciate it if you can kindly reconsider your ratings. If you have further questions, please do not hesitate to reply to us. \n\n\n**For your questions**:\n\n\n**Q1**: Due to the large training time cost of deep SNNs, we limit the search to a small search space. For CIFAR, we use the same downsampling network backbone as DARTS [1] and limited our candidate operations to {skip, 3x3 conv with Dspike(b=3), 3x3 conv with Dspike(b=5)}. 
Comparatively, DARTS [1] used different types of convolutions and pooling operations, including {3 × 3 and 5 × 5 separable conv, 3 × 3 and 5 × 5 dilated separable conv, 3 × 3 max pooling, 3 × 3 average pooling, identity}. For event-stereo, our search space includes both the cell operations and the layer structure, where the latter is selected among {upsampling, downsampling, same}. Similar to DARTS and our work, NAS-Bench-101 used a stem followed by stacks of directed acyclic graph cells as the network pattern and searched for optimal cell structures. However, it constructed a map from exhaustive CNN architectures (423k structures) to their runtime and accuracies on CIFAR10, which enabled NAS experiments to be run via querying a table instead of performing the costly train and evaluate procedure. Similar works can be done for SNNs in the future. \n \n\n**Q2**: This is a good question. Empirically, we found that search grid intervals ($\Delta b$) and the applied epoch interval ($e_D$) will influence the effect of DGS. The selection of different SG functions and applied layers also has an influence (please see the replied tables for **Q1** of Reviewer hUzC and the general response). In principle, the degree of improvement of DGS is largely related to how much the performance of SNN is influenced by an ill-chosen SG function. We will add more analysis in a future version.\n\n\n**For limitations**:\n\n\n**L1**: Thank you for reminding us of the two contemporary works about NAS for SNN. For the comparison between theirs and ours, please see our general response. We will add the comparison in revision.\n \n\n**L2**: A comparison of network capacity is shown in the table below. We calculated the capacity of ResNet-19 from the open source code of TET. For Dspike, we didn't find the open source code so we don’t know the exact capacity of the ResNet-18 used in the paper. However, they used a modified version of the ResNet-18 in [2] and we constructed the network following exactly their descriptions. In general our network has similar capacity to the ResNets we compared with. \n \n\n| Architecture | Method | Simulation length | Params.[M] | Accuracy [%] |\n| :----: | :----: | :----: | :----: | :----: | \n| ResNet-18 | FDG | 6 | 11.45 | $94.25 \pm 0.07$ | \n| | static | 6 | 11.45 | $94.54$ | \n| | DGS | 6 | 11.45 | $94.66$ | \n| ResNet-19 | TET | 6 | 12.63 | $94.50 \pm 0.07$ |\n| DARTS-SNN | DARTS-SNN | 6 | 12.33 | $94.34\pm 0.06$ |\n| | DARTS-SNN^D | 6 | 12.33 | $94.68\pm 0.05$ |\n\n\n**L3**: A comparison of FDG (Dspike with varying temperature), static training (Dspike with fixed temperature) and DGS based on ResNet-18 (see **L2** for how we construct the network) is shown in the table above. Since we didn't find the open source code of the Dspike paper and we were not able to obtain ideal results with FDG implemented by ourselves, using exactly the same training recipe is challenging. The result of FDG is taken from the original paper. For DARTS-SNN we didn't use techniques such as the initialization model or time-inheritance training as in FDG. Another difference is that FDG uses tdBN and we use normal batch normalization during training, whose parameters are converted to convolution weights during inference.\n \n\n**L4**: With limited time our experiments on ImageNet are preliminary; initial results are shown in the table below. By the time of submission the experiment of DGS is still running so we provide its latest result. 
We assume that for such a big dataset more exploration of hyperparameters and candidate operations is necessary to obtain results competitive with current SOTA SNNs using advanced training techniques. We believe this should not undermine the contribution of this work.\n\n\n| Architecture | Method | Simulation length | Trained epochs | Top-1 accuracy [%] |\n| :----: | :----: | :----: | :----: | :----: | \n| DARTS-SNN | DARTS-SNN | 6 | 100 | 67.96 |\n| | | 6 | 55 | 64.63 |\n| DARTS-SNN | DARTS-SNN^D | 6 | 55 | 65.19 |\n\n\n[1] Hanxiao Liu, Karen Simonyan, and Yiming Yang. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055, 2018. \n\n\n[2] Tong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, and Mu Li. Bag of tricks for image classification with convolutional neural networks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 558–567, 2019.\n", " Thank you for your careful review and nice suggestions; we will recheck typos and terminology and enlarge the font size in the figure in revision. Here we provide exact replies to your concerns and really appreciate it if you can kindly reconsider your ratings. If you have further questions, please do not hesitate to reply to us. \n\n\n**For your questions:**\n\n\n**Q1**: It is a good suggestion to put the SG function in the background section. We will make Sec. 3.1 a preliminary section. We will also add [1] and [2] as references for the SG function in the background. Our initial motivation for using the Dspike function is that it can cover a wide range of SGs with different smoothness by changing the temperature. In principle other SG functions with similar properties could also be considered. However, as analyzed in [1], it is possible that SG functions with penalization on extreme values can lead to better results. The table below shows our preliminary results with different SG functions for the event-based stereo task on split 1. Dspike and triangle functions (both with zero values for the region outside of [0,1]) outperform the other two functions, which both have long tails extending to extreme values. Note that we apply the mixed operation at the membrane potential (see supplement D.4) for this task. We will add more analysis in revision.\n \n\n| SG function | Method | 1PA [%] ↑|\n| :----: | :----: | :----: |\n| Dspike | fixed hyperparameter | 91.0 |\n| | DGS | 91.3 |\n| triangle | fixed hyperparameter | 90.9 |\n| | DGS | 91.3 |\n| Arctan | fixed hyperparameter | 89.3 |\n| | DGS | 90.1 |\n| Superspike | fixed hyperparameter | 89.6 |\n| | DGS | 89.7 |\n\n\n**Q2**: For the parallel optimization of SG functions in the DGS method, we denote a special case where the chosen SG function is linearly scaled with its control parameter (Line 190), e.g., rectangle functions with the same width but different amplitudes; thus the weights updated with different SGs can be estimated using one SG value, enabling fast parallel updates of SG functions across different layers. However, in the experiments we only applied DGS in one layer. We will clarify it in revision.\n \n\n**Q3**: For both CIFAR10 and CIFAR100, we compare with the most recent SOTA SNNs with advanced training strategies that we didn’t use, such as the moment loss in TET and the initialization model as well as time-inheritance training in Dspike. It would not be unexpected that our method had different improvements on the two different datasets. 
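As a side illustration of the temperature-controlled SG functions discussed in **Q1** above, the snippet below sketches how such a surrogate gradient is typically implemented in PyTorch. It uses a triangle-shaped surrogate as a stand-in (not the exact Dspike form), and the width parameter `b` plays the role of the temperature that DGS tunes:

```python
import torch

class TriangleSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; triangle-shaped surrogate gradient backward."""

    @staticmethod
    def forward(ctx, v, b=1.0):
        ctx.save_for_backward(v)
        ctx.b = b
        return (v >= 0).float()  # fire when the membrane potential crosses the threshold

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        b = ctx.b
        # Unit-area triangle of width 2/b: larger b gives a narrower, sharper surrogate.
        surrogate = b * torch.clamp(1.0 - b * v.abs(), min=0.0)
        return grad_out * surrogate, None  # no gradient w.r.t. b in this sketch

spikes = TriangleSpike.apply(torch.randn(8, requires_grad=True), 2.0)
```

Roughly speaking, DGS would then evaluate nearby values of b (e.g., b ± Δb on a search grid) and keep the one whose induced weight update most reduces the loss.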
\n\n\n**Q4**: There are four metrics in the evaluation criteria of the MVSEC dataset and the overall performance of SNN is better than BNN on both splits, either with DGS or not. For the issue on median depth error, the loss function (Eq. 7) we used is designed to reduce the disparity error, i.e pixel error of the disparity map, and it is usually the case when 1PE gets its minimum other metrics are not, since they are calculated with different methods.\n  \n\n**Q5**: The measure of power consumption is performed based on unified method for both ANN and SNN, i.e. by counting the MAC numbers of the network. SNN has significantly lower energy cost since the network has sparse activation and only addition operation, which consumes less energy compared to multiplication operation (0.9 pJ vs 4.6 pJ in 45nm CMOS technology [1, 2]). However, future implementation of the SNN on potential neuromorphic device could lead to further reduction of energy cost.\n \n\n[1] M. Horowitz, “1.1 Computing’s energy problem (and what we can do about it),” in IEEE Int. Solid-State Circuits Conf. (ISSCC) Dig. Tech. Papers, Feb. 2014, pp. 10–14.  \n[2] Nitin Rathi and Kaushik Roy. Diet-snn: A low-latency spiking neural network with direct input encoding and leakage and threshold optimization. IEEE Transactions on Neural Networks and Learning Systems, 2021.451\n", " Thank you for your thorough review on our work and constructive questions. Here we provide exact replies to your questions and really appreciate it if you can kindly reconsider your ratings. If you have further questions, please do not hesitate to reply to us. \n\n\n**For your questions:**\n\n\n**Q1**: Thank you for reminding us the two contemporary works about NAS for SNN, for the comparison between theirs and our work, please see our general response. We will add the comparison in revision.\n \n\n**Q2**: During the search phase, we use Relu activation function for stem layers, which constitute only a small part of the model parameters (0.03% for classification and 5% for event-stereo), the majority of the network (cells) is SNN. In retraining phase we replace the Relu function with spiking function and retrain the searched model from scratch without inheriting any weights from the searched architecture. Since we directly train a full SNN, we think its fair enough to compare with directly trained SNNs. The reason for using Relu activation is to ensure more stable updating of the supernet, since deep SNNs suffer from gradient vanish problem when the SG functions is not chosen appropriately. Empirically we replaced the Relu function in stem layer with Dspike function under appropriate hyperparameters and the search process is also stable. We will add it in revision.\n\n\n**Q3**: This is a very good suggestion. We chose Dspike as SG function because it can cover a broad range of SGs with different smoothness by changing the temperature. However, other SG functions with similar property can also be considered. In preliminary experiments we tested with Superspike, triangle and Arctan SG functions with fixed hyperparameter training and varying hyperparameters with DGS. The table below shows results with different SG functions for event-based stereo task on split 1, which demonstrates the robustness of DGS for different SG functions. 
We will add more details in revision.\n\n\n| SG function | Method | 1PA [%] ↑|\n| :----: | :----: | :----: |\n| triangle | fixed hyperparameter | 90.9 |\n| | DGS | 91.3 |\n| Arctan | fixed hyperparameter | 89.3 |\n| | DGS | 90.1 |\n| Superspike | fixed hyperparameter | 89.6 |\n| | DGS | 89.7 |\n", " Thank you for your review of our work and the list of recommended papers. Here we provide exact replies to your concerns and really appreciate it if you can kindly reconsider your ratings. If you have further questions, please do not hesitate to reply to us. \n\n\nThank you for reminding us of the two contemporary works about NAS for SNN. Regarding differences and the contribution of our work, please see our general response. Related content will be added in revision.\n\n\nIn terms of \"The authors have compared their technique… The authors have failed to acknowledge most recent works.\":\nWithin the limited space, we only presented the most recent publications on SNN classification on the CIFAR dataset from top conferences (Dspike, NeurIPS 2021; TET, ICLR 2022, etc.), and they are the current SOTA in terms of accuracy (as far as we know) at submission. We acknowledge works from Priya Panda's group and Emre Neftci's group (actually we have already cited two of their works, Line 75 as [41] and [42]) and will cite more of their works in revision to broaden the scope of the paper. \n \n\nThank you for providing the list of papers; we have checked each of them. Apart from the two NAS SNN papers ([5] and [11]) mentioned above, [8] was already cited in our manuscript (Line 96 as [23]). [1] is a nice perspective paper on neuromorphic computing and we can cite it. [2] is about SNN training with the SG method and [6] developed several techniques for SNN on event-based image classification; we could consider citing them as well. However, the remaining papers in the list are comparatively less related to our work. We acknowledge previous works on various aspects of SNNs from Priya Panda's group and in principle we would like to cite as many related papers as possible to broaden the scope of our paper. ", " We thank all reviewers for their time, constructive comments and valuable suggestions. While we will respond to each reviewer individually, we would like to first give a general response to the common issues and reemphasize our contributions to the field of SNN.\n\n\nReviewers Y1Po, cjsQ and LhUf have mentioned two existing works related to NAS for SNN. Both AutoSNN [1] and SNASNet [2] were on arXiv ([1] has been recently accepted by ICML 2022) and we didn’t notice them by the time we submitted to NeurIPS. We acknowledge they are contemporary works and will cite them in revision. Specifically, [1] studied pooling operations for downsampling in SNNs and applied NAS to reduce the overall spike numbers of the network. [2] applied NAS to improve network initialization and explore backward connections. However, both works only searched for different SNN cells or combinations of them under a fixed network backbone and their application is limited to image classification.\n\n\nOur approach differs from both works in terms of method and application scope. 
Methodologically, our approach **optimizes both the cell and the architecture level of SNN using end-to-end differentiable hierarchical search**, which not only achieves SOTA accuracy on image classification of CIFAR (a comparison is provided in the table below), but is also of **general use and effective for hard dense prediction tasks** where architectures require more variation, **a field where few SNN works exist and they are significantly behind ANN performance ([3, 4, 5])**. Experimentally, **we demonstrate the superiority of SNN in processing sparse and dynamical signals in the benchmark event-based stereo task**, in terms of accuracy, energy cost and inference speed, surpassing sophisticatedly designed ANNs. In addition, we also extend the differentiable principle and **develop the DGS method to efficiently optimize SG functions and improve general SNN training**, as demonstrated in both classification and the event-stereo task.\n\n\n| Dataset | Method | Architecture | Simulation Length | Params. [M] | Spikes[K] | Accuracy[%] |\n| :---: | :----: | :---: | :---: | :----: | :---: | :---: |\n|CIFAR10 | SNASNet | SNASNet-Fw | 5 | - | - | $93.12\pm 0.42$ |\n| | | SNASNet-Fw | 8 | - | - | $93.64\pm 0.35$ |\n| | | SNASNet-Bw | 5 | - | - | $93.73\pm 0.32$ |\n| | | SNASNet-Bw | 8 | - | - | $94.12\pm 0.25$ |\n| | AutoSNN | AutoSNN (C=64) | 8 | 5.44 | 261 | $92.54$ |\n| | | AutoSNN (C=128) | 8 | 20.92 | 310 | $93.15$ |\n| | DARTS-SNN | DARTS-SNN (n=4) | 6 | 12.33 | 788 | $94.34\pm 0.06$ |\n| | DARTS-SNN^D (1s) | DARTS-SNN (n=4) | 6 | 12.33 | 865 | $94.68\pm 0.05$ |\n| | DARTS-SNN | DARTS-SNN (n=3) | 6 | 14.29 | 752 | $95.35 \pm 0.05$ |\n| | DARTS-SNN^D (1s) | DARTS-SNN (n=3) | 6 | 14.29 | 724 | $95.36 \pm 0.01$ |\n| | DARTS-SNN^D (5c) | DARTS-SNN (n=3) | 6 | 14.29 | 720 | $95.50 \pm 0.03$ |\n| | | | | | | |\n|CIFAR100 | SNASNet | SNASNet-Fw | 5 | - | - | $70.06\pm 0.45$ |\n| | | SNASNet-Bw | 5 | - | - | $73.04\pm 0.36$ |\n| | AutoSNN | AutoSNN (C=64) | 8 | - | - | $69.16$ |\n| | DARTS-SNN | DARTS-SNN (n=4) | 6 | 11.91 | 962 | $75.70\pm 0.14$ |\n| | DARTS-SNN^D (1s) | DARTS-SNN (n=4) | 6 | 11.91 | 1025 | $76.03\pm 0.20$ |\n\n\n(-) means information not given. As shown in the table, our networks achieve higher accuracy compared to previous NAS-SNN works. AutoSNN has fewer spikes due to its specific optimization target. Furthermore, we made two changes in architecture in extended experiments and further improved the network performance. Specifically, we increased the output channel of the 1st stem to 144 (originally 108) and used 3 nodes (originally 4) within a cell. Instead of applying DGS to the first stem layer (1s), in the new architecture we also applied DGS to the first node of the 5th cell (5c), which leads to our current best result. \n\n\n[1] Na B, Mok J, Park S, et al. AutoSNN: Towards Energy-Efficient Spiking Neural Networks[J]. arXiv preprint arXiv:2201.12738, 2022.\n\n\n[2] Kim Y, Li Y, Park H, et al. Neural architecture search for spiking neural networks[J]. arXiv preprint arXiv:2201.10355, 2022.\n\n\n[3] Jesse Hagenaars, Federico Paredes-Vallés, and Guido De Croon. Self-supervised learning of event-based optical flow with spiking neural networks. NeurIPS 2021. \n\n\n[4] Youngeun Kim, Joshua Chough, and Priyadarshini Panda. Beyond classification: Directly training spiking neural networks for semantic segmentation. arXiv preprint arXiv:2110.07742, 2021. \n\n\n[5] Ulysse Rançon, Javier Cuadrado-Anibarro, Benoit R Cottereau, and Timothée Masquelier. 
Stereospike: Depth learning with a spiking neural network. arXiv preprint arXiv:2109.13751, 2021. ", " The paper presents a NAS optimization algorithm for SNN search. +The authors present interesting results with the differentiable NAS search.\n-There are two major works related to NAS for SNNs that have recently come out [5], [11]. The authors have not cited these works. It makes me wonder what the authors' contribution is compared to these works. [5] talks about the fact that training SNN using standard NAS methods might be too complex because SNNs need large training time, so they come up with a NAS-without-training technique. [11] talks about a differentiable NAS technique. Both works show good results on a variety of datasets, and talk about the intricacies of architecture search.\n-The authors have also compared their technique to selected works in Table 1. There is a lot of work from Priya Panda's group at Yale, Emre Neftci's group, and many others with regard to SNN training that show SOTA results on DVS and static datasets. The authors have failed to acknowledge the most recent works.\n\nBelow is a list of publications (not exhaustive) that the authors should check:\n[1] Towards spike-based machine intelligence with neuromorphic computing K Roy, A Jaiswal, P Panda Nature 575 (7784), 607-617\n\n[2] Enabling spike-based backpropagation for training deep neural network architectures C Lee, SS Sarwar, P Panda, G Srinivasan, K Roy Frontiers in neuroscience, 119\n\n[3] Rate Coding Or Direct Coding: Which One Is Better For Accurate, Robust, And Energy-Efficient Spiking Neural Networks? Y Kim, H Park, A Moitra, A Bhattacharjee, Y Venkatesha, P Panda ICASSP 2022\n\n[4] Neuromorphic Data Augmentation for Training Spiking Neural Networks Y Li, Y Kim, H Park, T Geller, P Panda arXiv preprint arXiv:2203.06145\n\n[5] Neural architecture search for spiking neural networks Y Kim, Y Li, H Park, Y Venkatesha, P Panda arXiv preprint arXiv:2201.10355\n\n[6] Optimizing deeper spiking neural networks for dynamic vision sensing Y Kim, P Panda Neural Networks 144, 686-698\n\n[7] Federated Learning with Spiking Neural Networks Y Venkatesha, Y Kim, L Tassiulas, P Panda IEEE Transactions on Signal Processing 2021\n\n[8] Beyond classification: directly training spiking neural networks for semantic segmentation Y Kim, J Chough, P Panda arXiv preprint arXiv:2110.07742\n\n[9] Revisiting batch normalization for training low-latency deep spiking neural networks from scratch Y Kim, P Panda Frontiers in neuroscience, 1638\n\n[10] Na, Byunggook, et al. \"AutoSNN: Towards Energy-Efficient Spiking Neural Networks.\" arXiv preprint arXiv:2201.12738 (2022). See my above comments. See the weakness section.", " This work aims to search for both the optimal SNN architecture and the hyperparameters of surrogate gradient (SG) functions. In the architecture search phase, they use DARTS and refine the search to different granularities (layer-level and cell-level). The search for the SG function (DGS) focuses on optimizing the temperature of the Dspike SG function. The results show that the searched architectures achieve SOTA performance on image classification and the event-based stereo matching task. **Pros**\n1. The search for the architecture alone significantly increases the performance of image classification tasks, which reveals the potential to be applied to various more complicated tasks.\n2. The idea of searching the hyperparameter of the SG function is novel, simple but effective.\n\n**Cons**\n1. 
The idea of applying NAS to SNNs is no longer novel as of the NeurIPS submission deadline. SNASNet [1] and AutoSNN [2] have proposed that NAS methods can be used for searching the structure of SNNs. The latter has been accepted at ICML 2022.\n2. The articulation of the training pipeline is not highlighted and is somewhat unclear to me. See the **Questions** below.\n3. The trials of the search on SG functions are confined to the Dspike function.\n\n[1] Youngeun Kim, et al. \"Neural architecture search for spiking neural networks.\" *arXiv preprint arXiv:2201.10355* (2022).\n\n[2] Byunggook Na, et al. \"AutoSNN: Towards Energy-Efficient Spiking Neural Networks.\" *arXiv preprint arXiv:2201.12738* (2022).\n 1. A comparison with previous NAS work on SNNs should be added.\n2. The training pipeline is unclear to me. Does the training pipeline include **both** ANN-to-SNN conversion and direct training using SG? I notice that L158 suggests that ANN layers are converted to SNN layers. If so, the comparison to some previous work using only directly trained SNNs is not fair.\n3. Why is Dspike the only tested SG function? As mentioned in ref. 64, the choice of SG function affects the viable range of hyperparameters. The range determines the difficulty level of the search. As there has been no claimed best SG function till now and ref. 64 points out that some different SG functions actually have the same maximum performance, the authors should try some popular SG functions like SuperSpike [1], triangle [2], ArcTan [3], etc., to show the versatility of this method.\n\n[1] Friedemann Zenke, and Surya Ganguli. \"Superspike: Supervised learning in multilayer spiking neural networks.\" *Neural computation* 30.6 (2018): 1514-1541.\n\n[2] Guillaume Bellec, et al. \"Long short-term memory and learning-to-learn in networks of spiking neurons.\" *NeurIPS* (2018).\n\n[3] Wei Fang, et al. \"Incorporating learnable membrane time constant to enhance learning of spiking neural networks.\" *ICCV*. 2021. N/A", " In this work, the authors propose a differentiable hierarchical search framework for spiking neurons, where spike-based computation is realized on both the cell and the layer level search space. Meanwhile, the authors find effective SNN architectures under limited computation cost. In order to avoid the standard SG approach that leads the network into suboptimal solutions, the authors propose a differentiable surrogate gradient search method where the SG function can be efficiently optimized locally in parallel. Finally, this work shows some interesting results on the image classification tasks. Strengths:\n1. A hierarchical differentiable surrogate gradient search framework is proposed to obtain better performance of the spiking model.\n2. Significant improvements in energy savings on deep stereo.\n\nWeakness:\n1. In terms of writing, some methods that were not proposed in the work were placed in the methods section. There are also some typos in terminology.\n2. The results of the ablation experiments and the analysis of some elements do not match.\n3. The font in the figure seems to be small and not clear enough, which requires very careful reading to find the valuable information.\n4. The percentage improvement of the proposed method varies greatly on the two image classification datasets. Even the improvement on CIFAR-10 is only 0.18. 1. My core concern lies in the choice of the surrogate gradient (SG) function in the methods section. First, it looks like the SG function is from existing work [31]. 
If this is true, the reviewer suggests that the authors put this subsection in the background section. Second, what is the motivation for choosing such an SG function? Because the motivation for the design of the SG function has been given in the existing studies [1-2]. It is clear that SG methods are a key part of the direct training of spiking neurons, and the choice of such methods also affects the search results, so the reviewers suggest that the authors explain the motivation clearly.\n\n[1] \"Accurate and efficient time-domain classification with adaptive spiking recurrent neural networks,\" Nat. Mach. Intell. 3(10): 905-913 (2021)\n\n[2] \"A Hybrid Spiking Neurons Embedded LSTM Network for Multivariate Time Series Learning under Concept-drift Environment,\" in IEEE Transactions on Knowledge and Data Engineering, doi: 10.1109/TKDE.2022.3178176.\n\n2. The parallel optimization involved in the search process needs further explanation by the authors.\n\n3. There is a significant difference in performance improvement on the CIFAR-10 and CIFAR-100 datasets, and the authors need to explain the reason for this phenomenon.\n\n4. The ablation experiments concluded the important value of temporal dynamics in the spiking neuron, but why do the MDE metric and the median depth error metric appear opposite on split 3, i.e., DARTS-BN^D is better than DARTS-SNN^D in terms of the median depth error metric on split 3?\n\n5. In Table III, the experimental results show that the power consumption of the proposed method is significantly lower. However, the reviewers note that the SNN model is tested on a chip and wonder whether it is the device that leads to the reduction in power consumption.\n The authors illustrate the limitations of their work.", " In this submission draft, the authors devise a differentiable hierarchical search framework tailored for SNNs. In the meantime, this framework is able to search the surrogate gradient in a differentiable manner. Their methods are validated on the CIFAR dataset and an event-based deep stereo dataset. Overall this is an interesting work. The authors come up with an end-to-end differentiable framework that solves two critical problems in SNN: the architecture and the surrogate gradient. \n\n\n1. Developing SNN-oriented architectures is novel and necessary, even though this work is not the first trial in the community. \n2. Searching the SG is interesting and I am glad to see a learning-based method to address the issue. \n3. The results on the CIFAR10/100 dataset are promising. 1. Can the authors elaborate on the search space in this work, especially when compared to the search space in DARTS and NAS-Bench-101?\n\n2. Can the authors provide more details on the difference in searching gradients with Dspike in Section 3.4? Currently, the only statement is *\" [31] demonstrates that by optimizing the width (or temperature) of the SG function the performance of SNN can be improved\"*, but there is no analysis of the drawbacks or of the degree to which this DGS method can improve. 1. Need to include the two prior SNN NAS papers in the discussion or experiments. See references below.\n\n\n2. A critical problem is that there is no comparison between the searched architecture and the ResNets used in other works. What if the searched architecture has a higher capacity than ResNets?\n\n3. An ablation study on the DGS is recommended; the authors should compare the static temperature gradient, DGS, and [31] on the same neural architecture and under the same training recipe.\n\n4. 
Better to have an ImageNet result. \n\n\n\n------\n\n**References**\n\nNa B, Mok J, Park S, et al. AutoSNN: Towards Energy-Efficient Spiking Neural Networks[J]. arXiv preprint arXiv:2201.12738, 2022.\n\nKim Y, Li Y, Park H, et al. Neural architecture search for spiking neural networks[J]. arXiv preprint arXiv:2201.10355, 2022." ]
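As background for the DARTS-style relaxation that the thread above repeatedly references (searching over candidate operations such as {skip, 3x3 conv with different Dspike temperatures}), here is a generic mixed-operation sketch in PyTorch. It is illustrative only: the candidate set and channel count are made up, and this is not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style continuous relaxation over a set of candidate operations."""

    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        # One architecture parameter per candidate op, optimized by gradient descent.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Illustrative candidate set on one edge of a cell (channel count is arbitrary):
c = 16
mixed = MixedOp([nn.Identity(),                  # "skip"
                 nn.Conv2d(c, c, 3, padding=1),  # 3x3 conv (SG variant 1)
                 nn.Conv2d(c, c, 3, padding=1)]) # 3x3 conv (SG variant 2)
y = mixed(torch.randn(2, c, 8, 8))
```

After search, the discrete architecture keeps, on each edge, the operation with the largest architecture weight (the argmax over alpha).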
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 5 ]
[ "ZRpPkr5bn5o", "0RzQrCiHb1i", "XBHjb5P3Bty", "RfBM5ydvewX", "u4AwF9ufDyl", "zGLcJgDMgfV", "AvK3nlWU6G4", "dx7KkE_yqXQ", "6jRJ90u9RYi", "6m0SU06XPa", "zdLed46sH1", "wkJs38wHYJW", "xINKCDvGgDo", "YxxibRhimpE", "GxGNgybmjru", "CVvSInpg_hh", "nips_2022_Lr2Z85cdvB", "nips_2022_Lr2Z85cdvB", "nips_2022_Lr2Z85cdvB", "nips_2022_Lr2Z85cdvB", "nips_2022_Lr2Z85cdvB" ]
nips_2022_FRDiimH26Tr
TA-MoE: Topology-Aware Large Scale Mixture-of-Expert Training
Sparsely gated Mixture-of-Expert (MoE) has demonstrated its effectiveness in scaling up deep neural networks to an extreme scale. Although numerous efforts have been made to improve the performance of MoE from the model design or system optimization perspective, existing MoE dispatch patterns are still not able to fully exploit the underlying heterogeneous network environments. In this paper, we propose TA-MoE, a topology-aware routing strategy for large-scale MoE training, from a model-system co-design perspective, which can dynamically adjust the MoE dispatch pattern according to the network topology. Based on communication modeling, we abstract the dispatch problem into an optimization objective and obtain the approximate dispatch pattern under different topologies. On top of that, we design a topology-aware auxiliary loss, which can adaptively route the data to fit in the underlying topology without sacrificing the model accuracy. Experiments show that TA-MoE can substantially outperform its counterparts on various hardware and model configurations, with roughly 1.01x-1.61x, 1.01x-4.77x, 1.25x-1.54x improvements over the popular DeepSpeed-MoE, FastMoE and FasterMoE systems.
Accept
Mixture-of-Expert (MoE) models have demonstrated a lot of success recently. To further improve upon the existing literature, this paper studies MoE routing for different network topologies, essentially to deal with the communication overhead of MoE training. The strategy is to add another layer on top for the topology, along with a corresponding objective to optimize. The authors also provide experiments demonstrating improved speed of convergence. The reviewers were in general positive and liked the idea of the paper. The reviewers did, however, raise issues about the lack of a clear demonstration that accuracy is not compromised, the lack of large data, and a few other more technical concerns. The reviewers' concerns seem to be more or less addressed by the authors. My overall assessment of the paper is positive. I think the general premise of the paper is interesting and the paper has interesting ideas. I do agree, however, that the experiments need to be more thorough. I am recommending acceptance but request that the authors follow the reviewers' comments to improve their experimental results
val
[ "CJjEFXOZzNK", "F6xWNVk_HYP", "L7fg_BjF_lU", "hmVpzhdb8p", "hQsA_W62jWp", "sG7x1zlbhQ1", "E0b_tRA2Ny5", "DnQMqmcbZA", "Ebhq9jKib8R", "OZBQk6it-V7", "xGdY48er99r" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I agree with your point. I will keep my current rating 5 considering all of those. Thanks!", " We appreciate the constructive feedback and valuable suggestions again.\n\nWe apologize that we are currently not able to have experiments with more experts or more updates due to the limitations of the computation resources. Please kindly notice that in the revised Figure 3, we have enlarged the value scale from 2.5-3.0 to be clearer. As a result, the fluctuation of curves combined with magnified scale might amplify the impression of loss difference. In fact, the loss curves of TA-MoE are consistent to the baselines with a few fluctuations, which we think is acceptable. To further demonstrate the consistency, we also attached the PPL metric result in Table 1 of the supplementary materials. Under configurations with 8, 16, 32 and 48 experts, the PPL differences, i.e., baseline valid PPL minus TA-MoE valid PPL, are +0.15, +0.21, +0.16 and -0.06. Although with 48 experts the PPL value (the less, the better) is a little higher than the baseline, the extent (0.06) is much lower than the others (0.15, 0.21 and 0.16), where TA-MoE shows better PPL results. Therefore, we can conclude that the difference is an acceptable fluctuation.", " Thank you for the valuable feedback. We were not aware that there are questions left unanswered. Could you kindly let us know which questions require further clarifications? Thank you again.", " Thank you for the clarifications on my questions.\nBased on those, I would increase my rating to 5.\n\nAlso, I have one follow-up question regarding Figure 3.\nIt seems there are data points that TA-MoE gives worse loss value when the number of experts increases (at the same number of updates).\nDo you have any experiments with more experts or more updates with E48 case?", " I thank the authors for their efforts in addressing many of my questions and concerns and other reviewers' as well. I hope the rest will be answered in later versions of the paper and I am retaining my score of borderline accept for now. ", " Thanks for your valuable feedbacks. Our responses are listed below.\n\n1) **About the capacity controlling method**\n\nWe consider two different capacity controlling methods used in DeepSpeed-MoE and FastMoE (line 109-112). The details on how to handle the capacity based on these two implementations have been introduced in line 221-228. For FastMoE, we only need to replace the original balance loss with topology loss and the capacity controlling method remains the same. As for DeepSpeed-MoE, every data chunk $c_{ie}$ from process $i$ to expert $e$ is pruned by $\\hat{c_{ie}}$, an unevenly split part of $C$. Section 4.2 shows how to get the optimal sub capacity size $\\hat{c_{ie}}$ based on topologies.\n\n2) **About the locality routing algorithm**\n\nCompared with global routing and forced routing, TA-MoE is advantageous in the following aspects:\n\nTA-MoE can take advantage of the adjacent information of data, which can be revealed in the dispatch patterns in Figure 6(b) and section 1.2 of Appendix. The figures reveal a “ladder-like” trend that the ranks within a node have higher dispatch preference to intra-node rank groups. As a result, experts prefer to receive “adjacent tokens” from nearby ranks. 
In fact, the correlation of adjacent tokens usually contains more important information, which gives TA-MoE the potential to maintain accuracy and achieve higher performance.\n\n TA-MoE has some advantages over existing compulsory local routing works, e.g., FasterMoE (line 85-87). Firstly, TA-MoE achieves better time-to-convergence performance, as shown in Figure 5 and line 279-282. Secondly, a forced local routing needs a manually set proportion of the local data chunk size, which makes it hard to adapt to various topologies. Lastly, compulsory local routing usually requires extra local sorting operations, introducing computation overheads.\n\n 3) **About GPT model configurations**\n\nThe information of the GPT model, such as the dataset and model architecture, has been introduced in the second paragraph of section 5. We add the number of layers in Table 2 of the paper. \n\n 4) **About Figures**\n\nFigure 3 and Figure 6 (b) have been improved in the revised paper.\n", " Thank you for giving inspiring suggestions and pointing out insufficient details. Our responses are listed below.\n\n1) **About the accuracy metric**\n\nLike some well-recognized related works, e.g., BASE layer and DeepSpeed-MoE, we have depicted the validation performance at every fixed interval of steps in Figure 3 as the comparison metric. To be more comprehensive, we have added the perplexity (PPL) metric in Table 1 of the Appendix. \n\n2) **About testing on more model architectures**\n\nTo be more general, we have added tests of MoE on Swin Transformer based MoE tasks in section 1.3 of the Appendix. The results have further demonstrated the effectiveness of the proposed topology-aware routing algorithm on different model architectures. \n\n3) **About the data dispatch distribution of other ranks**\n\nThe data dispatch of all the ranks has been shown in section 1.2 of the Appendix, which shows conclusions similar to those for rank 0.\n\n4) **About the transfer volumes in the motivation**\n\nIn the motivation section, 128M of data is used as an example to show that the static load-balanced dispatch is not effective in complex distributed environments. In fact, the value of the load-balanced transfer volumes has little effect on the demonstrated phenomenon. To be more representative, we choose 128M as an illustration, which is close to the real chunk size in most of the MoE training experiments. \n\n5) **About the code**\n\nWe will publish our code as soon as possible after the paper is accepted.\n\n6) **Discussions on cloud platforms**\n\nCloud provisions/hypervisors concentrate more on high-level schedules of different tasks. We focus on the single MoE task itself. They are orthogonal to each other. We observed that the dynamic feature and global data exchange pattern of MoE training mismatch the hierarchical heterogeneity of distributed cloud environments. To solve this, we propose a topology-aware data dispatch algorithm. In fact, cloud architectures share most of the topology abstractions analyzed in our paper. We therefore believe our algorithm can achieve similar performance improvements when it is applied to other cloud platforms.\n", " Thanks for your valuable suggestion. Please find our response below.\n\n1) **About testing on more model architectures**\n\nTo be more general, we have added tests of MoE on Swin Transformer based MoE tasks in section 1.3 of the Appendix. The results have further demonstrated the effectiveness of the proposed topology-aware routing algorithm on different model architectures. 
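As a rough illustration of the uneven capacity split $\hat{c_{ie}}$ discussed in the capacity response above, the following hypothetical helper — not the TA-MoE implementation; all names are invented — splits a global per-expert capacity $C$ across source ranks in proportion to assumed topology weights:

```python
def split_capacity(total_capacity, topo_weights):
    """Split a global per-expert capacity C into per-source-rank quotas
    proportional to topology weights (e.g., larger for intra-node ranks)."""
    norm = sum(topo_weights)
    quotas = [int(total_capacity * w / norm) for w in topo_weights]
    # Hand any rounding remainder to the highest-weight (cheapest) rank.
    best = max(range(len(quotas)), key=lambda i: topo_weights[i])
    quotas[best] += total_capacity - sum(quotas)
    return quotas

print(split_capacity(128, [4.0, 4.0, 1.0, 1.0]))  # -> [53, 51, 12, 12]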
\n", " This paper addresses the problem of MoE routing under the cases of different network topologies by allocating another abstraction layer for the topology and designing an auxiliary objective to optimize. Experiments show very good improvement in terms of speed compared to strong baselines. Strength: \n\n1. The paper offers an important contribution to the AI community at the system level, which is probably not difficult to approach for many people working in this field. In fact, in my humble opinion, not so many AI people have the opportunity to access detailed hardware information as cloud users such as with Azure or AWS. \n\n2. The experiments show very good improvement over strong baselines. System analysis is clearly presented. \n\nWeakness\n\n1. The paper addresses the system level. However, since it claims a significant boost of speed without sacrificing the model accuracy, it needs to show the accuracy, e.g. at least the LM-related one with NLP-related metrics. \n\n2. Line 240, which claims \"without loss of generality\", is probably too strong. My suggestion is if the solution is good, with the current hardware settings, the authors can run current codes for other many applications of which codes are available to further solidify their claims. \n\n3. Likewise, why not show the data dispatch distribution of other ranks but only rank 0? If space is limited, appendix space is always there. \n\n4. In the era of GPUs and large data, the motivation is led by demonstrating only 128MB of data is probably inefficient. Probably at least some GBs, or even stronger in a combination with different types of data would make a stronger motivation. \n\n5. No code is provided. \n As said, while many (if not most, in my humble opinion) AI people are not working near the system level to address the same problems as in this paper, e.g., they simply use AWS, Azure, Colab, or other cloud infrastructure for fast prototyping or deployment, I wonder how much improvement this solution can make with different cloud platforms? In other words, have “cloud” people already seen those heterogeneous infrastructure problems and addressed that at the cloud provision/hypervisors level? The paper presents the case of PaddleCloud but how about others? Maybe not very relevant since the paper addresses the system-related level and thus is hard to judge those impacts. ", " The paper proposes a new algorithm to improve training efficiency of Mixture of Experts models in a distributed training setting by exploiting the network topology information. To achieve this, the authors propose a new auxiliary loss term incorporating communication bandwidth to encourage tokens to be routed to closer nodes rather than further nodes. By applying this new algorithm, authors claim that they could achiever faster throughput (1.01x - 4.77x) without losing accuracy on their several different clusters. As a result, they show a faster wall-clock time convergence. The communication overhead is one of the major issues for the MoE model training and this paper proposes a new method to deal with this problem naturally. Given the increased usage of MoE model technology, this is a timely work. Having a soft guidance seems like a good idea not to hurt the original training dynamics while encouraging locality of token routing. 
And, as the authors mentioned, there has not been this kind of topology-aware loss term before, as far as I know.\nHowever, there are a few missing details about model configurations and algorithms, asked in the question section. And the overall speed gain is minor. There are a few questions that need to be answered as well. First, it is not clear how capacity and overflow tokens are handled in the proposed algorithm. They are known to be critical factors for successful MoE model training, but not many details about them are included in the paper.\nSecond, it is not clearly shown how the locality-preferred routing impacts the training compared to the global routing. Maybe it could be useful to have a few data points, such as forced local routing. It is unclear how experts can develop expertise while being trained more on local tokens. It might be useful to see how different topologies affect the expertise of different experts.\nLastly, a few details are missing (number of layers, datasets, model architecture). Figure 3 and Figure 6 (b) are hard to read. This paper is focusing on the computation algorithm itself. So, it might not have a direct societal impact.", " Sparsely gated Mixture-of-Expert (MoE) plays a vital role in large-scale model training but suffers from both load imbalance and global communication. In addition, the existing even dispatch approach may cause network contention and worsen the previous challenges. This work proposes a topology-aware large-scale MoE training method, called TA-MoE, that can adapt the communication volume to fit the underlying network topology without interfering with model convergence. The key ideas are abstracting the dispatch problem as a communication cost optimization problem and then adding an auxiliary loss with pattern-related coefficients. Experiments show that TA-MoE provides up to 1.61x speedup and 4.77x speedup over DeepSpeed-MoE and FastMoE without accuracy loss. Strengths:\n+ this work tries to tackle a very significant and interesting challenge in MoE systems: network topology may worsen the communication and load balance problems during the dispatch in MoE.\n+ the paper is well organized and easy to follow\n+ the proposed TA-MoE method is simple and effective: extensive experiments show that TA-MoE is able to offer noticeable speedup over the state-of-the-art under different hardware and model configurations.\n\nWeaknesses:\n- the experiments are mostly done with GPT models; it would be better to have models with different neural architectures in the evaluation benchmark. It is unclear how TA-MoE works on MoE models other than GPTs. Please refer to the above weakness part. The authors have adequately addressed the limitations and potential negative societal impact of their work." ]
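To connect the reviews above to code, here is a minimal, hypothetical sketch of a topology-aware auxiliary loss in the spirit they describe: a balance-style loss whose target dispatch fractions are derived from inter-rank bandwidth rather than being uniform. It is an illustration under assumed shapes, not the authors' implementation, and `target_frac` is an assumed input.

```python
import torch
import torch.nn.functional as F

def topology_aware_aux_loss(gate_logits, target_frac):
    """gate_logits: [tokens, experts] raw router outputs on this rank.
    target_frac: [experts] desired dispatch fractions, derived from the
                 topology (larger for experts behind fast links) instead
                 of the uniform target a plain balance loss would use."""
    probs = F.softmax(gate_logits, dim=-1)
    soft_frac = probs.mean(dim=0)                       # differentiable
    hard_frac = F.one_hot(probs.argmax(dim=-1),
                          probs.shape[-1]).float().mean(dim=0)
    # A balance-style product loss, re-weighted so that deviation from
    # the topology-derived target (not from uniform) is what is penalized.
    return torch.sum(soft_frac * hard_frac / target_frac.clamp_min(1e-6))
```

Because the target enters only through per-expert coefficients, such a term can replace a standard balance loss without changing the router itself, which matches the drop-in replacement described in the capacity response.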
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 2 ]
[ "F6xWNVk_HYP", "hmVpzhdb8p", "hQsA_W62jWp", "sG7x1zlbhQ1", "E0b_tRA2Ny5", "OZBQk6it-V7", "Ebhq9jKib8R", "xGdY48er99r", "nips_2022_FRDiimH26Tr", "nips_2022_FRDiimH26Tr", "nips_2022_FRDiimH26Tr" ]
nips_2022_4rTN0MmOvi7
DetCLIP: Dictionary-Enriched Visual-Concept Paralleled Pre-training for Open-world Detection
Open-world object detection, as a more general and challenging goal, aims to recognize and localize objects described by arbitrary category names. The recent work GLIP formulates this problem as a grounding problem by concatenating all category names of detection datasets into sentences, which leads to inefficient interaction between category names. This paper presents DetCLIP, a paralleled visual-concept pre-training method for open-world detection by resorting to knowledge enrichment from a designed concept dictionary. To achieve better learning efficiency, we propose a novel paralleled concept formulation that extracts concepts separately to better utilize heterogeneous datasets (i.e., detection, grounding, and image-text pairs) for training. We further design a concept dictionary (with descriptions) from various online sources and detection datasets to provide prior knowledge for each concept. By enriching the concepts with their descriptions, we explicitly build the relationships among various concepts to facilitate the open-domain learning. The proposed concept dictionary is further used to provide sufficient negative concepts for the construction of the word-region alignment loss, and to complete labels for objects with missing descriptions in captions of image-text pair data. The proposed framework demonstrates strong zero-shot detection performances, e.g., on the LVIS dataset, our DetCLIP-T outperforms GLIP-T by 9.9% mAP and obtains a 13.5% improvement on rare categories compared to the fully-supervised model with the same backbone as ours.
Accept
The paper receives overall positive reviews and the rebuttal has resolved the reviewers' concerns. Reviewers agree that the paper proposes a simple yet effective approach to enrich language concepts to learn better region-concept alignment for object detection. The approach is supported by solid empirical evidence on the LVIS dataset and 13 downstream detection datasets. The AC agrees that the methodology of processing data is worth sharing with a broader audience and recommends accepting the paper.
train
[ "bisKCjF7Kg", "9HsBY-fFyYT", "0eT8pCePoSo", "d0Fes_rWLIW", "BR-YqPZcJEK", "9jxWHTM-hxH", "P2V-NZewsdSk", "oDAYdq6SD6", "HZVTm7KSOY", "ke5CqQNASSq", "yZZAUT13S4I", "lSwlXGuae3v" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors' effort to address my comments, especially evaluating their method using the VILD protocol. This result should be included in the final version (at least in the supplementary). I still believe I was right in my original assessment about the advantages and disadvantages of the proposed method. Overall, I still believe the paper makes a good contribution and hence I stick to my original score.", " We thank the reviewer for the positive feedback and insightful comments! All comments are summarized and addressed as follows.\n\n**R4Q1. The parallel processing is rather thin and is actually the first thing that would come to mind.**\n\nWe argue that the proposed paralleled formulation for unifying heterogeneous datasets is new. We also demonstrate this idea is neat yet effective. Specifically, it achieves significant performance improvement (+4.1% mAP on LVIS) compared with the sequential formulation used in GLIP, as compared in row1 and row2 in the Tabel 3 of the main paper. And paralleled formulation is also efficient for both training (5x faster than GLIP) and inference (19x faster than GLIP), as demonstrated in Table 2 of the main paper. These explorations can serve as a valuable guideline for the community to design vision-language models for open-domain detection tasks.\n\n**R4Q2. Most of the improvements and contributions come from the dataset side. It is more an engineering approach rather than an actual methodological contribution.**\n\nIt should be noted that our main focus is not simply using more training data for learning, but to investigate and design an effective framework able to incorporate large-scale heterogeneous data sources to largely boost the learning of wider range of visual-language knowledge, and to provide a broader-generalization model for open-domain detection, which is a critically important direction that also shares the same motivation and focus as GLIP. While directly comparing to the pioneering work GLIP in this direction, our proposal has several key technical advantages and contributions: (i) we propose a neat yet effective paralleled formulation, (ii) we introduce a novel concept enrichment method with the proposed concept dictionary to explicitly build the relationship between categories (iii) we explore label completion technique to better utilize large-scale image-text pair data. Moreover, as demonstrated in Q1, our method can significantly improve the performance as well as the efficiency compared to GLIP. \n\n**R4Q3. Can the authors evaluate the proposed method following the VILD protocol?**\n\nGLIP protocol and VILD protocol are both common practices to evaluate the zero-shot detection performance. The former is a stronger and more challenging setting since it does not make any prior assumptions about the data distribution of downstream tasks, For instance, GLIP and DetCLIP can directly perform the evaluation on 13 detection datasets with highly varied distributions, while the VILD protocol still assumes the testing images and training images should come from the same dataset (LVIS). Besides, achieving a good performance under the GLIP protocol is more difficult and that is why we should use large-scale and heterogeneous training data. \n\nTo make a more comprehensive evaluation of our method, we also perform experiments under the VILD protocol as suggested. We replace the Objects365 part in our training data with LVIS-base, and GoldG and YFCC1M are still included. 
Including additional data will lead to a somewhat unfair comparison with VILD, but it is necessary since this is the core component in our method to enable zero-shot capability, which differs from VILD that distills knowledge from a pre-trained CLIP model. The result is shown in the following table. Our method (27.3% mAP) outperforms VILD (22.5% mAP) by 4.8% mAP. Note that due to the limited time of the rebuttal, we implement DetCLIP using the same training/testing setting as in the paper, and do not use techniques such as large-scale jittering and prompt ensemble, which are adopted by VILD to boost the performance. The comparison is updated in the revised paper (Table 5 in Appendix C). \n\n| Model | Backbone | LVIS val AP |\n| :-----: | :------: | :---------: |\n| VILD | ResNet50 | 22.5 |\n| DetCLIP | ResNet50 | **27.3** |\n\n**R4Q4. No limitation paragraph.**\n\nPlease refer to Appendix A.\n", " We thank the reviewer for the positive feedback and insightful comments! All comments are summarized and addressed as follows.\n\n**R3Q1. Experiments mainly use Swin-Transformer as the backbone. Can the backbone be ViT?**\n\nOur method is not limited to a specific backbone. Other vision backbones like ViT or ResNet can be considered as a choice. Swin-Transformer is more suitable for object detection tasks compared to ViT, since the window-attention mechanism can help significantly reduce the computational cost when the input has a high resolution. Another consideration for using Swin-Transformer is that it leads to a fair comparison with GLIP if we use the same backbone. If someone would like to use ViT as the backbone, we recommend following the method in [1].\n\n[1] Li, Yanghao, et al. \"Exploring plain vision transformer backbones for object detection.\" arXiv preprint arXiv:2203.16527 (2022).\n\n**R3Q2. The notations in Formula (1) should be changed.**\n\nThanks for your suggestion, we will change it in the revision.", " **R2Q4. What is a partial positive concept and is it derived from the partial labelling problem?**\n\nYes, it is derived from the partial labelling problem. Specifically, the \"lack of annotations of partial positive concepts\" indicates that some concepts in images are not annotated with captions in grounding and image-text datasets. We will make this clearer in the revision.\n\n**R2Q5. Writing issues.**\n\nThanks for your suggestions. We will refine the abstract/introduction to make them clearer and neater, and modify the bold text in the revision.\n\n**R2Q6. Many concepts were used directly without explanations/definitions/references at first hand.**\n\nThanks for your kind reminder; we will update this information in the revision. The explanations are listed as follows:\n\n1) GLIP denotes the method in [1] and DetCLIP is our proposed method.\n\n2) `-T' denotes the model that adopts the Swin-T transformer as the image encoder.\n\n3) The positive/negative samples for an image denote the categories (or concepts) labeled/not labeled in the ground truth of the image, respectively. In our paralleled formulation, for each training image, we use ground truth categories/concepts as positives and sample negatives from a pool of candidates to perform classification learning. The pool of negative candidates for detection data is the set of categories that do not appear in the ground truth, and for grounding/image-text pair data it is our proposed concept dictionary. 
\n\n4) The grounding data denotes data annotated with fine-grained correspondence between\ntext phrases in a sentence and objects in an image.\n\n5) The partial label problem refers to the fact that in some grounding or image-text pair datasets, only the main objects that people care about are labeled in the caption.\n\n[1] Li, Liunian Harold, et al. \"Grounded language-image pre-training.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.", " We thank the reviewer for the positive feedback and insightful comments! All comments are summarized and addressed as follows.\n\n**R2Q1. Authors claimed their method is much more efficient and effective than GLIP in terms of data usage. However, I notice that they have also utilized other models such as CLIP, FILIP. Are these models considered as part of the proposed method (are they multi-stage)? How would this compare to GLIP?**\n\nYes, these models are considered as part of the proposed method and they are multi-stage. \nAlthough our method involves external models such as CLIP and FILIP, the comparison with GLIP on data efficiency is fair and clear. Specifically, we can compare the components in our method where the external models are introduced:\n\n1) We use the weights of a vision-language pre-trained FILIP text-encoder as the initialization of our text-encoder, while GLIP loads weights from a language pre-trained RoBERTa. Both models are transformer models, but FILIP has lower complexity, i.e., 63.3 M parameters for FILIP-base vs. 124.8 M parameters for RoBERTa-base, which should not lead to an advantage of our model in the comparison. Indeed, in our early stage of experiments, we also tried using RoBERTa-base as the text encoder, and it achieves comparable results compared to using FILIP's text encoder.\n\n2) In our concept enrichment technique, we use a pre-trained FILIP text-encoder to retrieve definitions for concepts without a direct match in WordNet. In Q2 of Reviewer 5KVn, we show that this process is robust to the choice of language model and it can achieve a strong performance even without using a language model. \n\n3) We use CLIP to perform pseudo labeling for image-text pair data while GLIP uses a GLIP-L model trained on detection/grounding data. The effects of using different models during this process cannot be directly compared since we follow different training stages, i.e., we use 2 stages of pseudo-labeling->training while GLIP follows 3 stages of training->pseudo-labeling->re-training. However, to show that our method can achieve better data usage efficiency, we can exclude the pseudo labeling process for both methods and compare the results trained with only detection+grounding data. Under this setting, as shown in the updated Table 1 in the main paper, our DetCLIP-T(B) (34.4% mAP) can outperform the counterpart GLIP-C (24.9% mAP) by 9.5% mAP on LVIS. \n\n**R2Q2. Further explanation for why interaction between categories is unnecessary.**\n\nThe interaction between categories can cause inefficient learning and an inconsistency between the training and testing phases. Specifically, in our method, the text embedding of the category name serves as the classification weight of the detector. In the sequential formulation, due to the interaction between category names, the text embedding of each category will depend on the other categories which are input together with it, and the order in which they are arranged. This will lead to dynamic classification weights for each category. 
In other words, the classification weight of a category depends on all the category names and also on how we organize them. That is to say, we expect the model to learn visual embeddings that can work with dynamic classification weights, which increases the learning difficulty and hurts the final performance. As demonstrated in our experiments (Fig. 4 and Table 3 in the main paper), replacing the sequential formulation with the paralleled formulation can significantly improve learning efficiency as well as performance.\n\nOn the other hand, to make the sequential formulation work, GLIP samples a large number of different category names at each training iteration and shuffles their order as the input to the text-encoder. The purpose of these practices is to reduce the influence of contextual information in the text on each class name, which is contrary to the purpose of interaction.\n\n**R2Q3. Why is context information just removed? Maybe it can be added to every individual concept phrase.**\n\nWe remove the context information since our paper targets open-world detection, which concentrates on recognizing the class of each object. Specifically, in our setting, we keep the adjective for each concept and remove the conjunctions and verbs in each caption in grounding datasets, which follows the way the grounding datasets annotate objects.\n\nWe conduct experiments (L150-153 in the revised main paper \& Table 6 in Appendix D) to show that randomly shuffling the word order in the grounding training data for GLIP can even bring a slight improvement for the downstream detection task, indicating that the noun phrase is more critical for the detection task compared to the context information.\n\nWe agree that including more context information (e.g., the verb) in concept phrases may be helpful for the model to better distinguish more fine-grained patterns, e.g., actions of objects, which is not considered in this paper. ", " **R1Q5. How is the pseudo labeling for label completion conducted, since the dictionary size is usually very large (14K)?**\n\nDuring pseudo labeling, since we use a text encoder that has no interaction with the vision encoder, the text embeddings of all concepts can be pre-computed and stored for later computation. This process requires forwarding all concepts through the text encoder only once, which is quite efficient. \nThen the text embeddings are used as the classification weights for object proposals, where the category number is the number of concepts, i.e., 14k. Before calculating the classification scores, we first use a series of conditions to filter proposal candidates with low quality (as described in L16-18 in Appendix B), which significantly reduces the number of object candidates. By doing this, we can directly calculate the similarities between object proposals and concept texts without encountering memory issues. If a larger scale of concepts is involved, we can split the concepts into multiple chunks and compute the similarities for each chunk separately. We have added more implementation details about this part in the Paragraph **Pseudo Labeling on Image-Text Pair Data** in Appendix B.\n\n**R1Q6. The comparison with GLIPv2.**\n\nDetCLIP focuses on building an effective paralleled framework with external knowledge for open-world object detection, while GLIPv2 is proposed to serve different vision tasks, which is orthogonal to our work. 
\n\nFor zero-shot detection performance comparison, our DetCLIP-T still outperforms GLIPv2-T (refer to Table 2 of GLIPv2 paper) by a large margin (35.9% v.s. 29.0%) on LVIS minival. We have updated this comparison in Table 1 (row 9) of the revised paper. Note that the performance of the other two models GLIPv2-B and GLIPv2-H cannot be directly compared with our DetCLIP since they include images of LVIS during the training, which is thus not zero-shot.", " We thank the reviewer for the positive feedback and insightful comments! All comments are summarized and addressed as follows.\n\n**R1Q1. The construction of the concept dictionaries is constrained by the WordNet.**\n\nYes, the current concepts dictionary relies on the concepts in WordNet since it is large enough to cover most object concepts in our daily life.\n\nSpecifically, according to [1], WordNet contains approximately 57,000 noun word forms organized into approximately 48,800 word meanings (synsets), which can cover more than 95\\% concepts of the currently available detection datasets like LVIS and Object365.\n\nWhile making dictionaries larger and more diverse promises to further improve the performance, one of our main purposes is to demonstrate the effectiveness of the idea of the concept dictionary. As shown in Table 3 of the main paper, our models using the current dictionary improves the LVIS rare category performance from 22.2 to 26.0 when trained with Objects365 and from 21.6 to 26.4 when trained with Objects365+GoldG, respectively. We will consider enriching our dictionary by including concepts from broader sources like Wikipedia in the future.\n\n[1] Miller, George A., et al. \"Introduction to WordNet: An on-line lexical database.\" International journal of lexicography 3.4 (1990): 235-244.\n\n**R1Q2. The concept enrichment relies on a pre-trained language model, how would it affect the final performance? Do the authors try other text-encoders?**\n\nDuring training, we use a pre-trained language model to retrieve a definition in our dictionary for concepts without a direct match in WordNet. Due to the wide coverage of WordNet, only a small fraction (less than 10%) of concepts require retrieving their definitions. Therefore we can expect that the choice of the language model would lead to limited influence on the performance.\n\nWe also conduct additional experiments to study how the pre-trained language model in the concept enrichment affects the final performance. Three different settings are considered: 1) do not use the language model, i.e., directly adopt the category name as the input for the concepts not in WordNet; 2) use a pre-trained FILIP text encoder; 3) use a pre-trained RoBERTa as in GLIP. The results are shown in the following table. We can observe that: 1) the concept enrichment procedure can bring significant improvements, (e.g., +3.6% on rare categories) even without using a pre-trained language model; 2) using FILIP can further boost the AP performance from 28.3 to 28.8, while using RoBERTa achieves similar performance with no language model is used. We update these experimental results in the revised paper (refer to the Paragraph 'Impact of pre-trained language models in Concept enrichment.' and Table 9 in Appendix D).\n\nTable 1: Performance comparison of using different pre-trained text encoders in concept enrichment\nprocedure on LVIS minival dataset. The training dataset is Objects365. 
\n\n| Concept Enrichment | Pre-trained Text Encoder | LVIS minival AP (r/c/f) |\n| :----------------: | :----------------------: | :-----------------------: |\n| X | / | 27.8 (22.2/26.8/29.7) |\n| &#10004; | None | 28.3 (25.8/27.0/29.9) |\n| &#10004; | RoBERTa-base | 28.2 (24.5/27.3/29.7) |\n| &#10004; | FILIP text-encoder | **28.8 (26.0/28.0/30.0)** |\n\n**R1Q3. When resolving the name ambiguity among different datasets, only the text part is used without exploiting the corresponding image information.**\n\nThanks for your suggestion, images can provide valuable information for resolving the name ambiguity, e.g., \"mouse\" can be an animal or equipment. We will consider this direction in our future work.\n\n**R1Q4. Is the definition from WordNet necessary to be used? How would it compare to using the unified label?**\n\nYes, the definition from the WordNet is important for the performance since it can explicitly provide the useful relationships between various concepts. We show the performance of only using the unified label (similar to the listed CVPR22 paper [2]) in row4 of Table 3 in the main paper, where O365 and GoldG are used for training and no concept enrichment is adopted. Comparing it to the results of row5, where the concept enrichment is used, we can find that augmenting category names with definitions achieves significant improvements (+4% on LVIS and +3% on 13 detection datasets)\n\n[2] Yang, Jianwei, Chunyuan Li, Pengchuan Zhang, Bin Xiao, Ce Liu, Lu Yuan, and Jianfeng Gao. \"Unified contrastive learning in image-text-label space.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19163-19173. 2022.\n", " We thank all the reviewers for their time, insightful suggestions, and valuable comments. We are glad that **ALL** reviewers give positive feedback and find our method simple and effective, an elegant concept parallel framework that scales well with a large number of categories (reviewer 'rAnj' and 'aMQW'), with significant improvements over GLIP on LVIS and 13 downstream detection datasets (reviewer '5KVn', '82yp' and 'aMQW'). \n\nWe respond to each reviewer's comments in detail below. We have also revised the main paper and appendix according to the reviewers' suggestions. The main changes are listed as follows:\n\n- In Table 1, we add comparisons with GLIP-C and GLIPv2-T.\n\n- In Appendix B, we elaborate on more implementation details about how to conduct pseudo labeling for YFCC dataset when using a large-size concept dictionary.\n\n- In Appendix C, we add an experiment to evaluate our method under VILD protocol.\n\n- In Appendix D, we add experimental results about the impact of the pre-trained language model for concept enrichment.\n\n- As suggested by Reviewer2, we revise the abstract to make it more clear and neat and modify the improper bold text.\n\nNote that we marked the revisions in blue. We hope that our efforts address the reviewers' concerns.", " The work proposes to build a concept dictionary which enables to perform visual-concept pretraining in a unified way across multiple heterogeneous datasets for open-world detection. With the proposed dictionary design, the proposed method can even perform partial annotation enrichment for label completion and negative concept sampling to further improve the performance. Strength:\n\n1. The built concept dictionary can be used to convert the annotations of heterogeneous datasets, including detection, visual grounding, etc, into a unified one.\n2. 
With the concept dictionary, the proposed method can perform concept enrichment with the assistance of another pretrained text encoder, FILIP, in addition to negative concept sampling and partial annotation enrichment for those labels which are not well annotated in the existing datasets.\n3. The performance improvement in LVIS for rare and frequent categories is significant compared with previous state-of-the-art methods.\n\nWeakness:\n\n1. The construction of the concept dictionaries relies on WordNet, and thus the concepts are mainly composed of those which are well-defined in WordNet. \n2. The concept enrichment still requires a pre-trained text encoder, like FILIP, to match related concepts. The quality will depend on the reliability of the pretrained text-encoder used.\n3. When resolving the name ambiguity among different datasets, only the text part is used, without exploiting the corresponding image information. The experimental results are very thorough, covering different open-vocabulary datasets, and also show promising performance improvement. I have a few questions.\n\n1. Is the definition from WordNet necessary to be used? If we only use the unified label, how much will it affect the performance?\nThe following work is not targeted at open-vocabulary detection, but it also proposes to perform contrastive learning using the unified space.\n\nYang, Jianwei, Chunyuan Li, Pengchuan Zhang, Bin Xiao, Ce Liu, Lu Yuan, and Jianfeng Gao. \"Unified contrastive learning in image-text-label space.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 19163-19173. 2022.\n\n\n2. How is the pseudo labeling for label completion using all the concepts in the dictionary conducted? The dictionary size is usually very large (e.g., 14k). The paper describes adding the concepts in the dictionary as additional category inputs. \n3. For concept enrichment and partial concept enrichment, how would the pretrained encoder FILIP affect the final performance? Did the authors try other text-encoders?\n4. How does the proposed method compare with the recently released work GLIPv2 (optional)?\nhttps://arxiv.org/abs/2206.05836 The authors do address the limitations and why there is no potential negative societal impact of their work.", " This paper explores a parallel visual-concept pretraining method for open vocabulary object detection by resorting to knowledge enrichment from a designed concept dictionary.\nThey make the following contributions:\n1. a novel paralleled concept formulation to improve learning efficiency.\n2. a novel concept dictionary to enrich the prompt text concepts during joint pretraining.\n3. DetCLIP outperforms the state-of-the-art GLIP. Strengths\n1. The method is simple and effective, and I like the examples the authors offered here and there.\n2. The model significantly improves the learning efficiency.\n3. The experiment results seem solid. They evaluate the method on LVIS and transfer the model to 13 downstream detection datasets. \n\nWeaknesses\nI think the writing of the abstract and introduction needs to be significantly improved: they are too long, especially the abstract. Also, many concepts were used directly without explanations/definitions/references at first hand, or not at all. For instance, what is GLIP, DetCLIP-T, GLIP-T, positive samples vs. negative samples? What is grounding, and what is its difference to image-text pair data? What is the partial label problem? 
\n\nThe authors claimed their method is much more efficient and effective than GLIP in terms of data usage. However, I notice that they have also utilized other models such as CLIP and FILIP. Are these models considered as part of the proposed method (are they multi-stage)? How would this compare to GLIP? \n\nL157: Can you provide a further explanation of why interaction between categories is unnecessary?\n\nL169: Why is context information just removed? I know the context cannot be fed into the text encoder individually. Maybe it can be added to every individual concept phrase. Any study of this?\n\nL243: What is a partial positive concept? Is it derived from the partial labelling problem? I don't see a clue. \n\nLast, bold text is normally for titles/subtitles as a standard practice. Use it properly! \n See above. See above. ", " The paper is developed based on GLIP:\n\n- To solve the problems of inefficient interaction between categories and the restriction of max input sentence length, it proposes a parallel method. Compared to the sequential input in GLIP, the paper extracts phrases as the input in a parallel manner.\n- To build implicit relationships between input phrases, it proposes a concept dictionary to provide prior knowledge.\n\nThe paper is verified on the task of zero-shot detection on LVIS.\n Strengths:\n- The framework does make sense. I agree with the point that extracting the noun phrase is efficient for alignment between regions and concept words.\n- The proposed large-scale concept dictionary could be a valuable toolkit for future open-vocabulary or vision-language research if it can be open-released. \n- Compared to GLIP, improvements are large.\n - The notations in Formula (1) are odd due to the upper letter T and Transpose. Please consider changing them.\n- Experiments mainly use Swin-Transformer as the backbone. Can the backbone be ViT? None.", " This paper proposes an extension of GLIP for Open World Detection. The extension comprises two parts: one methodological, by introducing a so-called parallel processing of the text data, and one more data related, that allows the unification of many heterogeneous datasets for training. The latter also proposes a richer description of the categories -- deemed concepts in the paper -- through WordNet.\nThe paper reports significant improvements over GLIP. It is mainly evaluated on the LVIS dataset. On the positive side:\n- the authors study an important problem in the literature, that of open-world object detection, where not many solutions are available\n- the parallel extension to GLIP, although straightforward, makes sense. The original GLIP sequential formulation was problematic and did not scale well with a large number of categories\n- the authors did a good job in terms of proposing a framework for unifying many established and heterogeneous datasets including standard detection datasets, grounding data, and image/caption pairs. Actually, in my opinion, this is the biggest contribution of the paper. \n- the improvements over GLIP are significant. Even when trained on the same data, +5.1% is reported (table 3). On the negative side:\n- the methodological contribution of the parallel processing is rather thin and is actually the first thing that would come to mind when reading the original GLIP paper.\n- most of the improvements and contributions come from the dataset side. Although the framework for creating the dataset is worthy, it is more an engineering approach rather than an actual methodological contribution. 
After reading the paper you mostly get the impression that the authors created a much better dataset and then trained a (straightforward) extension of GLIP on it.\n- The evaluation on LVIS is sound and convincing. However, with such big and heterogeneous data used, the zero-shot capabilities of the method are not clear. Can the authors also evaluate following the VILD protocol (https://arxiv.org/pdf/2104.13921.pdf), where the method is trained on base classes and then explicitly evaluated on unseen classes?\n\n**Final Recommendation**\nAs said, I will stick to my original score, which is in line with the scores from the other Rs. I didn't see a paragraph for this in the paper." ]
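The paralleled region-concept alignment debated in the exchanges above — encoding each concept independently so that its text embedding never depends on which other categories are present — can be sketched roughly as below. This is a hypothetical illustration under assumed tensor shapes, not DetCLIP's actual code; the chunking mirrors the memory-friendly pseudo-labeling strategy described in the R1Q5 response.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def parallel_alignment_scores(region_feats, concept_embeds, chunk=4096):
    """region_feats:   [num_regions, dim] visual features of proposals.
    concept_embeds: [num_concepts, dim] text embeddings, each concept
                    encoded separately, so no embedding depends on the
                    other concepts in the batch or on their order.
    Returns [num_regions, num_concepts] alignment logits; concepts are
    scored in chunks so a 14k-entry dictionary stays memory-friendly."""
    r = F.normalize(region_feats, dim=-1)
    c = F.normalize(concept_embeds, dim=-1)
    return torch.cat([r @ c[i:i + chunk].t()
                      for i in range(0, c.shape[0], chunk)], dim=-1)
```

Because `concept_embeds` can be computed once and cached, scoring proposals against the whole dictionary reduces to a single (chunked) matrix product, which is what makes the large-dictionary setting cheap at inference and pseudo-labeling time.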
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 5, 5 ]
[ "9HsBY-fFyYT", "lSwlXGuae3v", "yZZAUT13S4I", "BR-YqPZcJEK", "ke5CqQNASSq", "P2V-NZewsdSk", "HZVTm7KSOY", "nips_2022_4rTN0MmOvi7", "nips_2022_4rTN0MmOvi7", "nips_2022_4rTN0MmOvi7", "nips_2022_4rTN0MmOvi7", "nips_2022_4rTN0MmOvi7" ]
nips_2022_Wo1HF2wWNZb
On the Identifiability of Nonlinear ICA: Sparsity and Beyond
Nonlinear independent component analysis (ICA) aims to recover the underlying independent latent sources from their observable nonlinear mixtures. How to make the nonlinear ICA model identifiable up to certain trivial indeterminacies is a long-standing problem in unsupervised learning. Recent breakthroughs reformulate the standard independence assumption of sources as conditional independence given some auxiliary variables (e.g., class labels and/or domain/time indexes) as weak supervision or inductive bias. However, nonlinear ICA with unconditional priors cannot benefit from such developments. We explore an alternative path and consider only assumptions on the mixing process, such as Structural Sparsity. We show that under specific instantiations of such constraints, the independent latent sources can be identified from their nonlinear mixtures up to a permutation and a component-wise transformation, thus achieving nontrivial identifiability of nonlinear ICA without auxiliary variables. We provide estimation methods and validate the theoretical results experimentally. The results on image data suggest that our conditions may hold in a number of practical data generating processes.
Accept
Strong paper with all reviewers arguing for acceptance. Only minor concerns from the reviewers were on whether the preconditions required for the theorems in the paper were likely to hold in practice. This was discussed thoughtfully by the author response, including a new appendix section.
train
[ "2-FK49RYgwHH", "yhBR3ZY6yt", "RhNmxm9pCVn", "YyRX0HGx4L", "CQSSofLkVV9", "C1_sZVQAnIso", "PClA-HpXzo4", "skLa7x0sa0V", "MERRuAam3oB" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We greatly appreciate the reviewer’s time, encouraging comments, and constructive suggestions. Our responses are provided below.\n\n**Q1**: What would happen if the ground truth data were to violate the assumptions but the method was run anyway?\n\n**A1**: Thanks for the great question. In the ablation study, assumptions of the data generating process of *Base* (i.e., the vanilla baseline) are violated and one could observe a significant performance drop compared to others (Fig. 4). Besides, we also tried adding the regularization term during the estimation of *Base* and the performance stays low. This may not be surprising since our regularization terms are specifically designed for the proposed conditions on the generating process.\n\n**Q2**: Equivalence classes of indeterminable solutions without the assumption and the existence of the unique sparsest solution.\n\n**A2**: Thanks for the insightful comment. This is an excellent point. Theoretically, without the assumption on the mixing process, we could not identify a unique solution up to trivial indeterminacies based only on the sparsity regularization, i.e., the sparse solution may not fall within the true equivalence classes of indeterminable solutions. \n\nA straightforward example is the linear Gaussian case in which rotation indeterminacy exists. Specifically, one can easily construct different rotated solutions with equally sparse supports that lie outside of the range of acceptable indeterminacies of full identifiability. Therefore, it is possible to have non-unique sparse solutions without specific assumptions on the mixing process.\n\nOur theories provide one set of conditions for identification based on sparsity regularization. The proposed conditions could be used to formalize the pattern of equivalence classes with unique sparse solutions. At the same time, we guess that there must exist other weaker assumptions, which we hope to explore in future work because of the significance of nonlinear ICA to the foundation of unsupervised learning, disentanglement, and causal discovery.\n\n---\n\n**Reference**:\n\nGhassami, A., Yang, A., Kiyavash, N., & Zhang, K. (2020, November). Characterizing distribution equivalence and structure learning for cyclic and acyclic directed graphs. In International Conference on Machine Learning (pp. 3494-3504). PMLR.", " **References**:\n\nBing, X., Bunea, F., Ning, Y., & Wegkamp, M. (2020). Adaptive estimation in structured factor models with applications to overlapping clustering. The Annals of Statistics, 48(4), 2055-2081.\n\nMoran, G. E., Sridhar, D., Wang, Y., & Blei, D. M. (2021). Identifiable variational autoencoders via sparse decoding. arXiv preprint arXiv:2110.10804.\n\nK. Rohe and M. Zeng. Vintage factor analysis with varimax performs statistical inference. arXiv preprint arXiv:2004.05387, 2020.\n\nT. Rhodes and D. Lee. Local disentanglement in variational auto-encoders using jacobian l_1 regularization. Advances in Neural Information Processing Systems, 34:22708–22719, 2021.\n\nS. Lachapelle, P. R. López, Y. Sharma, K. Everett, R. L. Priol, A. Lacoste, and S. Lacoste-Julien. Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. Conference on Causal Learning and Reasoning, 2022.\n\nP. Spirtes, C. N. Glymour, R. Scheines, and D. Heckerman. Causation, prediction, and search. MIT press, 2000.\n\nJ. Zhang. A comparison of three occam’s razors for markovian causal models. The British journal for the philosophy of science, 64(2):423–448, 2013.\n\nG. 
Raskutti and C. Uhler. Learning directed acyclic graph models based on sparsest permutations. Stat, 7(1):e183, 2018.\n\nM. Forster, G. Raskutti, R. Stern, and N. Weinberger. The frugal inference of causal relations. The British Journal for the Philosophy of Science, 2020.\n\nD. M. Busiello, S. Suweis, J. Hidalgo, and A. Maritan. Explorability and the origin of network sparsity in living systems. Scientific reports, 7(1):1–8, 2017.\n\nA. Einstein. Does the inertia of a body depend upon its energy-content. Annalen der Physik, 18(13): 639–641, 1905.\n\nL. K. Nash. The nature of the natural sciences. 1963.\n", " **Others**: Real-life situations for the sparsity assumption.\n\n**A3**: We are very grateful for your constructive feedback. We have added a section of additional discussion on it (Appx. C in the appendix) as follows:\n\n---\n\n``Sparsity assumptions have been widely used in various fields. For latent variable models, sparsity in the generating process plays an important role in the disentanglement or identification of latent factors both empirically and theoretically (Bing et al., 2020; Rohe and Zeng, 2020; Moran et al., 2021; Rhodes and Lee, 2021; Lachapelle et al., 2022). In causality, various versions of Occam's razor, such as faithfulness (Spirtes et al., 2000), the minimality principle (Zhang, 2013), and frugality (Raskutti and Uhler, 2018; Forster et al., 2020), have been proposed to serve as fundamental assumptions for identifying the underlying causal structure.\n\nFormulated as a measure of the density of dependencies, sparsity assumptions are more likely to hold when the observations are actually influenced by the sources in a ``simple” way. For example, in biology, analyses of ecological, gene-regulatory, metabolic, and other living systems find that active interactions may often be rather sparse (Busiello et al., 2017), even when these systems evolve with an unlimited number of complicated external stimuli. In physics, it is an important heuristic that a relatively small set of laws governs complicated observed phenomena. For instance, Einstein's theory of special relativity contains parsimonious relations between substances as an important heuristic to shave away the influence of ether compared to Lorentz's theory (Einstein, 1905; Nash, 1963).\n\nHowever, sparsity is not an irrefutable principle and our assumption may fail in a number of situations. The most direct one could be a scenario with heavily entangled relations between sources and observations. Let us consider the example of animal filmmaking in Sec. 4, where people are recording the sound of animals in a safari park. If the filming location is restricted to a narrow area of the safari park and multiple microphones are gathered together, the recording of each microphone will likely be influenced by almost all animals. In such a case, the dependencies between the recordings of the microphones and the animals are rather dense and our sparsity assumption is most likely not valid.\n\nAt the same time, even when the principle of simplicity holds, our formulation of sparsity, which is based on the sparse interactions between sources and observations, may still fail. One reason for this is the disparity between mechanism simplicity and structural sparsity. To illustrate this, one could consider the effect of sunlight on the shadow angles at the same location. In this case, the sun's rays and the shadow angles act as the sources and the observations, respectively. 
Because rays of sunlight, loosely speaking, may be parallel to each other, the processes of them influencing the shadow angles may be almost identical. Thus, the influencing mechanism could be rather simple. On the other hand, each shadow angle is influenced by an unlimited number of the sun's rays, which indicates that the interactions between them may not be sparse, therefore violating our assumption. This sheds light on one of the limitations of our sparsity assumption, because the principle of simplicity could be formulated in several ways. Besides, these different formulations also suggest various possibilities for identifiability based on simplicity assumptions. Another proposed assumption, i.e., independent influences, may be one of the alternative formulations, and more work remains to be explored in the future.”\n\n---\n", " We sincerely thank the reviewer for the time dedicated to reviewing our paper, the constructive suggestions, and encouraging feedback. Please find the response to your comments and questions below.\n\n**Q1**: Several references relating to sparsity and identifiability in latent variable models are missing.\n\n**A1**: Thank you for suggesting these related references. It is extremely helpful, and we have included and discussed them in the revised version. As the reviewer summarized elegantly, we would also like to note again that, though closely related, the settings of these important works differ from the nonlinear ICA setting we considered. For example, (Bing et al., 2020; Moran et al., 2021) adopt the anchor feature assumption, i.e., for each source, there are at least two observations that are influenced only by that source. This requires the number of observations to be several times larger than the number of sources.\n\n**Q2**: The meaning of “in a non-statistical sense” in the assumption of independent influences.\n\n**A2**: Thanks for raising this excellent question. It means those quantities do not contain information about each other. The main reason we use ``independence in a non-statistical sense” is that statistical independence is defined w.r.t. variables, whereas the partial derivatives of the mixing function, strictly speaking, may not be considered as variables. At the same time, if we straightforwardly formalize the condition based on the uncorrelatedness of the Jacobian columns, the connection with the natural mixing might not be intuitive and would lack interpretation. \n", " **Q2**: What was preventing MCC score $\rightarrow$ 1?\n\n**A2**: Thanks for the great question. During the estimation, we used the General Incompressible-flow Network (GIN) to maximize the objective function, of which the coupling block consists of MLPs and ReLU activation layers. Thus, the estimation process may never achieve the global optimum within finite epochs, as the overall optimization problem is nonconvex, as is typical for methods involving deep learning. Besides, the error due to the finite sample size may also prevent the model from reaching a perfect performance. Moreover, to induce sparsity, we use an $L_1$ penalty as an approximation of the $L_0$ penalty to enable gradient-based optimization, which may also introduce errors empirically. Based on our empirical observations as well as related works focusing on these issues, we believe that these are part of the underlying reasons.\n\n**Minor**: Polishing condition (iii) of theorem 1 to avoid potential back-references by the readers.\n\n**A3**: Thanks for your constructive suggestion. 
\n\n**Minor**: Polishing condition (iii) of theorem 1 to avoid potential back-references by the readers.\n\n**A3**: Thanks for your constructive suggestion. We have updated it as well as some other conditions in the paper.\n\n---\n\n**References**:\n\nP. Spirtes, C. N. Glymour, R. Scheines, and D. Heckerman. Causation, prediction, and search. MIT press, 2000.\n\nG. Raskutti and C. Uhler. Learning directed acyclic graph models based on sparsest permutations. Stat, 7(1):e183, 2018.\n\nM. Forster, G. Raskutti, R. Stern, and N. Weinberger. The frugal inference of causal relations. The British Journal for the Philosophy of Science, 2020.\n\nJ. Ramsey, J. Zhang, and P. L. Spirtes. Adjacency-faithfulness and conservative causal inference. arXiv preprint arXiv:1206.6843, 2012.\n\nX. Bing, F. Bunea, Y. Ning, and M. Wegkamp. Adaptive estimation in structured factor models with applications to overlapping clustering. The Annals of Statistics, 48(4):2055–2081, 2020.\n\nK. Rohe and M. Zeng. Vintage factor analysis with varimax performs statistical inference. arXiv preprint arXiv:2004.05387, 2020.\n\nG. E. Moran, D. Sridhar, Y. Wang, and D. M. Blei. Identifiable variational autoencoders via sparse decoding. arXiv preprint arXiv:2110.10804, 2021.\n\nT. Rhodes and D. Lee. Local disentanglement in variational auto-encoders using Jacobian $L_1$ regularization. Advances in Neural Information Processing Systems, 34:22708–22719, 2021.\n\nS. Lachapelle, P. R. López, Y. Sharma, K. Everett, R. L. Priol, A. Lacoste, and S. Lacoste-Julien. Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. Conference on Causal Learning and Reasoning, 2022.\n\nJ. Zhang. A comparison of three Occam’s razors for Markovian causal models. The British Journal for the Philosophy of Science, 64(2):423–448, 2013.\n\nD. M. Busiello, S. Suweis, J. Hidalgo, and A. Maritan. Explorability and the origin of network sparsity in living systems. Scientific reports, 7(1):1–8, 2017.\n\nA. Einstein. Does the inertia of a body depend upon its energy-content. Annalen der Physik, 18(13): 639–641, 1905.\n\nL. K. Nash. The nature of the natural sciences. 1963.\n", " We are very grateful for your time, insightful comments, and encouragement. Below please see our point-by-point response.\n\n**Q1**: More discussion of the assumption of sparsity on the mixing function.\n\n**A1**: We sincerely appreciate this essential point. We completely agree with you that the sparsity assumption might not hold in some types of situations, such as the ray tracing-based rendering you mentioned. Regarding the testability of the assumption, we agree that it is rather challenging to determine whether the assumption holds. For instance, assumptions such as faithfulness (Spirtes et al., 2000) and frugality (Raskutti and Uhler, 2018; Forster et al., 2020) are generally not testable (specifically, frugality can be interpreted as sparsity of the edges in the causal structure), since the true structures are not known a priori, which therefore remains an active area of research. It is worth noting that some parts of the faithfulness assumption are shown to be partially testable: Ramsey et al. (2006) showed that the orientation faithfulness assumption is testable under the adjacency faithfulness assumption. Therefore, we think it is an interesting future direction to explore whether this type of partial testability can be carried out for the sparsity assumption in our work.\n\nRegarding the practical significance and limitation of the sparsity assumption, we added a separate section in the Appendix (Appx. C) in light of your suggestions. 
The discussion is as follows:\n\n---\n``Sparsity assumptions have been widely used in various fields. For latent variable models, sparsity in the generating process plays an important role in the disentanglement or identification of latent factors both empirically and theoretically (Bing et al., 2020; Rohe and Zeng, 2020; Moran et al., 2021; Rhodes and Lee, 2021; Lachapelle et al., 2022). In causality, various versions of Occam's razor, such as faithfulness (Spirtes et al., 2000), the minimality principle (Zhang, 2013), and frugality (Raskutti and Uhler, 2018; Forster et al., 2020), have been proposed to serve as fundamental assumptions for identifying the underlying causal structure.\n\nFormulated as a measure of the density of dependencies, sparsity assumptions are more likely to be held when the observations are actually influenced by the sources in a ``simple” way. For example, in biology, analyses on ecological, gene-regulatory, metabolic, and other living systems find that active interactions may often be rather sparse (Busiello et al., 2017), even when these systems evolve with an unlimited number of complicated external stimuli. In physics, it is an important heuristic that a relatively small set of laws govern complicated observed phenomena. For instance, Einstein's theory of special relativity contains parsimonious relations between substances as an important heuristic to shave away the influence of ether compared to Lorentz's theory (Einstein, 1905; Nash, 1963).\n\nHowever, sparsity is not an irrefutable principle and our assumption may fail in a number of situations. The most direct one could be a scenario with heavily entangled relations between sources and observations. Let us consider the example of animal filmmaking in Sec. 4, where people are recording the sound of animals in a safari park. If the filming location is restricted to a narrow area of the safari park and multiple microphones are gathered together, the recording of each microphone will likely be influenced by almost all animals. In such a case, the dependencies between the recording of microphones and the animals are rather dense and our sparsity assumption is most likely not valid.\n\nAt the same time, even when the principle of simplicity holds, our formulation of sparsity, which is based on the sparse interactions between sources and observations, may still fail. One reason for this is the disparity between mechanism simplicity and structural sparsity. To illustrate this, one could consider the effect of sunlight on the shadow angles at the same location. In this case, the sun’s rays and the shadow angles act as the sources and the observations, respectively. Because rays of sunlight, loosely speaking, may be parallel to each other, the processes of them influencing the shadow angles may be almost identical. Thus, the influencing mechanism could be rather simple. On the other hand, each shadow angle is influenced by an unlimited number of the sun’s rays, which indicates that the interactions between them may not be sparse, therefore violating our assumption. This sheds light on one of the limitations of our sparsity assumption, because the principle of simplicity could be formulated in several ways. Besides, these different formulations also suggest various possibilities for identifiability based on simplicity assumptions. 
Another proposed assumption, i.e., independent influences, may be one of the alternative formulations, and more work remains to be done in the future.”\n\n---\n", " This paper presents a new identifiability result for nonlinear ICA that gets around the need for auxiliary information by leveraging a sparsity assumption on the mixing function. While sparsity has been explored in the nonlinear ICA community, the existing results have focused on the relationship between source distributions (Lachapelle et al., 2022), rather than on sparsity of the mixing function. One exception is Gresele et al. [2021], who did show that a different sparsity assumption rules out some of the standard counterexamples, but didn't provide a full identification proof. This paper does that and shows analogous results for the linear Gaussian case, which is well-known to not be identifiable without additional assumptions (usually non-Gaussian sources). The paper also gives results under an \"independent influences\" assumption which is more similar to Gresele et al. [2021] and shows identification results there too. Finally, the theory results are supported with some simple experiments. Strengths\n* Strong identification (up to a component-wise transform & permutation) results in a hard problem. \n\nWeakness\n* I am always very uncomfortable with making assumptions on the mixing function $f: s \\rightarrow x$, especially in the image domain. While it is relatively intuitive that there should be some simple dependencies among the source variables (as people, we wouldn't be able to understand the world if this were not the case), it's not obvious that the mixing function should have any sparsity: consider how realistic images are rendered in modern rendering engines. The most realistic looking renderers use ray-tracing, which explicitly bounces light around all the objects in the scene in order to correctly model how light reflects in a scene. That seems to imply a dense dependence between sources rather than any kind of sparsity. Of course, it may be the case that structured sparsity is a reasonable approximation of the mixing function, but I would have liked to see some more discussion on when we can expect this to hold. The discussion on page 7 with reference to the independent influences assumption was far better.\n* The empirical results have the expected relationships between approaches, but I was surprised that the MCC scores were only $\\approx 0.8$. \n\nI am conflicted on this paper - while ICA is not my research area, I know that this is a very well-studied area and identification in this IID setting is hard to achieve (as the paper points out - most papers rely on auxiliary variables). So if we take this work as a pure piece of theory, the structured sparsity condition is a sufficient condition for identification in a difficult domain, and this is a very interesting paper. On the other hand - the practitioner in me can barely parse the structured sparsity assumption - so I'd have no idea whether or not it applied in my domain, so I think a lot more work is needed in expanding on when we expect this assumption to hold or, alternatively, in finding testable implications of this assumption. 1. Do you know what was preventing MCC scores from getting above 0.9? 
These problems are relatively straightforward so I would have expected to see MCC score $\\rightarrow 1$; it would be good to understand what the limitations of the empirical results are.\n\nMinor\n* Condition (iii) of theorem 1 is hard to parse without a lot of back references. Consider rewording as follows, \"Denote by F the support of (all possible values of **the Jacobian of $f$**) $J_f (s)$. For all $i \\in {1, . . . , n}$, there exist **|F_{i,:} | sources**...\". Similar amendments could be made to the other theorem statements too - the conditions in the theorems are already pretty dense, so it helps to remind the reader what all the variables refer to. Yes.", " This paper considers the identifiability of the nonlinear ICA model with unconditional priors:\n$x = f(s)$\nwhere $x$ is the observed data and $s$ are the unobserved sources.\n\nThey prove that the sources $s$ are identifiable under two different settings:\n\n1. when a \"structural sparsity\" assumption holds;\n\n2. when an \"independent influence\" assumption holds.\n\nThey also provide experiments where they estimate the latent sources via regularized objective functions. The regularization terms are inspired by the two settings above.\n\nMore detail: \n\n1. Structural sparsity: this assumption is that for every latent source $s_i$, there is a set of observed variables such that $s_i$ is the only latent source that participates in the generation of all variables in the set. \n\nIn Theorem 1, the authors prove that under structural sparsity, the sources can be identified up to permutation and coordinate-wise transformation. For this result, the authors also assume invertibility of $f$, that the observed Jacobian $J_f(s)$ spans the support of $J_f(s)$, and that the estimated decoder is as sparse as the true decoder.\n\nIn Theorem 3, the authors again assume structural sparsity but relax the invertibility of $f$. They prove that if the estimated $\\widehat{f}$ is a 'rotated-Gaussian mixture preserving automorphism' (i.e. it has a particular functional form which rotates the true decoder $f$) then the sources are identifiable.\n\n\n2. Independent influence: this assumption is that the influence of each source on the observed variables is independent of each other i.e. $\\partial f / \\partial s_i$ is independent of the other sources and their influences in a non-statistical sense. \n\nIn Theorem 4, the authors prove that if independent influence holds (among other assumptions) then the latent sources are identifiable. \n\n\n\n Strengths\n\n- The paper is timely and relevant and seems like an important contribution to the study of identifiability\n\n- The writing is clear\n\nWeaknesses \n\n- Some definitions are unclear (see questions below).\n\n- The paper is missing a number of references relating to sparsity and identifiability in latent variable models. \n\nIn the linear case, Bing et al. (2020) and Rohe and Zeng (2020) prove that the assumption of sparsity in the mixing function (i.e. loadings matrix) gives identifiable solutions.\n\nIn the nonlinear case, Moran et al. (2021) prove that assuming sparsity in the decoder of a deep generative model gives identifiable solutions.\n\nSimilarly to the experiments in this paper, Rhodes and Lee (2021) propose an L1 regularization term on the Jacobian of the mixing function (without proving identifiability however). \n\n- Bing, Xin, et al. 
\"Adaptive estimation in structured factor models with applications to overlapping clustering.\" The Annals of Statistics 48.4 (2020): 2055-2081.\n\n- Rohe, Karl, and Muzhe Zeng. \"Vintage factor analysis with varimax performs statistical inference.\" arXiv preprint arXiv:2004.05387 (2020).\n\n- Moran, Gemma E., et al. \"Identifiable variational autoencoders via sparse decoding.\" arXiv preprint arXiv:2110.10804 (2021).\n\n- Rhodes, Travers, and Daniel Lee. \"Local Disentanglement in Variational Auto-Encoders Using Jacobian $ L_1 $ Regularization.\" Advances in Neural Information Processing Systems 34 (2021): 22708-22719.\n - The assumption of independent influence is that the influence of each source on the observed variables is independent of each other i.e. $\\partial f / \\partial s_i$ is independent of the other sources and their influences in a non-statistical sense.\n\nCan the authors be more precise about what \"in a non statistical sense\" means? From the paper, it seems that it means uncorrelatedness of the Jacobian columns? Can the authors clarify? The authors do not elaborate on real life situations where their assumptions are likely to hold (except for the animal recording example in the independent influences section).", " The paper proves identifiability results for Independent Component Analysis. In contrast to other recent works which exploit assumptions on auxiliary variables, this work exploits sparsity assumptions on the mixing process. Identifiably results for ICA is an important problem as it provides a rigorous foundation for other applied areas such as disentangled feature learning and parts of causality. As such, advances in this area are important and potentially impactful.\n\nThe paper is well written and clear. It explains the results in the context of the problem and prior work. I believe it makes a solid contribution and is worth accepting to the conference.\n\nOne thing I think could improve the paper would be to elaborate more on the assumtions being made and when they are violated. In particular:\n- what would happen if the ground truth data were to violate the assumptions but the method were run anyway? \n- thinking about the equivalence classes of indeterminable solutions without this assumption, is it possible to relate these equivalence classes to the sparse solutions? E.g. does each equivalence class contain exactly one such sparse solution? Do only some contain one sparse solution (and if so, what is special about these equivalence classes)? \n\n\n See above See above" ]
[ -1, -1, -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "MERRuAam3oB", "RhNmxm9pCVn", "YyRX0HGx4L", "skLa7x0sa0V", "C1_sZVQAnIso", "PClA-HpXzo4", "nips_2022_Wo1HF2wWNZb", "nips_2022_Wo1HF2wWNZb", "nips_2022_Wo1HF2wWNZb" ]
nips_2022_nRcyGtY2kBC
Transfer Learning on Heterogeneous Feature Spaces for Treatment Effects Estimation
Consider the problem of improving the estimation of conditional average treatment effects (CATE) for a target domain of interest by leveraging related information from a source domain with a different feature space. This heterogeneous transfer learning problem for CATE estimation is ubiquitous in areas such as healthcare where we may wish to evaluate the effectiveness of a treatment for a new patient population for which different clinical covariates and limited data are available. In this paper, we address this problem by introducing several building blocks that use representation learning to handle the heterogeneous feature spaces and a flexible multi-task architecture with shared and private layers to transfer information between potential outcome functions across domains. Then, we show how these building blocks can be used to recover transfer learning equivalents of the standard CATE learners. On a new semi-synthetic data simulation benchmark for heterogeneous transfer learning, we not only demonstrate performance improvements of our heterogeneous transfer causal effect learners across datasets, but also provide insights into the differences between these learners from a transfer perspective.
Accept
The paper studies methods for estimating conditional average treatment effects (CATE) under a shift in domain where source and target feature spaces are heterogeneous. It is assumed that the (respective) CATEs in both source and target domains are identifiable through ignorability and overlap. No formal assumptions are made regarding the similarity of potential outcome distributions across domains, but it is implicitly assumed that there exists a shared structure in the outcome functions. A number of heuristics are proposed to modify popular neural network CATE estimators to this setting, including a wide array of meta-learners such as propensity weighting, doubly robust estimators and TARNet. Reviewers appreciated the setting of heterogeneous-feature domain adaptation, which is understudied in the literature and representative of many transfer tasks of interest, such as transfer from a clinical trial to an observational cohort. Typically, the feature set collected in trials is smaller than in, say, a registry. However, as pointed out by one reviewer, the empirical evaluation does not consider such applications. In addition, no details are given in the main paper for how the heterogeneous feature spaces are constructed for experiments (this is only given in the Appendix). The uniform sampling is quite unrealistic and most likely less challenging than real-world cases. The authors make assumptions of ignorability and overlap, referring to previous work showing that this renders the causal effect identifiable. While this is true, the interesting complication in this work is that no assumptions are made regarding similarities of feature sets or outcome functions; these are left implicit. As a result, no claims can be made about the usefulness of source data for this task, see e.g., [1] for a discussion on hardness of transfer. In other words, the authors rely on empirical evidence to demonstrate this usefulness. In semi-synthetic experiments, the authors find that their proposed approach improves significantly over using only shared features, even when the number of target samples is minimal. Reviewers were concerned with the contextualisation of the work in the literature, given previous work on transportability of causal effects and on domain adaptation. Adding to this list, I would suggest that the authors refer to previous work on heterogeneous-feature transfer learning. Under ignorability and overlap, the settings are not much different from each other, not least demonstrated by the fact that the T-learner solution performs well. The authors propose several "building blocks" but don't evaluate the importance of these in isolation, using, for example, an ablation study. This makes it difficult to assess which components are necessary and which are not. In summary, the considered setting is interesting and the algorithmic contributions appear useful empirically. The theoretical and methodological contributions are rather small, and the work should be better contextualised in the related topics of domain adaptation and transportability. [1] Ben-David, Shai, and Ruth Urner. "On the hardness of domain adaptation and the utility of unlabeled target samples." International Conference on Algorithmic Learning Theory. Springer, Berlin, Heidelberg, 2012.
train
[ "yn0PVHkgsy0", "h9bRlCbPoy0", "O-0n4XN8rPZ", "nmd8lSbCW1f", "XQCjMkoH9j5", "F3AUXTpebPx", "zJq-nSyvU2q", "l8x35inBJxO", "ZvZOMrUFLx", "CTevtWAcbFw", "793DtsPMODP", "fjE682UB8iA", "xiBcPK4JR7n", "CEZuhRLf1OL" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the thoughtful responses. After reading this response, along with the other reviews and author responses, I retain my recommendation of acceptance.\n\nThank you for your detailed thoughts on the specific topics and extensions which I mentioned in the original review. While I find these specific topics interesting and meaningful as ways to think about extensions of the work, I don't intend to urge the authors to necessarily devote precious space in the manuscript to enumerate each particular topic. Rather, a broader discussion of the choices and tradeoffs made in this framework could be helpful. In this broader sense, I see similarities between my question and questions from other reviewers about significance. Discussion of decisions, motivations, and extensions could improve the manuscript from both perspectives.", " My concerns have been solved. Thanks.", " My concerns are all addressed. I would remain the score.", " Thank you very much for your insightful comments and suggestions which have helped us to improve the paper!\n\nWe provide below answers to the questions raised by the reviewer. Please note that the line references are for the revised manuscript. Moreover, please note that the changes in the revised manuscript are highlighted in blue. \n\n### 1. Definition of transfer learning \n\nIn our paper, we have followed the common definition of transfer learning [R1 - see Definition 1 (Transfer Learning)] where the conditional distribution of the outcome given the covariates changes between domains: $p(Y(w) \\mid X^{R}) \\neq p(Y(w) \\mid X^{T})$. To be able to learn these changes in the outcome distribution in the transfer learning setting it is crucial to have labelled data in the target domain. Moreover, having heterogeneous feature spaces between the source and target domains further categorizes our setting as transfer learning. \n\nAs opposed to us, [32] considers the setting where the source and target domains have the same feature space, the marginal distributions are different $p_R(X) \\neq p_T(X)$, but the conditional label distribution stays the same $p_R(Y(w) \\mid X) = p_T(Y(w) \\mid X)$. This is why, using labelled data from the source domain and **unlabelled** data from the target domain, [32] aims to learn potential outcome functions that are invariant across the source and target domains. This setting is referred to as unsupervised domain adaptation, as also mentioned in [32].\n\nNevertheless, we agree with the reviewer that our work has similarities with multi-task learning where the different domains represent different tasks and in fact, our model architecture is inspired by works in multi-task learning. However, these have been adapted to our heterogenous transfer setting for CATE estimation where we needed to handle the heterogenous feature space, transfer information between PO functions across domains and between PO functions within each source and target domain. \n\nIn the revised manuscript, we have incorporated [R1] to clarify and justify our use of transfer learning for the problem we address in our paper. \n\n### 2. Lines 229-230 (now 232-233)\nOn the lines, we indeed mean + and not ||. This is because for layer L, we build the outputs $h\\_{w, L}^{s}, h\\_{w, L}^{p\\_R}, h\\_{w, L}^{p\\_T}$ to each have the same dimension as the potential outcome $y$. 
Please note that a similar approach is used by FlexTENet. We have now revised Section 4.2 to include this clarification about the dimensions of $h\\_{w, L}^{s}, h\\_{w, L}^{p\\_R}, h\\_{w, L}^{p\\_T}$. \n\n### 3. Figures 3.a and 3.b\n\nWe have fixed the typos in Figure 3a and Figure 3b where we had the private encoders switched between the source and target domains. Moreover, on line 264, we have fixed the typo of having the treatment w concatenated with the private features of each domain. Thank you very much for pointing these out! Please let us know if there are other inconsistencies you were referring to. \n", " \n\n### 4. Experiments and clarifications of differences between source and target domains \n\nPlease allow us to first clarify what we consider as differences between source and target domains. We consider that the feature spaces of the source and target domains are heterogeneous (which is why we write $p(X^{R}) \\neq p(X^{T})$), but not completely disjoint (note that we have now slightly revised lines L151-155 to incorporate this clarification). As mentioned on lines L181-188, we consider that the source and target covariates can be split into $X^R = (X^s, X^{p_R})$ and $X^T = (X^s, X^{p\\_T})$ such that we have a set of features private (specific) to the source dataset $X^{p_R} \\in \\mathbb{R}^{D_{p_R}}$, a set of features private to the target dataset $X^{p_T} \\in \\mathbb{R}^{D_{p_T}}$ and a set of shared features between the two datasets $X^{s} \\in \\mathbb{R}^{D_S}$.\n\nIn addition, in accordance with the definition of transfer learning [R1], we consider that the conditional outcome distribution is different $p(Y(w) \\mid X^{R}) \\neq p(Y(w) \\mid X^{T})$ between the source and target domains. Moreover, to allow for different degrees of selection bias to be present in the source and target datasets, we also consider that $p(W=1 \\mid X^{R}) \\neq p(W=1 \\mid X^{T})$ (thus improving the generality of the method). \n\nPlease allow us to now explain how, for the experiments, we have used datasets that satisfy these requirements and assumptions. To begin with, we only obtain $X^R$ and $X^T$ from the following real datasets: TCGA, Twins and MAGGIC, as described in Appendix D. The treatment assignments and outcomes are simulated as described in Section 6.1. \n\nMAGGIC consists of datasets with heterogeneous features from 30 domains representing medical studies of patients who have experienced heart failure. During training, we obtain the features for the target dataset $X^T$ by taking the patient characteristics from a randomly sampled dataset with $<500$ patients, while the features for the source dataset represent the patient characteristics from a randomly sampled dataset with $>500$ patients (from the 30 medical studies in MAGGIC). Alternatively, given that Twins and TCGA originally represent a single dataset (for which we only consider the patient features), we randomly subsample parts of them to obtain the features for the source and target domains. 
In particular, for both Twins and TCGA we sample features for the target dataset to be of size $N_T \\sim \\mathcal{U}(100, 500)$ and a source dataset of size $N_R \\sim \\mathcal{U}(1000, N_F-N_T)$, where $N_T$ is the size of the target dataset, $N_R$ is the size of the source dataset and $N_F$ is the size of the full dataset. Moreover, to create heterogeneous feature spaces for the source and target domains, let $D_F$ be the number of features in each full dataset (for Twins and TCGA). From these, we randomly sample $D_{p_R} \\in \\mathcal{U}(5, D_{F}/3)$ features that are private for the source dataset, $D_{p_T} \\in \\mathcal{U}(5, D_{F}/3)$ features that are private for the target dataset and $D_{s} \\in \\mathcal{U}(5, D_{F}/3)$ features that are shared between the two.\n\nFor all of TCGA, Twins and MAGGIC, the different features for the source and target datasets are re-sampled for each of the different 10 random seeds used for all experimental results. In the following table, we provide summary statistics of the different characteristics of the patient features in the full source and target datasets used across our 10 experimental runs. Note that for the experiments where we vary the size of the target dataset, $N_T$ is fixed. To highlight the diversity of the datasets used for evaluating the benchmarks, we describe in the following table the mean and standard deviations for the size of the source datasets $N_R$, size of the target datasets $N_T$, number of features shared between the source and target datasets $D_s$, number of features private to the source datasets $D_{p_R}$, number of features private to the target datasets $D_{p_T}$, proportion of shared features in the source datasets $D_s / (D_s + D_{p_R})$, proportion of shared features in the target datasets $D_s / (D_s + D_{p_T})$ and the Maximum Mean Discrepancy (MMD) (computed using an RBF kernel) between the shared features of the source and target datasets $\\text{MMD}(X^{s_R}, X^{s_T})$. Note that the MMD between the shared features of the source and target datasets for MAGGIC is much higher than for TCGA and Twins, as the different datasets in MAGGIC represent medical studies with different patient populations.\n", " (cont'd)\n\n| Statistic | TCGA | Twins | MAGGIC |\n| :--- | :----: | :----: | :----: |\n| $N_R$ | $4219 \\pm 2569$ | $5106 \\pm 2988$ | $2533 \\pm 3711$ |\n| $N_T$ | $360 \\pm 120$ | $360 \\pm 120$ | $267 \\pm 75$ |\n| $D_s$ | $20 \\pm 8$ | $9 \\pm 2$ | $27 \\pm 7$ |\n| $D_{p_R}$ | $15 \\pm 7$ | $9 \\pm 2$ | $25 \\pm 9$ |\n| $D_{p_T}$ | $21 \\pm 9$ | $8 \\pm 3$ | $21 \\pm 15$ |\n| $D_s / (D_s + D_{p_R})$ | $0.57 \\pm 0.17$ | $0.49 \\pm 0.10$ | $0.54 \\pm 0.13$ |\n| $D_s / (D_s + D_{p_T})$ | $0.50 \\pm 0.14$ | $0.52 \\pm 0.08$ | $0.60 \\pm 0.19$ |\n| $\\text{MMD}(X^{s_R}, X^{s_T})$ | $0.0009 \\pm 0.0005$ | $0.0011 \\pm 0.0008$ | $0.056 \\pm 0.057$ |\n\nMoreover, we would like to re-emphasize that these real datasets are only used to obtain the patient characteristics and, as described in Section 6.1, we propose a new semi-synthetic data simulation for the outcomes and treatment assignments to evaluate the benchmarks in our heterogeneous transfer learning set-up. Thus, please note that the data generating process is **not** the same for both the source and target datasets. In particular, once we obtain the heterogeneous features from the real datasets, we use **different** functions to generate the potential outcomes in the source and target domains. As described in equations (7) and (8), the function for generating the potential outcomes (Y(w)^{R}) in the source domain uses the parameters $v\\_{w, j}^{s}, v\\_{w, j}^{p\\_{R}}, v\\_{j}^{{R}}$, while the function for generating the potential outcomes (Y(w)^{T}) in the target domain uses the parameters $v\\_{w, j}^{s}, v\\_{w, j}^{p\\_{T}}, v\\_{j}^{{T}}$. Out of these, $v\\_{w, j}^{s}$ are the same in both domains, while the others are sampled differently, thus resulting in different outcome distributions $p(Y(w) \\mid X^{R}) \\neq p(Y(w) \\mid X^{T})$. Note also that the potential outcomes in each domain depend on the private features of that domain. In addition, given that the treatments are assigned based on the difference in PO in each domain, $W\\mid X^R \\sim \\text{Bernoulli}(\\text{Sigmoid}(\\kappa (Y^R(1) - Y^R(0))))$ and $W\\mid X^T \\sim \\text{Bernoulli}(\\text{Sigmoid}(\\kappa (Y^T(1) - Y^T(0))))$, we also have that $p(W=1 \\mid X^{R}) \\neq p(W=1 \\mid X^{T})$. The full observational datasets for the source and target domains used for training are obtained by using the patient features sampled from the real datasets, the simulated treatments and the corresponding simulated potential outcome for the simulated treatment. For testing, we evaluate the ability of the benchmarks to predict the difference in potential outcomes for the test split of the target dataset. 
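To make the whole pipeline concrete, a small, purely illustrative NumPy sketch is given below; the linear outcome functions are simplified stand-ins for the exact forms in equations (7) and (8), and every name in it is our own placeholder rather than the released benchmark code:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_domain(X_s, X_p, v_s, kappa=2.0):
    """X_s: shared features; X_p: this domain's private features.
    v_s holds per-treatment weights on the shared features and is reused
    across domains; private weights are drawn fresh for each domain."""
    v_p = rng.normal(size=(2, X_p.shape[1]))        # domain-specific weights
    y0 = X_s @ v_s[0] + X_p @ v_p[0]                # linear stand-in for eq. (7)
    y1 = X_s @ v_s[1] + X_p @ v_p[1]                # linear stand-in for eq. (8)
    p_w = 1.0 / (1.0 + np.exp(-kappa * (y1 - y0)))  # Sigmoid(kappa (Y(1)-Y(0)))
    w = rng.binomial(1, p_w)                        # treatment assignment
    y_obs = np.where(w == 1, y1, y0)                # observed factual outcome
    return w, y_obs, y1 - y0                        # ..., and the true CATE

d_s = 10                                 # number of shared features (toy value)
v_s = rng.normal(size=(2, d_s))          # shared weights, reused in both domains
X_sR, X_pR = rng.normal(size=(500, d_s)), rng.normal(size=(500, 7))  # source
X_sT, X_pT = rng.normal(size=(120, d_s)), rng.normal(size=(120, 5))  # target
w_R, y_R, _ = simulate_domain(X_sR, X_pR, v_s)
w_T, y_T, cate_T = simulate_domain(X_sT, X_pT, v_s)
```

Because only the shared weights are reused while the private ones differ, the outcome distributions and the induced selection bias differ across the two domains by construction, exactly as stated above.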
\n\nPlease also note that in our experiments, we also investigate \n* (1) performance when varying the size of the target dataset (see Figure 6 (bottom) and Section 6.2), where we indeed note that our method brings most benefits when the target dataset is small;\n* (2) performance when varying the amount of information shared between the PO in the source and target datasets through the parameter $\\alpha$ (this denotes how much the potential outcomes depend on the shared features as opposed to the private features). Note that when $\\alpha$ is low, the PO depend more on the private features, thus making their conditional distributions even more different across the domains. See Figure 6 (bottom) and Section 6.2 for the discussion of results. \n\nDue to space limitations for the revised version during the rebuttal period, it is not possible to include many experimental details in the main paper, which is why we have updated Appendix D with the additional dataset details. However, we will move the details about how the shared and private features are sampled and about the differences in datasets to the camera-ready version of the paper (which allows for an extra page). \n\n### 5. Typos \n\nWe have fixed the typo of ‘size size’. Moreover, the Twins dataset does indeed only have 11400 examples and not 114000. We have also fixed this typo in the revised manuscript. \n\n\n-------------------------\n\n\nWe thank the reviewer again for the insightful comments! Please let us know if you have any other concerns as we are eager to address them. \n\n\n[R1] Pan, Sinno Jialin, and Qiang Yang. \"A survey on transfer learning.\" IEEE Transactions on Knowledge and Data Engineering 22, no. 10 (2010): 1345-1359.\n", " Thank you very much for your insightful comments and suggestions which have helped us to improve the paper! \n\nWe provide below answers to the main concerns raised by the reviewer. Moreover, we have updated the discussion of related works (see Section 2 and, in more detail, Appendix A) to incorporate the additional related methods provided by the reviewer. 
Please note that the changes in the revised manuscript are highlighted in blue. \n\n### Model design innovation \n\nPlease allow us to clarify what the contributions of our method are and where the novelty in our model design is coming from. To begin with, the main challenge we are aiming to address in this paper is to improve the estimation of conditional average treatment effects (CATE) for a target domain of interest by leveraging data from a source domain with a different feature space. To address this, we **do not** propose a single model, but rather introduce several building blocks that can be used to obtain heterogeneous transfer causal effect (HTCE-) learner equivalents for the most common and popular CATE-learners developed for binary treatments in a single patient population (such as the CATE meta-learners and TARNet-based architectures). The motivation for this comes from the fact that the different CATE learners available have their own benefits and drawbacks in terms of the way they handle the covariate shift induced by the selection bias in observational datasets and the inductive biases they use for modelling the PO functions within a single domain [R1]. Because of this, different CATE learners will result in better performance in different settings. \n\nConsequently, in this paper, our novelty comes from:\n* (1) proposing several building blocks (see Section 4) that can (a) handle the heterogeneous feature spaces, (b) share information between PO functions across domains $(\\mu\\_1^{R}$, $\\mu\\_1^{T})$ and $(\\mu\\_0^{R}$, $\\mu\\_0^{T})$ and (c) share information between PO functions within each domain $(\\mu\\_0^{R}$, $\\mu\\_1^{R})$ and $(\\mu\\_0^{T}$, $\\mu\\_1^{T})$. Note that all of these are needed for heterogeneous transfer learning for CATE estimation. \n* (2) showing how these building blocks can be combined in a flexible way to obtain HTCE-equivalents of the common CATE-learners. We then demonstrate how our proposed HTCE-learners achieve improved performance over the CATE-learners in a variety of scenarios relevant for transfer learning with heterogeneous feature spaces. \n\n[R1] Alicia Curth and Mihaela van der Schaar. Nonparametric estimation of heterogeneous treatment effects: From theory to learning algorithms. In International Conference on Artificial Intelligence and Statistics, pages 1810–1818. PMLR, 2021.\n", " ### Novelty and relationship to literature suggested by the reviewer\n\nPlease allow us to clarify the novelty of our method and the relationship with the related works suggested by the reviewer. Firstly, please note that we acknowledge in the paper (Section 2) that the problem of transfer learning for treatment effect estimation has been previously considered. Our claim is to be the first work that addresses the problem of heterogeneous transfer, specifically for CATE estimation. This is different from the works suggested by the reviewer, as described below. \n\n[R2] Pearl and Bareinboim, 2011, Transportability of Causal and Statistical Relations.\n\n[R3] Bareinboim and Pearl, 2016, Causal inference and the data-fusion problem. \n\nBoth [R2] and [R3] aim to address the problem of the **identifiability** of causal effects for an observational dataset of interest by leveraging data from other datasets. 
In particular, [R2] investigates how to transfer **average** treatment (causal) effects obtained from experimental data (such as randomized clinical trials) to an observed population that may have a different distribution of covariates, treatments and outcomes, and where the causal relationship of interest cannot be identified using only the observational data. The paper assesses under which conditions such average causal effects can be transported, according to the differences between the randomized and observational data. The authors also provide a brief discussion of how to transfer these average causal estimates between observational datasets. Alternatively, [R3] aims to help identify the **average** effects of interventions on a target population of interest by integrating multiple types of auxiliary data: data from a randomized study on the same population, data from an observational study on the same population, selection biased data from the same population and data from a randomized study from a different population. All of the auxiliary datasets have the same set of features as the target dataset. Both [R2] and [R3] consider this transportability problem of average treatment effects in the context of causal diagrams. \n\nThis is different from our set-up where (1) we want to estimate **conditional average** treatment effects such that we can make personalized treatment recommendations based on the features of each patient. In addition, (2) we assume that the potential outcomes $Y(0)$ and $Y(1)$ **are identifiable** in both the source and target domains (see Assumptions 1 and 2). We also (3) handle the case of heterogeneous feature spaces and (4) only assume access to a source domain larger than a target domain, without putting any restrictions on whether this data is experimental or observational (see the experiment in Section 6.2 where we show that our method works well under different degrees of selection bias present in the source dataset).\n", " (cont’d)\n\n[R4] Magliacane et al., 2017, Causal Transfer Learning. Published under the title \"Domain Adaptation by Using Causal Inference to Predict Invariant Conditional Distributions\".\n\n[R4] addresses the problem of domain adaptation for the **predictive** setting, where it considers labelled data in one or more source domains, **unlabelled** data in the target domain, and the same feature spaces between the source and target domains, and aims to learn predictive functions that are **invariant** to the changes between domains to be able to reliably estimate the outcomes in the target domain. In particular, [R4] proposes tackling this unsupervised domain adaptation problem by modelling the different distributions between the source and target domains as different contexts of a single underlying system. The system variables are denoted as $X$ and the context variables are denoted as $C$. The context variables $C$ are used to model the interventions causing the distribution shifts between domains, such that the source and target domains come from different interventions on $C$. The aim is to predict the missing values of a target (outcome) variable in the target domain given the available source and target datasets. The paper solves this problem by finding a separating feature set $A$ such that the label is conditionally independent of the context variables given $A$. 
This is because the distribution of $Y$ conditional on $A$ will then be invariant between the source and target domains, and thus they can use a predictor from $A$ to $Y$ trained on the source domain to estimate $Y$ in the target domain. \n\nOur work is significantly different in the following ways: (1) we consider the problem of CATE estimation, (2) assume access to labelled data in both the source and target domains, (3) consider the case of heterogeneous feature spaces and (4) model different conditional distributions of the outcome given the features in the source and target domains. Our aim is to improve the estimation of CATE in the target domain by leveraging the shared structure with the source domains, while at the same time modelling the specific characteristics of the potential outcome functions in the target domain. \n\n[R5] Mooij et al., 2016, Joint Causal Inference from Multiple Contexts. \n\nFinally, [R5] proposes Joint Causal Inference (JCI), a new approach to **causal discovery** that uses multiple datasets from different contexts representing different types of interventions (the interventions in this setting represent perturbations to different variables in the dataset). In particular, [R5] aims to find the causal relationships among all variables in a system of interest. This is fundamentally different from our objective, as we are not aiming to perform causal discovery and build a causal graph, but rather to improve the estimation of CATE for a target domain of interest by leveraging data from a source domain. \n\n\n-------------------\n\nWe thank the reviewer again for the insightful comments! Please let us know if you have any other concerns as we are eager to address them. \n", " Thank you very much for your insightful comments and suggestions which have helped us to improve the paper! \n\nWe agree with the reviewer that the manuscript would be significantly improved if we provided a more detailed discussion about potential extensions. We provide below detailed answers to the questions about potential extensions raised by the reviewer. Due to space limitations, it is not possible to add all of these to the revised paper. Thus, in our revised manuscript we have updated the Discussion section (Section 7) to briefly mention them, and we discuss them in more detail in the new Appendix F. Please note that the changes in the revised manuscript are highlighted in blue. We will incorporate more details about the extensions in the camera-ready version of the paper (which allows for an extra content page). \n\n### Multiple domains\n\nWhile in this paper we mainly consider the standard and most common transfer learning setting of leveraging a source dataset to improve the estimation of outcomes on a target dataset of interest, our proposed approach could be easily extended to having multiple source domains and, in fact, would scale linearly when incorporating additional domains. \n\nMore precisely, consider access to $M$ source domains, $\\\\{ \\\\mathcal{D}^{R\\_{m}} \\\\}^{M}\\_{m=1}$, where each source domain consists of $\\\\mathcal{D}^{R\\_m} = \\\\{(X^{R\\_m}\\_i, W\\_i, Y\\_i)\\\\}\\_{i=1}^{N\\_{R\\_{m}}}$ and a target domain as described in Section 3. The objective would still be the one from Section 3 of improving the estimation of CATE for the target domain, but in this case, we need to leverage data from all $M$ source domains. Our proposed building blocks could be extended as follows to handle the additional source domains. 
The building block for handling heterogeneous feature spaces between source and target domains could be extended by having encoders $\\phi^{R\\_m}$ for the private features $X^{p\\_{R_m}}$ of each source domain (part of $X^{R\\_m}$). Moreover, we can build an approach that shares information between the potential outcomes of each source domain and the target domain by having $L$ layers shared between all source domains and the target domain, and $L$ private layers for each source domain. For source domain $m$, the input to the $(l+1)$-th layer can be obtained using $\\tilde{h}\\_{w, l+1}^{p\\_{R\\_m}} = [ h\\_{w, l}^{s} || h\\_{w, l}^{p\\_{R\\_m}}]$ where $h\\_{w, l}^{p\\_{R\\_m}}$ is the output of the $l$-th private layer for source domain $m$ and $h\\_{w, l}^{s}$ is the output of the $l$-th shared layer across all domains. \n\nWhile from a model development perspective this extension can be easily done, one also needs to consider whether the multiple source domains satisfy the underlying implicit assumptions for which such an architecture would be appropriate. In particular, it would be important to consider whether the PO functions across all source domains share information with the PO functions in the target domain, as using source domains that are significantly different from the target domain could harm performance. \n\n### Datasets that are streaming\n\nWe believe that there are multiple options for considering streaming datasets. One option could be the case where we already have a source domain and a target domain and we have incoming data streaming from the target domain. One such example in healthcare would be the case where in a single hospital we start collecting more or different clinical covariates for the patients which we now want to use for CATE estimation. In this case, the source domain would be the patients with only the original set of features, while the target domain would be the patients with the new set of features. As we start collecting these additional features, the initial target dataset will be small but it can start increasing with time as we observe more patients. In this setting, we can train an HTCE-learner with the initial data available from the source and target domains. However, instead of retraining the full HTCE-learner as we obtain more patients from the target domain, one option would be to fine-tune the weights using the incoming examples. While this is outside the scope of our paper, we believe that it would be important to investigate appropriate ways for performing such fine-tuning.\n", " (cont'd) \n\nAnother option could be the case of having full source datasets streaming, while the target dataset remains fixed. This would happen in the setting where for instance we gradually get access to data from multiple locations and we want to use these datasets as source datasets. In this scenario, one possibility would be to use the approach described above for having multiple source datasets and retraining a new model that incorporates all of the available source datasets as we get access to them. Another possibility would be, instead of retraining a full model as we go from $M$ to $M+1$ source domains, to add the needed private encoder and layers for the $(M+1)$-th domain and train only the new parameters, fine-tuning the shared ones with the data from the new domain. While this is again outside our scope, it can provide interesting avenues for future work. 
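As a rough illustration of the last two points (the $[ h^{s} || h^{p} ]$ layer wiring and the option of training only a newly added branch), consider the following PyTorch-style sketch; every name and size here is our own placeholder rather than an implementation from the paper:

```python
import torch

L, d = 3, 16
shared = torch.nn.ModuleList(torch.nn.Linear(d, d) for _ in range(L))
# Private stack for a newly added source domain; its layer l takes the
# concatenation [h^s_l || h^p_l] as input, mirroring tilde-h in the text.
new_private = torch.nn.ModuleList(torch.nn.Linear(2 * d, d) for _ in range(L))

opt = torch.optim.Adam(new_private.parameters(), lr=1e-3)  # only new params
# opt = torch.optim.Adam([*new_private.parameters(),
#                         *shared.parameters()], lr=1e-4)  # or also fine-tune

h_s = torch.randn(8, d)  # stand-in shared representation at layer 0
h_p = torch.randn(8, d)  # stand-in private representation at layer 0
for l in range(L):
    h_p = new_private[l](torch.cat([h_s, h_p], dim=1))
    h_s = shared[l](h_s)
```

Freezing versus fine-tuning the shared stack is exactly the trade-off just described; only the optimiser's parameter list changes.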
\n\n### Unknown domains at training time\n\nIn this paper, we assume that the source and target domains are known. However, given that we handle transfer learning with heterogeneous feature spaces, if the domains are unknown, one way to split the available data into domains would be to treat each distinct set of collected features as a single domain. For instance, if different patients in the dataset have different sets of features recorded, then the patients can be grouped according to which information was collected for them, and these groups can denote the different domains. Then, one needs to decide which represents the target domain, while keeping in mind that doing transfer learning is most beneficial when the target dataset is small (as highlighted by our experimental results in Section 6.2 and Figure 6 (bottom)).\n\n### Personalized feature spaces \n\nIn the case where different features are available for different patients and the source and target domains are unknown, the patients can be grouped according to their feature spaces as described in the paragraph above. On the other hand, if the source and target datasets are pre-defined and the patients within each dataset have different features available for them, one possible option would be to consider the super-set of their features as the different feature spaces and consider the features that are not available for each patient as missing. However, as our HTCE-learners have not been designed for this particular setting of having missing features, one would also need to investigate if additional assumptions (in addition to ensuring that the no hidden confounders and overlap assumptions are still satisfied) are needed to be able to obtain valid estimates of causal effects. \n\n-------------------\n\nOverall, we believe that our paper handles an under-explored area for treatment effect estimation and could be a stepping stone in the development of more methods tailored to many of the scenarios described by the reviewer that have not yet been handled by methods for CATE estimation, despite their practical applicability. \n\nWe thank the reviewer again for the insightful suggestions for future directions of our work. Please let us know if you were referring to different scenarios in your questions than the ones we discussed. Moreover, please let us know if you have any other concerns as we are eager to address them. \n", " This paper addresses the problem of heterogeneous transfer learning for CATE estimation by using representation learning and a multi-task architecture to transfer information between potential outcome functions across domains, generalizing several existing CATE estimators to the transfer learning perspective. Strengths:\n- The paper deals with an important problem that is understudied. I appreciate the motivation to study multitask architectures for CATE transfer learning, when CATE has often been examined in isolated examples, while real-world medical data provide compelling motivation for multitask and transfer learning settings.\n- The paper effectively generalizes several CATE frameworks to the transfer learning setting.\n- The empirical results are sufficient and effectively show individual effects (e.g. the information sharing and the selection bias in Figure 6 and Figure 7), although the experiments are not extensive.\n\nWeaknesses:\n- The framework deals with fixed datasets with source and target domains. How does this framework extend to more complicated settings? 
For example, what if we have multiple domains, the datasets are streaming, or the domains are unknown at training time? What about personalized feature spaces (e.g. different features are available for different samples)? Of course, I don’t expect all of these situations to be addressed in this manuscript but discussions about extensions would be helpful.\n See weaknesses above. Limitations and ethics are addressed appropriately. Some of the limitations and directions for future work discussed in the conclusion could also be mentioned more explicitly throughout the paper to make tradeoff decisions clear when introducing the framework.", " This work aims to solve the heterogeneous transfer learning problem for CATE estimation by introducing several building blocks that use representation learning to handle the heterogeneous feature spaces and a flexible multi-task architecture with shared and private layers to transfer information between potential outcome functions across domains. Besides, they propose several building blocks to construct HTCE-learners, similar to the most common CATE learners. Strengths:\n1. The paper is overall well written and it clearly defines the problem.\n2. These building blocks involve handling the heterogeneous feature spaces, sharing information between PO functions across domains and sharing information between PO functions within a single domain.\n\nWeaknesses:\n1. the model design lacks innovation and the major idea is based on the meta-learner.\n2. limited baseline models.\n\n limited baseline models To the best of my knowledge, the work is not as novel as the authors claim it is. Please refer to the following works:\n* Pearl and Bareinboim, 2011, Transportability of Causal and Statistical Relations, https://ftp.cs.ucla.edu/pub/stat_ser/r372-a.pdf\n* Bareinboim and Pearl, 2016, Causal inference and the data-fusion problem, https://ftp.cs.ucla.edu/pub/stat_ser/r450-reprint.pdf\n* Magliacane et al., 2017, Causal Transfer Learning, https://www.semanticscholar.org/paper/Causal-Transfer-Learning-Magliacane-Ommen/b650e5d14213a4d467da7245b4ccb520a0da0312\n* Mooij et al., 2016, Joint Causal Inference from Multiple Contexts, https://arxiv.org/abs/1611.10351", " This paper proposes a Heterogeneous Transfer Causal Effect (HTCE) framework to improve treatment effect estimations on the target dataset under the heterogeneous transfer learning problem. Originality: 3. This problem has already been noticed by some works. The basic structure of HTCE is actually the combination of layer sharing + FlexTENet.\n\nQuality: 4. I think it is good when the authors combine the HTCE structure with meta-learners and TARNet. They are explicit.\n\nClarity: 5. This paper is well written. Section 4 is especially easy to follow. Good!\n\nSignificance: 3. I think the argument and setting of estimating TEs for heterogeneous transfer learning is meaningful, but I don't think the experimental results show the significance of the proposed method for this problem. 1. In lines 110-111, you mentioned that [32] does not assume access to label information. But in my view, I think that should be the true definition of \"transfer learning\". In general, your problem is similar to shared machine learning.\n\n2. In lines 230-231, do you mean $||$ instead of $+$?\n\n3. Your figures 3.a and 3.b are inconsistent with the text. It is just a small mistake.\n\n4. 
I think the most interesting parts are i) solving the problem when the target dataset is substantially different from the source dataset, especially when the target dataset has a small sample size; ii) combining HTCE with different learners. But your experiments do not show the importance and significance. For both your simulated and benchmark datasets, the distributions of source and target datasets are very similar. Specifically, the DGP or the empirical distribution is fixed, and the only difference is that you split them into source and target. It is undoubted that involving shared information will give better results than losing shared information. That is inconsistent with your claim in lines 149-152. I suggest you show how small the sample size of the target dataset is, and how big the differences between the distributions of source and target datasets are, before reporting your experimental results.\n\n5. In your Appendix, there are some typos, e.g., there exists \"size size\". Also, is Twins 11400 or 114000? The limitations are well discussed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 5, 5 ]
[ "793DtsPMODP", "ZvZOMrUFLx", "F3AUXTpebPx", "CEZuhRLf1OL", "CEZuhRLf1OL", "CEZuhRLf1OL", "xiBcPK4JR7n", "xiBcPK4JR7n", "xiBcPK4JR7n", "fjE682UB8iA", "fjE682UB8iA", "nips_2022_nRcyGtY2kBC", "nips_2022_nRcyGtY2kBC", "nips_2022_nRcyGtY2kBC" ]
nips_2022_HH_jBD2ObPq
BR-SNIS: Bias Reduced Self-Normalized Importance Sampling
Importance Sampling (IS) is a method for approximating expectations with respect to a target distribution using independent samples from a proposal distribution and the associated importance weights. In many cases, the target distribution is known up to a normalization constant and self-normalized IS (SNIS) is then used. While the use of self-normalization can have a positive effect on the dispersion of the estimator, it introduces bias. In this work, we propose a new method, BR-SNIS, whose complexity is essentially the same as SNIS and which significantly reduces bias. This method is a wrapper, in the sense that it uses the same proposal samples and importance weights but makes clever use of iterated sampling-importance-resampling (i-SIR) to form a bias-reduced version of the estimator. We derive the proposed algorithm with rigorous theoretical results, including novel bias, variance, and high-probability bounds. We illustrate our findings with numerical examples.
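To fix notation for the discussion that follows, here is a minimal sketch of the plain SNIS estimator the abstract refers to. The toy target (an unnormalized Gaussian mixture) and the Gaussian proposal are our own illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def snis(f, log_target, log_proposal, sampler, M):
    """Plain self-normalized importance sampling: draw M proposal
    samples, weight them by the (unnormalized) target/proposal ratio,
    and return the weighted average of f. Self-normalization removes
    the unknown normalizing constant but makes the estimator biased,
    with bias and variance both of order 1/M."""
    x = sampler(M)
    logw = log_target(x) - log_proposal(x)
    w = np.exp(logw - logw.max())          # stabilized unnormalized weights
    return np.sum(w * f(x)) / np.sum(w)

# Toy example: unnormalized bimodal target, N(0, 2^2) proposal.
log_target = lambda x: np.logaddexp(-0.5 * (x - 2) ** 2, -0.5 * (x + 2) ** 2)
log_proposal = lambda x: -0.5 * (x / 2.0) ** 2
sampler = lambda M: 2.0 * rng.standard_normal(M)
print(snis(lambda x: x ** 2, log_target, log_proposal, sampler, 10_000))  # approx. 5
```

Note that any constant factor in the unnormalized weights cancels in the ratio, which is exactly why SNIS works without the normalizing constant and also why the ratio form introduces bias.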
Accept
Importance sampling requires the knowledge of the normalization constant of the distribution to be sampled from. SNIS (Self-Normalized Importance Sampling) does not, but is biased. The authors study methods to reduce the bias in SNIS: BR-SNIS. They prove error bounds for this method (Theorems 3 and 4). The reviewers agreed that BR-SNIS does a nice job in practice, and that the theoretical analysis is novel and sound [especially Reviewers 9JEd, 1FgZ]. However, they also all pointed out that the paper is overstating the novelty of BR-SNIS. Some identified BR-SNIS with the existing i-SIR/multiple proposal MCMC [9JEd]. Other reviewers wrote that they are not exactly the same but that BR-SNIS is strongly based on i-SIR and that this is not emphasized enough by the authors [nbwq, 1FgZ]. They all required the authors to make it clear that i-SIR is an existing method, that BR-SNIS is strongly based on it, to discuss the connection to multiple proposal MCMC thoroughly and to add many references. Overall, "the novelty in methodology seem to be not big" [hWyK], so the authors should make it clear that their main contribution is the analysis of the method (Theorems 3 and 4). One of them also asked for clarification of the novelty of the Rao-Blackwellised estimator [9JEd]. Finally, while they agree that Theorems 1 and 2 are necessary to understand BR-SNIS, it should also be stressed that these results are "classical results", contrary to Theorems 3 and 4 [9JEd]. The authors acknowledged this, and promised to fix the paper accordingly. During the discussion, the reviewers agreed that the theoretical analysis in the paper is interesting enough to justify its acceptance. I will therefore recommend to accept the paper. I will ask the authors to carefully include in the paper all the discussion on i-SIR and multiple proposal MCMC and on the novelty of BR-SNIS in general (and of course the Rao-Blackwellised estimator and Theorems 1 and 2). There was also some potential computational problem pointed out by [hWyK], which was finally addressed by the authors' reply: please implement the corresponding corrections in the paper if necessary.
train
[ "wavCzxsfvl4", "EQwzAiPQAAN", "DuvDaNSZ3xN", "61QO8X4_THbK", "CfUmSbW2Wlo", "dd8uMhGIBnR", "X1D3bKfnrd", "m-qK1gHzti", "pj9ziYysbcd", "MMWA4ILzbjr", "bzo3Fc9xll_", "IKWrFgrbYyd", "GOaeoeTV_r", "aQ5tpiNkMZ", "CtIPnFED4nh", "2mnaykexEas", "37wQOx_nvV9", "CuKt6-Er8AtW", "AhxI7Sm8-eX", "xhjej3dZayO", "e5urouJ0bep", "-5y87WueO2I", "0rtg_rI1XL8" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your feedback. Following your suggestion, we have now added—a short version of—this discussion to Section 2.3. ", " Thanks a lot for this remark. We have added a reference to the work you mention at the point where we introduce the induced Markov kernel in question. ", " Thank you for the detailed reply.\n\nI just want to point out that it is well known in the literature on particle Gibbs samplers that you don't need to keep the index of the reference path in the (extended) target distribution. That is, you can write down the induced Markov kernel without the need for resorting to \"partially- collapsed Gibbs\" sampling type arguments. For instance, see Section 4 in:\n\nAndrieu, C., Lee, A., & Vihola, M. (2018). Uniform ergodicity of the iterated conditional SMC and geometric ergodicity of particle Gibbs samplers. Bernoulli, 24(2), 842-872.", " I would still say that the analysed algorithm is more adequately characterised as an MCMC algorithm with multiple proposals (this would be in line with the literature) than as importance sampling. But I take your point that since you are specifically requiring that the proposals are \"independent\" proposals, i.e. that they do not depend on the previous state of the Markov chain, all the proposals can be generated in parallel at the start of the MCMC chain.", " Thank you for taking the timing to respond in detail. The point about bias contributing differently to variance in contexts other than mean squared error is certainly well taken, I would have liked to see this point made more clearly in the manuscript and am please to hear that you intend to do so.", " We have updated the numeric section with a comparison between our estimator and the zero bias estimator presented in [Middleton, L., Deligiannidis, G., Doucet, A., and Jacob, P. E. (2019). Unbiased Smoothing using Particle Independent Metropolis-Hastings. Proceedings of Machine Learning Research vol. 89. https://proceedings.mlr.press/v89/middleton19a.html]. Even though the two estimators have different goals (our goal is to provide a wrapper that allows to reduce the bias of SNIS methods at minimal computational cost not a zero bias estimator), it can be of interest to compare both of them in the case where there is a restriction in the total number of available samples.\n\nThe table below displays approximate $95\\\\%$ intervals for both estimators under a strict budget of $8.192 \\cdot 10^7$ samples for several configurations of each estimator in the Mixture of Gaussians example. The intervals were re-centered by the reference value.\n\n|N|k|algorithm|center|lower bound|upper bound|interval length|\n| :----: | :----: | :----: | :----: | :----: | :----: | :----: |\n|129|512|BR-SNIS|0.0002|-0.0034|0.0038|0.0072|\n|129|1024|BR-SNIS|-0.0001|-0.0037|0.0034|0.0071|\n|257|512|BR-SNIS|0.0004|-0.0033|0.0040|0.0073|\n|513|256|BR-SNIS|0.0013|-0.0025|0.0050|0.0075|\n|65536|N/A|Unbiased-PIMH|-0.0013|-0.0052|0.0026|0.0078|\n|32768|N/A|Unbiased-PIMH|-0.0020|-0.0061|0.0022|0.0084|\n|16384|N/A|Unbiased-PIMH|-0.0008|-0.0051|0.0035|0.0086|\n|8192|N/A|Unbiased-PIMH|-0.0020|-0.0067|0.0028|0.0095|\n|2048|N/A|Unbiased-PIMH|0.0028|-0.0030|0.0085|0.0115|\n\nMore details are shown in the latest revision.", " Thank you for the detailed answers. I have changed my score from 4 to 5.", " We thank all the reviewers for the relevant commentaries. 
We have taken the commentaries into account, and we have uploaded a new revision with added references and an updated numeric section with better results and presentation.", " In the appendix of the revised version of our paper, we have added a table with the budget $M$ and number $k$ of iterations for all the data presented in the logistic regression study. \nWe show here an extract of the table for the estimation of the 8-th coordinate of the posterior mean vector for the heart dataset. We also add the following two quantities: \n\n* \"Ratio bias\": bias of BR-SNIS divided by the bias of SNIS,\n* \"Ratio MSE\": MSE of BR-SNIS divided by the MSE of SNIS.\n\n|Dataset | M | k | N | Ratio bias | Ratio MSE |\n| :---: | :---: | :---: | :---: | :---: | :---: |\n|heart|32|4|9|0.18|0.05|\n|heart|64|8|9|0.03|0.05|\n|heart|128|8|17|0.005|0.08|\n|heart|256|16|17|0.025|0.14|\n|heart|512|32|17|0.07|0.23|\n", " (a): The fact that the variance is inversely proportional to $(k-k_0)N = M$ is the \"expected\" result (which, incidentally, is not trivial). If we eliminate $k_0$ blocks of size $N-1$ for the burn-in, we are left with $(k-k_0)(N-1)$ samples on which to compute the estimator. Note: $k$ here is the total number of blocks of size $N-1$ in the sample of size $M$. Thus, **there is nothing strange here**. The variance is then reduced by the bootstrap procedure (if we iterate the procedure).\n\n(b) and (c): In the first set of simulations, we have set $k_0 = k-1$ because it is simple (we only have to choose the block size $N$) and maximizes the bias-reduction effect. **All the samples are recycled using the bootstrap procedure**: we randomly swap the data, and repeat the procedure. It is the bootstrap that ensures we use all the data. Other choices are possible, as illustrated in our numerical experiments. We have not developed automatic procedures for choosing $k_0$, but it is clear that we can take inspiration from the explicit bounds here. \n\nAs an illustration, we have performed an additional simulation for the elementary mixture model. \n| Ratio Bias | Ratio MSE | Burn-in (k_0 / k) |\n| :----: | :----: | :----: |\n|0.644|1.048|0.200|\n|0.355|1.125|0.400|\n|0.229|1.172|0.600|\n|0.159|1.211|0.800|\n\n\"Ratio Bias\" is the bias of BR-SNIS divided by the bias of SNIS. \"Ratio MSE\" is the MSE of BR-SNIS divided by the MSE of SNIS. The burn-in percentage is the percentage ($\upsilon$ in our work) of proposed samples removed for each bootstrap sample. Here, the minibatch size ($N$ in the paper) is set to 129, while all other sizes remain as specified in the paper. The number of bootstrap replicates is 128.\nClearly, larger $k_0$ yields larger bias reduction. The increase of variance is moderate due to the bootstrap. \n\nWe updated the figures in the manuscript to focus more clearly on bias reduction. We have also added a figure illustrating the tradeoff between reducing the bias and increasing MSE.", " What we read shows a lack of understanding of what we are proposing. Our presentation is probably not clear enough and we will rework the introduction. **We are not developing an MCMC algorithm, but a wrapper** to reduce SNIS bias (while tightly controlling the increase in variance), all at very low algorithmic cost. **We do not propose a new variant of the i-SIR algorithm**. \n\nTo compute an SNIS estimator, we need to simulate $M$ samples under the proposal law and compute $M$ importance weights. 
The bias and variance of the SNIS estimator are inversely proportional to $M$ (with explicit constants). Our estimator computes the same quantities but uses them differently, with the goal of reducing bias.\nWe then add an \"extra-randomization\" as follows. \n- We divide the set of samples into $k$ blocks of size $N-1$ with $M=k(N-1)+1$. \n- We select a sample and then use the i-SIR algorithm with the previously calculated importance weights (with no need to simulate new points or recalculate importance ratios).\n- We eliminate the first $k_0$ blocks (burn-in) and then compute an estimator by recycling all samples over the remaining $(k-k_0)$ blocks. \n- We repeat this process $\ell$ times after randomly permuting the data and finally combine these estimators [bootstrap procedure].\nThe cost of the procedure is very limited (as most of the computational cost in the examples that we are dealing with comes from the simulation under the importance law and the calculation of the importance weights); a minimal code sketch of this wrapper is given after the reviews below. \n\nQuestion (e): The goal of this article is not to propose a new method for importance sampling. There is, of course, an extensive literature on adaptive importance sampling methods, but that is absolutely not the goal of the present paper.", " First of all, we would like to thank you for raising these important points, which will make the paper's message clearer.\nWe will address the following point in this comment:\n* \"Is there really a reduction in variance at no cost; seemingly, the variance increases by more than the squared bias falls in all of the empirical results and the theoretical results seem to provide a bound on the amount by which the MSE can increase rather than establishing as claimed in the abstract that there is no increase.\"\n\nFor the mixture-of-Gaussians example (first example) presented in our paper, the bias$^2$ (the bias$^2$ of the SNIS estimator is of order $10^{-4}$ in this case) is much smaller than the MSE ($10^{-2}$); this is not surprising, since the bias as well as the variance scale inversely proportionally to the number of proposed samples. Nevertheless, we claim that we can achieve a significant reduction of the bias with only a moderate increase of the MSE (or, equivalently, the variance). To make this claim more explicit, we provide the following table:\n\n| Ratio Bias | Ratio MSE | Burn-in (k_0 / k) |\n| :----: | :----: | :----: |\n|0.644|1.048|0.200|\n|0.355|1.125|0.400|\n|0.229|1.172|0.600|\n|0.159|1.211|0.800|\n\nRatio bias is the bias of BR-SNIS divided by the bias of SNIS. Ratio MSE is the MSE of BR-SNIS divided by the MSE of SNIS. The burn-in percentage is the percentage ($\upsilon$ in our work) of proposed samples removed for each bootstrap sample. Here, the minibatch size ($N$ in the paper) is set to 129, while all other sizes remain as specified in the paper. The number of bootstrap replicates is 128.\n\nWe updated the figures in the manuscript to focus more clearly on bias reduction. We have also added a figure illustrating the tradeoff between reducing the bias and increasing the MSE.\n\nMaximizing the bias reduction prompts us to choose $k_0$ large. In the previous example, we obtain a reduction of bias by a factor of 10 by taking $k_0 = (4/5)k$. This choice leads to an increase in variance by a factor of 5 for a single replication. 
The bootstrap procedure reduces this increase to a factor of 1.2; thus, the bootstrap procedure allows us to reduce the bias fairly aggressively while avoiding a large increase in the variance.\n\nFigure 1(a) now reports the evolution of the MSE as a function of $k$ and the number of bootstrap rounds (1, 21, and 201).", " First, thank you for your very constructive review, which will allow us to improve the presentation of the paper. In particular, in the new version of the paper, we have now updated the experimental section. \n\n- We do not claim that the proposed method reduces the variance. More precisely, we have a bound on the variance, whose main term remains inversely proportional to the number of simulated samples (with the same constant as for the SNIS estimator). Moreover, the procedure for reducing the bias leads to a controlled increase in variance, as we report in our experiments and in the table of results below (for logistic regression). \n- Reduction of bias **makes a difference**! Of course, reducing the bias is relevant primarily from a frequentist point of view. Indeed, if we only draw a single realization once, only the distribution of the variable (and thus typically the bias and the variance) matters. The bias of an estimator becomes important in situations where the estimator is used multiple times, as in stochastic approximation procedures; see\n[Doucet, A., & Tadic, V. B. (2017). Asymptotic bias of stochastic gradient search. Annals of Applied Probability, 27(6).]\nor\n[Karimi, B., Miasojedow, B., Moulines, E., & Wai, H. T. (2019). Non-asymptotic analysis of biased stochastic approximation scheme. In Conference on Learning Theory (pp. 1944-1974). PMLR.]\nIn stochastic approximation, the bias and variance of the mean-field estimates do not play the same role in the bounds. For this reason, it is interesting to reduce bias (provided the variance does not explode) for a given computational budget, and there are also reasons why reducing bias (or obtaining an unbiased estimator) has given rise to much work related to simulation algorithms over the past 30 years; see for example [Glynn, P. W., & Rhee, C. H. (2014). Exact estimation for Markov chain equilibrium expectations. Journal of Applied Probability, 51(A), 377-389.] and [Jacob, P. E., O’Leary, J., & Atchadé, Y. F. (2020). Unbiased Markov chain Monte Carlo methods with couplings. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 82(3), 543-600.]\nWe will add a discussion to the revised version of our paper with the aim of convincing the reader that reducing bias is indeed a reasonable goal when estimators are used repeatedly, but that this does not make much sense when the estimator is computed only once.\n", " The objective of the paper is not to propose \"yet\" a new MCMC algorithm but a method to reduce the bias of SNIS while keeping the variance explicitly controlled by a bound that is inversely proportional to the number of proposal samples.\n\nWe repeat below the argument given to Reviewer 9JEd. \n\nWe use the same \"input\" as the SNIS algorithm, in the sense that we draw samples from the proposal distribution and then compute the importance weights, which is exactly what is required when computing the \"classical\" SNIS estimator. In most applications of ML, this is where most of the computational effort lies (as we typically use complex sampling methods and computing the importance weights can be costly). 
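For reference alongside the scaling claims in the responses here, the plain SNIS estimator and its standard first-order behaviour can be written as follows; this is a textbook expansion added for orientation, not a statement taken from the paper itself:

$$\widehat{\pi}_M(f) \;=\; \frac{\sum_{i=1}^{M} w(X_i)\, f(X_i)}{\sum_{i=1}^{M} w(X_i)}, \qquad X_i \overset{\mathrm{iid}}{\sim} \lambda, \qquad \mathbb{E}\big[\widehat{\pi}_M(f)\big] - \pi(f) = O(M^{-1}), \qquad \operatorname{Var}\big[\widehat{\pi}_M(f)\big] = O(M^{-1}),$$

where $w$ denotes the (possibly unnormalized) importance-weight function of the target $\pi$ with respect to the proposal $\lambda$.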
\n\nWe then propose to perform an additional \"extra-randomization\" to compute the final estimator (but, mind you, without drawing any new samples). The computational cost of the extra-randomization operation is very small. Thus, **the objective is not to define a new MCMC procedure, but a very simple algorithm to improve SNIS at the price of an extremely limited increase of computational complexity**.\n\nThe extra-randomization is derived from the i-SIR algorithm with full recycling; however, we stress that **we do not need any new simulations or new calculations of importance weights**. All the samples and importance weights have been computed once and for all, and this is where 99% of the computational burden lies in the examples that we are dealing with. \n\nThis extra-randomization allows us to trade off bias against variance, in the sense that we are able to significantly reduce bias at the cost of a (very modest) increase in variance. Thus, this is a \"different\" way to compute an estimator using the same input as the SNIS estimator, **and it is not a \"new\" MCMC algorithm**. \n\nThe references you give are excellent, and we really like Martino's and Calderhead's papers; still, we are not completely sure whether these are relevant here. On the contrary, we are a bit afraid that this will reinforce the confusion—which we will clear up by rewriting the introduction of the paper—about the real objectives of our contribution. \n\nTo sum up, the main contributions of our work are \n- the proposal of a **wrapper** that allows one to reduce the bias of SNIS methods at minimal computational cost (for the cases of interest),\n- explicit bounds (Theorems 4 and 5) on the bias and variance of the proposed estimator, showing the often dramatic bias reduction and the explicit control of the variance, respectively, \n- numerical results that clearly support our claims.", " Thank you very much for your very pertinent comments (possibly with the exception of the first comment), which will help us improve the presentation of our results. We suggest you read our answers in reverse order, since we provide them in the order of your questions. ", " Here we wanted to say that it is possible to extend the ideas and results of our paper to the sequential Monte Carlo and particle Gibbs frameworks, in order to design bias-reduced particle estimators of expectations under Feynman–Kac path distributions. Obviously, the derivations and the calculations of bias and variance will be considerably more complicated in that case. ", " We are sorry for having given the impression of misquoting existing results. We wanted to give the logical \"flow\" of the construction, which led us to formulate Theorem 2, but we agree that we should have been more careful in saying that this (Rao–Blackwellisation) result already appears in the work by Tjelmeland. In addition, we will of course add the other interesting references that you have indicated.", " In fact, we mention in the introduction that the i-SIR algorithm goes back to the work of Tjelmeland [40], on which our work is based, rather than the (admittedly much better known and cited) particle Gibbs (PG) sampler, which is in some sense an extension of Tjelmeland to the context of sequential Monte Carlo. Still, there is one small difference between our algorithm and that of Tjelmeland (and the PG sampler), in the sense that Tjelmeland keeps the index of the particle in the target distribution, while we instead keep the value of the selected sample. 
This small modification allows us to obtain a clean duality equality that does not appear in Tjelmeland. As a result, this duality formula allows us to define a neat Gibbs procedure in two steps (whereas PG typically takes the form of a collapsed Gibbs sampler), which has very interesting theoretical properties (monotonicity, positivity, etc.). We agree, however, that this does not constitute the originality of our paper. Our goal was to allow the reader to follow our ideas without necessarily knowing all the arcana of PG, but we will change the text with this in mind.", " There is no reason for you to be disappointed, as **we do not propose an MCMC algorithm**, but a method to reduce the bias of SNIS while keeping the variance explicitly controlled by a bound that is inversely proportional to the number $M$ of samples from the proposal distribution. \n\nNote that we use the same \"input\" as the SNIS algorithm, in the sense that we draw $M$ samples from the proposal distribution and then compute the importance weights, which is exactly what is needed when computing the \"classical\" SNIS estimator. In most ML applications, this is where most of the computational burden lies (as we typically use complex sampling procedures and computing the importance weights might also be costly). \n\nWe then propose to add, on top of that, an \"extra-randomization\" to compute the final estimator (but, mind you, without drawing any new samples). The computational cost of the extra randomization is very small. The extra-randomization is derived from the i-SIR algorithm with full recycling, which does not require any new simulations or new calculations of importance weights; everything has been calculated once and for all, which corresponds to 99% of the calculations in the examples we have treated. This additional randomization allows us to trade off bias against variance, in the sense that we are able to significantly reduce bias at the cost of a (very modest) increase in variance. **So you need not be disappointed: our procedure is indeed a \"different\" way to compute an estimator from the quantities computed for the SNIS estimator, and this is not a \"new\" MCMC**.", " This work analyses a well-known multiple-proposal MCMC kernel known as i-SIR. The main contributions are \n\n1. Theorem 3, which bounds the rate at which the bias/error decays with the number of iterations or number of particles.\n2. Theorem 4, which gives bias, mean-square error, and high-probability bounds for a Rao--Blackwellised estimator (9) which recycles (almost) all generated samples. Strengths:\n\n1. Novel bias/error bounds for the i-SIR algorithm and for the \"Rao--Blackwellised\" estimator from (9) (as well as a high-probability bound for the latter).\n\nWeaknesses:\n\n1. I was slightly disappointed after reading the paper because the title suggests a way to reduce the bias in self-normalised importance sampling (SNIS) but the analysed method is an MCMC algorithm. And the iterative nature of this MCMC algorithm may render it inapplicable in some settings in which \"standard\" SNIS is employed.\n\n2. Theorems 1 and 2 are well known in the literature on particle MCMC methods. It might be worth mentioning this somewhere on Page 3 to avoid the false impression that these are novel.\n\n3. From reading the manuscript, it is not 100% clear to me whether the Rao--Blackwellised estimator in Equation 9 is claimed to be a novel contribution (Page 2 makes it sound as if it is). 
Just in case, it might be worth adding a reference to at least the first of the following sources, where this Rao-Blackwellised estimator has been previously suggested (albeit without any theoretical analysis):\n\n* Tjelmeland, H. (2004). Using all Metropolis–Hastings proposals to estimate mean values (No. NTNU-S-2004-4).\n\n* Yang, S., Chen, Y., Bernton, E., & Liu, J. S. (2018). On parallelizable Markov chain Monte Carlo algorithms with waste-recycling. Statistics and Computing, 28(5), 1073-1081.\n\n* Schwedes, T., & Calderhead, B. (2021, March). Rao-Blackwellised parallel MCMC. In International Conference on Artificial Intelligence and Statistics (pp. 3448-3456). PMLR.\n\n* Schwedes, T. (2019). Parallel Markov chain quasi-Monte Carlo methods. PhD thesis.\n\nTypos et al.:\n\n- define $\mathbb{N}^*$\n- $n$ -> $N$ in Theorem 4 In Line 241, it is stated that \"An approach very similar to BR-SNIS can be taken also in [the particle Gibbs] context\". What does this refer to? Do you mean one could use a similar Rao--Blackwellisation or a similar strategy of analysis? Yes", " The authors describe and analyze an iterated importance sampling resampling scheme and a possible bias-reduction technique. The idea is interesting, in my opinion. The main weakness is that the paper is difficult to read in some parts.\nMoreover, some relationships and connections with other methods in the literature should be discussed. The paper contains very interesting material, in my opinion. I believe that the degree of novelty is mainly in the bias reduction idea. However, I also have some concerns. See below.\n\n- However, the relationships and connections with MCMC methods with multiple-candidate schemes such as the Ensemble MCMC algorithms in\n\nR. Neal, MCMC using ensembles of states for problems with fast and slow variables such as Gaussian process regression, arXiv:1101.0387, 2011.\n\nB. Calderhead, A general construction for parallelizing Metropolis-Hastings algorithms, Proceedings of the National Academy of Sciences of the United States of America (PNAS) 111 (49) (2014) 17408–17413.\n\nE. Bernton, S. Yang, Y. Chen, N. Shephard, J. S. Liu, Locally weighted Markov Chain Monte Carlo, arXiv:1506.08852 (2015) 1–14.\n\nshould be discussed. An interesting summary and review of these techniques is given in Section 4.4 of\n\nL. Martino, \"A Review of Multiple Try MCMC algorithms for Signal Processing\", Digital Signal Processing, Volume 75, Pages: 134-152, 2018.\n\nThese techniques are very similar, in my opinion, to the method you are describing, hence this is an important discussion for your work. The true main limitation of the proposed technique is mainly in its \"degree of novelty\", in the sense that similar ideas have already been proposed in the literature.
The main strengths of the paper are probably casting this problem in a framework which should allow practitioners to readily adapt existing SNIS implementations to make use of the proposed approach, and the rigorous theoretical treatment of the method, which provides some confidence in the performance of the approach.\n\nIn terms of weaknesses, there seem to me to be two:\n\n1. The claimed novelty seems to me to be overly broad.\n2. It is not immediately clear that one obtains the \"drastic\" improvements that are repeatedly claimed in a way which leads to any practical benefit.\n\nIn terms of novelty, I was surprised to see it suggested that the idea of recycling \"rejected\" samples in this setting was somehow an innovation, as it's an idea I've seen discussed regularly at conferences. Indeed, Section 4.6 of [Andrieu, Doucet and Holenstein. Particle Markov chain Monte Carlo methods. Journal of the Royal Statistical Society Series B, 72(3):269--342, 2010] or [Frenkel, D. (2006) Waste-recycling Monte Carlo. Lect. Notes Phys., 703, 127–138.] demonstrates that this is something which has been reasonably well known and in the literature for some time -- and with some formal justification.\n\nIn terms of improvement, there are two issues which the manuscript doesn't presently do justice to in my mind. These are:\n1. Is there really a reduction in variance at no cost; seemingly, the variance increases by more than the squared bias falls in all of the empirical results, and the theoretical results seem to provide a bound on the amount by which the MSE can increase rather than establishing, as claimed in the abstract, that there is no increase.\n2. Does the reduction in bias make meaningful practical differences? The last numerical example seems to come closest to showing that it leads to an improvement in a widely used figure of merit but it stops short of showing improved, say, predictive performance. Although it might seem obvious that reducing the bias would improve performance, it seems to me not to be so, largely because bias in SNIS vanishes at the same rate as the variance and hence typically makes a negligible contribution to MSE for even modest numbers of samples, and the MSE estimates shown for the proposed algorithm seem marginally worse than those for SNIS.\nThese are both important questions in assessing how relevant the work is to the NeurIPS community.\n 1. I've been unable to reconcile the claim in the abstract that the proposed method reduces bias without increasing variance with the numerical results shown in Figure 2. MSE = bias^2 + variance, and the proposed method seemingly increases the MSE relative to the basic SNIS method, which would be consistent with it increasing the variance by somewhat more than the reduction in (squared) bias. Is that the case? If so the abstract is a little misleading; if not, some explanation seems necessary.\n\n2. Does the reduction in bias lead to meaningful improvements in practical problems that the authors have seen?\n\n3. Is the work of Frenkel or Andrieu et al. on \"waste recycling\" relevant prior art from a methodological perspective, or is there a reason that it is not? The authors indicated that their work is of a theoretical/methodological nature and hence did not consider societal impact. 
I think that is reasonable.\n\n", " The iterated sampling importance resampling (i-SIR) algorithm is an iterative IS estimator in which, at each iteration, the pool of candidate particles consists of one resampled particle from the previous iteration and particles generated from an importance function. The paper suggests averaging estimators over the k iterations instead of taking an estimator at the last iteration, to reduce the bias of the estimator and make better use of the available computational resources. Theoretical properties are then studied and simulation studies are given in comparison to a naïve self-normalized IS. Most of the literature tends to focus on the importance function to make a better estimator. In that sense the study on i-SIR is interesting. Without too much difficulty, it is easy to see that averaging multiple independent IS estimates will reduce the bias compared to an estimate from a single iteration. Although the i-SIR is slightly different from independent ISs, the novelty in methodology seem to be not big.\n\nThe evidence supporting the efficient computation does not look sufficient and I feel the method needs to be fully investigated both theoretically and numerically in general. \n\na) The expectation error bound in Theorem 4 looks increasing with k (number of iterations). The burn-in k0 is likely to depend on various things like the deviation of the importance function from the target distribution and the number of samples per iteration. It sounds strange. \nb) How do you find the burn-in k0? \nc) In the experiment, k0=k-1. This means that an estimate from the last iteration is taken, and this goes against the claim that the new approach makes better use of computation by recycling all candidates.\nd) Given the computation budget, there will be some trade-off between the number of samples N and k (iterations). Some investigation would be good.\ne) A numerical comparison with other similar estimators (iterative IS estimators) would make the comparison fairer and more useful. For example, the Population Monte Carlo method: \nCappé, O., Guillin, A., Marin, J. M. and Robert, C. P. (2004) Population Monte Carlo, Journal of Computational and Graphical Statistics, 13(4), 907-929. \nf) For the simulation study with logistic regressions, what are the computing budget M and k? \n Please see Strengths and Weakness section. yes" ]
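The author responses above describe BR-SNIS as a wrapper over precomputed proposal samples and importance weights: split into blocks, conditional SIR moves, burn-in, and bootstrap averaging over random permutations. A minimal sketch of our reading of that procedure follows (referenced from the responses); the function name, argument names, and bookkeeping details are our own illustration under stated assumptions, not the authors' code:

```python
import numpy as np

def br_snis(fx, w, N, k0, n_boot, rng):
    """Bias-reduced SNIS wrapper, as sketched in the responses above:
    reuse M pre-drawn proposal samples (with integrand values `fx` and
    unnormalized weights `w`, M = k*(N-1)+1); no new sampling or
    weight evaluations are performed."""
    M = len(w)
    k = (M - 1) // (N - 1)                 # number of blocks of size N-1
    estimates = []
    for _ in range(n_boot):                # bootstrap: permute and repeat
        perm = rng.permutation(M)
        wp, fp = w[perm], fx[perm]
        state = 0                          # index of the retained sample
        vals = []
        for j in range(k):                 # one i-SIR pass over the blocks
            block = 1 + j * (N - 1) + np.arange(N - 1)
            pool = np.concatenate(([state], block))
            p = wp[pool] / wp[pool].sum()
            state = pool[rng.choice(N, p=p)]       # conditional SIR move
            if j >= k0:                    # drop the first k0 blocks (burn-in)
                vals.append(np.dot(p, fp[pool]))   # recycle all N candidates
        estimates.append(np.mean(vals))
    return float(np.mean(estimates))
```

In this reading, the only work beyond plain SNIS is the resampling bookkeeping; the M proposal draws and weight evaluations are exactly those the SNIS estimator already requires.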
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 3 ]
[ "CfUmSbW2Wlo", "DuvDaNSZ3xN", "CuKt6-Er8AtW", "AhxI7Sm8-eX", "GOaeoeTV_r", "nips_2022_HH_jBD2ObPq", "bzo3Fc9xll_", "nips_2022_HH_jBD2ObPq", "0rtg_rI1XL8", "0rtg_rI1XL8", "0rtg_rI1XL8", "-5y87WueO2I", "-5y87WueO2I", "e5urouJ0bep", "xhjej3dZayO", "xhjej3dZayO", "xhjej3dZayO", "xhjej3dZayO", "xhjej3dZayO", "nips_2022_HH_jBD2ObPq", "nips_2022_HH_jBD2ObPq", "nips_2022_HH_jBD2ObPq", "nips_2022_HH_jBD2ObPq" ]
nips_2022_xWvI9z37Xd
Where to Pay Attention in Sparse Training for Feature Selection?
A new line of research for feature selection based on neural networks has recently emerged. Despite its superiority to classical methods, it requires many training iterations to converge and detect the informative features. For datasets with a large number of samples or a very high dimensional feature space, the computational time becomes prohibitively long. In this paper, we present a new efficient unsupervised method for feature selection based on sparse autoencoders. In particular, we propose a new sparse training algorithm that optimizes a model's sparse topology during training to quickly pay attention to informative features. The attention-based adaptation of the sparse topology enables fast detection of informative features after a few training iterations. We performed extensive experiments on 10 datasets of different types, including image, speech, text, artificial, and biological. They cover a wide range of characteristics, such as low and high-dimensional feature spaces, as well as few and large training samples. Our proposed approach outperforms the state-of-the-art methods in terms of the selection of informative features while reducing training iterations and computational costs substantially. Moreover, the experiments show the robustness of our method in extremely noisy environments.
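The core mechanism in this abstract, adapting a sparse topology during training so that connections concentrate on informative input features, can be illustrated with a small drop-and-grow step. This is a hedged sketch informed by the abstract and the clarifications in the author responses further below (neuron importance mixing gradient magnitude with summed absolute weights, growth probability proportional to input-neuron importance); the function names, the exact scoring rule, and the growth loop are illustrative assumptions, not the paper's API:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention_drop_grow(W, mask, grad_W, lam=0.01, zeta=0.3):
    """One attention-guided topology update for a sparse input layer
    (illustration only). Input-neuron importance mixes the gradient
    magnitude with a lam-weighted sum of absolute weights; the weakest
    existing connections are dropped, and the same number are regrown
    on input neurons sampled in proportion to their importance, so the
    topology quickly concentrates on informative features."""
    importance = np.abs(grad_W).sum(axis=1) + lam * np.abs(W).sum(axis=1)
    active = np.argwhere(mask)                    # existing connections (i, j)
    strengths = np.abs(W[mask.astype(bool)])      # row-major, matches argwhere
    n_drop = int(zeta * len(active))
    drop = active[np.argsort(strengths)[:n_drop]] # weakest weights first
    mask[drop[:, 0], drop[:, 1]] = 0
    W[drop[:, 0], drop[:, 1]] = 0.0
    p = importance / importance.sum()             # attention over input neurons
    grown = 0
    while grown < n_drop:
        i = rng.choice(len(p), p=p)               # bias toward informative inputs
        j = rng.integers(W.shape[1])              # hidden unit chosen uniformly
        if not mask[i, j]:
            mask[i, j] = 1                        # new connection, weight zero
            grown += 1
    return W, mask
```

After training, a per-feature ranking for selection can then be read off the same importance scores, with the top-K features retained.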
Accept
After the rebuttal and discussion, all reviewers recommend acceptance of this paper to some degree. The paper has benefited from a careful review by reviewer U347 and additional experiments and clarifications performed by the authors in response to the reviewer's concerns. All reviewers noted that the paper is clearly written, the proposed method is simple and easily implemented, and that it appears to perform quite well.
train
[ "YeDk9PYcxBe", "XLVDXqVMhrM", "xo6xn505gK", "T67jspSf5xQ", "DgxsZR_4Go", "LafUzOFqcn0", "745wHHn4fFx", "XyxzqA7JvrO", "uzkDpDSpCCx", "KtXBKaPDhg", "wws3d2ibO6U", "B_byu8dozje", "zHQvPF8B0Kw", "xXIfanKOdea", "PyKvQ0rdeQf", "mRuf-f_3FGa" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you. We appreciate the reviewer’s time in checking our response and the constructive suggestions. We will illustrate the overlapping features on Madelon in the final version of the paper.", " Thanks for your reply,\n\n> We did follow QS in adding noise to the input features. Kindly find this information in Appendix A.1\n\nThanks for the clarification.\n\n> on some datasets, such unsupervised feature selection may result even in a better subset of features for the downstream classification task\n\nI don't agree that this is the \"expected\" behaviour, certainly not on underdetermined high-dimensional tasks (e.g. speech tasks, brain-computer interface), but the point is well taken that the fact that unsupervised methods beat supervised ones at picking features for classification is not necessarily aberrant, and not worth mentioning in the paper as a result.\n\n> We believe that our evaluation on the Madelon dataset could cover this case, where we have only 20 real features and the rest 480 features are purely random. High accuracy could reflect the feature selection algorithm is able to select the correct ones. \n\nIt could, and it presumably does, but it is worth checking how well the features actually do overlap with the correct ones.", " Most of my concerns were addressed by the authors' response. I raised the review score.", " The authors have responded to questions that you raised:\n- U347 requested a comparison to DUFS, pointed out that no experiments are done with noisy samples, claims that no tests are done on datasets having fewer samples than features, requested testing of other NN-based approaches, suggested that the number of selected features be a hyperparameter, requested 10 replications of each experiment instead of 5, requested a more detailed comparison to QS, flagged a typo in Table 2, asked if the other methods were reimplemented by the paper authors, requested numeric results on consistency and stability, asked about missing results in Table 2, suggested comparing training times for NN and non-NN methods, and asked about accuracy/sparsity tradeoffs\n- cbLE asked about applicability of the method beyond classification, why only input and output neurons were subject to the drop-and-grow process, and why WAST improved classification accuracy\n- y8xW asked how λ is selected, if the output is processed through a softmax, for a comparison to DropConnect, and suggested testing on something fresher than the spoken digit recognition task\n- XbZ9 claims that WAST does not appear to add noise, expresses concern that WAST is competitive with supervised methods, and asks for an evaluation (on synthetic data) of whether or not WAST selects the correct features \n\nAs we are near the end of the author-reviewer discussion period, I ask that you please read the authors' reply and respond promptly.\n\nThanks\n", " We thank all the reviewers for the positive reviews and constructive comments that help us to emphasize the contributions of our approach. We are encouraged to hear that the reviewers found the approach **novel** (Reviewers y8xW, XbZ9), the **gains are excellent/significant** (Reviewers cbLE, XbZ9), and the method is **superior across various domains** (Reviewers U347, cbLE). We appreciate that the reviewers acknowledge that the paper is well-presented. 
In response to the thoughtful comments, we have addressed them one by one in the individual responses and provided a summary below:\n\n* We added analysis on the effect of the sparsity level of WAST on performance.\n* We added analysis on the robustness of the studied methods to noisy samples.\n* We added comparisons to another recent supervised NN-based method and an unsupervised classical method. \n* We performed additional experiments on a popular music dataset.\n\nThanks again for all of your valuable suggestions. We updated the paper accordingly, with changes highlighted in green. We appreciate the reviewers' time to check our response. ", " We thank the reviewer for the positive and thoughtful reviews and for acknowledging the gains of our method. Please find the answers to the questions below.\n\n**Q1: “QS assigns some importance to the fact that they use denoising autoencoders (because adding noise is supposed to encourage the network to find the most important features), but WAST does not appear to add any noise. Do you have any comment on this? What does this tell us about the approach and why it works?”**\n\n**A1:** We did follow QS in adding noise to the input features. Kindly find this information in Appendix A.1 Lines 530-531.\n\n**Q2: “There is also something quite disturbing about the fact that this method is competitive with supervised methods, given that their objectives are so different. What does this tell us about the approach and why it works?”**\n\n**A2:** It is true that the joint distribution of data and labels can be such that supervised feature selection leads to much better downstream (classification) results. We can observe this in our experiments too, e.g. the PCMAC dataset. The same holds for other forms of dimensionality reduction / feature extraction and representation learning. However, it is not unreasonable to expect that the compact representation that an autoencoder learns [1] provides us with a good choice of informative features to pick such that they allow the classifier to distinguish different classes of the data. And on some datasets, such unsupervised feature selection may result even in a better subset of features for the downstream classification task. Such behavior was also observed [2] for supervised (e.g., LDA-based) vs. unsupervised (e.g. PCA-based) feature extraction. We hope this explanation clarifies that the results are not disturbing, but are in line with what we expect. \n\n[1] Bengio, Y.: Learning deep architectures for AI. Now Publishers Inc (2009)\n\n[2] Fukunaga, K. Introduction to Statistical Pattern Recognition (1990) \n\n**Other Points:**\n\n**P1: “....to ask whether it can also go farther than just picking out useful features to picking the correct ones”**\n\n**A1:** Thanks for the thoughtful question. We believe that our evaluation on the Madelon dataset could cover this case, where we have only 20 real features and the rest 480 features are purely random. We evaluate the accuracy using only 4% of the features (K=20 features). High accuracy could reflect the feature selection algorithm is able to select the correct ones. ", " We thank the reviewer for the positive and thoughtful reviews and for acknowledging the novelty of our method. 
Please find the answers to the questions below.\n\n**Q1: \"In equation (3), how did you select the \lambda hyperparameter to balance the two components?\"**\n\n**A1:** The search space of the hyperparameter \lambda is guided by analyzing the absolute value of the two components (the magnitude of the gradient and the sum of absolute weights) for a few iterations. For instance, for SMK, the average value of the magnitude of the gradients is approximately ~0.8, while the average sum of the absolute weights is ~0.1. In this case, we choose a small value of \lambda (i.e., 0.01). \n\n**Q2: ”In the network propagation, is the output processed with a softmax at the end?”**\n\n**A2:** We use a linear activation for the output layer. Kindly find all architectural details in Appendix A.1.\n\n**Q3: “What do you think on your work's difference with DropConnect? I know there definitely are some, but as a popular method, you may want to address it.”**\n\n**A3:** One fundamental difference is that DropConnect acts on dense neural networks, while DST methods act on sparse neural networks. DropConnect resembles (broadly speaking) Dropout at the connection level and aims to act as a regularizer for training a dense neural network. DST-based methods (including our proposed method, WAST) aim to co-optimize during training the values of the weights and the connectivity of sparse neural networks in order to find sparse optimal connectivity capable of good performance. Also, there is a difference at the algorithmic level. While DropConnect inactivates at random a different set of connections at every iteration during training, DST methods permanently remove or add connections during training based on various criteria. Nevertheless, the principle of adding and removing connections (even permanently) also gives the DST methods some innate regularization capabilities, and this is reflected in their competitive performance with dense neural network training. \n\n**Other points:**\n\n**P1: “...for spoken digit recognition task, we may want to replace it with a little bit more popular and up-to-date sets”**\n\n**A1:** Thank you for the suggestion. We performed additional experiments on a recent and popular music dataset, the Free Music Archive (FMA) [1,2]. FMA includes a collection of 106574 audio tracks with artist and album information, categorized into 161 genres. We focus on the genre classification task that categorizes the data into eight classes. The number of input pre-extracted features is 140. We evaluate the accuracy using K={25, 50, 75}. We use the same setting as for the other benchmarks, where WAST is trained for 10 epochs, while the others are trained for 100 epochs. As shown in the table below, consistent with our results on other datasets, WAST outperforms the other unsupervised NN-based methods in most cases. Please check Appendix D in the revised version of the paper for full details.\n\n| | **25** | **50** | **75** |\n|-----------------|------------|-------------|------------|\n| **AEFS** | 38.50±1.11 | 41.20±1.48 | 42.74±1.32 |\n| **CAE** | 37.06±1.17 | 39.00±0.84 | 39.90±1.32 |\n| **QS** | 41.50±2.29 | 44.65±1.10 | 45.57±0.37 |\n| **WAST (ours)** | 42.13±2.51 | 44.98±1.10 | 45.45±0.48 |\n\n[1] Defferrard, M., Benzi, K., Vandergheynst, P., & Bresson, X. (2016). FMA: A dataset for music analysis. arXiv preprint arXiv:1612.01840.\n\n[2] Defferrard, M., Mohanty, S. P., Carroll, S. F., & Salathé, M. (2018, April). Learning to recognize musical genre from audio: Challenge overview. 
In Companion Proceedings of The Web Conference 2018 (pp. 1921-1922).", " We thank the reviewer for the thoughtful feedback and for acknowledging the significance of our method. Please find the answers to the questions below.\n\n**Q1: Beyond the classical classification tasks, how would the proposed approach fare against the state-of-the-art in CV, NLP, and graph ML tasks?**\n\n**A1:** Thank you for this interesting question! We would like to clarify that we use classification accuracy as one of the commonly used techniques for evaluating how representative the selected subset of features is [1]. Studying the actual utility of feature selection in concrete applications goes beyond the scope of this paper. It can indeed vary depending on the primary goal of feature selection, including e.g., dimensionality reduction, removal of irrelevant and/or redundant features, knowledge discovery, interpretability, supervised (e.g., classification), unsupervised (e.g., clustering) or reinforcement learning, and on the application domain, including CV, NLP and graph analytics. Studying the utility of our approach for downstream classification tasks would indeed be the most straightforward, but not the most obvious in the case of CV, NLP, and graph ML - since those tasks would commonly require feature extraction step(s) to make the application of feature selection more meaningful [2]. If we can set up and run enough experiments in the coming period, we will add a statement about the potential of our approach.\n\n[1] Cai, J., Luo, J., Wang, S., Yang, S.: Feature selection in machine learning: A new perspective. Neurocomputing 300, 70–79 (2018) \n\n[2] Bolon-Canedo, V., & Remeseiro, B. (2020). Feature selection in image analysis: a survey. Artificial Intelligence Review, 53(4), 2905-2931.\n\n**Q2: “Why are only the input and output neurons subject to the drop and grow process but not the hidden neurons?”**\n\n**A2:** The drop and grow process is performed at the level of the connections. Thus, the distribution of the connections on the hidden neurons also changes during training. The growth of the connections on the input/output neurons is based on their contribution to the loss, to pay fast attention to informative input features, while the probability of growth in a hidden neuron is uniform, since we consider the hidden neurons equally important. We briefly experimented, during the development of the approach, with estimating the importance of each neuron based on the gradient of its connections with respect to the loss and observed that the performance is comparable to treating the hidden neurons as equally important. Yet, it would be interesting future work to design new criteria for estimating the importance of hidden neurons to increase the learning speed of the hidden representation and consequently detect the informative features even faster. \n\n**Q3: ”I can understand the improvements in efficiency but why does the classification accuracy improve with the use of WAST? Section 5.1 describes this additionally but doesn't explain the root causes.”**\n\n**A3:** The classification accuracy is a function of the chosen K features. The more informative the selected features are, the higher the classification accuracy is. 
WAST is able to select more informative features based on the attention mechanism used during training.\n", " **P5:The number of target selected features should be provided as a hyperparameter.**\n\n**A5:** WAST, like most feature selection methods (Table 2 from [1]), provides the ranking of the input features. The user can select the desired number of features (K) based on the application and the target task's memory/computational resource limitation. Determining the optimal K is an open and challenging problem [1]. From the algorithmic side of feature selection, our proposed approach is independent of the chosen K. Section 4.4 and Figure 2 provide an analysis of the effect of algorithmic dependence on K in increasing the computational costs, as in the CAE method. \n\n[1] Li, Jundong, Kewei Cheng, Suhang Wang, Fred Morstatter, Robert P. Trevino, Jiliang Tang, and Huan Liu. \"Feature selection: A data perspective.\" ACM computing surveys (CSUR) 50, no. 6 (2017): 1-45.\n\n", " **P2: “....This work doesn’t provide any experimental results on datasets with noisy samples”**\n\nA2: Thank you for the thoughtful comment. Following your suggestion, we evaluated the performance of unsupervised NN-based methods on noisy samples by adding a gaussian noise with zero mean and different values for the standard deviation {0.2, 0.4, 0.6, 0.8} on the training data. We performed this analysis on datasets of different types. Below, we report the accuracy using K=50. Interestingly, across different domains, sparse-based methods (QS and WAST) have strong robustness to the noise, especially at high noise levels. Moreover, WAST outperforms QS in all cases. Please check Appendix G in the revised version of the paper for full details.\n\n| | | **USPS** | | | | | **Isolet** | | |\n|----------|------------|------------|------------|------------|----------|------------|------------|------------|-------------|\n| | **0.2** | **0.4** | **0.6** | **0.8** | | **0.2** | **0.4** | **0.6** | **0.8** |\n| **AEFS** | 92.58±1.12 | 88.66±0.49 | 82.36±2.44 | 77.20±3.29 | **AEFS** | 74.9±4.01 | 62.62±1.98 | 52.64±4.97 | 46.52± 5.76 |\n| **CAE** | 91.76±0.79 | 86.68±0.70 | 81.08±2.11 | 78.02±1.07 | **CAE** | 68.38±5.54 | 63.82±4.49 | 52.3±3.72 | 45.22±3.84 |\n| **QS** | 95.70±0.71 | 95.32±0.41 | 94.40±0.43 | 92.74±0.53 | **QS** | 77.81±2.46 | 76.71±1.38 | 73.24±2.59 | 71.51±1.75 |\n| **WAST** | 96.51±0.30 | 95.66±0.34 | 94.48±0.30 | 92.90±0.25 | **WAST** | 84.35±1.59 | 83.79±1.28 | 81.33±0.69 | 78.00±1.49 |\n\n| | | **HAR** | | | | | **PCMAC** | | |\n|----------|------------|------------|-------------|------------|----------|-------------|------------|------------|------------|\n| | **0.2** | **0.4** | **0.6** | **0.8** | | **0.2** | **0.4** | **0.6** | **0.8** |\n| **AEFS** | 78.56±2.38 | 74.44±2.49 | 67.52±6.07 | 63.88±5.33 | **AEFS** | 56.62± 2.96 | 59.94±2.98 | 55.74±3.17 | 55.22±1.54 |\n| **CAE** | 79.74±2.05 | 73.34±4.98 | 65.98±3.84 | 57.42±7.11 | **CAE** | 51.26±2.02 | 51.38±1.76 | 50.12±1.46 | 50.02±1.17 |\n| **QS** | 87.39±0.70 | 83.86±0.29 | 80.68±0.91 | 77.77±1.60 | **QS** | 56.97±2.21 | 57.22±2.42 | 56.86±1.79 | 57.89±3.80 |\n| **WAST** | 88.33±0.60 | 85.47±1.27 | 82.14±0.93 | 78.93±1.54 | **WAST** | 57.58±3.28 | 58.71±4.66 | 58.82±4.18 | 58.20±3.13 |\n\n**P3:“The datasets that were chosen to validate the effectiveness of the method are not challenging since the number of samples in the datasets larger or near the same as the number of features.”**\n\n**A3:** Our experiments do cover the cases where the datasets have 
**few** samples and **very high** dimensional datasets. For instance:\n| **Dataset** | **#Sampels** | **# Features** |\n|-------------|--------------|----------------|\n| **SMK** | 187 | 19993 |\n| **GLA** | 180 | 49151 |\nWe kindly ask the reviewer to check Table 1 for the characteristics of the datasets.\n\n**P4:”...only a single NN-based method was chosen.“**\n\n**A4:** We followed the reviewer's suggestion and added another recent supervised NN-based method, STG [2]. Please find the results below using K=50, except Madelon, where K=20. WAST outperforms supervised-based methods in 5 cases, while LassoNet and STG are the best performer in 3 and 2 cases, respectively. We included the results for other values of K in Appendix B Tables 5 and 6. \n\n| | | **Madelon** | **USPS** | **COIL** | **MNIST** | **FMNIST** |\n|------------------|------------------|-------------|------------|------------|------------|------------|\n| **Unsupervised** | **WAST** | 83.27±0.63 | 96.69±0.27 | 99.58±0.14 | 95.27±0.26 | 82.16±0.57 |\n| **Supervised** | **LassoNet [1]** | 79.50±1.22 | 95.80±0.12 | 95.83±1.18 | 94.38±0.12 | 82.63±0.23 |\n| **Supervised** | **STG [2]** | 59.53±1.90 | 95.78±0.6 | 97.57±1.7 | 92.53±0.86 | 83.32±0.45 |\n| | | **Isolet** | **HAR** | **PCMAC** | **SMK** | **GLA** |\n| **Unsupervised** | **WAST** | 85.33±1.39 | 91.20±0.16 | 60.51±2.53 | 84.74±1.05 | 75.56±4.08 |\n| **Supervised** | **LassoNet [1]** | 85.70±0.38 | 93.93±0.15 | 86.53±1.25 | 77.37±3.57 | 76.67±2.22 |\n| **Supervised** | **STG [2]** | 89.38±1.19 | 91.75±0.59 | 56.04±1.9 | 81.05±5.37 | 71.11±2.83 |\n\n[1] Lemhadri, Ismael, Feng Ruan, and Rob Tibshirani. \"Lassonet: Neural networks with feature sparsity.\" In International Conference on Artificial Intelligence and Statistics, pp. 10-18. PMLR, 2021.\n\n[2] Yamada, Yutaro, Ofir Lindenbaum, Sahand Negahban, and Yuval Kluger. \"Feature selection using stochastic gates.\" In International Conference on Machine Learning, pp. 10648-10659. PMLR, 2020.\n\n\n", " **(2) Stability** \n\nFor clear readability of the stability during the early stages of training, we believe that it is better to represent it graphically as in Figure 5 instead of reporting 400 numbers (4 methods, 10 epochs, 10 datasets). As an example, we provide the stability, estimated by the standard deviation, at epoch 3 below:\n\n| **Method** | **Madelon** | **COIL** | **USPS** | **MNIST** | **FMNIST** | **Method** | **Isolet** | **HAR** | **PCMAC** | **SMK** | **GLA** |\n|------------|-------------|----------|----------|-----------|------------|------------|------------|---------|-----------|---------|---------|\n| **AEFS** | 3.87 | 2.3 | 0.72 | 0.74 | 0.8 | **AEFS** | 2.58 | 5.49 | 0.38 | 4.42 | 4.51 |\n| **CAE** | 5.16 | 0.68 | 0.34 | 0.96 | 1.15 | **CAE** | 2.18 | 1.95 | 2.14 | 2.88 | 4.52 |\n| **QS** | 1.32 | 1.90 | 0.67 | 0.80 | 2.54 | **QS** | 1.10 | 1.20 | 1.17 | 3.06 | 4.44 |\n| **WAST** | 1.06 | 0.56 | 0.13 | 0.19 | 0.60 | **WAST** | 1.72 | 0.76 | 4.71 | 2.68 | 3.76 |\n\n**Q5: “Why are some results missing from Table 2?”**\n\n**A5:** In these cases, prohibitive training time is needed either because of the high dimensionality of the features (SMK) or the high number of training samples (MNIST and FashionMNIST). Experiments that exceed 12 hours limit are not considered. 
We explain this in Lines 180-182 in the manuscript.\n\n**Q6: “Once you compare your method to classical unsupervised FS, I think it could be interesting to compare the total training time between non-NNet and NNet-based methods.”**\n\n**A6:** Motivated by the success of NN-based methods in outperforming non-NN-based ones, our goal is to address their limitation of being computationally expensive. Hence, we focus on evaluating the reduction of the computational costs obtained by our method compared to the other NN-based ones. As the focus of this paper is on the algorithmic side, we leave a truly sparse implementation for future work, where the running time that reflects the actual gain of sparsity could be reported. Kindly refer to Section 6 and Appendix I for a detailed discussion.\n\n**Q7: “Have you conducted experiments on the sparsity level hyperparameter? How does it affect the performance of your method (accuracy/sparsity tradeoff)?”**\n\n**A7:** Thank you for the thoughtful question. We analyzed the performance of WAST at 5 different sparsity levels {0.2, 0.4, 0.6, 0.8, 0.9}. Please find the accuracy reported in the table below. Interestingly, the performance of WAST is robust to the sparsity level of the model. Yet, we mostly care about the performance at high sparsity levels, as the goal is to achieve high performance at a low computational cost. Please check Appendix H in the revised version of the paper for full details.\n\n| Dataset/Sparsity | 0.2 | 0.4 | 0.6 | 0.8 | 0.9 |\n|------------------|------------|------------|------------|------------|--------------|\n| Madelon | 82.07±0.83 | 81.63±1.06 | 82.10±0.60 | 83.27±0.63 | 82.80±1.57 |\n| USPS | 95.37±0.26 | 96.08±0.36 | 96.80±0.14 | 96.69±0.27 | 96.28±0.11 |\n| HAR | 91.46±0.74 | 91.97±0.44 | 90.67±0.78 | 91.20±0.16 | 91.14±0.49 |\n\n\n**Other points:**\n\n**P1: “Some related recent works are missed, e.g. DUFS [1]...”**\n\n**A1:** Thanks for your comment. We conducted a comparison with DUFS using the official implementation of the paper. Below is a summary of the accuracy using K=50 on all datasets except Madelon, where K=20. We included the results for other values of K {25, 75, 100, 150, 200} in Appendix B Tables 5 and 6. \n| Method | Madelon | USPS | COIL | MNIST | FMNIST |\n|----------|------------|------------|------------|------------|------------|\n| DUFS [1] | 52.57±5.50 | 95.62±0.54 | 97.43±1.22 | 62.09±0.0 | 74.69±1.86 |\n| WAST | 83.27±0.63 | 96.69±0.27 | 99.58±0.14 | 95.27±0.26 | 82.16±0.57 |\n| **Method** | **Isolet** | **HAR** | **PCMAC** | **SMK** | **GLA** |\n| DUFS [1] | 85.62±2.53 | 86.90±1.06 | 57.79±3.18 | 81.05±3.07 | 70.83±1.39 |\n| WAST | 85.33±1.39 | 91.20±0.16 | 60.51±2.53 | 84.74±1.05 | 75.56±4.08 |\n\n[1] Lindenbaum, Ofir, et al. \"Differentiable unsupervised feature selection based on a gated Laplacian.\" Advances in Neural Information Processing Systems 34 (2021): 1530-1542.", " We thank the reviewer for the constructive feedback and positive comments. Below we answer all questions and provide the requested additional experiments and analyses.\n\n**Q1: “Could you please provide a more detailed comparison to the QS method? Although your method outperforms it, the improvement is marginal in 7 out of 10 datasets (in terms of accuracy).”**\n\n**A1:** Please find the differences between QS and WAST in various aspects below:\n\n1. 
**[Technical aspects]** (a) Unlike QS, which randomly explores different topologies during training, the core idea of WAST is to optimize the sparse topology faster by paying attention to the informative features during training. We provide a visualization example in Figure 6.\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(b) The criteria used for the drop and grow phases of the sparse topology adaptation are different in the two algorithms. In summary, QS uses weight magnitude-based dropping and random growth, while WAST exploits the neuron importance in both the drop and grow mechanisms. We provide this explanation in lines 80-83 in the related work section for QS and in Section 3 for WAST.\n\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;(c) The update schedule of the sparse topology is different. The effect of the update schedule is analyzed in Appendix E, with visualization in Appendix F.\n\n2. **[Accuracy]** WAST provides a rather substantial improvement over QS in terms of accuracy. With 90% fewer training epochs (Table 2), WAST outperforms QS in 9 out of 10 benchmarks, with improvements ranging from 0.4% to 10.71%, where 5 cases have improvements larger than 1.5%. \n \nUsing a small number of epochs (i.e., 10) for the two methods (Figure 5), WAST outperforms QS in all 10 benchmarks, with improvements ranging from 1.09% to 30.54%. The improvement per dataset is as follows: \n\n| Artificial | Image | | | | Speech | Time series | Text | Biological | |\n|------------|:-----:|:-----:|:-----:|:------:|--------|-------------|-------|------------|-------|\n| Madelon | USPS | COIL | MNIST | FMNIST | Isolet | HAR | PCMAC | SMK | GLA |\n| 30.54% | 1.14% | 2.43% | 1.26% | 2.87% | 1.09% | 4.59% | 2.42% | 1.59% | 4.45% |\n\n3. **[Computational costs]** WAST reduces the number of training epochs by 90%, thanks to the fast attention to informative features during training. Hence, the computational costs are reduced by 90% (Table 3). \n\n**Q2: “Is it a typo in Table 2 (AEFS has the maximal accuracy on FashionMNIST)?”**\n\n**A2:** Thank you for pointing this out. This was indeed a typo. The accuracy of AEFS on FashionMNIST is 80.88±0.71 (as correctly reported in Table 5). Hence, CAE has the best accuracy, as presented in Table 2. We have corrected this in the revised version.\n\n**Q3: “Have you re-implemented other methods by yourself or used them as-is? I didn’t find them in your supplementary.”**\n\n**A3:** We implemented QS. For the other methods, we used their official codes and the Scikit-Feature library. We state the details of the implementation used for every method in the main manuscript. We kindly ask the reviewer to check Section 4.3: Experimental Settings, “Implementation”, Lines 176-182. \n\n**Q4: “Line 230 (Consistency and Stability) - I think it is better to provide a numeric comparison to support your claim.”**\n\n**A4:** Thank you for the suggestion. Please find the numeric comparison below:\n\n**(1) Consistency**\nAs shown in the table, on different dataset types (artificial, image, speech, time series, text, biological) and different characteristics (high/low dimensional features, few-shot/large data), WAST consistently outperforms other methods. 
We included the numeric comparison in Appendix C.\n\n| Method | Madelon | USPS | COIL | MNIST | FMNIST |\n|:------:|:----------:|:----------:|:----------:|:----------:|:----------:|\n| AEFS | 53.50±1.17 | 95.46±0.39 | 96.18±2.15 | 93.83±0.63 | 79.45±3.26 |\n| CAE | 61.20±6.61 | 94.94±1.37 | 96.20±1.10 | 91.60±1.33 | 80.54±0.46 |\n| QS | 52.73±2.28 | 95.55±0.46 | 97.15±0.13 | 94.01±0.37 | 79.29±0.49 |\n| WAST | **83.27±0.63** | **96.69±0.27** | **99.58±0.14** | **95.27±0.26** | **82.16±0.57** |\n| **Method** | **Isolet** | **HAR** | **PCMAC** | **SMK** | **GLA** |\n| AEFS | 81.98±3.17 | 86.43±3.40 | 57.23±2.57 | 78.42±3.87 | 70.00±2.72 |\n| CAE | 78.30±4.57 | 89.02±1.54 | 61.16±3.45 | 81.04±4.52 | 73.34±3.32 |\n| QS | 84.24±1.56 | 86.61±2.83 | 58.09±2.10 | 83.15±2.10 | 71.11±2.83 |\n| WAST | **85.33±1.39** | **91.20±0.16** | **60.51±2.53** | **84.74±1.05** | **75.56±4.08** |\n", " The authors propose a method for unsupervised feature selection which is based on Dynamic Sparse Training applied to an autoencoder with a single hidden dimension and trained with a reconstruction loss. The number of selected features is given as a hyperparameter, along with the sparsity level of the NN. Fixing the number of epochs to 10 for the proposed method, while the other methods are trained for 100 epochs, produces comparable results in terms of accuracy to both supervised and unsupervised methods. Strengths:\n+ The method requires 10x fewer training epochs than other methods\n+ The experiments are conducted on datasets that come from different domains: image, speech, time series, text, bio\n+ Simple method, with clear description, easy to follow\n+ Single loss term without additional regularizations\n+ Outperforms in almost all datasets selected for the experiments\n\nWeaknesses:\n - Some related recent works are missed, e.g. DUFS [1] - the method outperforms both CAE and MCFS on real datasets\n - The authors claim “robustness of our method in extremely noisy environments and its effectiveness for datasets with very high dimensional feature space and a few training samples”, which is misleading. A noisy environment is defined both in terms of the number of noise features which are unrelated to dataset labels, and also in terms of noise in data samples. This work doesn’t provide any experimental results on datasets with noisy samples. \n- The datasets that were chosen to validate the effectiveness of the method are not challenging since the number of samples in the datasets is larger than or near the same as the number of features.\n- The supervised methods chosen for comparison are mostly classical non-NNet methods so they couldn’t serve as an upper bound (in terms of accuracy) for the proposed unsupervised method, while only a single NN-based method was chosen.\n- The number of target selected features should be provided as a hyperparameter. \n- The experiments are run only 5 times for each method and dataset; I think it should be at least 10 times.\n\n[1] Lindenbaum, Ofir, et al. \"Differentiable unsupervised feature selection based on a gated Laplacian.\" Advances in Neural Information Processing Systems 34 (2021): 1530-1542.\n * Could you please provide a more detailed comparison to the QS method? Although your method outperforms it, the improvement is marginal in 7 out of 10 datasets (in terms of accuracy).\n* Is it a typo in Table 2 (AEFS has the maximal accuracy on FashionMNIST)?\n* Have you re-implemented other methods by yourself or used them as-is? 
I didn’t find them in your supplementary.\n* Line 230 (Consistency and Stability) - I think it is better to provide a numeric comparison to support your claim. \n* Why are some results missing from Table 2?\n* Once you compare your method to classical unsupervised FS, I think it could be interesting to compare the total training time between non-NNet and NNet-based methods.\n* Have you conducted experiments on the sparsity level hyperparameter? How does it affect the performance of your method (accuracy/sparsity tradeoff)?\n The limitations are mentioned, as is the societal impact.", " The authors propose an extension to a recent auto-encoder-based sparse network approach for feature selection/extraction. Specifically, they extend the previously proposed drop and grow approach for inducing sparsity by leveraging the contribution of an input or output neuron to the reconstructed output. Experiments on 10 different datasets show that the proposed approach provides statistically significant gains in computational cost compared to the previous approach(es). Strengths\n1. Easy to read and follow paper.\n2. Lucid experimental details and crisp presentation.\n3. Fair coverage of competing classical approaches.\n4. Offer to make the code publicly available.\n\nGrowth opportunities\n1. The work is an incremental improvement over the previous approach. The proposed approach is intuitively motivated, with several judgement calls (on leveraging neuronal weights differently) that are not theoretically justified.\n2. The field is moving away from explicit feature selection. Hence, the impact of this work might be limited. 1. Beyond the classical classification tasks, how would the proposed approach fare against the state-of-the-art in CV, NLP, and graph ML tasks?\n2. Why are only the input and output neurons subject to the drop and grow process but not the hidden neurons?\n3. I can understand the improvements in efficiency, but why does the classification accuracy improve with the use of WAST? Section 5.1 describes this additionally but doesn't explain the root causes.\n Yes, the authors have addressed many of the limitations and societal impact. They might additionally wish to consider the questions above.", " This paper presents an effective unsupervised method for feature selection via auto-encoders, inspired by the attention mechanism. It adopts the idea of the attention framework and applies it to the weight-zeroing scheme of the autoencoder recursively, resulting in better training efficiency and promising performance on multiple tasks. ## Strengths\n1. This paper clearly addresses its difference from earlier state-of-the-art work and identifies its main contributions clearly.\n2. The presentation is very clear, with solid statements and references. At least I did not spot any typos or grammatical mistakes there.\n3. The novelty is clearly confirmed and proven in the paper, with rigorous mathematical proof.\n4. This paper clearly addresses the potential limitations and societal impact.\n\n## Weaknesses\n1. The dataset used for benchmarking is a bit outdated. For example, for the spoken digit recognition task, we may want to replace it with slightly more popular and up-to-date sets such as TIMIT and [yesno](https://www.openslr.org/1/). They are not necessarily harder than the one covered in the paper, but clearly have more state-of-the-art work built on them.\n2. 
Also, instead of spoken digit recognition, voice command recognition might be a better choice of task, as this paper places a lot of emphasis on real-time application and usage.\n3. The work lacks comparison with ad-hoc/hand-crafted features and task-specific features. For example, for spoken digit recognition, one additional experiment with common acoustic features such as [MFCC](https://en.wikipedia.org/wiki/Mel-frequency_cepstrum) may help. But this is trivial. 1. In Equation (3), how did you select the $\\lambda$ hyperparameter to balance the two components?\n\n2. In the network propagation, is the output processed with a softmax at the end?\n\n3. What do you think about your work's difference from [DropConnect](https://proceedings.mlr.press/v28/wan13.html)? I know there definitely are some differences, but as a popular method, you may want to address it. I think the authors have adequately addressed the limitations and potential societal impact of their work.", " This paper proposes a novel method for unsupervised feature selection called WAST, which is an adaptation of Quick Selection (Atashgahi et al., 2021) to give a more efficient search. The general idea of this and related methods is to pass the features through a simple autoencoder and to keep features which are more important. The main difference between WAST and QS is that, on each epoch, after dropping low-importance weights from the autoencoder, QS regrows random weights, whereas WAST adds weights in a way which takes into account the importance of the weights in the previous epoch. The paper demonstrates good performance (for K=50) on a large number of benchmarks, often beating supervised baselines. More importantly, the paper demonstrates much better performance than *comparable* benchmarks over many values of K, including QS. The paper also shows that WAST is slightly more efficient than QS (which is already much more efficient than some other related methods). \n\nThe benchmarks in question are evaluated on whether or not they retain accuracy. However, typical supervised variable selection is evaluated on whether it selects the *right* features, on artificial datasets for which the true features are known. While one could argue that the objectives of unsupervised feature selection are not the same, I think it is quite reasonable, given that the method presented here is competitive with supervised methods, to ask whether it can also go farther than just picking out useful features to picking the correct ones. \n\nThe technical advance is somewhat minor, given how close the algorithm is to QS, but on the other hand, the gains are excellent. QS assigns some importance to the fact that they use denoising autoencoders (because adding noise is supposed to encourage the network to find the most important features), but WAST does not appear to add any noise. Do you have any comment on this? What does this tell us about the approach and why it works?\n\nThere is also something quite disturbing about the fact that this method is competitive with supervised methods, given that their objectives are so different. What does this tell us about the approach and why it works? The discussion seems well thought out." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "XLVDXqVMhrM", "LafUzOFqcn0", "uzkDpDSpCCx", "nips_2022_xWvI9z37Xd", "nips_2022_xWvI9z37Xd", "mRuf-f_3FGa", "PyKvQ0rdeQf", "xXIfanKOdea", "KtXBKaPDhg", "wws3d2ibO6U", "B_byu8dozje", "zHQvPF8B0Kw", "nips_2022_xWvI9z37Xd", "nips_2022_xWvI9z37Xd", "nips_2022_xWvI9z37Xd", "nips_2022_xWvI9z37Xd" ]
nips_2022_DmT862YAieY
A Continuous Time Framework for Discrete Denoising Models
We provide the first complete continuous time framework for denoising diffusion models of discrete data. This is achieved by formulating the forward noising process and corresponding reverse time generative process as Continuous Time Markov Chains (CTMCs). The model can be efficiently trained using a continuous time version of the ELBO. We simulate the high dimensional CTMC using techniques developed in chemical physics and exploit our continuous time framework to derive high performance samplers that we show can outperform discrete time methods for discrete data. The continuous time treatment also enables us to derive a novel theoretical result bounding the error between the generated sample distribution and the true data distribution.
Accept
The work proposes a continuous-time generalization of diffusion models on a discrete space. The description uses continuous-time Markov chains (CTMCs), in parallel to the existing stochastic differential equation description for continuous spaces. The reverse CTMC, its modeling, and the ELBO objective are described. Some practical considerations and inspirations are also discussed, including avoiding an exponentially large model in high dimensions, efficient reverse (generation) process simulation, and a corrector technique that further exploits the model to improve simulation (generation) quality. An error bound on the learned data distribution is also presented that shows a mild dependency on data dimensionality. All the reviewers agree that this work presents the right way to describe the continuous-time version of diffusion models on discrete spaces, and that the techniques it inspires make a desirable contribution to the community. Some concerns are raised, including the still inferior performance compared to the continuous counterpart, and the independence among dimensions. The authors provide reasonable remarks on them. Hence, I recommend acceptance of this paper. One minor point: In Sec. 4.2, it would be clearer if the independence is specified both among the random variables $x^{1:D}$ in “output” and between each $x^d$ in “output” and $x^{1:D \\backslash d}$ in “input”. Conventionally independence refers to the former, in which case the size is only reduced to $S^D \\times D S^2$.
train
[ "uvKKjIqa1WR", "O_XreQ-OaS", "YW9tb7hXj52", "ksywP7FVRsJ", "5NiFJvebzl", "l2gRVMcCFlu", "luethdqMXyA", "7wtqXZh4lFd", "ZFrmIaGePVo", "AhqZsm6VcVG", "ndADo_yUjpu", "-bjFx7gK9Cn", "25eVeSCfLWV", "nPAWYINL9vL", "uuWHL-pBNi", "zPSY2_P87e9", "6Dqa_9V8Dr" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the answer. I will keep my recommendation for acceptance. ", " I thank the authors for their detailed response. All of my questions have been answered and I raised my score accordingly.", " Thanks for the clear response. I have raised my score.", " I would like to thank the authors for their detailed reply and acknowledge that I have read it. Overall, my impression of the paper hasn't changed and I still think that this is a high-quality work that should be accepted.", " > ***\"I think the tau-leaping idea is very cool and breaks the stigma that discrete diffusion models are inherently slow at inference time. Given the success of accelerating inference for continuous diffusion models, I think it would be very valuable to have more experiments/investigations/outlooks how to even further improve acceleration? Maybe adapt tau-leaping towards the special use case of sampling from discrete diffusion?''***\n\nThis is an exciting avenue for further research and we hope that our continuous time perspective can provide additional tools for speeding up discrete denoising models just as it has done in the continuous state space case. As for outlooks on the possible future avenues to be explored, we agree that adapting the tau-leaping algorithm for our specific use-case here is a good basis for more research. We hope that through showing the link with the chemical physics field, we can invite expertise in CTMC simulation to be applied to this problem. We admit that we are not experts in state of the art CTMC simulation techniques but some themes that could be explored are: using advanced tau selection methods to take larger jumps or using more exotic forms of CTMC integrator. In this work, we focused on augmenting our approach with corrector steps to improve sample quality and demonstrate discrete denoising models can become a viable model for discrete datasets. The corrector method also helps us demonstrate an immediate benefit of the continuous time interpretation as this is what allowed us to very simply derive our corrector rate.\n\n> ***Experiment 6.1: How many NFEs it the model using for the different values of $\\tau$? I expect $\\tau=0.004$ needing to use significantly more than say $\\\\tau=0.1$.''***\n\nSince our continuous time process is defined from $t=0$ to $t=1$, the number of denoising model evaluations is simply $1/\\\\tau$. So for $\\\\tau = 0.004$, there are $250$ NFEs and for $\\\\tau=0.1$ there are $10$ NFEs. We have made this clearer in an update to the paper, thanks!\n\n> ***\"Experiment 6.2: Are all baseline methods in Table 1 using 1000 NFEs? How well does the model perform when you use significantly less than 1000 NFEs, say 50?''***\n\nYes, all methods in Table 1 are using 1000 NFEs in the reverse sampling process for fair comparison. We investigate performance for low NFE values in Figure 2, performance does degrade for very low NFE numbers, extrapolating the curves, we would expect poor performance at 50 NFEs. This is expected for an initial diffusion framework, e.g. Lu et al. (2022) Figure 2 shows DDPM for low NFE values and shows a similar degradation in performance. We have shown that adding corrector steps can push the pareto optimal frontier down for higher NFE values, we hope that with further work the frontier could be pushed further down and to the left.\n\n> ***\"Experiment 6.3: What's the intuition that your method outperforms D3PM on CIFAR-10 by a large margin but only slightly (correct me if I am wrong) on the Monophonic Music dataset? 
Why do you compare to different versions of D3PM (absorbing/Gaussian vs uniform) on the two modalities?''***\n\nThe intuition is that the CIFAR-10 dataset is more complex than the music dataset because it is higher dimensional, has an increased state space cardinality, and likely has more complex structure/correlations within the datapoints themselves. Therefore, the benefits that our method brings for improving sample quality are less visible in the absolute changes in the metrics themselves. We also note that the scalings and reference points for the metrics between the images and the music dataset are very different, so the specific sizes of absolute changes in the metrics are less meaningful.\n\nWe use the most competitive form of D3PM for each experiment as our baseline for fair comparison; thank you for raising this point, we have made this clearer in the paper. On the image dataset, it was found by Austin et al. (2021) that the Gaussian corruption matrix performs the best, so we include that. On the music dataset, since the data is categorical, the Gaussian corruption matrix would perform poorly because it assumes ordinal structure. The appropriate corruption process here is the uniform corruption process, which we use for both D3PM and our method. We include the birth/death ablation for our method to demonstrate that corruption processes that assume ordinal structure are not suitable here.\n\n**References**\n\nAustin et al., \"Structured Denoising Diffusion Models in Discrete State-Spaces\", NeurIPS 2021\n\nSong et al., \"Score-Based Generative Modeling through Stochastic Differential Equations\", ICLR 2021\n\nFurusawa et al., \"Generative Probabilistic Image Colorization\", preprint 2021\n\nSong & Ermon, \"Generative Modeling by Estimating Gradients of the Data Distribution\", NeurIPS 2019\n\nLu et al., \"DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps\", preprint 2022", " > ***\"The method performs very well when many corrector steps are used but what's the intuition behind that? For continuous diffusion models it seems that one (or no) corrector step(s) is sufficient. This should have been discussed/investigated.''***\n\nThis question raises some very interesting points highlighting the fundamental differences between a corrector scheme in continuous space versus one in discrete space. Below we summarize and compare the corrector schemes in both cases and then provide some hypotheses explaining this phenomenon.\n\nIn a continuous state space, the predictor and corrector steps are similar apart from the extra $\\sqrt{2}$ scaling on the Gaussian noise addition versus the scale of the score. Specifically, looking at Algorithm 2 in Song et al. 
(2021):\n\\begin{equation}\n \\text{Predictor} \\quad x_i \\leftarrow x_{i+1} + (\\sigma_{i+1}^2 - \\sigma_i^2) s_\\theta(x_{i+1}, \\sigma_{i+1}) + \\sqrt{\\sigma_{i+1}^2 - \\sigma_i^2} z, \\quad z \\sim \\mathcal{N}(0, I)\n\\end{equation}\n\\begin{equation}\n \\text{Corrector} \\quad x_i \\leftarrow x_i + \\epsilon_i s_\\theta(x_i, \\sigma_i) + \\sqrt{2 \\epsilon_i} z, \\quad z \\sim \\mathcal{N}(0, I)\n\\end{equation}\n\nWhat we can take away from this is that the corrector step in continuous space tries to explore the $p_t(x)$ marginal by injecting extra Gaussian noise into the updates.\n\nIn discrete state space, for a predictor step we simulate according to the reverse rate matrix $\\hat{R}_t^\\theta$, and for a corrector step we simulate according to a `corrector rate matrix' $R^{c, \\theta}_t = \\hat{R}_t^\\theta + R_t$. Exact simulation involves sampling a categorical distribution based on a normalized version of the rate. Therefore, the corrector step in discrete state space can be thought of as exploring the $p_t(x)$ marginal through a slightly noisier version of the categorical distribution corresponding to $\\hat{R}_t^\\theta$. Another distinction is that in the continuous space case, the denoising model outputs a point estimate for $x_0$/score (output dim of $D$), whereas in discrete state spaces, the denoising model outputs a conditionally independent categorical distribution in each dimension (output dim of $S \\times D$). This allows the discrete denoising model some form of uncertainty expression in its $x_0$ prediction (albeit still with conditionally independent dimensions).\n\nIn light of these differences, our hypothesis for why the discrete data model benefits from more corrector steps is that the discrete data framework can better utilize them to explore the $p_t(x)$ marginal. Continuous state space corrector steps explore via adding Gaussian noise, with the score giving a pointwise directional bias, but it has been found that having too many corrector steps can make the resulting generations noisy; see e.g. Furusawa et al. (2021) Fig. 9, and Song & Ermon (2019), which can be seen as only having corrector steps and results in poorer image generation. On the other hand, in discrete state space, the noise introduced is based solely on sampling a categorical distribution that is largely defined through the denoising model and $\\hat{R}_t^\\theta$. The corrector steps also give a chance for multi-modal $x_0$ prediction information to mix between conditionally independent dimensions, potentially exploring separate modes of $p_t(x)$, which is an effect completely absent from continuous state space models. We investigate this by following our standard reverse tau-leaping sampling until $t=0.4$ and then applying very many corrector steps whilst holding time constant (see the new experiment in Section F.2 in the Appendix). We find that the $x_0$ predictions cycle through many different possibilities, corresponding to exploring modes of $p_t(x)$.\n\nIt would be an exciting piece of further work to fully investigate the similarities and differences between the predictor-corrector procedure in continuous and discrete state spaces now that we have shown the equivalence using our continuous time framework.", " We thank the reviewer for their engagement with our proposed methodology and helpful suggestions. We are grateful that our work is considered novel and presented in good detail. 
We answer questions and respond to comments below.\n\n> ***\"There are some interesting ablations missing, for example, on the choice of $R_b$, the impact of factorizing over dimensions, and the training approximation described in C.4. I think these experiments would be very valuable.''***\n\n> ***\"The training approximation in C.4 is interesting/important and I would have liked to see some discussion/intuition in the main paper and not just the appendix.''***\n\nThank you for highlighting questions regarding the training approximation; we agree that having more discussion on it, as well as experimental results, will improve the description of our objective. We will add the intuition as to its validity in the main text, as well as our new results, with the extra space of a camera-ready revision if accepted. Expanding on this here, the general idea behind the approximation is that naively evaluating the CT ELBO objective requires us to input both $x^{1:D}$ and $\\tilde{x}^{1:D}$ into the denoising network even though $\\tilde{x}^{1:D}$ differs from $x^{1:D}$ in only a single dimension. For example, in the CIFAR10 experiment, $\\tilde{x}^{1:D}$ is generated from $x^{1:D}$ by picking a single channel in a single pixel and perturbing its value slightly. Since the denoising network will treat these two inputs as largely identical, we approximate the output for $x^{1:D}$ with the value output for $\\tilde{x}^{1:D}$, cutting training time in half.\nWe perform an extra experiment to verify the approximation's validity on the music dataset. We train 6 separate models, 3 using the one forward pass objective and 3 using the two forward pass objective. We calculate the mean and standard deviation of the evaluation metrics across the 3 runs for each objective. We find that there is no significant difference in their performance after 1M training iterations.\n\n| | Hellinger Distance | Proportion of Outliers |\n|---|-------|---|\n| One forward pass | $0.378 \\pm 0.002$ | $0.112 \\pm 0.003$ |\n| Two forward pass | $0.379 \\pm 0.001$ | $0.114 \\pm 0.003$ |\n\n\nRegarding the choice of the rate matrix, $R_b$, we find that it is important to choose a noising process appropriate for the type of discrete data used. We ablate on $R_b$ for the one-hot monophonic music dataset because it is instructive to see that a rate assuming ordinal structure (the birth/death rate) performs worse than the more suitable uniform rate matrix. When modelling CIFAR10 as discrete data, Austin et al. (2021) have already performed an extensive investigation into a range of corruption matrices and found that uniform rate matrices perform poorly whilst Gaussian-type rate matrices perform best. Our early preliminary experiments found the same result, with uniform rate matrices giving an FID of $\\sim 30$, and so we decided to build on their best-performing corruption matrices, converted into continuous time versions.\n\n\nWith respect to the factorization over dimensions, we would like to note that all other diffusion models (continuous, e.g. DDPM and NCSN, or discrete, e.g. D3PM) factorize the forward process over dimensions. This is necessary for all models because otherwise extra dependencies would be introduced into the reverse process that would require some form of autoregressive model to parameterize. That would be a whole new model class. 
Furthermore, in our specific case, the factorization of the forward process is fundamental to making our method tractable at all on realistic problems, because otherwise we would need to store the full $\\mathbb{R}^{S^D \\times S^D}$ rate matrix, which is astronomically large for real datasets; e.g. in CIFAR10 it is of size $\\mathbb{R}^{256^{3072} \\times 256^{3072}}$.", " > ***\"More ablation study. This paper provides various techniques for continuous time diffusion models. For example, discrete time ELBO -> continuous time ELBO, discrete time sampling -> Tau-Leaping sampling. Which one makes the greatest contribution to the performance?''***\n\nIt is interesting to think about the separate parts of our proposed continuous time framework in isolation; however, they would not, by themselves, give an ablation study that provides the intuition we are looking for. That is because the CT framework requires both the objective and the sampling procedure to be novel. First examining the objective, the continuous time ELBO gives us a way to learn the generative reverse rate, but since the rate formulation is specific to continuous time, we would not be able to learn this rate directly using the discrete time ELBO. In theory, we could train a denoising model using the discrete time ELBO and then plug that into a reverse rate formulation; however, it would still be an inherently discrete time object, only trained to denoise at a finite selection of time points. Further, if we were to use it anyway in our continuous time sampling procedure, this would no longer match the generative procedure that the ELBO objective is built on, since we have been training the denoiser to maximize likelihood with a different generative process in mind. Overall, this would mean that training with the discrete time objective but sampling with tau-leaping wouldn't fully elucidate the benefit of tau-leaping, since we have introduced a number of other incompatibilities that would reduce performance, and we wouldn't gain the intuition we are looking for.\n\nConversely, the contrasting ablation study, where we train with the continuous time objective and sample using a discrete time sampler, wouldn't be possible due to the impracticality of calculating the matrix exponential of the learned reverse rate matrix, as we described previously in point (c).\n\nTaken together, our continuous time objective and continuous time sampler form a direct continuous time analogue of the discrete time method, and we find they perform similarly. What we find really helps performance is the addition of corrector steps that we derived using the continuous time framework. We ablate on these steps in the image and music experiments, finding they can improve performance significantly.\n\n> ***\"It seems directly applying the discrete time diffusion model to high dimensional data needs to model a complex $q_{0|k}(x_0^{1:D} | x_k^{1:D})$ ... ''***\n\nThis is a very good point in understanding the difference between discrete and continuous time models (for the full details of the following argument, please refer to Appendix G in the supplement).\n\nDiscrete time models also use a conditionally independent denoising model over $x_0$, i.e. $p_{0|k}^\\theta (x_0^d | x_k^{1:D})$. The reason they are able to do this is because the forward noising process in discrete time is also factorized across dimensions, just as in our work. 
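To spell the shared factorization out in symbols (a restatement of the statements above, added here purely for clarity and using the notation of this discussion): the forward noising kernel and the denoising model factorize as\n\\begin{equation}\n q_{k|0}(x_k^{1:D} | x_0^{1:D}) = \\prod_{d=1}^{D} q_{k|0}(x_k^d | x_0^d), \\qquad p_{0|k}^\\theta(x_0^{1:D} | x_k^{1:D}) = \\prod_{d=1}^{D} p_{0|k}^\\theta(x_0^d | x_k^{1:D}),\n\\end{equation}\nso each dimension is noised independently, while the denoising distribution over $x_0^{1:D}$ is conditionally independent across dimensions given the full noisy state $x_k^{1:D}$.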
However, an important subtlety is that to accurately model the full dimensional true reverse kernel in discrete time, $q_{k-1|k}(x_{k-1}^{1:D} | x_k^{1:D})$, the true denoising model that actually needs to be approximated is $q_{0|k}(x_0^d | x_{k-1}^{1:d-1}, x_k^{1:D})$, which would in theory require some form of autoregressive model to parameterize faithfully. This would be expensive, so an approximation is made in discrete time to model the true $q_{0|k}(x_0^d | x_{k-1}^{1:d-1}, x_k^{1:D})$ with $p_{0|k}^\\theta (x_0^d | x_k^{1:D})$. This approximation is more accurate with an increased number of steps in the noising process. Since we operate in the continuous time limit, effectively with an infinite number of steps in the noising process, we don't have to make this approximation, and the true denoising model to approximate is $q_{0|t}(x_0^d | x_t^{1:D})$, as you have mentioned.\n\n> ***\"When discussing improving sampling speed in Section 7, [1] can be cited, which is also an important work on speeding up diffusion models''***\n\nWe are happy to add this citation in an update to Section 7; thank you for making us aware of this work.\n\n\n**References**\n \nAustin et al., \"Structured Denoising Diffusion Models in Discrete State-Spaces\", NeurIPS 2021\n", " We thank the reviewer for their review and interesting questions. We appreciate the reviewer's praise of our methodological framework and the paper's clear readability. The reviewer raises some very good points regarding the correspondence between discrete time and continuous time objects. We have made these links clearer in an update to Section 4.1 and Appendix E. We comment specifically here in more detail on each of these correspondence questions as well as other comments raised in the review.\n\n> ***\"a) What is the corresponding transition rate matrix of these prior methods''***\n\nFor some simple time homogeneous cases, the relationship between the transition rate matrix and the corresponding discrete time kernel is quite clear. For example, if in discrete time we have the uniform kernel, $P = \\alpha \\mathbf{1}\\mathbf{1}^T + (1 - S \\alpha) I$, where $\\mathbf{1}$ is a vector of ones and $I$ is the identity, then the corresponding continuous time transition rate matrix is $R = \\beta \\mathbf{1} \\mathbf{1}^T - \\beta S I$, with $\\beta$ depending on what time discretization is used. Similarly, if in discrete time we have an absorbing state kernel, $P = \\alpha \\mathbf{1} \\mathbf{e}_\\ast^T + (1-\\alpha) I$, where $\\mathbf{e}_\\ast$ is the one-hot encoding of the absorbing state, then the corresponding transition rate matrix is $R = \\beta \\mathbf{1} \\mathbf{e}_\\ast^T - \\beta I$. For the Gaussian kernel defined by Austin et al. (2021), the relationship is less clear. However, if we remain in the time homogeneous case, then it could be numerically calculated by finding the matrix logarithm of the discrete time kernel.\n\nThings become much more difficult when we move to the time inhomogeneous case if we would like to find some continuous time transition rate matrix that results in discrete time kernels that interpolate between kernels at known time points. This would amount to solving the Forward Kolmogorov matrix differential equation $\\partial_t P_{t|s} = P_{t|s} R_t$ for $R_t$, given a desired interpolation scheme for the discrete time kernel from time $s$ to time $t$, $P_{t|s}$. 
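As a quick aside before continuing with the inhomogeneous case: the time homogeneous uniform correspondence above is easy to verify numerically. A minimal sketch, with illustrative values for $S$, $\\beta$ and $t$ and assuming numpy/scipy are available (this is not code from the paper):\n```python\nimport numpy as np\nfrom scipy.linalg import expm\n\nS, beta, t = 4, 0.7, 0.3                 # illustrative values only\nones = np.ones((S, S))\nR = beta * ones - beta * S * np.eye(S)   # uniform rate matrix; rows sum to zero\nP = expm(t * R)                          # CTMC transition kernel after time t\n\n# P recovers a uniform discrete time kernel alpha * 1 1^T + (1 - S * alpha) * I\nalpha = P[0, 1]                          # common off-diagonal entry\nassert np.allclose(P, alpha * ones + (1 - S * alpha) * np.eye(S))\nassert np.allclose(P.sum(axis=1), 1.0)   # rows of a valid kernel sum to one\n```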
Returning to the time inhomogeneous case: if you were to assume that the corresponding rate $R_t$ commutes for any two times $t$, $t'$ (which may not be known to be true if you only know $P_t$), then you could have $P_t = \\text{exp} \\left( \\int_0^t R_s ds \\right)$ (see Appendix E). However, this still doesn't get you all the way, as you would then need to analytically calculate the matrix logarithm of $P_t$ in order to choose an $R_s$ that integrates to the desired value, which would be very difficult for complex discrete time kernels, e.g. the Gaussian kernel.\n\n> ***\"(b) Will the reverse process of prior discrete time methods (between Line 70 and Line 71) converge to the continuous time reverse process proposed in this paper?''***\n\nAssuming we start with a discrete time noising process where the corresponding matrix embedding problem is solvable (i.e. we can find a rate matrix $R_t$ such that the transition probabilities associated to $R_t$ in each forward time step $t_k$ to $t_{k+1}$ correspond to the transition probabilities used by the discrete time method), then the discrete time process has the same distribution as a CTMC with rate matrix $R_t$ sampled at the discrete set of times $t_0, t_1, \\dots, t_K$. Consequently, learning the reverse process in discrete time will correspond to learning a sub-sampled version of the continuous time reverse process. In the limit where the step size $\\tau$ is taken to be very small, the reverse process of the discrete time methods will thus converge to the continuous time reverse process.\n\n> ***\"(c) Is the sampling method in the discrete time case applicable to the continuous time case? How about comparing Tau-Leaping and discrete time sampling?''***\n\nThis observation is very important in giving the intuition behind precisely why we need to use tau-leaping for sampling. Thank you for raising this point; including this discussion will improve the clarity of the tau-leaping section, which we plan to update for the camera-ready if accepted.\n\nOne could, in theory, use the discrete time sampling method and apply this to the continuous case. However, in order to calculate the transition probabilities of the reverse process between times $t_k$ and $t_{k-1}$, which the discrete method requires, one would have to either integrate the reverse process directly between these times or calculate the exponential of a matrix with dimensions $S^D \\times S^D$. Both of these are computationally infeasible for the datasets which we study; this difficulty arises because the discrete time transition probabilities no longer factorize over the dimensions of the state space, unlike our rate matrix. The tau-leaping algorithm allows us to sidestep these computational challenges by exploiting the special structure induced into the reverse rate matrix through the forward process factorization in continuous time.", " > ***\"It could make sense to add Fig. 4 from the Appendix into the main paper ...''***\n\n> ***\"It could be helpful for the reader to provide some more intuitions about the difference between the different transition rate matrices $R(\\tilde{x}, x)$ and $r(\\tilde{x} | x)$.''***\n\nThank you for these suggestions for how to improve the clarity of the introduction to the concept of CTMCs within the main text. We would very much like to expand the introductory section at the start of Section 3 to include more intuitions and this figure, given the extra space in a camera-ready revision if accepted. 
The link between $R(\\tilde{x}, x)$ and $r(x | \\tilde{x})$ is indeed an important stepping stone in going from understanding discrete time Markov chains to understanding continuous time Markov chains. $r_t(x | \\tilde{x})$ is the probability of transitioning from state $\\tilde{x}$, to state $x$, at time $t$ *given that we already know a transition occurs at time $t$*. To know if a transition occurs, we need to run holding time simulations using the values contained within the rate matrix $R_t$ that specify the speed at which transitions occur. $r_t(x | \\tilde{x})$ is a normalized version of $R_t$ because the transitions are more likely to occur between state pairs that have higher transition rates as specified by $R_t(\\tilde{x}, x)$.\n\n> ***\"While going over the appendix, in Proposition 7, $\\psi_t(\\tilde{x}^{1:D} | x_0^{1:D}) $ in line 765 does not seem to be properly defined anywhere. $\\psi_t(\\tilde{x}^{1:D} | x_0^{1:D})$ sort of appears in the proposition but is then never really used.''***\n\nWe are just using $\\psi_t(\\tilde{x}^{1:D} | x_0^{1:D})$ to represent the $\\tilde{x}^{1:D}$ marginal of the joint $q_{t|0}(x^{1:D} | x_0^{1:D}) r_t(\\tilde{x}^{1:D} | x^{1:D})$ distribution and is only needed to take the expectation. We have made this clearer in an update to the appendix, thanks!", " > ***\"The ELBO numbers in the CIFAR10 experiments (Table 1) are behind those of D3PM ... ''***\n\nThank you for the comment, we have updated the limitations section to discuss this comparison with D3PM further. We also note that some care should be taken when interpreting ELBO scores for diffusion models. The first thing to note is that the ELBO value is independent of the specific method used for sampling the reverse process e.g. with corrector steps or not. This changes the generated data distribution and so to assess the model's learnt distribution in terms of fidelity and diversity, we should really focus on the samples themselves as this includes the effect of the reverse process sampling method. We show a large selection of samples include Figure 5 in the Appendix and see there is good generational diversity. Also, our training objective is a proper bound on the data likelihood (i.e. not using a re-weighting scheme in the loss like DDPM) and so we have increased confidence in not dropping modes of $p_{data}$. The second thing to note is that we are reporting ELBO values in Table 1, so upper bounds on NLL, whereas the DDPM++ (Song et al. 2021) results mentioned are NLL values using the probability flow ODE therefore it is not a like for like comparison. Finally, we are not focusing on using our method as a likelihood model in this work and are more concerned with generating high quality samples from the data distribution.\n\n> ***\"It's nice that the paper derived these errors bound. However, how useful are they in practice? How loose are those bounds? Would it be insightful to compute these bounds for the experiments that are run?''***\n\nThe main purpose of our error bound is to show that for any error tolerance $\\epsilon$ it is possible to find a choice of $T$, $M$ and $\\tau$ such that the total variation error of our method is less than $\\epsilon$. 
Moreover, the required choices of $T$, $M$ and $\\tau$ suggested by the bound are computationally feasible; we require $T$ to be chosen on the order of $t_{\\textup{mix}} \\log (1/\\epsilon) \\log D$, $M$ to be on the order of $\\epsilon/T$ and $\\tau$ on the order of $\\epsilon/(T(|R|SDC_1)^2)$, none of which collapses or explodes exponentially in the dimension $D$. This result is reassuring as it provides proof that our method can generate samples of arbitrarily high fidelity at computational cost growing only moderately with the dimensionality of the problem.\n\nHowever, beyond this we do not expect the bound of Theorem 1 to be particularly tight in practice, especially with respect to constants. The bounds of Propositions 5 and 6 used as ingredients to the proof are generally pretty loose, so it is likely too much to hope that our theorem provides a bound that is tight in practice.\n\nA further complication when using or trying to test the tightness of the bound in practice is that the constants $C_1$, $C_2$ and $M$ are not directly accessible. Indeed, $C_1$ and $C_2$ are determined by $S$ and $R_t$ according to Assumptions 1 and 2 respectively, but are not easy to calculate explicitly, while $M$ is determined by the accuracy of our neural network approximation which is again not known in practice.\n\nOne may be able to investigate the asymptotics of the approximation error, for example by performing experiments to hold all variables constant except, say, varying the time $T$ that the CTMC is run for, or the accuracy $M$ of the neural network approximation, and seeing how the error changes. This may allow one to test whether the error behaves asymptotically as suggested by Theorem 1. We leave this to future work.", " We would like to thank the reviewer for the thorough engagement with the work and detailed review. We greatly appreciate the summary that our work is high-quality and presents a lot of new ideas and methodological novelty. We address the specific comments and questions from the review here.\n\n > ***\"Performance-wise, the method is still behind standard continuous state diffusion models on image data.''***\n\nA large combined research effort has gone into modelling images as continuous data in the diffusion field. A lot of engineering effort has gone into architecture search and hyperparameter selection for this specific task, achieving very impressive results. In this work, we have directly utilized the same neural network architectures and aimed for a similar forward noising process as this continuous space work to give us a reasonable starting point for experiments. In the global model configuration space *when using discrete data*, these choices are likely sub-optimal. Despite this, it is nice to see that the gap between previous discrete state image modelling work and the state of the art continuous image modelling techniques can be significantly reduced when using the tau-leaping with corrector steps method. We have updated the limitations section to discuss the gap between continuous and discrete modelling of images.\n\nMany real world discrete datasets cannot be modelled using the mapping to a continuous space that current generative image models use. Our method is generic in that it can be applied to all these discrete data problems whilst still retaining reasonable performance on images where the inductive bias of a continuous state space process may slightly favour other methods. 
Applying the same engineering research effort to discrete models of images should further improve performance, fine-tuning the forward noising process schedules and adapting the network architecture specifically for the discrete task.\n\n\n> ***\"It is also still very slow to sample from the learnt CTMC models. In that regard, it also seems $\\tau$-leaping only generates really strong results when combined with the corrector scheme or when doing very small steps $\\tau$.''***\n\nThe first iterations of diffusion models tend to be quite slow, e.g. DDPM (Ho et al., 2020) and NCSN (Song et al., 2021) use 1000 or 2000 steps to sample the reverse process. In this work, we have also presented a new model class and picked a standard setup for the diffusion process, achieving similar speeds to these initial continuous space models. A lot of research effort has gone into improving sampling speed in continuous state spaces, largely helped by the additional insight into the reverse process gained through the continuous time / SDE framing. These are worthy contributions in their own right, and we hope that our new framework can catalyse research in this direction for discrete state spaces too. For example, we hope that our link to chemical physics through tau-leaping will encourage experts in that field to join the research effort and provide additional insights into how we can make our sampling approaches more efficient in further work.\n\n> ***\"It would be interesting to consider more experiments on other data. The recent paper [1] tackles text generation, a very natural discrete state task. I would be very curious how the model performs here.''***\n\nText is indeed a very interesting application of discrete generative models; however, we do not have access to the resources required to appropriately compare to prior approaches, since text models and datasets are larger than image/music models. In our work, we aimed to cover a variety of data modalities (images and monophonic music) that don't exceed our available resources and can still appropriately verify the method's performance.\n\n**References**\n\nHo et al., \"Denoising Diffusion Probabilistic Models\", NeurIPS 2020\n\nSong et al., \"Score-Based Generative Modeling through Stochastic Differential Equations\", ICLR 2021\n", " We would like to thank the reviewer for their review and positive comments. We are pleased to hear that the reviewer appreciates our characterization of the CTMC and our derivation of the predictor-corrector sampler in discrete spaces. We address the suggestions and questions below.\n\n> ***\"The only suggestion I have is to improve the readability of section 4.3 by first explaining the canonical way of sampling from a discrete-state CTMC using competing exponential clock''***\n\nThank you for highlighting this; we agree that it would improve readability to first introduce standard CTMC simulation. Unfortunately, we had to cut this introduction in the submitted draft as we did not have enough space in the main text; we hope to be able to add this back in with the extra space in a camera-ready revision if accepted.\n\nYou are correct that the competing exponential clock method is indeed Gillespie's algorithm. The comparison would also be helpful in understanding why tau-leaping is useful here: we do not need to consider each transition individually, which would be necessary for Gillespie's algorithm. 
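To give a flavour of the difference, below is a minimal illustrative sketch of a single tau-leaping step for a CTMC whose rate matrix factorizes over dimensions. The interface is hypothetical (a `rates[d, s]` array holding the rate of jumping to state `s` in dimension `d`, with the current-state entry set to zero); it is a simplified sketch rather than our exact implementation:\n```python\nimport numpy as np\n\ndef tau_leap_step(x, rates, tau, rng):\n    # x     : (D,) current discrete state, entries in {0, ..., S-1}\n    # rates : (D, S) array; rates[d, s] is the rate of jumping from x[d] to\n    #         state s in dimension d, with rates[d, x[d]] set to zero\n    # tau   : leap size; rates are held fixed over the window [t, t + tau]\n    D, S = rates.shape\n    # Tau-leaping approximation: jump counts over the window are drawn all at\n    # once as independent Poisson variables with mean tau * rate.\n    n_jumps = rng.poisson(tau * rates)  # (D, S) non-negative integers\n    x_new = x.copy()\n    for d in range(D):\n        total = n_jumps[d].sum()\n        if total == 1:\n            x_new[d] = int(np.argmax(n_jumps[d]))\n        elif total > 1:\n            # Several jumps landed in one dimension within one leap; a simple\n            # approximate resolution is to pick one target at random,\n            # weighted by the jump counts.\n            x_new[d] = int(rng.choice(S, p=n_jumps[d] / total))\n    return x_new\n```\nIn contrast to Gillespie's algorithm, which simulates one transition at a time, all jump counts over the window are drawn at once, so the cost per step does not grow with the number of individual transitions.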
Given the extra space, we agree a full algorithm box would also be very helpful for tau-leaping sampling, given that many people will be unfamiliar with it.\n\n> ***\"Could the authors comment on possible reasons why discrete state modeling is still outperformed by their continuous counterpart?''***\n\nWe would like to refer the reviewer also to our answer to reviewer QzER. Overall, we believe there are multiple reasons for this. The first is that a large engineering effort has gone into optimizing diffusion models for continuous data. We reuse the same network architectures here, so there shouldn't be a difference in flexibility of generative parameterization or computation cost, but they may be less optimized for the discrete data task. Secondly, the continuous state space inductive bias may slightly help model these data types, which are very conducive to mapping from discrete space to the real line. We provide a generic method here for any discrete data type, not just an image model; despite this, we are still close to becoming a viable alternative on images with our initial choices of forward process and network architecture. We have updated our limitations section to discuss this gap between the modelling of images as discrete or continuous data.", " This paper studies discrete state denoising diffusion-based generative models, where the data to be modelled is discrete, categorical, etc. In these cases, the standard diffusion model approach, which assumes that the data is continuous and can be perturbed in a smooth and continuous manner, breaks down. Previous discrete state diffusion models relied on frameworks that used discrete, step-wise perturbations, that is, in each step, some states are flipped to other states. This paper, in contrast, proposes a continuous-time framework for discrete state diffusion models, leveraging *Continuous Time Markov Chains* (CTMCs). Intuitively, we can think of the approach as a diffusion that runs in continuous time, and at certain times states flip to other states. How often that happens is described by transition rates (rate matrices $R_t(\\tilde{x},x')$). This seems like a very elegant and appropriate formalism for discrete state diffusion models. The paper shows how, using Bayes' rule, a corresponding reverse generative CTMC can be derived. This generative CTMC isn't analytically tractable, but the paper derives an objective, a continuous-time ELBO, that can be used to learn an approximate generative CTMC. To efficiently simulate the learnt generative CTMC, the paper then further suggests using $\\tau$-leaping, a technique taken from the chemical physics literature. While $\\tau$-leaping makes certain approximations, as a side result the paper also derives a corrector scheme to essentially clean up errors made by $\\tau$-leaping. Finally, the paper also derives some error bounds between the generated and ground truth distribution.\n\nExperimentally, the paper analyses the approach on 3 generative tasks (a toy dataset, CIFAR10 image generation, symbolic music generation). The results are favourable compared to previous discrete-state diffusion models and provide insights into the different proposed components, that is, $\\tau$-leaping and its approximations as well as the corrector scheme. **Strengths:**\n- *Novelty and Originality*: As mentioned in the summary, the paper proposes, derives, or suggests multiple technical innovations for discrete state diffusion models. 
This includes the general CTMC framework, the ELBO loss for training, the $\tau$-leaping sampling, and the error bounds. Some of the derivations are a bit related to similar derivations from regular continuous-time diffusion models, but overall I think the paper presents a lot of methodological novelty and is very original in that regard.\n- *Clarity and Presentation*: The paper is overall well-written and nicely presented. It is a mathematically very dense paper and naturally requires very careful reading, but considering that, it is relatively smooth to read. The authors do a good job in providing a lot of details and additional background in the appendix to help with that.\n- *Significance and Impact:* Diffusion models have become a very popular and promising class of generative models, and discrete state diffusion models are also used more frequently. Considering that, I think this is a significant contribution, since the paper shows a fundamentally different, potentially better way to set up discrete state diffusion models. I believe the method will find further usage and there will be follow-up papers building on this paper and further improving the method.\n\n**Weaknesses:**\n\nMethodologically, I have no major concerns at all. However, the experimental results, while insightful, are overall not particularly impressive.\n- Performance-wise, the method is still behind standard continuous state diffusion models on image data.\n- It is also still very slow to sample from the learnt CTMC models. In that regard, it also seems $\tau$-leaping only generates really strong results when combined with the corrector scheme or when doing very small steps $\tau$.\n- It would be interesting to consider more experiments on other data. The recent paper [1] tackles text generation, a very natural discrete state task. I would be very curious how the model performs here.\n- The ELBO numbers in the CIFAR10 experiments (Table 1) are behind those of D3PM and also generally quite a bit worse than more modern diffusion models such as DDPM++ [2], which also leverage a continuous time framework. This might imply that the models have reduced generation diversity, as evidently some validation data has somewhat low probability under the learnt distribution.\n\n**Conclusions and Summary:**\nOverall, I think this is a high-quality paper that should be accepted. It presents a lot of new ideas and a lot of methodological novelty and demonstrates the advantages of the proposed method. The CTMC approach to discrete state diffusion models seems overall like a very sensible idea, and maybe the way to go in the future for this model class. On the other hand, the experiments seem appropriate to test the method, but the results aren't overly impressive when compared to the broader literature.\n\n[1] Austin et al., \"Structured Denoising Diffusion Models in Discrete State-Spaces\", NeurIPS, 2021\n\n[2] Song et al., \"Score-Based Generative Modeling through Stochastic Differential Equations\", ICLR, 2021 I also have a few minor questions and comments:\n- It's nice that the paper derived these error bounds. However, how useful are they in practice? How loose are those bounds? Would it be insightful to compute these bounds for the experiments that are run?\n- It could make sense to add Fig. 4 from the Appendix into the main paper. 
The main paper doesn't have a large or nice pipeline figure, and this figure is actually quite insightful and helpful for the non-expert reader to get a very quick impression about what is happening in CTMCs.\n- It could be helpful for the reader to provide some more intuitions about the difference between the different transition rate matrices $R(\tilde{x},x)$ and $r(\tilde{x}|x)$.\n- While going over the appendix, in Proposition 7, $\psi_t(\tilde{\mathbf{x}}^{1:D}|\mathbf{x}_0^{1:D})$ in line 765 does not seem to be properly defined anywhere. $\psi_t(\tilde{\mathbf{x}}^{1:D}|\mathbf{x}_0^{1:D})$ sort of appears in the proposition but is then never really used. Some limitations have been briefly mentioned at the end of the paper (slow sampling). Ethical considerations and potential negative societal impact are discussed in some detail in the appendix. I think these discussions are satisfactory and appropriate.", " This paper proposed to extend the continuous-time diffusion/score-based modeling idea (Song et al., 2021) to discrete state space. In discrete state space, there are no SDEs, and therefore the forward process needs to be replaced by a discrete-state CTMC, which is characterized by the infinitesimal generator (transition rate matrix). Effectively parameterizing and optimizing such models is nontrivial, and the authors introduce several innovations to tackle these problems, including a continuous time lower bound for discrete state space, computable representations of the rate matrix, and a way to trade off sample speed and quality. Experiment results show it outperforms prior discrete diffusion models despite still lagging behind continuous diffusion models. This is a nice paper and I do not have much to complain about. Particular strong points are\n* Principled characterization of discrete CTMC via the generator (transition rate) matrix\n* A well-thought way to represent the exponentially large rate matrix that is both flexible and computationally cheap. \n* The observation that the sum of forward and backward generator has the marginal distribution at that time as stationary (Proposition 4), which enables the adoption of the predictor-corrector sampling proved effective by Song et al. (2021) in continuous diffusions. \n\nThe only suggestion I have is to improve the readability of section 4.3 by first explaining the canonical way of sampling from a discrete-state CTMC using competing exponential clocks (I haven't checked but I suspect this corresponds to Gillespie's algorithm). The tau-leaping idea could be explained in more detail in the appendix, and an algorithm box in the main text would definitely help. * Could the authors comment on possible reasons why discrete state modeling is still outperformed by their continuous counterpart? Also related to the limitation question below. The authors discussed some limitations but I am additionally interested in the above question --- what limits the model to perform as well as continuous state diffusions on discrete image data? Is it because of the flexibility of generator parameterization? Or the tradeoff caused by computational cost?", " This paper studies diffusion models with discrete states. This paper considers continuous time by introducing the transition rate matrix. A relationship between the denoiser q(x0|xt) and the reverse transition rate matrix is built, and the authors use a denoising model p(x0|xt) to approximate q(x0|xt). The authors further propose a continuous time ELBO, which can be used to train p(x0|xt). 
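Schematically, the key relation is the standard CTMC time-reversal identity (written here in my own notation, not necessarily the paper's):

$$
\hat{R}_t(x,\tilde{x}) \;=\; R_t(\tilde{x},x)\,\frac{q_t(\tilde{x})}{q_t(x)}
\;=\; R_t(\tilde{x},x)\,\sum_{x_0}\frac{q_{t|0}(\tilde{x}\mid x_0)}{q_{t|0}(x\mid x_0)}\,q_{0|t}(x_0\mid x),
$$

so approximating $q_{0|t}$ with a learned denoiser $p(x_0\mid x_t)$ yields a tractable reverse rate.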
The authors also apply Tau-Leaping for sampling from the diffusion model, which can be faster than exact simulation. Experiments show the effectiveness of this method.\n Strengths:\n1. This paper develops a continuous time framework for discrete data. This framework includes optimum analysis, training, sampling and error analysis, which are nice. This paper also provides sufficient mathematical background to understand this framework, which makes this work more readable.\n\nWeakness:\n1. **Relationship to prior works can be discussed more.** Prior to this work, there are various works on discrete state diffusion models, e.g., as mentioned by the authors, the uniform kernel and the absorbing kernel. The relationship to these works can be discussed more, for example from these aspects: (a) What is the corresponding transition rate matrix of these prior methods; (b) Will the reverse process of prior discrete time methods (between Line 70 and Line 71) converge to the continuous time reverse process proposed in this paper? (c) Is the sampling method in the discrete time case applicable to the continuous time case? How about comparing Tau-Leaping and discrete time sampling?\n\n2. **More ablation study.** This paper provides various techniques for continuous time diffusion models. For example, discrete time ELBO -> continuous time ELBO, discrete time sampling -> Tau-Leaping sampling. Which one makes the greatest contribution to the performance?\n\n---------------------\n\nWhile there is still a (relatively small) performance gap between this work and continuous diffusion models, I think it is an important work due to its technical contributions. I'd like to further raise my score if my concerns are addressed. 1. It seems directly applying the discrete time diffusion model to high dimensional data needs to model a complex $q_{0|k}(x_0^{1:D}|x_k^{1:D})$, according to $q(x_{k-1}^{1:D}|x_k^{1:D}) = E_{q(x_0^{1:D}|x_k^{1:D})} q(x_{k-1}^{1:D}|x_k^{1:D}, x_0^{1:D})$. However, according to Proposition 3, this work only needs to model a one-dimensional $q_{0|t}(x_0^d|x_t^{1:D})$. What is the intrinsic reason for this difference? This is important to further understand this work.\n\n2. When discussing improving sampling speed in Section 7, [1] can be cited, which is also an important work on speeding up diffusion models.\n\n[1] Bao et al., Analytic-DPM: an Analytic Estimate of the Optimal Reverse Variance in Diffusion Probabilistic Models. The authors adequately addressed the limitations and potential negative societal impact of their work.", " The paper proposes a continuous-time (CT) diffusion model for discrete data. In contrast to discrete-time (DT) diffusion models for discrete data, the CT diffusion model replaces the discrete time forward kernel with a transition rate matrix. It is then shown that there exists a reverse transition matrix which can be used for generation. A CT ELBO to learn said reverse transition matrix is derived. The authors describe different choices of transition rate matrices that can be used. For high-dimensional data, the memory requirement for the transition matrices becomes large, and therefore the authors discuss a factorization technique. The paper proposes to sample from the CT diffusion model via tau-leaping, a well-known technique in chemical physics. The CT framework allows for bounds on the discrepancy between the true data distribution and the sampling distribution induced by tau-leaping. 
The authors test their method on various experiments, outperforming existing DT diffusion models on discrete data. Strengths:\n* The proposed method is novel.\n* Using tau-leaping to reduce the NFEs needed for generation is clever.\n* The paper is very detailed and the appendix is complete (as far as I can tell)\n* I like that the paper shows experiments over two modalities. (On that note, I would have been really interested to also see how the method performs on text data. This is just a personal preference and won't influence my review/rating.)\n\nWeaknesses:\n* There are some interesting ablations missing, for example, on the choice of $R_b$, the impact of factorizing over dimensions, and the training approximation described in C.4. I think these experiments would be very valuable.\n* Some details/explanations in the experiment section are missing. See Qs below.\n* The method performs very well when many corrector steps are used, but what's the intuition behind that? For continuous diffusion models it seems that one (or no) corrector step(s) is sufficient. This should have been discussed/investigated.\n\nSuggestions:\n* The training approximation in C.4 is interesting/important and I would have liked to see some discussion/intuition in the main paper and not just the appendix.\n* I think the tau-leaping idea is very cool and breaks the stigma that discrete diffusion models are inherently slow at inference time. Given the success of accelerating inference for continuous diffusion models, I think it would be very valuable to have more experiments/investigations/outlooks on how to further improve acceleration. Maybe adapt tau-leaping towards the special use case of sampling from discrete diffusion? \n\nAll in all, I think the paper proposes a very interesting method (as well as very good results). I intend to raise my score if weaknesses/questions are sufficiently addressed. * Experiment 6.1: How many NFEs is the model using for the different values of $\tau$? I expect $\tau=0.004$ to need significantly more than, say, $\tau=0.1$.\n* Experiment 6.2: Are all baseline methods in Table 1 using 1000 NFEs? How well does the model perform when you use significantly fewer than 1000 NFEs, say 50?\n* Experiment 6.3: What's the intuition that your method outperforms D3PM on CIFAR-10 by a large margin but only slightly (correct me if I am wrong) on the Monophonic Music dataset? Why do you compare to different versions of D3PM (absorbing/gaussian vs uniform) on the two modalities? N/A" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "25eVeSCfLWV", "5NiFJvebzl", "7wtqXZh4lFd", "AhqZsm6VcVG", "l2gRVMcCFlu", "luethdqMXyA", "6Dqa_9V8Dr", "ZFrmIaGePVo", "zPSY2_P87e9", "ndADo_yUjpu", "-bjFx7gK9Cn", "nPAWYINL9vL", "uuWHL-pBNi", "nips_2022_DmT862YAieY", "nips_2022_DmT862YAieY", "nips_2022_DmT862YAieY", "nips_2022_DmT862YAieY" ]
nips_2022_XYDXL9_2P4
AD-DROP: Attribution-Driven Dropout for Robust Language Model Fine-Tuning
Fine-tuning large pre-trained language models on downstream tasks is apt to suffer from overfitting when limited training data is available. While dropout proves to be an effective antidote by randomly dropping a proportion of units, existing research has not examined its effect on the self-attention mechanism. In this paper, we investigate this problem through self-attention attribution and find that dropping attention positions with low attribution scores can accelerate training and increase the risk of overfitting. Motivated by this observation, we propose Attribution-Driven Dropout (AD-DROP), which randomly discards some high-attribution positions to encourage the model to make predictions by relying more on low-attribution positions to reduce overfitting. We also develop a cross-tuning strategy to alternate fine-tuning and AD-DROP to avoid dropping high-attribution positions excessively. Extensive experiments on various benchmarks show that AD-DROP yields consistent improvements over baselines. Analysis further confirms that AD-DROP serves as a strategic regularizer to prevent overfitting during fine-tuning.
Accept
The paper proposes a method, AD-DROP, that drops attention weights in a network to alleviate overfitting. It randomly samples a set of token positions with respect to attribution scores calculated in a first pass. The authors provide a variety of experiments on multiple tasks (SNLI, NER, MT, etc.) showing effectiveness compared to other methods. The method is slower since it needs a separate pass to calculate attention attribution.
test
[ "xwyLjAWMaa", "okScoyGHsQW", "zEJHTnRVSdD", "XrG9IXWr-2", "gDRcp9i4NrB", "I0k1uxLJVub", "OPQvox0oKzf", "YxpMx8-ni4_", "b8OPYeIVE_", "NHWgoimpv_", "9Idjo23DzV", "pfJXd9jP-GU", "4EEOma_0WoM", "J4mKzzAs1M", "Z6bUS5ZU5LC" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the question. We conduct the MT experiments following the official colab (https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/translation.ipynb) with the pretrained OPUS-MT model, and report the BLEU results on the **test set** after five epochs. We found that the BLEU scores of OPUS-MT on EN-RO in the leaderboard (https://paperswithcode.com/sota/sequence-to-sequence-language-modeling-on-1), which are reported on the **validation set**, are consistently around $28$. \n\nWe also realize that the Transformer-based SOTA methods, such as DeLighT, achieve around $34$ in terms of BLEU on the WMT2016 EN-RO task (as shown in https://paperswithcode.com/sota/machine-translation-on-wmt2016-english-1). Thank you for pointing this out, and we will supplement additional results of AD-DROP on DeLighT as well in the revised paper.", " Thank you for the responses and the additional experiment. The responses have addressed my previous concerns. A follow-up question for the MT experiment is what the experimental setup is. I am asking this because the En-Ro results look pretty off to me. For example, a Transformer base model trained on WMT16 En-Ro should give a BLEU score of around $33.xx$.", " Dear Reviewer PC5L, \n\nThank you for your valuable suggestions and constructive feedback. We have added the response to your comments. It would be really appreciated if you could let us know of any further questions or comments.\n\nBest regards, \nAuthors\n", " In my opinion, the authors have done a commendable job addressing the presented concerns, and the additional experiments definitely strengthen the paper. I am still not completely satisfied with the conjecture that the authors provide regarding why the method works (i.e., \"Prior experiments in Figure 2 demonstrate that dropping high attribution attentions slows the fitting speed, so the model is more likely to seek global optimization during training.\" is fairly hand-wavy, and more importantly, why/how slower fitting speed relates to seeking global optimization is still quite unclear to me). But that said, given that the additional experiments strengthen the claims of the paper, I am updating my scores.", " These are helpful, especially the few-shot experiment, thanks!", " It looks like you ran additional experiments on five datasets (MRPC, RTE, CoNLL-2003, WMT2016 EN-RO, and WMT2016 TR-EN) with three additional models (RoBERTa-large, ELECTRA-base, and OPUS-MT); that's a lot for a rebuttal.\n\nI'll keep my score as Accept. Thanks, I believe your work does improve the paper.", " We appreciate your valuable comments and try to address your concerns as follows.\n* **Q1: Since AD-DROP could prevent overfitting, I'd like to see how it performs in few-shot learning. (Sec4.2 is helpful, but I think it's better to see more datasets)**\n >**A1:** Good suggestion. We conduct 16-, 64-, and 256-shot experiments on SST-2 and CoLA with RoBERTa-base as the base model and the baseline. As shown in the table below, we report the average scores and standard deviations of five random seeds. We observe that RoBERTa with AD-DROP consistently outperforms the original finetuning approach. 
Besides, AD-DROP tends to bring more benefits when fewer samples are available.\n\n > **Table: Testing AD-DROP on few-shot settings.** \n | Methods | SST-2(16-shot) | SST-2(64-shot) | SST-2(256-shot) | CoLA(16-shot) | CoLA(64-shot) | CoLA(256-shot) |\n |:------------:|:--------------:|:--------------:|:---------------:|:-------------:|:--------------:|:--------------:|\n | RoBERTa-base | 74.50$\pm$3.03 | 89.06$\pm$0.83 | 91.44$\pm$0.17 | 23.18$\pm$6.38 | 39.70$\pm$4.68 | 51.11$\pm$1.64 |\n | +AD-DROP | **80.16$\pm$1.51** | **91.61$\pm$0.52** | **92.61$\pm$0.13** | **26.7$\pm$4.96** | **46.41$\pm$1.98** | **52.47$\pm$1.16** |\n\n* **Q2: The attribution method would cost some computation.**\n > **A2:** Like many other regularization techniques, the attribution method inevitably causes additional costs, but there is still substantial room for optimization. For example, Figure 8 illustrates that layers are not of the same importance. Hence, we can take layer importance into account when performing AD-DROP, which helps reduce the costs considerably.\n\n* **Q3: In Equation 6, what is $n$?**\n > **A3:** As defined in Line 77, $n$ in Equation 6 means the length of an input sequence.\n\n* **Q4: I don't quite get the intuition behind cross-tuning, why not just use a smaller dropout probability?**\n > **A4:** The intuition of cross-tuning is to alternate finetuning and AD-DROP to avoid dropping high attribution positions excessively. Another intuitive idea is to set a smaller dropout probability for AD-DROP. However, we found it increases the difficulty of tuning the hyperparameters $p$ and $q$ and limits the adjustability of AD-DROP. Hence, cross-tuning appears to be a better trade-off between dropping too many positions and stable training.\n\n* **Q5: In Table 1, why are the baselines for BERT and RoBERTa different?**\n > **A5:** The results of the baselines in Table 1 are directly taken from the original papers, while most of them are reported either on BERT or on RoBERTa, which causes the difference.", " We appreciate your valuable comments and try to address your concerns as follows.\n* **Q1: Regarding additional experiments on other PLMs and tasks, and how the approach scales with model sizes.**\n > **A1:** Many thanks for the insightful comments. We conduct additional experiments on NER and Machine Translation tasks with other PLMs, and the results verify the effectiveness of AD-DROP. Please refer to the **[General Response](https://openreview.net/forum?id=XYDXL9_2P4&noteId=9Idjo23DzV)** for details.\n\n > Besides, we investigate the impact of model sizes. The table below shows the results of AD-DROP on RoBERTa-large. We observe that AD-DROP achieves consistent improvements over the larger RoBERTa model, illustrating that AD-DROP is scalable to large models.\n\n > **Table: Testing AD-DROP on RoBERTa-large.** \n | Methods | MRPC | RTE | \n |---------------|--------------|---------------| \n | RoBERTa-large | 90.83$\pm$0.75 | 85.99$\pm$0.86 | \n | +AD-DROP | **91.62$\pm$0.53** | **88.01$\pm$0.48** |\n\n* **Q2: Regarding the motivation for including the hyper-parameter $q$.**\n > **A2:** We hope to provide some clarification here. $p$ is used to define a **candidate** discard area (e.g., the top 30% of positions based on the attribution scores), and then we randomly drop a certain number of attention positions, controlled by $q$, within this candidate discard area (e.g., randomly drop 10% of attention positions within the top 30%). 
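To make the two roles concrete, here is a rough sketch of the selection step (simplified PyTorch-style pseudocode; tensor names and shapes are illustrative assumptions, not our exact code):

```python
import torch

def ad_drop_mask(attr: torch.Tensor, p: float, q: float) -> torch.Tensor:
    """attr: (batch, heads, len, len) attribution scores of attention positions.
    Returns a 0/1 mask where 0 marks a dropped position."""
    b, h, n, _ = attr.shape
    flat = attr.reshape(b, h, n * n)
    k = max(1, int(p * n * n))                     # p: size of the candidate discard area
    _, top_idx = flat.topk(k, dim=-1)              # highest-attribution positions
    candidate = torch.zeros_like(flat, dtype=torch.bool)
    candidate.scatter_(-1, top_idx, True)
    dropped = candidate & (torch.rand_like(flat) < q)  # q: drop prob. inside the area
    return (~dropped).reshape(b, h, n, n).float()

# The mask is applied to the attention logits before the softmax, e.g.
# logits = logits.masked_fill(mask == 0, float("-inf"))
```

In short, $p$ fixes how large the high-attribution candidate region is, while $q$ controls how aggressively we drop within it.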
Thank you for pointing this out, and we will make the description of $q$ more clear in the revision.\n\n* **Q3: Regarding the implementation of R-Drop and the search space.**\n > **A3:** To finetune RoBERTa-base with R-Drop, we employed the code released by the original paper and replaced the pretrained checkpoint with RoBERTa-base. Following the original paper, the hyper-parameter $\alpha$ was searched among {0.1, 0.5, 1.0}, and the best results were reported. We will clarify this in the revised paper.\n\n* **Q4: Regarding the results on Table 1 and Table 3.**\n >**A4:** The purpose of Table 1 was to compare AD-DROP with several existing baselines, some of which (e.g., HiddenCut and R-Drop) have not reported repeated results. Since we directly took their results from the original papers, to make a fair comparison, we only reported the best results in Table 1. Thank you for the suggestion, and we will revise this table accordingly.\n\n* **Q5: Regarding the computation cost of AD-DROP on STS-B in Table 4.**\n >**A5:** This is indeed a great observation. We agree with the reviewer that AD-DROP requires two forward passes while FT only needs one. Actually, this is the main reason why AD-DROP requires more computational cost (even with RD or AA) than FT. However, for STS-B, AD-DROP is **only applied to the first layer** instead of all 12 layers (please refer to Q8 in the response to reviewer XaQX on why we only apply it to the first layer), and thus is able to achieve a similar computation cost to FT. The precise time costs of RD and AA are 1.02 and 1.04, respectively, instead of 1.0. We will revise these numbers accordingly in Table 4. Thank you again for the great question!", " We appreciate your valuable comments and try to address your concerns as follows.\n* **Q1: Regarding a diverse set of tasks.**\n > **A1:** Thanks. In response to this suggestion, we conduct additional experiments on NER (CoNLL-2003) and Machine Translation (WMT2016) tasks. The results validate the effectiveness and generalizability of AD-DROP. Please refer to the **[General Response](https://openreview.net/forum?id=XYDXL9_2P4&noteId=9Idjo23DzV)** for details.\n\n* **Q2: Regarding why AD-DROP works and evaluating on OOD datasets.**\n > **A2:** Good question. Our observation is that discarding a set of high attribution attentions prevents the model from overly relying on them, and our hypothesis is that dropping high attribution self-attention units makes the model rely less on spurious/shortcut features. Prior experiments in Figure 2 demonstrate that dropping high attribution attentions slows the fitting speed, so the model is more likely to seek global optimization during training.\n\n > Besides, following the suggestion, we test RoBERTa and RoBERTa+AD-DROP on two OOD datasets. For HANS, we use the checkpoints trained on MNLI and test their performance on the validation set (the test set is not supplied). For PAWS-X, the checkpoints are trained on QQP, and we examine their performance on the test set. The evaluation metric is accuracy. From the results below, we can see that RoBERTa with AD-DROP achieves better generalization, where AD-DROP boosts the performance by 0.66 on HANS and 3.35 on PAWS-X. 
We will incorporate these results and provide more discussion during revision.\n\n > **Table: Testing AD-DROP on out-of-distribution datasets.**\n | Methods | HANS for MNLI | PAWS-X for QQP | \n |--------------|---------------|-----------------| \n | RoBERTa-base | 69.83 | 47.90 | \n | +AD-DROP | **70.49** | **51.25** |\n\n* **Q3: Regarding repeated experiments with mean and deviation.**\n > **A3:** We notice the variances are large on small datasets. Hence, we conduct repeated experiments in Table 3 to exclude the impact of randomness. Thank you for the suggestion, and we will include the repeated results on large datasets in Table 1.\n\n* **Q4: Regarding plotting the accuracy as a function of the parameter values in Figure 4.**\n > **A4:** Thank you for the suggestion. We will revise it accordingly.\n\n* **Q5: Regarding comparing with other (non-dropout related) strategies for regularization.**\n > **A5:** Although our costs can be 4.5x, there is still substantial room for optimization. For example, Figure 8 illustrates that layers are not of the same importance. Hence, we can take layer importance into account when performing AD-DROP, which helps reduce the costs considerably. Thanks for pointing this out and providing the reference. We will test AD-DROP on T5 and compare it with SAM.\n\n* **Q6: Regarding the description of the candidate discard region (Line 140).**\n > **A6:** Yes, you are right. The value 1 means the elements are kept. We will make this clear during revision.\n\n* **Q7: Why is cross-tuning done at an epoch level? How do the results change if it were done at, say, a minibatch level?**\n > **A7:** In fact, we tested cross-tuning at the batch level before. However, it is not stable at the beginning of training and yields poorer evaluation performance than the epoch level. We believe this is because AD-DROP needs a relatively good model for better attribution, while cross-tuning at the batch level makes attribution difficult as the model has only processed limited batch data, especially in the early training stage.\n\n* **Q8: Why was AD-DROP only applied to the first layer for STS-B (Line 175)?**\n > **A8:** Although smaller than CoLA, STS-B is more stable when finetuning. As shown in Table 3, the standard deviation is smaller than that of CoLA (0.5 vs. 1.9 on BERT and 0.2 vs. 0.9 on RoBERTa). Since STS-B is a regression task, we hypothesize that it is less likely to cause overfitting. Actually, we have conducted AD-DROP in all layers on STS-B and found that applying AD-DROP to the first layer obtains better results on STS-B.", " We appreciate your valuable comments and try to address your concerns as follows.\n\n* **Q1: Consider running experiments with larger models.**\n > **A1:** Great suggestion. We conduct the evaluation of AD-DROP with RoBERTa-large on the RTE and MRPC datasets. The table below shows the average scores and standard deviations of five random seeds. There are two main observations. First, AD-DROP achieves consistent improvements over the larger RoBERTa model, illustrating that AD-DROP is scalable to large models. Second, compared with RoBERTa-base on RTE in Table 3, the large model significantly reduces the deviation (from 1.7 to 0.86), suggesting that a larger model size indeed helps to improve the stability. Moreover, AD-DROP further improves the performance and reduces the deviation. 
We will supplement these results in the revised paper.\n\n > **Table: Testing AD-DROP on a larger model.** \n | Methods | MRPC | RTE | \n |---------------|--------------|---------------| \n | RoBERTa-large | 90.83$\\pm$0.75 | 85.99$\\pm$0.86 | \n | +AD-DROP | **91.62$\\pm$0.53** | **88.01$\\pm$0.48** |\n\n* **Q2: Regarding better caption of the figures.**\n > **A2:** Thanks. We will revise the captions accordingly.\n\n* **Q3: Regarding \"drop by uniform sampling\".**\n > **A3:** That’s correct. We will clarify it in the revised version.\n\n* **Q4: Regarding additional experiments on other tasks/data types.**\n > **A4:** To address this concern, we conduct additional experiments on NER (CoNLL-2003) and Machine Translation (WMT2016) tasks. The results validate the effectiveness and generalizability of AD-DROP. For the details, please refer to the **[General Response](https://openreview.net/forum?id=XYDXL9_2P4&noteId=9Idjo23DzV)**.", " We sincerely thank all reviewers for their valuable comments, which are crucial for improving our work. \n* **Q: Regarding additional experiments on other tasks.**\n > **A:** Although AD-DROP was evaluated on GLUE tasks, it is generally applicable on other tasks as well. To demonstrate this, we conduct additional experiments of AD-DROP on **NER** (CoNLL-2003) and **Machine Translation** (WMT2016) tasks. The results on the test sets are listed below. Moreover, to verify that AD-DROP can be adapted to other pretrained models, for CoNLL-2003 NER, we choose ELECTRA as the baseline in the additional experiments. For WMT2016, the strong baseline OPUS-MT is chosen. The results show that AD-DROP consistently improves the strong baselines on both NER and Machine Translation tasks. \n\n > **Table: Test results of AD-DROP on the CoNLL-2003 NER dataset.**\n | Methods | Accuracy | F1 |\n |:------------:|:------------:|:-------------:|\n | ELECTRA-base | 97.83 | 91.23 |\n | +AD-DROP | **97.95** | **91.77** |\n\n > **Table: Test results of AD-DROP on the WMT2016 EN-RO and TR-EN datasets. The evaluation metric is BLEU.**\n | Methods | EN-RO | TR-EN |\n |:--------:|:-----:|:------:|\n | OPUS-MT | 26.11 | 23.88 |\n | +AD-DROP | **26.43** | **23.96** |", " This paper introduces AdDrop, a novel dropout approach which chooses only high-attribution components of attention to apply dropout to. This paper presents experiments on the GLUE benchmark using BERT base and RoBERTa base, and does an excellent job of presenting experiments, ablations, and hyperparameter tuning to showcase the positives and negatives of the proposed approach. Strengths:\nThis paper has many strengths, including the clarity of the writing, the ablations motivation the different experimental choices, and well-chosen baselines (though there could be more baselines, there are so many variants of dropout now it's not possible to cover all baselines here).\nThe gains presented in Table 1 are surprisingly consistent. My initial thought was that we should have some presentation of the variance (e.g., from random seed), then I found Table 3 which shows exactly that. Table 3 also shows that the variance in the AD-Drop experiments is lower than in the BERT and RoBERTa experiments, which is a benefit of the approach that the authors could emphasize further.\nI really like the hyperparameter sensitivity results in Figure 6, as I was reading I was hoping to see something like this. 
It's clear that RoBERTa has less benefit, but that there is still a range of q for which performance improves.\nThe appendix is also great; I love that the authors reported hyperparameters, the size of the GLUE datasets, and more experiments. \nThe transparency here is great, and I believe this paper has a lot of necessary reproducibility information.\n\nWeaknesses:\nIt's clear to me that the authors appropriately allocated their limited budget to provide experimental evidence supporting their claims that this approach improves performance, so I don't count this as a negative in my review, but to increase adoption of AD-Drop in the community the authors may consider running experiments with larger models (e.g., BERT large and RoBERTa large are more standard). The results may look different; sometimes increased scale means more regularization is necessary, sometimes it means the models are more stable.\nI would recommend having the caption of each figure be descriptive enough that it can be considered stand-alone (without also reading the main body of the paper). For example, the caption to Figure 2 should say the results are for MRPC, and Figure 4 should state what p and q are. It's better to have the main takeaway for each figure in the caption as well (\"with cross-tuning\" leads to much lower variance and higher performance).\n I think where you say \"drop by random sampling\" you mean \"drop by uniform sampling\"? The authors appropriately listed limitations, though it could be made more explicit that the experimental evidence only covers BERT base and RoBERTa base, so the intended use here likely just covers similar pretrained language models during fine-tuning. Further experiments are necessary to evaluate whether this works with other models or other types of data.", " * The paper presents a novel variant of dropout for self-attention layers in Transformers, by dropping self-attention units with high attribution scores. The authors motivate this by showing that dropping low attribution scores leads to faster overfitting, and by dropping high attribution units, the model tends to overfit less. \n* The authors also present cross-tuning (alternately training with and without AD-Drop across different epochs) as a method to counter excessively dropping high attribution scores\n* Results presented on the GLUE benchmark show that AD-Drop helps improve performance over vanilla dropout methods\n* The authors also present additional ablation experiments, improvements across varying dataset sizes, sensitivity to choices of hyper-parameters and a compute efficiency analysis of the proposed method.\n\n[Update]\nUpdated scores based on author response. Strengths:\n* The paper presents a novel regularization method. The idea of leveraging saliency maps to inform which units to drop is quite interesting\n* The results for base model training show decent improvements over baseline methods. \n\nWeaknesses\n* Given that the paper presents AD-Drop as a new regularization method, the results would be more convincing if carried out on a diverse set of tasks. Right now, the results presented are only for classification tasks. Even if this was for encoder-only models, showing results on a token-level task (e.g., SQuAD QnA, NER, etc.) would help show the generality of the proposed method. E.g., [1] also proposes a novel regularization method and demonstrates its applicability across Machine Translation, Summarization, GLUE, Language Modeling and Image Classification. 
\n* While the presented method is quite interesting, the authors don't address _why_ it works. If the hypothesis is that dropping high attribution self-attention units makes the model rely less on spurious features, it would be good to show that the proposed method does better on OOD datasets (e.g., HANS for MNLI, PAWS-X for QQP, etc.). \n* It would be good to report the mean of 4 / 5 seeds for all the tasks (not just the small tasks). From Table 1, if we account for the mean of 4 seeds for small tasks, then the overall gain from AD-Drop reduces to 84.7 (taking results from Table 3). Thus the gains might be a bit over-stated, and it would be good to account for the variance due to random seeds in the results.\n* It is very hard to understand the impact of different hyper-parameter values from Figure 4. Instead, it might be more informative to plot the accuracy as a function of the parameter values with and without cross-tuning.\n* Given that the compute cost can go up by as much as 4.5x, I think it would be informative to compare against other (non-dropout related) strategies for regularization (e.g., Sharpness Aware Minimization; see [2]). For [2], the authors show consistent gains at a 2x performance hit (not considering the efficient SAM results). Thus, it would be useful to see how the proposed method compares against that. \n* [Minor] The description of the candidate discard region is a bit misleading (Line 140). If my understanding is correct, then candidate discard region $S_{i,j} = 1$ implies the self-attention logit is kept, and not discarded.\n\nReferences\n\n[1] Wu, Lijun, et al. \"R-drop: Regularized dropout for neural networks.\" Advances in Neural Information Processing Systems 34 (2021): 10890-10905.\n\n[2] Bahri, Dara, Hossein Mobahi, and Yi Tay. \"Sharpness-aware minimization improves language model generalization.\" arXiv preprint arXiv:2110.08529 (2021). Questions:\n* Why is cross-tuning done at an epoch level? How do the results change if it were done at, say, a minibatch level? \n* Given that the size of STS-B is smaller than that of CoLA, why was AD-Drop only applied to the first layer for STS-B (Line 175)? I think including the limitations section as a part of the main paper, as opposed to the last section of the Appendix, would be more informative in terms of informing the readers of the limitations of the work. Additionally, in its current form, one of the big limitations that is not addressed is the lack of diversity of fine-tuning tasks: if the proposal of the paper is a new general-purpose regularization method, I think not having additional tasks is a big limitation.", " This paper presents a new fine-tuning strategy called AD-DROP to prevent overfitting for PLMs on language understanding tasks based on the attribution scores of attention positions. The idea is intuitive: dropping attention positions with low attribution scores increases the chance of overfitting. Based on this observation, this paper proposes to randomly discard some of the attention distribution with high attribution scores to make the model exploit more information from low attribution positions, so that it is less likely to overfit. The empirical results on NLU tasks demonstrate the success of the proposed strategy, especially on small datasets. **Strengths**\n- This paper is well-written and easy to understand.\n- The proposed method is simple and easy to apply.\n- The empirical results on NLU tasks look promising compared to full fine-tuning approaches. \n\n\n**Weakness**\n- Limited tasks and analysis. 
Experiments are only conducted on GLUE, which leaves questions about the applicability of the proposed approach. Other than the aspect indicated by the authors that AD-DROP could potentially be applied to different units in the model, there are still multiple dimensions that could have been considered to make the evaluation of the approach more thorough, convincing, and insightful. For example, 1) only masked PLMs are considered; what happens if the base model is an autoregressive LM? 2) Other than NLU tasks, can it be applied to generation tasks? 3) How does the approach scale with model size? In terms of overfitting, both limited data and large model size could be the source, and this should also be investigated. \n- The motivation for including the hyper-parameter $q$ is unclear. I understood it is used to control the strength of enforcing the masking to the softmax, but isn't this sort of covered by $p$? It seems to me that random sampling based on a uniform distribution to decide the positions (in addition to Eq 6) could do the job. Including $q$ seems to require much more hyper-parameter tuning. Besides, no description of $q$ is given around Line 145; I had to guess what $q$ is.\n- Some confusion about the experiments/analysis\n> 1. In Table 1, the improvement of R-Drop with RoBERTa Base seems quite limited compared to the improvement with RoBERTa Large in the original paper. I assume the authors implemented and ran R-Drop with RoBERTa. However, there is no description of how they chose the hyper-parameter for R-Drop and the search space for that.\n>2. Comparing Table 3 to Table 1, it looks like the variance of the improvement is rather notable. For example, on RTE with BERT, the improvement reported in Table 1 is **4.3** while, when averaging over multiple rounds, the improvement in Table 3 is **2.7**. It looks to me that it is more reasonable to report averaged numbers in Table 1 rather than the best possible numbers that are ever achieved? \n>3. In Table 4, I may misunderstand something here. It looks to me that it is almost impossible that the computation cost of AD-DROP on STS-B is the same as FT. AD-DROP needs **TWO** forward passes of the model. It requires running a **forward pass** of the model first to obtain the pseudo label as well as the **attribution scores computation**. Lastly, the model will run the forward pass again for back-propagation. On the other hand, FT only requires **ONE** forward pass and back-propagation. Would you clarify how the computational cost is defined and calculated here such that they are the same? See Weakness. Yes
\n\nWeakness:\nExperiments may be incomplete (refer to my questions).\nSince AD-Drop could prevent overfitting, I'd like to see how it performs in few-shot learning. (Sec 4.2 is helpful, but I think it's better to see more datasets.)\nThe attribution method would cost some computation.\n Questions:\nIn Equation 6, what is $n$?\nI don't quite get the intuition behind cross-tuning; why not just use a smaller dropout probability?\nIn Table 1, why are the baselines for BERT and RoBERTa different? Yes." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "okScoyGHsQW", "YxpMx8-ni4_", "J4mKzzAs1M", "b8OPYeIVE_", "OPQvox0oKzf", "NHWgoimpv_", "Z6bUS5ZU5LC", "J4mKzzAs1M", "4EEOma_0WoM", "pfJXd9jP-GU", "nips_2022_XYDXL9_2P4", "nips_2022_XYDXL9_2P4", "nips_2022_XYDXL9_2P4", "nips_2022_XYDXL9_2P4", "nips_2022_XYDXL9_2P4" ]
nips_2022_4F7vp67j79I
Verification and search algorithms for causal DAGs
We study two problems related to recovering causal graphs from interventional data: (i) $\textit{verification}$, where the task is to check if a purported causal graph is correct, and (ii) $\textit{search}$, where the task is to recover the correct causal graph. For both, we wish to minimize the number of interventions performed. For the first problem, we give a characterization of a minimal sized set of atomic interventions that is necessary and sufficient to check the correctness of a claimed causal graph. Our characterization uses the notion of $\textit{covered edges}$, which enables us to obtain simple proofs and also easily reason about earlier known results. We also generalize our results to the settings of bounded size interventions and node-dependent interventional costs. For all the above settings, we provide the first known provable algorithms for efficiently computing (near)-optimal verifying sets on general graphs. For the second problem, we give a simple adaptive algorithm based on graph separators that produces an atomic intervention set which fully orients any essential graph while using $\mathcal{O}(\log n)$ times the optimal number of interventions needed to $\textit{verify}$ (verifying size) the underlying DAG on $n$ vertices. This approximation is tight as $\textit{any}$ search algorithm on an essential line graph has worst case approximation ratio of $\Omega(\log n)$ with respect to the verifying size. With bounded size interventions, each of size $\leq k$, our algorithm gives an $\mathcal{O}(\log n \cdot \log k)$ factor approximation. Our result is the first known algorithm that gives a non-trivial approximation guarantee to the verifying size on general unweighted graphs and with bounded size interventions.
Accept
This paper's reviews as they stand are divergent. The scores are 7, 5 and 4. The paper has seen discussion between reviewers with negative opinions and the authors. One reviewer who engaged in discussion revised the score up by 1. The most unfavorable reviewer's main issue was the lack of empirical evaluations comparing the authors' algorithms to close competitors [SMG 20, PSS 22]. - The authors have responded by turning in a quick implementation with plots comparing the performance of their algorithm with competitors. The authors attached plots and a readme in the form of an anonymous Drive folder (I looked at it briefly). It seems like their algorithm is very competitive with the state of the art and, in fact, in terms of runtime is faster than even random in some cases. Experiments seem reasonably comprehensive. I would have ideally liked the authors to include the contents of the drive folder in the main paper and upload a revision (I am not sure if the authors are aware that one can update the paper during the rebuttal). I would consider this issue sort of taken care of. Empirical simulations do clearly show that the proposed algorithms are effective for random graphs of different sizes. Other concerns (even after a long discussion with reviewers) are: How significant are the theoretical results in comparison to [SMG 20, PSS 22]? 1) Does it follow from the many classically known theorems about covered edges [Chi 95] and other theorems from these two recent references? The authors responded saying that they are the first to give an *exact* algorithm to perform adaptive interventions to verify if a given graph is indeed the true one, exactly characterizing the instance-optimal number of interventions. I agree with the authors that this is not known and the relation to covered edges does not directly follow from existing classical results (as the authors have explained, and I did see the proofs in the supplement). So results for exact instance-optimal verification are certainly new and novel, and previous works only provided bounds on the verification number. 2) How novel are the search results? - Here, it is true that for proving the approximation guarantee they do rely on a slight modification of a lower bound, i.e. Lemma 21 in the paper, as observed by one reviewer in the discussion. However, the authors also point out that theirs is the first algorithm which has an instance-wise O(log n) approximation to the best adaptive rate for arbitrary graphs. I believe this was an open problem. Previous works like [SMG+20] could not make a general argument due to their reliance on directed clique trees and some orientation properties of the directed clique trees. The current work takes a different approach using clique separators, and the authors very easily extend the results to interventions of bounded size (which was also not known in general). 3) Experiments were added in an anonymous drive folder (I would strongly suggest the authors add a few to the main camera ready + put the rest in the supplement and discuss the runtime benefits etc. in detail. Currently the only discussion is in the readme file). For all the three concerns, I feel the authors have adequately addressed the concerns. This paper simplifies adaptive interventional design with many interesting observations and generalizations in addition to particularly novel contributions to the verification problem. Hence, I am positive about this paper. To the authors: Please do include the figures and discuss the experiments in the camera ready. 
Your anonymous folder contents must go into the paper (split between the main paper and the supplement) at the very least. The authors may think their theoretical contribution is the main point of the paper. However, experimentally demonstrating competitiveness with the baselines AND runtime benefits for various graph sizes is an important contribution. Unlike many other theory results, where sample or computational complexity is what matters, here the relevant quantity is interventional complexity. Therefore, actual gains do matter (even multiplicative constants), and I appreciate the authors putting in the effort during the rebuttal. It has definitely helped with one of the chief reviewer concerns.
train
[ "SxO5-bM7tQ", "Qr96OFz9Ang", "TiqNDYnG7i3", "qgyOBjThGu", "-fcHONNif-a", "hfUNvb8ioE-", "51ogZ87ya44", "N6-h0Qo7PAO", "8bTvJAoapPo", "hdhr_Tz6IC_", "o9fGIVb96de", "iG6l5Xx8cYO", "tP_2hC-qPwE", "eliRm7ZFsWf" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you so much for your time. *We really appreciate your responses and are glad that we could have this discussion!* Please refer to the following for our responses.\n\n### Comment 1: Theorem 9 via Lemma 7\n\nOn necessity: While your statement is true, it is insufficient as a proof for the necessity. For example, why is it impossible for someone to orient all the covered edges by intervening on some other vertices and then applying Meek rules? To this end, one has to argue that it is *impossible* to orient a covered edge unless some intervention explicitly intervenes one of the endpoints of that covered edge.\n\nOn sufficiency: We agree that sufficiency follows from the statement \"The set of covered edges with their orientations determines the graph uniquely\". However, we are not aware of such a result and thus had to prove that non-covered edges will also be oriented. Could you kindly point us to the specific lemma/theorem that we have missed?\n\n### Comment 2: Lemma 21\n\nThank you for appreciating Lemma 21. We would like to highlight that the additional \"max\" is an important observation and it allows one to obtain simple analyses of competitive bounds by arranging/grouping the interventions used appropriately.\n\n### Comment 3: Our search algorithm and analysis is not novel enough\n\nWhile some of the techniques used in our work are standard in algorithm design, we would like to highlight that our novelty is in the usage of graph separators, which enables us to bypass the need to assume intersection-incomparability in our analysis.", " Thanks for addressing the comments.\n\n> “Regarding your first comment, we want to remark that our proof for Theorem 9 (see Appendix E) does not refer to Lemma 7 in any way and we would really appreciate if you would let us know how (our) Theorem 9 follows from Lemma 7 [Chi 95]” \n\nYes, the proof does not refer to Lemma 7, but it seems that Theorem 9 follows from Lemma 7 in the following way:\n\nNecessity: If one does not orient all covered edges (which are undirected in the essential graph), then any of them can be flipped to obtain another equivalent graph according to [Chi 95].\n\nSufficiency: The set of covered edges with their orientations determines the graph uniquely (follows from [Chi 95]); that is, if one orients them then the graph is identified.\n\n>“Lemma 21 may look similar to Theorem 2 of [SMG+20] but our result is stronger”\n\nLemma 21 is stronger because instead of using Theorem 2 for one DAG, a wider set of DAGs (which comes from interventions on the original graph) was considered, and then the best graph using the same bound from Theorem 2 (maximization over I-essential graphs) was selected.\n\n>“Also, the upper bound guarantees of [SMG+20] only hold for intersection-comparable graphs while ours hold for any general graph”\n\nI partly disagree with the authors. For example, in [SMG+20], it is asserted:\n“The size of the problem is cut in half after each clique-intervention, so that we use at most $\sum_{G \in CC(\mathcal{E}(D))} \lceil \log_2(|\mathcal{C}(G)|) \rceil$ clique-interventions, where $\mathcal{C}(G)$ is the set of maximal cliques for $G$.” This refers to an arbitrary graph $G$ and is somewhat similar to the idea of the proposed algorithm here. However, I agree that they did not propose an upper bound on the algorithm for any arbitrary graph. Moreover, it is interesting to see that the performance of the algorithm is better in terms of time complexity, with comparable results to the state of the art.\n\nBased on the above discussions, I decided to change my score from 4 to 5. 
", " Thank you for your response.\n\nRegarding your first comment, we want to remark that our proof for Theorem 9 (see Appendix E) does **not** refer to Lemma 7 in any way and we would really appreciate if you would let us know how (our) Theorem 9 follows from Lemma 7 [Chi 95].\n\nRegarding your second comment, note that our Lemma 21 may look similar to Theorem 2 of [SMG+20] but our result is **stronger**: Appendix G gives an illustration of how it can be stronger by even a linear factor in the size of the graph. Also, the upper bound guarantees of [SMG+20] only hold for intersection-comparable graphs while ours hold for **any general graph**; In fact, our search algorithm is much simpler and different from all the previous approaches. Independent of the techniques used in our proofs (which may look simple in hindsight), we believe that the guarantees of our search algorithm are very interesting as they hold for **any arbitrary graphs** and to the best of our knowledge remained an **open problem** until our work. \n\nFurthermore, our empirical experiments (see our previous rebuttal response) show that our search algorithm is comparable to the state of the art (in terms of number of interventions used) while running around **10x faster** on larger graphs.\n\nThank you for your time and patience.", " Thank you for your thorough and detailed answers.\n\nLemma 35 and Lemma 6 of [SMG+20] are indeed the same. The most significant result of this paper is realizing and proving that Theorem 9 (criterion for the set to be minimum verifying set) follows from Lemma 7 [Chi 95]. Theorem 9 is indeed a new and interesting result that allows authors to compute $v_1(G)$ exactly and efficiently. However other results are mostly straightforward corollaries of previous works. Unfortunately, I still believe that this work lacks novelty. For example, the idea of considering a minimal verifying set is well studied in [SMG+20], and the main results in this work are similar to ones in [SMG+20] with slight modifications/extensions (for instance, Lemma 21 and Theorem 2 [SMG +20]). ", " We thank the suggestion of some reviewers to empirically evaluate our algorithms. In response, we have coded up our algorithms and ran some experiments. We will supplement a more extensive experimental evaluation in a new appendix section in our paper revision. As we cannot upload images in these text boxes, we provide them in an anonymous Google Drive: https://drive.google.com/drive/folders/1QNZR7j73zGnHBSMBzyzB_YNb09XfMpiJ?usp=sharing. In this folder, we also provide instructions on how to reproduce our experiments and more elaboration on the plots generated from the experiments.\n\nDespite the favourable empirical outcomes from these experiments, we wish to emphasize that we still believe that the main contribution of our work is a theoretical understanding and theoretically provable algorithms for the verification and search problems. What we really find fascinating is that we can design search algorithms (Algorithm 1) with provable guarantees that is competitive (up to logarithmic factors) against the verification number $\\nu(G^*)$, despite not being aware what $\\nu(G^*)$ is.\n\n### Verification experiments\n\nBefore we talk about the experiments, we want to emphasize that verification is a basic but important problem. For instance, an efficient algorithm for computing exact verification numbers is important for benchmarking search algorithms. 
Prior works that ran experiments for search algorithms had to either use a lower bound for the verification number or compute it via exponential brute-force search, which is impractical for large graph sizes.\n\nThe verification experiments run by [PSS22] were to validate that their lower bound is within a factor of 2 of the true verification number, and also to empirically compare their lower bound against the lower bound of [SMG+20]. As we have an exact characterization of the verification number and a practical efficient algorithm to compute it exactly, we believe that running similar experiments would not be fruitful. Instead, we have coded up our verification algorithm and tested its correctness on some well-known graphs such as cliques and trees for which we know the exact verification number. We provide the source file in our anonymous folder.\n\n### Search experiments\n\nWe ran search experiments in the framework of [SMG+20] (https://github.com/csquires/dct-policy), which empirically compares atomic intervention policies -- see Section 5 ``Experimental Results'' in [SMG+20] for a description and details of the simulated graphs. We implemented two variants of our search algorithm (Algorithm 1 with $k=1$) where we either deterministically intervene on all nodes in our clique separator one at a time, or randomly pick one node at a time until all edges incident to the clique separator have been oriented. We implemented two variants because it is known [Ebe10] that random atomic interventions on cliques are expected to perform better than deterministic atomic interventions by a constant multiplicative factor, and we thought that it would be interesting to compare these variants empirically. Qualitatively, on large graphs, our proposed algorithm has a similar empirical performance to the current best-known atomic intervention policies in the literature (`DCT` and `Coloring`) in terms of the competitive ratio (i.e., the number of interventions used divided by the verification number of each graph) while running significantly faster (roughly 10x faster). In addition, note that our algorithm has provable guarantees that it achieves an $O(\log n)$ competitive ratio with respect to the verification number for any general graph. The previously best-known theoretical guarantee for this problem is by [SMG+20]: they also show an $O(\log n)$ approximation but their proofs only hold for special graph classes. In the revision, we will run our algorithm on other graphs and also include the implementation for non-atomic interventions.\n\n### Implementation details of our search algorithm\n\nOur implementation of the chordal graph separator is the `FAST CHORDAL SEPARATOR` algorithm in [GRE84], which first computes a perfect elimination ordering of a given chordal graph. To do so, we use Eppstein's LexBFS implementation (https://www.ics.uci.edu/~eppstein/PADS/LexBFS.py).\n\n### Reproducibility\n\nFor reproducibility purposes of our search experiments, we provide the exact modifications made and a bash script that anyone can run to directly download all the necessary files, modify the `dct-policy` codebase, run the experiments, and generate these plots automatically. We also provide an implementation of our verification algorithm in a Python script. See the README in the Google Drive folder for details.\n\n## References\n\n[Ebe10] Frederick Eberhardt. Causal Discovery as a Game. In Causality: Objectives and Assessment, pages 87–96. PMLR, 2010\n\n[GRE84] John R. Gilbert, Donald J. Rose, and Anders Edenbrandt. 
A Separator Theorem for Chordal Graphs. SIAM Journal on Algebraic Discrete Methods, 5(3):306–313, 1984", " ## Our Lemma 35 versus Lemma 6 of [SMG+20]\n\nThank you for bringing this to our attention. For immediate access, we first restate the results from both papers below.\n\nOur Lemma 35:\n> Let $G$ be a moral DAG. Then, $\\nu_1(G) \\geq \\lfloor \\frac{\\omega(skeletonsyd(G))}{2} \\rfloor$.\n\nLemma 6 of [SMG+20]:\n> Let $D$ be a moral DAG and let $G$ = skel($D$). Then $m(D) \\geq \\lfloor \\frac{\\omega(G)}{2} \\rfloor$, where $\\omega(G)$ is the size of the largest clique in G. \n\nNote that the word \"skeletonsyd\" in Lemma 35 is a typo and should be \"skeleton\", which is exactly the same as \"skel\" in [SMG+20] (in the preliminaries, they write \"The skeleton of graph $D$, skel($D$), is the undirected graph with the same vertices and adjacencies as $D$\"). In the revision, we will fix this typo.\n\nIn addition, note that Definition 7 of [SMG+20] writes,\n> Given a general DAG $D$, a verifying intervention set (VIS) is a set of single-node interventions $\\mathcal{I}$ that fully orients the DAG starting from an essential graph $\\ldots$ We denote the size of the minimal VIS for $D$ as $m(D)$.\n\nThat is, $m(D)$ corresponds to our notation of $v_1(G)$. We have also defined $\\omega(G)$ in the preliminaries section of our main paper. Therefore, up to notation differences, these statements are the same. Please let us know if we misunderstood your question.\n\n## References\n\n[SMG+20] Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, and Karthikeyan Shanmugam. Active Structure Learning of Causal DAGs via Directed Clique Trees. Advances in Neural Information Processing Systems, 33:21500–21511, 2020", " We thank the reviewer for their time and valuable feedback. In the following, we provide our responses to the concerns/questions raised in the review.\n\n## Significance of windmill graph example\n\nIt is no surprise that the gap between the verification number of the windmill graph and the lower bound given by [PSS22] is small. In fact, [PSS22] already gives an *almost tight characterization* of the verification number by proving that the ratio is at most a factor of 2 (they also ran experiments); meanwhile, we present an *exact characterization* of the verification number that is easily computable in polynomial time in our work. The goal of the windmill graph construction was to provide a small explicit graph example highlighting the gap in the understanding of the verification number while showcasing the usefulness of our covered edges perspective. While constant ratios may not seem much, having an exact simple characterization greatly helps in advancing the understanding of causal discovery.\n\n## Search algorithm having steps irrelevant to minimum verification number\n\nVerification and search are two fundamentally different problems. For verification, the ground truth DAG $G^*$ is given. For search, only the essential graph is given as input and $G^*$ is *not* given. Although verification is irrelevant to our algorithm for the search problem, we remark that both these problems are important and interesting in their own right. In practice, we are often only given the essential graph (obtained via observational data) and would only be able to perform a search. Meanwhile, understanding the verification problem allows us to measure ``how much easier the search problem would have been if we knew the ground truth''. 
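The clique lower bound restated above (Lemma 35 / Lemma 6 of [SMG+20]) is also trivial to compute in practice, which makes it a convenient sanity check when benchmarking; a quick sketch with networkx, where the graph choices are purely illustrative:

```python
import networkx as nx

def clique_lower_bound(skeleton: nx.Graph) -> int:
    # nu_1(G) >= floor(omega(skeleton(G)) / 2), where omega is the size of
    # the largest clique of the skeleton.
    omega = max(len(c) for c in nx.find_cliques(skeleton))
    return omega // 2

print(clique_lower_bound(nx.complete_graph(7)))  # 3
print(clique_lower_bound(nx.path_graph(7)))      # 1: the largest clique is an edge
```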
By definition, the verification number is the minimum possible number of interventions needed, and hence it serves as a natural benchmark to measure search algorithms against.\n\nPrior to our work, all previous works either used a lower bound or exponential brute-force algorithms to compute the verification number when computing the benchmark to compare their search algorithms against. Therefore, we believe that the exact characterization of the verification number via a polynomial time algorithm not only furthers our understanding but also has some immediate applications, such as allowing us to benchmark search algorithms on even larger graphs. What we really find fascinating is that we can design search algorithms (Algorithm 1) with provable guarantees that are competitive (up to logarithmic factors) against the verification number $\nu(G^*)$, despite not knowing what $\nu(G^*)$ is.\n\n## Difference of our algorithm compared to prior works\n\nOur algorithm is different from existing algorithms both algorithmically and in terms of its theoretical guarantees (see the discussion on lines 293-298). While our analysis reuses some theorems from existing work, these existing results were insufficient on their own. Crucially, we need Lemma 21 (a strengthened lower bound of [SMG+20] that we proved) in order to obtain the theoretical guarantee of our search algorithm. Prior to our work, the best known provable guarantees for search algorithms were by [SMG+20], but their guarantees only hold for special graphs. Meanwhile, our algorithm has the same provable guarantees, but these guarantees hold for any general graph. Please let us know if we missed any references that you think closely resemble our work.\n\n## Some results are easy corollaries\n\nWe think that the statements of our results are interesting in their own right, and proving them without the framework used in our paper is non-trivial. The reason why some of our proofs appear simple is because of our improved understanding of the causal discovery problem (via our covered edges perspective) and the structure of our proofs (for example, some of our induction arguments are subtle to design but easy to follow). We believe that our covered edge characterization of the verification problem is one of the strengths of our paper as it advances our understanding of causal discovery while enabling simpler proofs.\n\n## References\n\n[PSS22] Vibhor Porwal, Piyush Srivastava, and Gaurav Sinha. Almost Optimal Universal Lower Bound for Learning Causal DAGs with Atomic Interventions. In The 25th International Conference on Artificial Intelligence and Statistics, 2022.\n\n[SMG+20] Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, and Karthikeyan Shanmugam. Active Structure Learning of Causal DAGs via Directed Clique Trees. Advances in Neural Information Processing Systems, 33:21500–21511, 2020", " We thank the reviewers for the suggestion to empirically evaluate our algorithms. In response, we have coded up our algorithms and ran some experiments. We will supplement a more extensive experimental evaluation in a new appendix section in our paper revision. As we cannot upload images in these text boxes, we provide them in an anonymous Google Drive: https://drive.google.com/drive/folders/1QNZR7j73zGnHBSMBzyzB_YNb09XfMpiJ?usp=sharing. 
In this folder, we also provide instructions on how to reproduce our experiments and more elaboration on the plots generated from the experiments.\n\nDespite the favourable empirical outcomes from these experiments, we wish to emphasize that we still believe that the main contribution of our work is a theoretical understanding and theoretically provable algorithms for the verification and search problems. What we really find fascinating is that we can design search algorithms (Algorithm 1) with provable guarantees that is competitive (up to logarithmic factors) against the verification number $\\nu(G^*)$, despite not being aware what $\\nu(G^*)$ is.\n\n### Verification experiments\n\nBefore we talk about the experiments, we want to emphasize that verification is a basic but important problem. For instance, an efficient algorithm for computing exact verification numbers is important for benchmarking search algorithms. Prior works that ran experiments for search algorithms had to either use a lower bound for the verification number or compute it via exponential brute force search, which is impractical for large graph sizes.\n\nThe verification experiments ran by [PSS22] were to validate that their lower bound is within a factor of 2 of the true verification number, and also empirically compare their lower bound against the lower bound of [SMG+20]. As we have an exact characterization of the verification number and a practical efficient algorithm to compute it exactly, we believe that running similar experiments would not be fruitful. Instead, we have coded up our verification algorithm and tested its correctness on some well-known graphs such as cliques and trees for which we know the exact verification number. We provide the source file in our anonymous folder.\n\n### Search experiments\n\nWe ran search experiments in the framework of [SMG+20] (https://github.com/csquires/dct-policy) which empirically compares atomic intervention policies -- see Section 5 ``Experimental Results'' in [SMG+20] for a description and details of the simulated graphs. We implemented two variants of our search algorithm (Algorithm 1 with $k=1$) where we either deterministically intervene on all nodes in our clique separator one at a time, or randomly pick one until all edges are incident to the clique separator have been oriented. We implemented two variants because it is known [Ebe10] that random atomic interventions on cliques are expected to perform better than deterministic atomic interventions by a constant multiplicative factor, and we thought that it would be interesting to compare these variants empirically. Qualitatively, on large graphs, our proposed algorithm has a similar empirical performance to the current best-known atomic intervention policies in the literature (`DCT` and `Coloring`) in terms of the competitive ratio (i.e. the number of interventions used divided by the verification number of each graph) while running significantly faster (roughly 10x faster). In addition, note that our algorithm has provable guarantees that it has a $O(\\log n)$ competitive ratio with respect to the verification number for any general graph. The previously best-known theoretical guarantee for this problem is by [SMG+20]: they also show an $O(\\log n)$ approximation but their proofs only hold for special graph classes. 
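For intuition, a toy rendering of the two variants compared above. The simplification is ours, not the paper's: an atomic intervention on v is modeled as revealing the true orientation of every edge incident to v, Meek-rule propagation is omitted, and the separator is passed in explicitly rather than computed.

```python
import random

def intervene_on_separator(true_edges, separator, randomized=True, seed=0):
    """true_edges: directed edges (u, v) of the hidden ground-truth DAG."""
    incident = {e for e in true_edges if e[0] in separator or e[1] in separator}
    oriented, used = set(), []
    pool = sorted(separator)
    if randomized:
        # On cliques, random picks are expected to need fewer interventions.
        random.Random(seed).shuffle(pool)
    for v in pool:
        if incident <= oriented:   # every separator-incident edge is known
            break
        used.append(v)
        oriented |= {e for e in incident if v in e}
    return used, oriented

edges = {("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")}
print(intervene_on_separator(edges, separator={"b", "c"}, randomized=False))
```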
In the revision, we will run our algorithm for other graphs and also include the implementation for non-atomic interventions.\n\n### Implementation details of our search algorithm\n\nOur implementation of the chordal graph separator is the `FAST CHORDAL SEPARATOR` algorithm in [GRE84] which first computes a perfect elimination ordering of a given chordal graph. To do so, we use Eppstein's LexBFS implementation (https://www.ics.uci.edu/~eppstein/PADS/LexBFS.py).\n\n### Reproducibility\n\nFor reproducibility purposes of our search experiments, we provide the exact modifications made and a bash script that anyone can run to directly download all the necessary files, modifies the `dct-policy` codebase, run the experiments, and generate these plots automatically. We also provide an implementation of our verification algorithm in a Python script. See the README in the Google Drive folder for details.\n\n## References\n\n[Ebe10] Frederick Eberhardt. Causal Discovery as a Game. In Causality: Objectives and Assessment, pages 87–96. PMLR, 2010\n\n[GRE84] John R. Gilbert, Donald J. Rose, and Anders Edenbrandt. A Separator Theorem for Chordal Graphs. SIAM Journal on Algebraic Discrete Methods, 5(3):306–313, 1984", " Author Response to Reviewer ua5U\n\nWe thank the reviewer for their time and valuable feedback. In the following, we provide our responses to the concerns/questions raised in the review.\n\n## Causal sufficiency assumption\n\nThank you for raising this concern and we agree that the causal sufficiency assumption might not hold in reality. Despite this, the problem of causal discovery is still well-studied under the assumptions of causal sufficiency (see the extensive list of references given in our related work). Furthermore, much of its theoretical understanding still remains unknown. Our work is the first to give a complete characterization of the verification problem (see our response regarding verification experiments about why this is an important contribution to the field) and the first to give an interventional search algorithm with provable guarantees for general graphs (prior works such as [SMG+20] only had guarantees for special graph classes). We view our work as an important step (but not an end goal) towards a principled theory of causal discovery that provides theoretical guarantees while using as few assumptions as possible.\n\n## Future work\n\nThank you for asking this question. There are indeed many possible future directions that we think are important.\nFor instance, can we work with soft interventions and design algorithms with interventional sample complexities in mind (e.g. see [KJSB19] and [ABDK18] respectively)? It is also of great interest to remove/weaken the assumptions on causal sufficiency while maintaining strong theoretical guarantees. Note that we have already included some of these discussions in the \"Societal impact and limitations\" section of our paper (Page 2). In our revision, we can try to make this more explicit.\n\n## ``arefi9x'' on line 351\n\nThank you for pointing out this typo. We will fix it in the revision. The sentence should read: Existence and efficient computation of graph separators are well studied [LT79, GHT84, GRE84, AST90, KR10, WN11] and **are** commonly used in divide-and-conquer graph algorithms and as analysis tools.\n\n## Lack of conclusion section and suggestion to put the algorithm in the appendix to make space for it\n\nThank you for your suggestion. 
We thought it would be more engaging to make relevant discussion points right after presenting each of our results. In the revision, we will add a conclusion section summarizing our results and also stating the possible future directions (discussed above) of our work. Regarding our algorithm description, we made a conscious choice to put it in the main text because we wanted to highlight its simplicity and it compliments the high-level proof outline which we presented in the text surrounding it.\n\n## References\n\n[PSS22] Vibhor Porwal, Piyush Srivastava, and Gaurav Sinha. Almost Optimal Universal Lower Bound for Learning Causal DAGs with Atomic Interventions. In The 25th International Conference on Artificial Intelligence and Statistics, 2022\n\n[SMG+20] Chandler Squires, Sara Magliacane, Kristjan Greenewald, Dmitriy Katz, Murat Kocaoglu, and Karthikeyan Shanmugam. Active Structure Learning of Causal DAGs via Directed Clique Trees. Advances in Neural Information Processing Systems, 33:21500–21511, 2020\n\n[KJSB19] Murat Kocaoglu, Amin Jaber, Karthikeyan Shanmugam, and Elias Bareinboim. Characterization and learning of causal graphs with latent variables from soft interventions. Advances in Neural Information Processing Systems, 32, 2019\n\n[ABDK18] Jayadev Acharya, Arnab Bhattacharyya, Constantinos Daskalakis, and Saravanan Kandasamy. Learning and testing causal models with interventions. Advances in Neural Information Processing Systems, 31, 2018", " We thank the suggestion of some reviewers to empirically evaluate our algorithms. In response, we have coded up our algorithms and ran some experiments. We will supplement a more extensive experimental evaluation in a new appendix section in our paper revision. As we cannot upload images in these text boxes, we provide them in an anonymous Google Drive: https://drive.google.com/drive/folders/1QNZR7j73zGnHBSMBzyzB_YNb09XfMpiJ?usp=sharing. In this folder, we also provide instructions on how to reproduce our experiments and more elaboration on the plots generated from the experiments.\n\nDespite the favourable empirical outcomes from these experiments, we wish to emphasize that we still believe that the main contribution of our work is a theoretical understanding and theoretically provable algorithms for the verification and search problems. What we really find fascinating is that we can design search algorithms (Algorithm 1) with provable guarantees that is competitive (up to logarithmic factors) against the verification number $\\nu(G^*)$, despite not being aware what $\\nu(G^*)$ is.\n\n### Verification experiments\n\nBefore we talk about the experiments, we want to emphasize that verification is a basic but important problem. For instance, an efficient algorithm for computing exact verification numbers is important for benchmarking search algorithms. Prior works that ran experiments for search algorithms had to either use a lower bound for the verification number or compute it via exponential brute force search, which is impractical for large graph sizes.\n\nThe verification experiments ran by [PSS22] were to validate that their lower bound is within a factor of 2 of the true verification number, and also empirically compare their lower bound against the lower bound of [SMG+20]. As we have an exact characterization of the verification number and a practical efficient algorithm to compute it exactly, we believe that running similar experiments would not be fruitful. 
Instead, we have coded up our verification algorithm and tested its correctness on some well-known graphs such as cliques and trees for which we know the exact verification number. We provide the source file in our anonymous folder.\n\n### Search experiments\n\nWe ran search experiments in the framework of [SMG+20] (https://github.com/csquires/dct-policy) which empirically compares atomic intervention policies -- see Section 5 ``Experimental Results'' in [SMG+20] for a description and details of the simulated graphs. We implemented two variants of our search algorithm (Algorithm 1 with $k=1$) where we either deterministically intervene on all nodes in our clique separator one at a time, or randomly pick one until all edges are incident to the clique separator have been oriented. We implemented two variants because it is known [Ebe10] that random atomic interventions on cliques are expected to perform better than deterministic atomic interventions by a constant multiplicative factor, and we thought that it would be interesting to compare these variants empirically. Qualitatively, on large graphs, our proposed algorithm has a similar empirical performance to the current best-known atomic intervention policies in the literature (`DCT` and `Coloring`) in terms of the competitive ratio (i.e. the number of interventions used divided by the verification number of each graph) while running significantly faster (roughly 10x faster). In addition, note that our algorithm has provable guarantees that it has a $O(\\log n)$ competitive ratio with respect to the verification number for any general graph. The previously best-known theoretical guarantee for this problem is by [SMG+20]: they also show an $O(\\log n)$ approximation but their proofs only hold for special graph classes. In the revision, we will run our algorithm for other graphs and also include the implementation for non-atomic interventions.\n\n### Implementation details of our search algorithm\n\nOur implementation of the chordal graph separator is the `FAST CHORDAL SEPARATOR` algorithm in [GRE84] which first computes a perfect elimination ordering of a given chordal graph. To do so, we use Eppstein's LexBFS implementation (https://www.ics.uci.edu/~eppstein/PADS/LexBFS.py).\n\n### Reproducibility\n\nFor reproducibility purposes of our search experiments, we provide the exact modifications made and a bash script that anyone can run to directly download all the necessary files, modifies the `dct-policy` codebase, run the experiments, and generate these plots automatically. We also provide an implementation of our verification algorithm in a Python script. See the README in the Google Drive folder for details.\n\n## References\n\n[Ebe10] Frederick Eberhardt. Causal Discovery as a Game. In Causality: Objectives and Assessment, pages 87–96. PMLR, 2010\n\n[GRE84] John R. Gilbert, Donald J. Rose, and Anders Edenbrandt. A Separator Theorem for Chordal Graphs. SIAM Journal on Algebraic Discrete Methods, 5(3):306–313, 1984", " We thank the reviewer for their time and valuable feedback. In the following, we provide our responses to the concerns/questions raised in the review.\n\n## Adaptive solutions to verification\n\nAs our verification result is computed in a non-adaptive manner, it is indeed natural to wonder if adaptive solutions to the verification problem can be even better. However, note that it could be the case that the given advice graph $G$ is indeed $G^*$ (i.e. any revealed arc orientations from interventions agrees with $G$). 
Thus, in the worst case, adaptivity will not bring any benefit to solving the verification problem.\n\n## Perceived denseness of write-up due to space constraints\n\nAs this paper covers two fundamental problems, we needed to introduce several definitions and notations. We have put an effort to remove non-essential text and pushed some of the prior work and formal proofs into the appendix while focusing on motivating our results and giving the high-level intuition of our proof techniques in the main paper. In our revision, we will try to improve the presentation further. Regarding lines 258-277, they serve to motivate why one would be interested in the weighted verification problem. Since the weighted verification cost is the least possible cost incurred by any interventional strategy in order to fully orient an essential graph (assuming the ground truth DAG is known), it is one of the key benchmarks to compare against weighted search algorithms (which do not know the ground truth DAG).", " The paper proposes the recovery and verification of causal graphs via sufficient hard interventions with unlimited data. The contributions are constructing bounds on the "minimum verification number" of a graph G belonging to the Markov equivalence class of the underlying ground truth. Compared to existing approaches, their method can deal with general graphs. They characterize the verifying intervention sets via separation of un-oriented covered edges of the input essential graph. 
If I did not miss it, there seems to be no empirical study to support the claims made in the paper, compared to SMG 20, PSS 22.\nRegarding the new scheme used in this contribution, do the authors think it might be helpful to carry out empirical studies to validate those bounds or show the superiority of the new policy?\n\nQ3. In line 351, what does “arefi9x” mean?\nQ4. It seems that there is no conclusion section in the paper?\n\n L1:\nI would suggest conducting an empirical study validating the proposed algorithm alongside the theorems. ", " The authors study the problems of verification and search for identifying causal DAGs from minimal-cost interventions. Using the concept of covered edges, they provide a theoretical result characterizing verifying sets, as well as a practical method for constructing such sets optimally. For the problem of search, the paper provides a novel algorithm with complexity bounded by a connection to the minimum-cost verifying set. Overall, this is a strong theoretical work which provides novel perspectives on minimal-cost identification of causal DAGs. The authors make a number of novel connections between the verification and search problems, some of which in particular are very interesting and even surprising:\n- Thm 9, which states that it is N+S to separate all covered edges in G to verify G (reduce G’s I-MEC to a singleton); this is a stronger version of prior results on the search problem (e.g. Thm 1 in [KDV17]) which say it is N+S to separate all edges to find the true graph (reduce all I-MECs to singletons).\n- That the search problem can be algorithmically linked to the verification problem by Lemma 21, with a strategy of intervening on cliques whose size can be bounded by intervention sizes for the verification problem.\n\nThe theoretical connections also give rise to efficient algorithms for performing verification and search on general graphs, which are a significant advance over prior approaches.\n\nFrom a presentation standpoint, the paper is generally well written with clear presentation of the results and comparison to prior work. However, it is also very dense, particularly with the novel results and algorithms, which are packed into the final 4 pages of the paper. For instance, Algorithm 1 would benefit from explanation of the function/intuition behind each step, as well as bounding the complexity. I would suggest that the authors move some of the (arguably) more peripheral content to the Appendix (e.g. lines 258-277), to make more space to explain the key results. 1) Have the authors considered adaptive solutions for the verification problem also, and whether this might provide further benefit?\n Yes, limitations and social impact have been addressed explicitly in the text.", " This paper considers the problem of recovering causal graphs from interventional data (the search problem). In addition to recovering an unknown causal graph, the authors study the problem of verifying whether a given directed causal graph is indeed the correct one using an optimal number of interventions (the verification problem). They propose lower bounds for both of the considered questions, which are based on the minimum verification number. Finally, they propose an algorithm for both problems and derive bounds on the number of performed interventions. 
Strengths: \n- The authors studied an interesting connection between the optimal number of interventions that are needed to verify whether the given directed causal graph is correct and the optimal number of interventions that one needs to recover the causal graph. Furthermore, they compare their results with existing works.\n\nWeaknesses: \n- Although there is an example where the minimum verification number is larger than the lower bounds derived in [PSS22], it is not clear how much better this bound is. It seems that the difference between these two values may be just a constant or just a constant ratio. \n- Lemma 35 is different from the lemma introduced in [SMG+20], and it is not obvious how it is possible to get Lemma 35 from Lemma 6 in [SMG+20]. Lemma 35 is then used for the proof of Lemma 21, which is one of the main results in comparing the achieved lower bounds with another work.\n- The proposed algorithm is fully composed of steps irrelevant to the minimum verification number. Moreover, the whole analysis of the algorithm follows the theorems from the other works, and it is unclear how this algorithm is different from already existing ones and why it would have better performance. \n- No experiments were done to compare the proposed algorithm with already existing algorithms.\n- Some of the results about the minimum verification number are easy corollaries of already existing theorems (such as Theorems 11 and 12).\n- Lemma 35 looks different from Lemma 6 in [SMG+20], but in the paper, it is asserted that they are the same. How can Lemma 35 be obtained from Lemma 6 [SMG+20]?\n- Why is the proposed search algorithm better than already existing algorithms, and what are the main differences between them?\n The authors briefly mentioned possible limitations for transferring the results to the real world, which are reasonable. None of the limitations have a negative social impact. \n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 4 ]
[ "Qr96OFz9Ang", "TiqNDYnG7i3", "qgyOBjThGu", "eliRm7ZFsWf", "eliRm7ZFsWf", "eliRm7ZFsWf", "eliRm7ZFsWf", "iG6l5Xx8cYO", "iG6l5Xx8cYO", "tP_2hC-qPwE", "tP_2hC-qPwE", "nips_2022_4F7vp67j79I", "nips_2022_4F7vp67j79I", "nips_2022_4F7vp67j79I" ]
nips_2022_yoLGaLPEPo_
Hyperbolic Feature Augmentation via Distribution Estimation and Infinite Sampling on Manifolds
Learning in hyperbolic spaces has attracted growing attention recently, owing to their capabilities in capturing hierarchical structures of data. However, existing learning algorithms in the hyperbolic space tend to overfit when limited data is given. In this paper, we propose a hyperbolic feature augmentation method that generates diverse and discriminative features in the hyperbolic space to combat overfitting. We employ a wrapped hyperbolic normal distribution to model augmented features, and use a neural ordinary differential equation module that benefits from meta-learning to estimate the distribution. This is to reduce the bias of estimation caused by the scarcity of data. We also derive an upper bound of the augmentation loss, which enables us to train a hyperbolic model by using an infinite number of augmentations. Experiments on few-shot learning and continual learning tasks show that our method significantly improves the performance of hyperbolic algorithms in scarce data regimes.
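The estimation step sketched in this abstract, a neural ODE refining distribution parameters estimated from scarce data, can be pictured roughly as follows; every name, size, and the choice of the torchdiffeq solver below is our illustrative assumption, not the paper's released code.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # pip install torchdiffeq

class ParamFlow(nn.Module):
    """Vector field d(mu)/dt for one class's augmentation mean."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(),
                                 nn.Linear(dim, dim))
    def forward(self, t, mu):
        return self.net(mu)

feats = torch.randn(5, 64)            # five support features of one class
mu0 = feats.mean(dim=0)               # naive estimate, biased under scarcity
flow = ParamFlow(64)
traj = odeint(flow, mu0, torch.linspace(0.0, 1.0, 10))
mu_refined = traj[-1]                 # refined estimate after integration
```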
Accept
This paper attempts to improve learning in hyperbolic space under limited data (the few-shot setting). In this regard, the authors propose a hyperbolic feature augmentation method to circumvent overfitting. Furthermore, as optimizing over a large number of sampled data can be expensive, the paper proposes to upper-bound the classification loss and optimize this tractable upper bound in the tangent space, which is Euclidean, making the approach much more practical. There was a wide variance among reviewer scores. We thank the authors and reviewers for actively engaging in discussion and taking steps towards improving the paper, including providing additional experiments. Finally, it would be appropriate to tone down the claim that this is the first paper to perform feature augmentation in hyperbolic space, as it might be unsubstantiated, cf. Weber et al., "Robust large-margin learning in hyperbolic space", NeurIPS 2020, which also augments by solving a certification problem.
val
[ "xdE9kQqaHJ", "KGDQ82VVKeO", "k3cCMmMqm8u", "W5Zcp3Umey8", "FgjmSzaxsOY", "oJCr6pvKUO2", "auBSRmeYId4", "h0YznPY1Ti7", "V5wUN21Wr_", "ooRuOV31f2x", "C-ICPik5ySd", "AC6wB_38xPf" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer KDoT,\n\nWe thank you for the review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. If yes, we would appreciate it if you could improve the rating of our work.\n\nBest,", " I've read the authors' responses, and I would like to maintain my recommendation.", " Thank you for the positive feedback. \n\n$\\textbf{Q1.}$ Have you tried to learn prototypes by using the provided category hierarchy (e.g., on tiered-Imagenet)? It does not seem to be the case.\n\n$\\textbf{A1:}$ The prototypes in our work are obtained by averaging features of training data (the prototypes are not learned by using the category hierarchy knowledge). Thank you for the comment. Learning the prototypes in hyperbolic space is an interesting direction. In doing so, we may need to explicitly discover and model the hierarchical relationships of given data, and use such hierarchical relationships to guide representation learning. We will acknowledge your comment as a future direction of study.\n", " Thank you for the feedback. Below, we address your comments individually.\n\n$\\textbf{Q1.}$ The network F1 outputs curvatures $\\\\{c_j\\\\}_{j=1}^n$ for $n$ classes. How to unify this $c$?\n\n$\\textbf{A1:}$ We directly produce a common curvature $c$ for all classes using F1, instead of producing class-specific curvatures $\\\\{c_j\\\\}_{j=1}^n$ and unifying them. We send all data of the $n$ classes together to the gradient flow network F1, concatenate all class representations $[\\boldsymbol{e}''_1, \\boldsymbol{e}''_2, \\cdots, \\boldsymbol{e}''_n]$, and use the output fully-connect layer in F1 to produce the gradient flow for the common curvature $c$, $\\frac{d c^{t}}{ d t} = f_o([\\boldsymbol{e}''_1, \\boldsymbol{e}''_2, \\cdots, \\boldsymbol{e}''_n])$. We will clarify our method and add the details in the revised version.\n\n\n$\\textbf{Q2.}$ How to update F2, and F3 via minimizing Eq. (16)? When the classifier $\\boldsymbol{W}^{\\star}$ is fixed, it seems that only the network F1 can be trained.\n\n$\\textbf{A2:}$ Eq. (16) uses a bi-level optimization procedure (an inner-loop and an outer-loop) to update F2 and F3. Although the classifier $\\boldsymbol{W}^{\\star}$ is not changed in the outer-loop, F2 and F3 can still be updated by differentiating through the procedure of the inner loop. Concretely, in the inner-loop, we use F2 and F3 to estimate the distribution parameters $\\boldsymbol{\\mu}$ and $\\boldsymbol{\\Sigma}$, and update the classifier $\\boldsymbol{W}^{\\star}$ by Eq. (14) based on $\\boldsymbol{\\mu}$ and $\\boldsymbol{\\Sigma}$. The whole procedure of the inner-loop is differentiable, and the computational graph of the inner-loop is stored in memory. \nIn the outer loop, the classifier $\\boldsymbol{W}^{\\star}$ is fixed, but not a constant. The gradients with respect to F2 and F3 are calculated via backpropagation from $\\boldsymbol{W}^{\\star}$ to $\\boldsymbol{\\mu}$ and $\\boldsymbol{\\Sigma}$, and finally to F2 and F3, according to the stored computational graph. In this case, F2 and F3 are updated.\n\n$\\textbf{Q3.}$ What is the ratio of training data $\\mathcal{D}_t$ to validation data $\\mathcal{D}_v$ in the training stage? \n\n$\\textbf{A3:}$ In the few-shot learning experiment, the ratio of $\\mathcal{D}_t$ to $\\mathcal{D}_v$ follows the standard protocol. 
In the 1-shot setting, the ratio is $1:15$, that is, we sample one sample as the training data and $15$ samples as the validation data in each task. In the 5-shot setting, the ratio is $1:3$, that is, we sample $5$ samples as the training data and $15$ samples as the validation data in each task. In the continual learning task, the ratio is $1:1$.\n\n$\textbf{Q4.}$ What is the value of the initial $c$?\n\n$\textbf{A4:}$ The initial value of the curvature $c$ is set as $-1$.\n\n$\textbf{Q5.}$ \nIt’s better to discuss the superiority of the method in terms of time consumption. According to Algorithm 2, the upper bound Eq. (14) simplifies the training of classifiers, but Eq. (16) is still difficult to compute. \n\n$\textbf{A5:}$ \nThank you for the comment. We add a wall-clock time experiment and compare our results with the state-of-the-art few-shot learning method FEAT [8]. Concretely, we measure the time consumption (seconds) in the training and test processes. The training time is measured over $10$ epochs, and the test time is measured over $600$ few-shot tasks.\nResults are shown in the following table.\nExperimental results show that our method has comparable time consumption with FEAT, and Eq. (16) is not overly difficult to compute in practice, although it has a seemingly complex optimization procedure. We use a simple hyperbolic gradient descent algorithm and set a small number of iterative steps (only $10$ steps) in the inner-loop, and the involved operations in the hyperbolic gradient descent algorithm are all element-wise operations, which makes Eq. (16) uncomplicated. \n\nMethod | 1-shot Training time | 5-shot Training time | 1-shot Test time | 5-shot Test time\n:--: | :--: | :--: | :--: | :--:\nFEAT [8] | $108$ | $151$ | $47$ | $58$\nOurs | $132$ | $195$ | $49$ | $69$\n\n[8] Ye et al. Few-shot learning via embedding adaptation with set-to-set functions. CVPR 2020.\n\n$\textbf{Q6.}$ Typo: In Table 2, the description does not match the content, e.g., \"Euclidean Metric\" (or \"Hyperbolic Metric\") and \"Model\".\n\n$\textbf{A6:}$ Thank you for pointing it out. We will correct them in the revised version.\n", " $\textbf{Q3.}$ The paper needs to demonstrate good performance on a reasonably hyperbolic dataset (e.g. the DISEASE dataset from HGCN [b]).\n\n$\textbf{A3:}$ Thank you for the suggestion. We add an experiment on the graph node classification task using the DISEASE, CORA, and PUBMED datasets. Concretely, we use HGCN [b] as the baseline. \nWe add our method between the graph convolutional network and the node classifier, and perform augmentation for node features. \nWe train the gradient flow networks to estimate data distributions of node features, and use augmented features to train the node classifier.\nWe report the average F1-score (%) with the standard deviation over 10 random experiments. \nResults are shown in the following table. This experiment shows that our method leads to improvements in the traditional setting as well.\nFor example, on the DISEASE dataset, the F1 score of HGCN is $74.5$%. In contrast, HGCN+Ours achieves $78.0$%, $3.5$% higher than HGCN. We will add the experiment in the revised version. \n\nMethod | DISEASE | CORA | PUBMED\n:--: | :--: | :--: | :--:\nHGCN [b] | $74.5 \pm 0.9$ | $79.9 \pm 0.6$ | $80.3 \pm 0.3$\nHGCN+Ours | $\mathbf{78.0 \pm 0.7}$ | $\mathbf{81.1 \pm 0.7}$ | $\mathbf{81.5 \pm 0.2}$\n\n[b] Chami et al. Hyperbolic Graph Convolutional Neural Networks. 
NeurIPS 2019.\n\n\n$\\textbf{Q4.}$ A Neural Manifold ODE [a] should be used for hyperbolic space.\n\n$\\textbf{A4:}$ We argue that the wrapped normal distribution should be estimated via the Euclidean neural ODE. Although the wrapped normal distribution is used to model hyperbolic data, the parameters of the distribution are in Euclidean space. The distribution parameters $\\boldsymbol{L}$ (decomposed from the augmentation covariance $\\boldsymbol{\\Sigma}$) and $\\boldsymbol{\\mu}$ (augmentation mean) are on the tangent space at the origin without the manifold constraint. The curvature $c$ does not have the manifold constraint as well. Thus, we use the Euclidean neural ODE to estimate such parameters in Euclidean space.\n\n\n[a] Lou et al. Neural Manifold Ordinary Differential Equations. NeurIPS 2018.\n\n\n\n$\\textbf{Q5.}$ The evaluation is not being done on sufficiently hyperbolic datasets to see a real significant difference in Table 2 and Figure 4.\n\n$\\textbf{A5:}$ Our evaluation is conducted on datasets with hyperbolic structures (Please see the answer to Q1 for details), and we argue that the current experimental results in Table 2 and Figure 4 are significant and can show differences between our method and the baselines. The reasons are as follows.\n\n\n(1) We compare our method with existing hyperbolic few-shot learning methods (i.e., CurAML [9] and HyperProto [3]) in Table 2, and our method achieves better performance than them. For example, in the 1-shot setting, our method has more than $3$% improvements compared with the state-of-the-art method CurAML. \n\n(2) We conduct ablation experiments in Table 2 and Figure 4. Compared with `w/o Aug’, our improvements are more than $2$% in the mini-ImageNet dataset, $3$% in the tiered-ImageNet dataset, and $3$% in the CIFAR-100 dataset. \n\n(3) In Table 2 and Figure 4, we also compare our method with some advanced few-shot learning and continual learning methods (e.g, FEAT and MeTAL in Table 2, and IL2A in Figure 4), and the goal is to show that our method can achieve advanced performance as well.\nThese advanced methods use some other techniques to improve performance. For example, FEAT uses an extra Transformer model and an auxiliary loss function to refine features, while we directly use the features extracted from the backbone. In this case, our method performs competitively or even exceeds some methods, showing the effectiveness of our method.\n\n[3] Khrulkov et al. Hyperbolic image embeddings. CVPR 2020.\n\n[9] Gao et al. Curvature-adaptive meta-learning for fast adaptation to manifold data. T-PAMI 2022.\n\n[25] Luo et al. Few-shot learning via feature hallucination with variational inference. WACV 2021.\n\n[27] Lazarou et al. Tensor feature hallucination for few-shot learning. WACV 2022.\n\n\n\n$\\textbf{Q6.}$ Some grammatical issues.\n\n$\\textbf{A6:}$ Thank you for the comment. We will improve the language of this manuscript.\n", " Thank you for the review. Below, we address your comments individually.\n\n$\\textbf{Q1.}$ The used datasets don't have hyperbolic structures.\n\n$\\textbf{A1:}$ We believe that the datasets used in our work have hyperbolic structures. The reasons are as follows.\n\n(1) Some works have shown that the used datasets (i.e., mini-ImageNet, tiered-ImageNet, CUB, and CIFAR) have hyperbolic structures, and the four datasets are commonly used to evaluate hyperbolic algorithms [c,d,e,f,g,h]. 
\nHaving an inherent hierarchy is an important factor to make a dataset suitable for hyperbolic representation, and images in these datasets have a lot of inherent hierarchical information.\nFor example, the work [f] suggests that the animal images of different species in the CUB dataset have hierarchical structures (e.g., images of birds and parrots have hierarchical structures). The work [c] shows that there are hierarchical structures between an image and its local part images in the mini-ImageNet dataset. We follow these works to use the four datasets.\n\n(2) To further address your concern, we compute the $\\delta$-Hyperbolicity [c,i,j] to measure whether the used datasets have hyperbolic structures.\n$\\delta$ closer to $0$ indicates a stronger hyperbolic structure of a dataset. The values of $\\delta$ are $0.24$, $0.21$, $0.25$, and $0.23$ on the mini-ImageNet, tiered-ImageNet, CUB, and CIFAR-FS datasets, respectively. Please note that some widely used tree-likeness graph datasets have larger $\\delta$ than that in the four datasets. For example, $\\delta$ of the CORA dataset is $0.35$, and $\\delta$ of the PUBMED dataset is $0.36$, where CORA and PUBMED are widely used to evaluate hyperbolic graph neural networks [b]. \n\nIn addition, we add an extra experiment using the graph data suggested by you and our method comfortably improves upon HGCN [b]. Please see the answer to your question Q3. \n\n\n[b] Chami et al. Hyperbolic Graph Convolutional Neural Networks. NeurIPS 2019.\n\n[c] Khrulkov et al. Hyperbolic Image Embedding. CVPR 2020.\n\n[d] Fang et al. Kernel Methods in Hyperbolic Spaces. ICCV 2021.\n\n[e] Gao et al. Curvature-Adaptive Meta-Learning for Fast Adaptation to Manifold Data. T-PAMI 2022.\n\n[f] Yan et al. Unsupervised Hyperbolic Metric Learning. CVPR 2021.\n\n[g] Guo et al. Clipped Hyperbolic Classifiers Are Super-Hyperbolic Classifiers. CVPR 2022.\n\n[h] Ermolov et al. Hyperbolic Vision Transformers: Combining Improvements in Metric Learning. CVPR 2022.\n\n[i] Adcock et al. Tree-like structure in large social and information. ICDE 2013.\n\n[j] Zhang et al. Lorentzian Graph Convolutional Networks. WWW 2021.\n\n\n$\\textbf{Q2.}$ The few shot descriptor should not be an essential test dataset characteristic of a general feature augmentation method.\n\n$\\textbf{A2:}$ \nThe goal of this manuscript is an augmentation method for the scarce data setting, and our method shows the benefit of augmentation when the data is scarce. This case studied in our paper addresses very significant and challenging problems, as compared to data augmentation in the traditional setting. The reasons are as follows.\n\n(1) In the open environment of the real-world, many applications provide scarce data, as collecting and labeling data is high-cost. Data augmentation is a commonly used scheme to solve this problem [k,l].\n\n(2) Compared with the traditional setting with sufficient data, data augmentation in the few-shot setting is more challenging, since capturing the real distribution from scarce data is difficult [m].\n\nMeanwhile, our method can also be applied to the traditional setting. According to your suggestion, we add an experiment on the graph node classification task, and our method leads to improvements (Please see the answer to Q3).\n\n\n[k] Zhang et al. Deep Adversarial Data Augmentation for Extremely Low Data Regimes. T-CSVT 2021.\n\n[l] Li et al. Adversarial Feature Hallucination Networks for Few-Shot Learning. CVPR 2020.\n\n[m] Jiang et al. 
Deceive D: Adaptive Pseudo Augmentation for GAN Training with Limited Data. NeurIPS 2021.", " $\\textbf{Q5.}$ Can the method be made in an end-to-end manner for some example tasks?\n\n$\\textbf{A5:}$\nIn our implementation, the neural ODE and the feature extractor are trained separately. Meanwhile, our method can be trained in an end-to-end manner as well. Our augmentation process is differentiable, and thus the gradients with respect to the feature extractor can be calculated via backpropagation. We empirically observe that our method achieves similar performance in few-shot learning and continual learning no matter whether it is trained in an end-to-end manner. Thus, we separately train the feature extractor and the neural ODE for less resource consumption and faster training speed (we can directly use the pre-trained feature extractor from other works).\n", " Thank you for the positive feedback. Below, we address your comments individually.\n\n$\\textbf{Q1.}$ Empirical evaluation of these gradient flow networks to further verify.\n\n$\\textbf{A1:}$ Thank you for the suggestion. We use two experiments to demonstrate the generalization ability of the gradient flow network.\n\n(1) In Section 5.2 of the Supplementary Materials, we have conducted a toy experiment by training the gradient flow networks to estimate Gaussian distributions (guided by the MSE loss). We evaluate their performance on unseen Gaussian distributions. Results show that, in the scarce data setting, the trained gradient flow networks can better identify the underlying distribution in comparison to directly computing the distribution parameters from data. For example, when estimating the parameter mean, the MSE error of our method is $2.96 \\times 10^4$, far less than the error of directly computing, which is $2.41 \\times 10^6$. \n\n(2) We add a new experiment to further demonstrate this point. We randomly generate some hyperbolic wrapped normal distributions, and sample few data from them ($5$ samples from each distribution), where the label of a sample is set by its corresponding distribution. Then we train the gradient flow networks via the bi-level optimization in Eq. (16). Finally, we use the gradient flow networks to recover unseen hyperbolic wrapped normal distributions. Results are that the MSE errors of directly computing distribution parameters from given data are $9.08 \\times 10^3$, $5.12 \\times 10^5$, and $9.05$ for the mean, covariance matrix, and curvature, respectively. In contrast, the MSE errors of using the trained gradient flow networks are $41.69$, $5.17 \\times 10^3$, and $0.42$ for the mean, covariance matrix, and curvature, respectively. These results show that the trained gradient flow networks are capable of generalizing to unseen data in the scarce data setting. We will add the experimental results to the revised version.\n\n\n$\\textbf{Q2.}$ A detailed analysis of the inner- and outer-loop optimization process and hyper-parameters for real usage.\n\n$\\textbf{A2:}$ In the bi-level optimization process, hyper-parameters include the learning rate in the two loops, the choice of the optimizer in the two loops, and the number of iterative steps in the two loops. The inner-loop optimization plays an important role in efficiently solving the bi-level optimization. An advanced optimizer and a large number of iteration steps in the inner-loop may lead to a good inner-loop solution, benefiting the outer-loop solution. 
However, this may increase resource consumption and cause the exploding gradient issue. Using a simpler optimizer and a small number of iteration steps can reduce the resource consumption, but it may lead to a biased solution of the inner loop, resulting in poor performance. \n\nIn the implementation, we recommend using a simple gradient descent optimizer in the inner loop to reduce computational complexity, and using the Adam optimizer in the outer loop to reduce the bias of optimization. \nAs for the number of iterative steps, we empirically observe that changing the number of iterations of the inner loop in the range $[5,20]$ does not deteriorate the performance, and setting the number of iterations in the outer loop larger than $1000$ makes the model converge.\nIn terms of learning rates of the two loops, we change them in the range $[0.0001,0.001]$, and all settings give good performance. This shows the robustness of our method to hyper-parameters if they are chosen within a suitable interval.\nWe will add the analyses and details to the revised version.\n\n\n\n\n$\textbf{Q3.}$ A projection of the sampled noise $\hat{\boldsymbol{v}}$ into the tangent space at the origin is needed.\n\n$\textbf{A3:}$ \nYes, we indeed have applied such a projection to $\hat{\boldsymbol{v}}$.\nWe use the Poincaré ball model of hyperbolic space in this work, and such a projection in the Poincaré ball is defined as a re-scaling function [a,b], i.e., $\hat{\boldsymbol{v}} \gets \frac{\hat{\boldsymbol{v}}}{({\lambda}_{\boldsymbol{p}}^{c})^2}$, where $\boldsymbol{p}$ is the tangent point. In the tangent space at the origin, the projection is $\hat{\boldsymbol{v}} \gets \frac{1}{4} \hat{\boldsymbol{v}}$. We will add it to the revised version.\n\n[a] Nickel et al. Poincaré Embeddings for Learning Hierarchical Representations. NeurIPS 2017.\n\n[b] Bécigneul et al. Riemannian Adaptive Optimization Methods. ICLR 2019.\n\n$\textbf{Q4.}$ Can the code, and also the reverse mapping algorithm for the visualization, be made public?\n\n$\textbf{A4:}$ Yes, we will release the code of the method and the reverse mapping algorithm once the manuscript is accepted.", " This paper proposes a hyperbolic feature augmentation method to avoid the overfitting problem of hyperbolic learning in the few-data setting. The authors extend the method by using an infinite number of augmentations through an interesting upper bound analysis, and promising results on few-shot learning and continual learning are given. Strengths:\nThe motivation of the method is clear and the paper is well-organized/presented; the method is novel and straightforward to understand, and the proposal to do learning with infinite data augmentation provides a rigorous and sound way to train the classifier. Experiments on few-shot learning provide promising results in this area. \n\n\nWeaknesses: \nw1. The main weakness is related to the gradient flow networks: there are multiple things to model, and a practical question is whether these gradient flow networks learn correctly and generalize, especially in a few-data setting; it would make the paper stronger if the authors could provide any empirical evaluation of these networks to further verify this.\n\nw2: The optimization of the gradient flow networks is a little complicated, involving both the inner- and outer-loop optimization process plus many hyper-parameters; the authors should provide a detailed analysis in this respect for the real usage of this method in practice. 
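Pulling together the sampling pieces mentioned in A3 above (tangent-space noise, the 1/4 re-scaling at the origin, then the exponential map), here is a hedged sketch of drawing wrapped-normal samples on the Poincaré ball. The exp-map formula is the standard one for curvature -c, transport of the mean away from the origin is omitted, and none of this is the authors' released code.

```python
import math
import torch

def exp0(v, c=1.0):
    # Exponential map at the origin of the Poincare ball of curvature -c.
    sc = math.sqrt(c)
    norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-8)
    return torch.tanh(sc * norm) * v / (sc * norm)

def sample_wrapped_normal_at_origin(n, L, c=1.0):
    """L: Cholesky factor of the tangent-space covariance Sigma."""
    eps = torch.randn(n, L.shape[0])
    v = (eps @ L.T) / 4.0      # re-scaling 1/lambda_0^2 = 1/4 at the origin
    return exp0(v, c)          # samples land strictly inside the unit ball

pts = sample_wrapped_normal_at_origin(1000, L=0.5 * torch.eye(8))
assert pts.norm(dim=-1).max() < 1.0
```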
\n\nOverall, this paper is novel to me and I'm glad to see hyperbolic models in this setting; the method is clear and straightforward, though detailed guidance on the method should be made available. \n Q1: I think a projection of the sampled noise hat v into the tangent space at the origin is needed? \nQ2: can the code, and also the reverse mapping algorithm for the visualization, be made public? \nQ3: can the method be made in an end-to-end manner for some example tasks? discussed above. ", " This paper introduces hyperbolic feature augmentation. This is done by way of introducing a neural-ODE distribution estimation scheme, whose continuous optimization process can approximate the real data distribution reasonably well in scarce data regimes. An upper bound of the augmentation loss is also derived; the paper makes use of this bound to train the final models without explicit augmentation. Experimental results on several few-shot datasets give some evidence for the efficacy of the method. ### Strengths\n\n1. The paper presents a novel early feature augmentation method in hyperbolic space. Feature augmentation has not been explored much outside of Euclidean space, and I believe this is an interesting area for work.\n\n2. The Table 1 results seem to give reasonable evidence for the efficacy of the introduced method. That is, under the extreme data scarcity seen in the few-shot setting, the method does outperform several reasonable baselines, to a statistically significant extent.\n\n### Weaknesses\n\n1. While the method is an early work in hyperbolic feature augmentation, I am unsure as to why the paper focuses so much on few-shot learning. All of the test datasets are few-shot, and moreover, there is no particular reason to assume they have an inherently hierarchical structure. This is rather puzzling. If the method is truly a good hyperbolic feature augmentation method, it should be helpful regardless of the few-shot setting. I think the experiments are currently quite flawed and incomplete. At the minimum, the paper needs to demonstrate good performance on a reasonably hyperbolic dataset (e.g. the DISEASE dataset from HGCN [b]), and at best, it should do this for several datasets with hyperbolic structure that are outside of the few-shot setting.\n\nCertainly, if the margin is the best for the few-shot case, so be it. But I would expect cases with actual hyperbolic structure to work better. Moreover, the few-shot descriptor should not be an essential test dataset characteristic of a general feature augmentation method.\n\nUltimately, this additional evaluation will help determine if the improvement is truly due to the geometry captured by the method, or if the improvement is mostly due to some unrelated architectural modification.\n\n2. Regular Euclidean Neural ODEs are used to learn the feature densities, despite the fact that the method is clearly about hyperbolic feature augmentation. This should be augmented by using a Neural Manifold ODE [a] for hyperbolic space.\n\n3. The results in Tables 2 and 4 seem to be relatively marginal. Again, this increases my suspicion that the evaluation is not being done on sufficiently hyperbolic datasets to see a real significant difference due to the captured geometry (i.e. due to the fact that the method is hyperbolic and not Euclidean). \n\n### Verdict\n\nOverall, this paper has some promise as an early work attempting to do hyperbolic feature augmentation. 
However, the lack of proper evaluation, as described above, makes me question the significance of the proposed method. As a result, I recommend a borderline reject rating for this paper.\n\n### References\n\n[a] Neural Manifold ODEs. https://arxiv.org/abs/2006.10254\n\n[b] Hyperbolic GCN. https://arxiv.org/abs/1910.12933 ### Detailed Comments, Corrections and Questions\n\nOverall the writing in the paper is understandable, but it has some grammatical issues.\n\nIn terms of questions, I can perhaps reiterate one of the key points from the weaknesses section here: why is the feature augmentation method focused so much on datasets that don't have hyperbolic structure, and moreover, so focused on the extreme data-scarce case (few-shot) as opposed to more traditional settings? Data augmentation should be helpful in general, not just at the extremes. \n\nA non-exhaustive list of minor corrections is given below:\n\nL13: \"low data regimes\" -> \"scarce data regimes\". I will only highlight this correction once, but it should be applied many times throughout the paper.\n\nL58: \"in the hyperbolic space\" -> \"in hyperbolic space\"\n\nL305: \"much computational loads\" -> \"much computational load\" The paper does reasonably address limitations in the final paragraph. Namely, the paper states the assumption that the data used with their method has a uniform hierarchical structure, whereas in actuality, real-world data may be more complex and have hierarchical structure with varying local structures.", " This paper proposes a hyperbolic feature augmentation method to prevent overfitting when limited data is given. The authors derive an upper bound of the loss function with infinite data augmentation. Therefore, the classifiers and distribution estimators can be trained in a bi-level optimization manner without complex hyperbolic operations. They also provide extensive experimental results on few-shot learning and continual learning to demonstrate the effectiveness of their proposed method. Strengths\n1.\tThis paper uses Neural ODEs to estimate the distribution of hyperbolic features and then samples augmented features from the learned distribution.\n2.\tThis paper provides an upper bound of the augmentation loss using an infinite number of augmentations. This upper bound allows the authors to perform efficient augmentation without complex computation.\n3.\tThe authors conduct extensive comparison experiments, ablation experiments and visualization experiments. The results demonstrate the effectiveness of their proposed feature augmentation method.\n\n\nWeaknesses and Questions\n1.\tFor the distribution estimation, this paper uses three gradient flow networks to learn different parameters. According to Section 4.3, the networks learn specific parameters for each class using different inputs (i.e., $\bar x_j$). That is to say, the network $F_1$ will output $\{c_j\}_{j=1}^n$ for $n$ classes. However, in Line 142, the authors point out that the parameter $c$ is shared between all classes. How to unify this $c$?\n2.\tHow to update $F_2$ and $F_3$ via minimizing Eq. (16)? When the classifiers are fixed, it seems that only the network $F_1$ can be trained.\n3.\tSome experimental details are missing.\n3.1.\tWhat is the ratio of training data $D_t$ to validation data $D_v$ in the training stage?\n3.2.\tWhat is the value of the initial $c$?\n4.\tIn Table 2, the metric-based baseline FEAT achieves similar accuracy to the proposed method. 
It’s better to discuss the superiority of the method in terms of time consumption. According to Algorithm 2, the upper bound Eq. (14) simplifies the training of classifiers, but Eq. (16) is still difficult to compute.\nTypo: In Table 2, the description does not match the content, e.g., \"Euclidean Metric\" (or \"Hyperbolic Metric\") and \"Model\".\n see weakness and questions Limitations are not given", " The paper considers the problem of feature augmentation, a special kind of data augmentation, in some low-data regime tasks such as few-shot learning. \nThe main idea is to consider a wrapped normal distribution for each category and use the reparametrization trick (in the tangent space of the hyperbolic manifold) to obtain a differentiable algorithm for sampling augmented data. A Neural ODE is then used to estimate the parameters of the wrap distribution. Since optimizing the problem for a large number of sampled data can be difficult, the paper proposes in Section 4.4 to upper bound the classification loss and optimize a tractable upper bound in the tangent space, which is Euclidean. \nAs explained in Section 4.5, the training process is done by exploiting the validation data in a bi-level optimization manner.\nExperimental results in the few-shot and continual learning tasks show the relevance of the approach. Strengths: The paper is clear and well written. I think that the approach is original and significant in the hyperbolic manifold learning community. In particular, it proposes different tricks to make the training algorithm tractable so that samples represented on a hyperbolic manifold are high quality (as shown in Figure 3) and improve classification performance. \n\nWeaknesses: The main \"weakness\" of the approach is that it does not quantitatively outperform baselines by a large margin (many baselines return similar performance on many datasets).\nHowever, I still think that the methodology is interesting and can influence future work in the same direction. - Have you tried to learn prototypes by using the provided category hierarchy (e.g., on tiered-Imagenet)? It does not seem to be the case. N/A" ]
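The reviews above repeatedly summarize the key sampling mechanism: a wrapped normal per category, with the reparametrization trick applied in the tangent space of the hyperbolic manifold. As a point of reference for that discussion, here is a minimal, hypothetical sketch of such a sampling step, assuming the Lorentz model of hyperbolic space and the standard wrapped-normal construction (Gaussian noise drawn in the tangent space at the origin — the projection Q1 earlier asks about corresponds to lifting the noise with a zeroed time-like coordinate — then parallel transport to the class mean and the exponential map). This is not the paper's released code, and all names are illustrative.

```python
import torch

def lorentz_inner(x, y):
    # Lorentzian inner product <x, y>_L = -x0*y0 + sum_{i>=1} xi*yi
    return -x[..., 0:1] * y[..., 0:1] + (x[..., 1:] * y[..., 1:]).sum(-1, keepdim=True)

def sample_wrapped_normal(mu, sigma, n_samples):
    """Reparametrized samples from a wrapped normal centred at `mu` on the
    Lorentz model H^d (curvature -1). `mu` lies on the hyperboloid; `sigma`
    is a scalar or (d,)-shaped scale. Both names are assumptions."""
    d = mu.shape[-1] - 1
    o = torch.zeros_like(mu); o[0] = 1.0                        # hyperboloid origin
    v_hat = torch.randn(n_samples, d) * sigma                   # Euclidean noise
    v = torch.cat([torch.zeros(n_samples, 1), v_hat], dim=-1)   # lift into T_o H^d
    alpha = -lorentz_inner(o, mu)                               # equals mu[0]
    u = v + lorentz_inner(mu, v) / (alpha + 1.0) * (o + mu)     # transport to T_mu
    norm_u = torch.sqrt(torch.clamp(lorentz_inner(u, u), min=1e-9))
    return torch.cosh(norm_u) * mu + torch.sinh(norm_u) * (u / norm_u)  # exp map

# Example: 32 augmented features around the origin of H^2.
mu = torch.tensor([1.0, 0.0, 0.0])
feats = sample_wrapped_normal(mu, sigma=0.1, n_samples=32)
```

Because every step is differentiable in `mu` and `sigma`, gradients can flow back to the distribution estimators, which is what makes the augmentation trainable end to end — the property debated in the reviews above.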
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 4, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 3, 4 ]
[ "ooRuOV31f2x", "C-ICPik5ySd", "AC6wB_38xPf", "C-ICPik5ySd", "ooRuOV31f2x", "ooRuOV31f2x", "V5wUN21Wr_", "V5wUN21Wr_", "nips_2022_yoLGaLPEPo_", "nips_2022_yoLGaLPEPo_", "nips_2022_yoLGaLPEPo_", "nips_2022_yoLGaLPEPo_" ]
nips_2022_c4o5oHg32CY
TokenMixup: Efficient Attention-guided Token-level Data Augmentation for Transformers
Mixup is a commonly adopted data augmentation technique for image classification. Recent advances in mixup methods primarily focus on mixing based on saliency. However, many saliency detectors require intense computation and are especially burdensome for parameter-heavy transformer models. To this end, we propose TokenMixup, an efficient attention-guided token-level data augmentation method that aims to maximize the saliency of a mixed set of tokens. TokenMixup provides ×15 faster saliency-aware data augmentation compared to gradient-based methods. Moreover, we introduce a variant of TokenMixup which mixes tokens within a single instance, thereby enabling multi-scale feature augmentation. Experiments show that our methods significantly improve the baseline models’ performance on CIFAR and ImageNet-1K, while being more efficient than previous methods. We also reach state-of-the-art performance on CIFAR-100 among from-scratch transformer models. Code is available at https://github.com/mlvlab/TokenMixup.
Accept
The paper is about speeding up the saliency computation used in gradient-based mixup algorithms (PuzzleMix, Co-Mixup, etc.). The authors propose employing the attention-layer output of the transformer to replace the expensive saliency computation. Since the focus of the paper is on improving the speed and accuracy of gradient-based mixup algorithms, I strongly suggest the authors remove Tables 1 and 2, as they only show irrelevant accuracy results across different network architectures. Since the main focus of the paper is on improving over the previous mixup algorithms, I suggest the authors run the following experiment and insert it as Table 1 in the main paper; the current Table 5 does not show a fair comparison across different mixup methods. 1) Fix the network architecture; denote it as A (i.e., CCT-7/3x1). 2) Report the following accuracy results: A; A + input mixup; A + manifold mixup; A + cutmix; A + puzzlemix; A + co-mixup; A + horizontal TM; A + vertical TM; A + horizontal TM + vertical TM. Here, you would need to make sure to backpropagate all the way through the individual pixels, as opposed to tokens, to properly run the gradient-based mixup algorithms. Also, I suggest the authors add the end-to-end forward-backward computation time per image in addition to the avg. latency in Table 3. Overall, I like the proposed method of speeding up the saliency computation via attention maps. However, the experiment protocol needs a lot of improvement.
test
[ "bcd4-8sNmzk", "Ez3QBLlS5GG", "DWt6W54RUcb", "0Ek_5Umyic7", "8nDPPRXyLwn", "Lvhc6dZrY_8", "Btscd3gRP7G", "sHLKpL7c7V9", "dkX4KprRGN7", "kk9v7etWaum", "aeaxyCT2hk" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the detailed clarifications and additional experiments.\n\nThe response addresses most of my concerns. Especially, the response to Q2 seems to show an interesting difference between gradient-based methods and attention-based methods. Also, the response to Q3 seems fair enough.\n\nBased on the response, I have raised my score.", " Dear reviewers,\n\nWe appreciate the insightful comments and your time in reviewing our paper. We have responded to the reviewers' questions and uploaded the revised version of the supplement. Please go over our responses and let us know if you have any further questions. Thank you!", " Thank you very much for the strong support for our method’s novelty. We are glad that the reviewer enjoyed our analyses. The questions will be addressed below.\n\n**Question 1. Is there extra time complexity in using ScoreNet?**\n\nThe ScoreNet is a small MLP classifier, which is applied to an intermediate layer. Considering its size, ScoreNet adds close-to-zero overhead. When measured on CIFAR-100 with RTX 3090 using batch size of 128, the average latency caused purely by ScoreNet (both forward and back propagation) was 0.004 second, which is only 1.9% of the average latency of one iteration. This is a negligible scale.\n\n**Question 2. Need for exhaustive hyperparameter search?**\n\nWe have answered the same question for reviewer Muyi. For your convenience, we repeated the answer here.\n\nLike most data augmentation methods, hyperparameter optimization is inevitable. However, our methods have at most two major hyperparameters; HTM entails $\\rho$ and $\\tau$, while VTM entails $\\kappa$. This is similar to previous Mixup methods; Input Mixup [1] (or Manifold Mixup [2]) with one hyperparameter and Puzzle Mix [3] with 4 major hyperparameters. Furthermore, as provided in the sensitivity analysis in Table 1 in Section C of the supplement, our method is quite robust to a wide range of hyperparameters. \n\n[1] Zhang, H., Cisse, M., Dauphin, Y. N., & Lopez-Paz, D. (2018). mixup: Beyond Empirical Risk Minimization. *ICLR*.\n\n[2] Verma, V., Lamb, A., Beckham, C., Najafi, A., Mitliagkas, I., Lopez-Paz, D., & Bengio, Y. (2019). Manifold mixup: Better representations by interpolating hidden states. *ICML*.\n\n[3] Kim, J. H., Choo, W., & Song, H. O. (2020). Puzzle mix: Exploiting saliency and local statistics for optimal mixup. *ICML*.\n\n**Question 3. Request for additional baseline models on ImageNet.**\n\nAs suggested, we applied Horizontal TokenMixup with minor modification to the Pyramid Vision Transformer to evaluate its performance on ImageNet-1K. Please have a look at the general comment for experiment tables. We have also trained the ViT-Lite on CIFAR-100.\n\nHowever, PVT uses a convolution layer for key/value token projection. So, PVT is not resilient to the change in the number of tokens. Thus, we could not report the performance with VTM.\n\n**Question 4. Why does Manifold Mixup fail in contrast to TokenMixup?**\n\nIn AlignMixup [1], LeViT was trained on Imagenet with Input Mixup, Cutmix and Manifold Mixup. Input Mixup and Cutmix achieved top-1 accuracy of 68.3% and 68.7% each, while Manifold Mixup only achieved 67.8%. We believe this aligns with our observation that manifold mixup does not work relatively well in transformer models.\n\nFurthermore, we observed that Manifold Mixup severely damages the original attention map, becoming too smooth with frequent artifacts. 
Please have a look at Section J of the revised supplement for relevant qualitative examples; we have added Figure 3. We randomly sampled 8 mixed pairs each to compare the attention maps from the layer after TokenMixup and Manifold Mixup. Specifically, several attention heads (*e.g.* head 1 and head 2 in Supplement Figure 3) seemed to be defunct, rendering a very smooth attention distribution regardless of input. We believe these examples explain why Manifold Mixup did not perform well in our experiment settings. Note that the attention maps were retrieved from the CCT model trained on CIFAR-100.\n\n[1] Venkataramanan, S., Kijak, E., Amsaleg, L., & Avrithis, Y. (2022). AlignMixup: Improving Representations By Interpolating Aligned Features. CVPR.", " We appreciate the insightful comments and suggestions. We tried to address all the reviewer's questions below. We hope that our answers resolve the reviewer's concerns and lead to stronger support.\n\n**Question 1. Request for additional ablations for the TokenMixup components.**\n\nFirst of all, we provided a thorough ablation study of all four components in Table 4 of the main paper. As requested, here we provide an additional ablation study in combination with Input Mixup & Cutmix. Since all our modules, except for “Sample difficulty assessment”, work with tokens and their attention scores, the only valid combination with image-level mixup methods is “Input Mixup & Cutmix + Sample difficulty assessment”. The experimental results are presented in the table below. Interestingly, we observed that ScoreNet slightly improved other mixup methods, while falling short compared to our HTM and VTM.\n\n| Component compositions | CIFAR-100 Accuracy |\n|---|:--:|\n| CCT-7/3x1 + Input Mixup & Cutmix | 82.87 |\n| &nbsp;&nbsp; &nbsp;&nbsp; (+) Sample difficulty assessment (ScoreNet) | 82.96 |\n| &nbsp;&nbsp; &nbsp;&nbsp; (+) Horizontal TM | 83.55 |\n| &nbsp;&nbsp; &nbsp;&nbsp; (+) Vertical TM | 83.42 |\n\n**Question 2. Why is the attention-based approach better than the gradient-based approach?**\n\nIn our qualitative analysis, we observed that gradient-based saliency maps occasionally have irregular representations of the input image, while attention-based saliency maps are usually smoother than gradient maps. To quantitatively demonstrate this tendency, here we compare the smoothness statistics of the two saliency maps. In the table below, we provide the average Total Variation and Variance of the saliency map. Total variation is derived with either the L1 or L2 norm, computed as\n\n$\begin{equation} TV_{L1} = \sum_{i,j} |S_{i+1,j} - S_{i,j}| + |S_{i,j+1} - S_{i,j}|\end{equation}$\n\n$\begin{equation} TV_{L2} = \sum_{i,j} \sqrt{|S_{i+1,j} - S_{i,j}|^2 + |S_{i,j+1} - S_{i,j}|^2}\end{equation}$\n\nwhere $S$ is the attention- or gradient-based saliency map. Also note that both saliency maps were normalized to sum to 1. From the table below, we can see that the gradient-based method renders higher TV and variance, indicating that its overall distribution is sharper than that of attention maps. A small NumPy sketch of how these statistics can be computed is given after the table.\n\n| | Attention-based | Gradient-based |\n|--|:--:|:--:|\n|Total Variation (L1) | 0.5454 | 1.3176 |\n|Total Variation (L2) | 0.1441 | 0.3910 |\n|Variance | 4.88E-6 | 1.95E-5 |
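For clarity, here is a minimal NumPy sketch of how the smoothness statistics in the table above can be computed. This is not the authors' code — just one straightforward way to obtain these quantities under the stated normalization, with boundary differences dropped:

```python
import numpy as np

def smoothness_stats(saliency: np.ndarray):
    """Total variation (L1 and L2) and variance of a 2-D saliency map,
    normalized to sum to 1 as described above."""
    s = saliency / saliency.sum()
    dy = np.abs(np.diff(s, axis=0))   # |S[i+1, j] - S[i, j]|
    dx = np.abs(np.diff(s, axis=1))   # |S[i, j+1] - S[i, j]|
    tv_l1 = dy.sum() + dx.sum()
    # L2 variant: pair the two differences on the common interior grid.
    tv_l2 = np.sqrt(dy[:, :-1] ** 2 + dx[:-1, :] ** 2).sum()
    return tv_l1, tv_l2, s.var()
```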
**Question 3. The method seems to be sensitive to hyperparameter settings.**\n\nWe have shown in Section C of the supplement that our method is robust to the choice of hyperparameters. The performances were consistently above their baselines.\n\nAs a concrete comparison with previous works, Puzzle Mix, which was published in ICML 2020, reported the standard deviation of CIFAR-100 errors across reasonable hyperparameter ranges. For hyperparameters $\beta, \gamma, \eta, \xi$, the reported standard deviations were 0.22, 0.20, 0.18, 0.27, respectively. In HTM, the standard deviations for $\rho, \tau$ are 0.24 and 0.29, while for $\kappa$ in VTM it is 0.20. Considering that our sensitivity tests also included the harshest settings, we believe that our method is not especially sensitive to hyperparameters compared to previous outstanding works.\n\n| PuzzleMix parameters | Range | Mean Top-1 Error % (STD) |\n|:--:|--|:--:|\n|$\beta$| [0.8, 1.6] | 16.19 (0.22) |\n|$\gamma$| [0.0, 1.0] | 16.43 (0.20) |\n|$\eta$| [0.1, 0.35] | 16.37 (0.18) |\n|$\xi$| [0.4, 1.0] | 16.25 (0.27) |\n\n\n| TokenMixup parameters | Range | Mean Top-1 Error % (STD) |\n|:--:|--|:--:|\n|$\tau$| {0.0, 1.0, 1.5, 2.0, 2.5, inf} | 16.84 (0.24) |\n|$\rho$| {0.0, 0.001, 0.003, 0.005, 0.007, 0.010, inf} | 16.83 (0.29) |\n|$\kappa$| {0, 3, 5, 10, 25, 50, 100} | 16.89 (0.20) |\n\n**Question 4. How is the performance of using HTM and VTM simultaneously?**\n\nWe report experimental results with the CIFAR-100 models, CCT and ViT-Lite, when Horizontal TokenMixup and Vertical TokenMixup are applied simultaneously. From the table below, we can observe that utilizing both methods consistently improves the baseline performance, as well as the performance when only one of the methods is used. \n\n| | ViT-Lite-7/4 | CCT-7/3x1 |\n|--|:--:|:--:|\n| vanilla model | 79.44 | 82.87 |\n| &nbsp;&nbsp;&nbsp;&nbsp; (+) Horizontal TM (ours) | 80.53 | 83.55 |\n| &nbsp;&nbsp;&nbsp;&nbsp; (+) Vertical TM (ours) | 80.44 | 83.42 |\n| &nbsp;&nbsp;&nbsp;&nbsp; (+) Horizontal TM + Vertical TM (ours) | **80.65** | **83.56** |\n\n**Question 5. Request for an experiment on other transformer architectures.**\n\nWe further applied our methods to two more transformer architectures: ViT-Lite-7/4 on CIFAR-100 and Pyramid Vision Transformer-B0 on ImageNet-1K. Please refer to the general comment above for the experimental tables. We observed that our methods consistently improve their baselines.", " We thank all four reviewers for their strong support and constructive comments on our work. We are glad that the reviewers found our method novel and interesting. Here, we add a few more experimental results on CIFAR-100 and ImageNet-1K.\n\nAs suggested by the reviewers, we applied our method to one of our baselines, ViT-Lite-7/4, for the CIFAR-100 experiment. In the table below, * denotes the reproduced baseline performance obtained by training for longer (1500) epochs. For the ImageNet experiments, we applied Horizontal TokenMixup to the PVTv2-B0 model. We observed meaningful improvements in performance in both settings.\n\n| | CIFAR-100 Accuracy |\n|-----|:-----------------:|\n| VIT-Lite-7/4 (1500) * | 79.44 |\n| VIT-Lite-7/4 (1500) + **Horizontal TM (ours)** | **80.53** |\n| VIT-Lite-7/4 (1500) + **Vertical TM (ours)** | 80.44 |\n\n\n| | ImageNet-1K Top1 | ImageNet-1K Top5 |\n|-----|:-----------------:|:-----------------:|\n| PVTv2-B0 | 70.46 | 90.16 |\n| PVTv2-B0 + **Horizontal TM (ours)** | **71.20** | **90.43** |", " Thank you for the detailed review and comments. We appreciate your strong support for our work. The questions are addressed below.\n\n**Question 1. 
Does ScoreNet cause bias in saliency estimation if trained simultaneously with the main model?**\n\nThis is a very interesting question. We have investigated the effect of the supervision on ScoreNet by applying a stop-gradient before ScoreNet propagation. However, we did not find a large difference in the experimental results, implying that the estimated saliency map is not severely perturbed by supervision signals from ScoreNet training. We concluded that whether to use stop-gradient can be determined empirically. Specifically, we applied stop-gradient in our ImageNet experiment and did not in the CIFAR experiments.\n\n**Question 2. Are there methods other than ScoreNet for filtering data samples with higher confidence?**\n\nScoreNet is a small classifier applied to an intermediate layer. A straightforward alternative would be to directly use the final confidence score as a proxy for sample difficulty. Relevant experimental results were reported in Section F of the supplement. In that experiment, we found that the alternative performed worse than ScoreNet, while even slowing down the iteration time by 47% due to an additional forward pass to compute the final confidence score. \n\nOther possible options involve uncertainty (*e.g.* accepting the difficulty score only when entropy is low). However, computing uncertainty may require Monte Carlo sampling or complex model structures. Thus, exploring other alternatives is an interesting future direction, but we believe that our ScoreNet is an efficient and effective method to filter difficult samples.", " Thank you very much for the comments. We especially appreciate the suggestions regarding code release. Below are our responses to the reviewer's questions.\n\n**Question 1. What does it take to add TokenMixup to Transformers, and is hyperparameter search necessary?**\n\nBased on the code we provided in the supplement, several alterations may be required. For instance, if the original Transformer module does not save the tokens and attention maps (*i.e.* the saliency maps) for the self-attention layers, additional lines of code will be needed to utilize our TokenMixup functions. However, no architectural change is needed, so our method can be applied with ease. As requested, we provide the PyTorch-style pseudo-code below, which describes how our method is utilized overall. Moreover, as suggested, we are planning to improve code readability & usability and modularize our source code for wider use upon acceptance.\n\nLike most data augmentation methods, hyperparameter optimization is inevitable. However, our methods have at most two major hyperparameters; HTM entails $\rho$ and $\tau$, while VTM entails $\kappa$. This is similar to previous Mixup methods; Input Mixup [1] (or Manifold Mixup [2]) has one hyperparameter and Puzzle Mix [3] has 4 major hyperparameters. Furthermore, as provided in the sensitivity analysis in Table 1 in Section C of the supplement, our method is quite robust to a wide range of hyperparameters. 
\n\n```\nclass TypicalTransformer:\n\tdef __init__(self, **kwargs):\n\t\tself.layers = self.build_layers(**kwargs)\n\t\t# 'horizontal' or 'vertical'\n\t\tself.mixup_type = kwargs['tokenmixup_type']\n\t\t# index of the layer at which TokenMixup is applied\n\t\tself.applied_layer = kwargs['applied_layer']\n\n\tdef build_layers(self, **kwargs):\n\t\t\"\"\"\n\t\t code for initialization of transformer layers\n\t\t\"\"\"\n\t\treturn layers\n\n\tdef token_mixup(self, x, y, previous_tokens=None, attention_maps=None):\n\t\t\"\"\"\n\t\t Horizontal or Vertical TokenMixup code; Vertical TM additionally\n\t\t consumes tokens and attention maps saved from previous layers\n\t\t\"\"\"\n\t\treturn x, y\n\n\tdef forward(self, x, y):\n\t\tattention_maps = list()\n\t\tprevious_tokens = list()\n\t\tfor i, layer in enumerate(self.layers):\n\t\t\tif self.applied_layer == i and self.mixup_type == 'horizontal':\n\t\t\t\tx, y = self.token_mixup(x, y)\n\t\t\telif self.applied_layer == i and self.mixup_type == 'vertical':\n\t\t\t\tx, y = self.token_mixup(x, y, previous_tokens, attention_maps)\n\t\t\tprevious_tokens.append(x)\n\t\t\tx, attention = layer(x)\n\t\t\tattention_maps.append(attention)\n\t\treturn x, y\n```\n\n[1] Zhang, H., Cisse, M., Dauphin, Y. N., & Lopez-Paz, D. (2018). mixup: Beyond Empirical Risk Minimization. *ICLR*.\n\n[2] Verma, V., Lamb, A., Beckham, C., Najafi, A., Mitliagkas, I., Lopez-Paz, D., & Bengio, Y. (2019). Manifold mixup: Better representations by interpolating hidden states. *ICML*.\n\n[3] Kim, J. H., Choo, W., & Song, H. O. (2020). Puzzle mix: Exploiting saliency and local statistics for optimal mixup. *ICML*.\n\n\n**Question 2. Request for additional experiments on CIFAR baseline models.**\n\nWe further applied our methods to ViT-Lite on CIFAR-100, as well as to the Pyramid Vision Transformer on ImageNet-1K. The experimental results are presented in the general comment above. In these experiments, we observed consistent improvements in performance compared to the baselines.\n\n\n**Question 3. Can TokenMixup be applied beyond Transformer based models?**\n\nThe major target of our framework is parameter-heavy transformers with explicit attention maps. Hence, our framework was not built for non-transformer models without self-attention layers. Such a limitation was discussed in Section H of the supplement, where we note that Horizontal TokenMixup would be applicable if additional attention modules are adopted. On the other hand, Vertical TokenMixup-like feature-map augmentation for convolution layers has previously not been dealt with in the literature. Although it is not yet straightforward to directly concatenate multi-scale features without affecting spatial structure, it would be an interesting research direction to augment feature maps in a vertical manner.", " The paper proposes a novel data augmentation technique for visual transformers. Strengths:\n\n- Interesting and novel idea\n- Faster performance\n- SOTA results\n- Comprehensive ablation studies\n\nWeaknesses:\n\n- It is not entirely clear what it takes and how easy it is to add this method to an arbitrary transformer. It would help to have an appendix with Python code showing how a typical transformer is modified with the proposed algorithm. Is it as simple as adding a layer for which good hyperparameter values are known, or is additional hyperparameter search really necessary on a case-by-case basis?\n- The work explores 2 baseline transformer methods applied to 3 datasets. There are other models that are cited in the paper such as ViT-Lite, NesT-T, NesT-B. It would be of interest to have your work applied to those as well.\n- I feel this method could have wider applicability, beyond Transformer based models (e.g. ResNets). 
It would be of interest to either see some empirical results exploring this area, or, if this is impossible due to the limitations of the method, to have a discussion of this limitation and perhaps an outline of how it could be overcome in future work.\n - Can you encapsulate your stuff in a layer and show how to add it? The source code is based on a function. If you want your code to be used by others, having it as a standard module, with a simple way to incorporate it in existing models, would be a great plus to your work. N/A", " In this paper, the authors proposed a novel saliency-aware token-level data augmentation method for assisting the training of vision transformers. Specifically, they proposed TokenMixup, which takes self-attention as an inherent saliency detector, to mix intermediate token sets so that the saliency level of a batch is maximized. With TokenMixup, they achieve significant improvement on image classification tasks and meanwhile obtain higher efficiency than other gradient-based mixup approaches. Pros:\n\n1. This paper proposes TokenMixup, a novel saliency-aware token-level mixup strategy for improving vision transformers. It ingeniously adopts self-attention, the basic module in transformers, to serve as a substitute for conventional, computationally heavy saliency detectors. \n\n2. The authors conduct thorough experiments for evaluating the effectiveness of the proposed method by comparing different network backbones and different augmentation approaches. The ablation study is sufficient for validating the importance of each component of TokenMixup as well. \n\n3. Based on the proposed TokenMixup, the authors propose a variant: Vertical TokenMixup. By incorporating the features from previous layers, it can easily capture multi-scale features for further improvement, which is simple and effective. \n\n4. The authors provide some explanations and discussions from the perspective of curriculum learning, which offers some inspiration for further research. \n\nCons: \n\n1. Since ScoreNet is trained simultaneously with the main branch, there may exist some biases during the calculation of overall saliency. 1. Are there any other methods to replace the ScoreNet for filtering data samples with higher confidence? There is no potential negative societal impact for this work. ", " This paper presents a token-level data augmentation method that is tailored to networks that have attention layers. The authors argue that recent gradient-based mixup methods are costly and propose horizontal (mixing tokens between different samples) and vertical (mixing tokens between different layers) mixup schemes using attention. Both proposed methods improve the performance of baseline models on CIFAR and ImageNet. Strengths\n- As attention-based architectures (e.g. transformer) are becoming more and more important in many deep learning domains, finding a mixup method tailored for those architectures is timely and valuable.\n- The proposed method is much more cost-efficient compared to the previous saliency-based methods, which is especially important as transformer models are usually very large.\n\nWeaknesses\n- Since the proposed method consists of many components (Sample difficulty assessment, optimal assignment, token-level mixup, saliency-based label reassignment), I would like to see more ablations to verify if each component is necessary. 
For example, how does (sample difficulty assessment + Cutmix) work compared to HTM or VTM?\n- The most direct comparison to the saliency-based mixup baselines is given in Table 3 and L228-233. The authors note that the proposed attention-based approach, while being much more cost-efficient, shows higher performance than gradient-based methods such as Co-Mixup. However, the authors do not provide any intuition on why the attention-based approach is better. Therefore, it is hard to interpret the performance gap, especially since the authors motivate their approach as an efficient alternative to the gradient-based methods.\n- According to the results in Appendix C Table 1, the proposed method seems somewhat sensitive to the choice of hyperparameters. If one or two of the hyperparameters are not set properly, the performance drops to a level similar to the baseline (CCT-7/3x1 + Input Mixup & Cutmix in main paper Table 5). - The authors provide results on HTM and VTM, but do not provide results when both HTM and VTM are applied simultaneously. How is the performance of HTM + VTM?\n- Can the authors provide results on other transformer architectures to see if the proposed methods are generalizable? The main limitation of the method is that it is only applicable to attention-based models, which the authors address clearly in the appendix.", " This paper presents an efficient attention-guided data augmentation method, named TokenMixup. As the name implies, the goal is to provide an efficient mixup method based on attention to improve the performance of classification models. Vertical TokenMixup is also introduced, which considers using tokens from different feature levels for mixup. Experiments on CIFAR and ImageNet show that the proposed approach performs better than the original mixup method. Overall speaking, the writing quality of this paper is good and it is easy to follow. Some strengths and weaknesses of this paper are listed as follows.\n\nPros:\n\n- The idea of this paper is interesting. Unlike previous follow-ups of the mixup method, this paper follows the curriculum learning framework and introduces a tiny ScoreNet to assess sample difficulty. Besides, the Vertical TokenMixup method is also interesting. This is the first work I have seen that uses tokens from different token levels to conduct the mixup operation, and it results in good performance in image classification.\n\n- The analysis of this paper is great. Both the main paper and the supplementary material provide sufficient experiments to demonstrate how the proposed approach works. I really like the analysis and thank the authors for providing these experiments.\n\nDespite the strengths in novelty and method analysis, I still have the following concerns.\n\nCons:\n\n- Despite the efficiency, the proposed method needs an extra ScoreNet to compute saliency. This would inevitably introduce extra training time, however small it may be.\n\n- In the original mixup method, there is only one hyper-parameter that needs to be tuned. However, in this paper, there are three new hyper-parameters, as analyzed in the supplementary material. For CIFAR and ImageNet, the authors have provided suggestions on how to select them. However, when applied to other datasets, users still need to determine how to select them.\n\n- At present, only CCT and the original ViT are used as baselines. Since the first ViT work, there are a lot of new ViT models. 
I really think the authors should conduct experiments on more powerful ViT models, like Swin Transformer, Pyramid Vision Transformer, and the more recent VOLO. Results on powerful baseline models would definitely make this paper stronger.\n\n- In addition, from Table 5, we can see that the improvement is mostly from Input Mixup & Cutmix. Could the authors explain why the proposed approach succeeds while Manifold Mixup fails in lifting model performance? More analysis on the ImageNet dataset would make the paper more convincing.\n\nFor more questions, please refer to the Strengths And Weaknesses part. Limitations have been mentioned in the supplementary material. No potential negative societal impact is found." ]
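Several of the reviews above ask how attention-guided token mixing works in practice. The following is a hypothetical sketch of the horizontal mixing idea as the reviews describe it: attention scores serve as a saliency proxy, low-saliency tokens of one sample are replaced by high-saliency tokens of another, and the label is reassigned by the contributed saliency mass. It is not the authors' implementation — it omits the ScoreNet-based sample-difficulty gating and the optimal-assignment step — and every name in it is illustrative.

```python
import torch

def horizontal_token_mixup(x_a, x_b, attn_a, attn_b, y_a, y_b, ratio=0.5):
    """x_*: (N, D) token embeddings; attn_*: (N,) per-token saliency scores;
    y_*: (C,) soft labels. Returns a mixed token set and a mixed label."""
    n = x_a.size(0)
    k = int(ratio * n)
    low_a = torch.topk(attn_a, k, largest=False).indices  # least salient in A
    top_b = torch.topk(attn_b, k, largest=True).indices   # most salient in B
    x_mix = x_a.clone()
    x_mix[low_a] = x_b[top_b]
    # Reassign the label by each sample's share of the remaining saliency mass.
    s_a = attn_a.sum() - attn_a[low_a].sum()
    s_b = attn_b[top_b].sum()
    lam = s_a / (s_a + s_b)
    return x_mix, lam * y_a + (1.0 - lam) * y_b
```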
[ -1, -1, -1, -1, -1, -1, -1, 7, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 2, 5 ]
[ "0Ek_5Umyic7", "nips_2022_c4o5oHg32CY", "aeaxyCT2hk", "kk9v7etWaum", "nips_2022_c4o5oHg32CY", "dkX4KprRGN7", "sHLKpL7c7V9", "nips_2022_c4o5oHg32CY", "nips_2022_c4o5oHg32CY", "nips_2022_c4o5oHg32CY", "nips_2022_c4o5oHg32CY" ]
nips_2022_Jw34v_84m2b
IM-Loss: Information Maximization Loss for Spiking Neural Networks
Spiking Neural Network (SNN), recognized as a type of biologically plausible architecture, has recently drawn much research attention. It transmits information by $0/1$ spikes. This bio-mimetic mechanism of the SNN demonstrates extreme energy efficiency, since it avoids any multiplications on neuromorphic hardware. However, the forward-passing $0/1$ spike quantization causes information loss and accuracy degradation. To deal with this problem, the Information Maximization loss (IM-Loss), which aims at maximizing the information flow in the SNN, is proposed in this paper. The IM-Loss not only enhances the information expressiveness of an SNN directly but also partly plays the role of normalization without introducing any additional operations (\textit{e.g.}, bias and scaling) in the inference phase. Additionally, we introduce a novel differentiable spike activity estimation method, Evolutionary Surrogate Gradients (ESG), for SNNs. By assigning automatically evolvable surrogate gradients to the spike activity function, ESG can ensure sufficient model updates at the beginning and accurate gradients at the end of training, resulting in both easy convergence and high task performance. Experimental results on both popular non-spiking static and neuromorphic datasets show that the SNN models trained by our method outperform the current state-of-the-art algorithms.
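The abstract's claim of "maximizing the information flow" rests on a short information-theoretic argument that the rebuttals below only state in words. Written out (a standard derivation, not quoted from the paper), with $p = P(O=1)$ the firing probability and the spike $O$ a deterministic function of the membrane potential $U$:

```latex
\begin{align}
  I(U;O) &= H(O) - H(O \mid U) = H(O) = -p\log p - (1-p)\log(1-p), \\
  \frac{\mathrm{d}H(O)}{\mathrm{d}p} &= \log\frac{1-p}{p} = 0
  \quad\Longleftrightarrow\quad p = \tfrac{1}{2},
\end{align}
```

so the information flow $I(U;O)$ peaks when $p(0) = p(1) = 0.5$, which is the spike distribution IM-Loss drives the network toward.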
Accept
This paper proposes a novel loss for training a spiking neural network that mitigates errors due to quantization. All reviewers agreed that the contributions of this paper were above the acceptance threshold.
train
[ "KfWMpN7C2oi", "3-I7LZjAjik", "PdSoinzYLbG", "TE8RbsU9iJS", "U9cb7cCZH-Y", "SF8BTqRXGSa", "20G-bbdeD9y", "sps0obxIVRK", "KN1Zj48w4AK", "vGWUB9a7mhX", "bT8n5SzM9Xr", "-qiLJrJmmUk", "-7Wn4C4Qlw", "xQDloNpEWq5", "CWllSUj-z6n", "VooeZWLQioH", "Ri6eIzCfVlq", "0DIxdIhAvIM", "5q0wTMn4OO", "rmSfgpfJkis", "gD3KBnidxGt", "gaC8d6L9np2", "nIKOaMSCJX8", "u72L0FBrqi3" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your timely reply. We are very grateful to meet such a responsible reviewer. We have taken your suggestion seriously and added an extra section to discuss the sparsity and efficiency cost in the revised version (see line 300-308 in revision). We'd like to know if you have any other questions. We would be very grateful that if you can re-consider your rating on the premise that we could address your concerns with our best efforts.", " Thank you for clarification and this additional experiment. I personally encourage the authors to include more discussions in this respect on the revised manuscript (including positive and negative aspects). Since running on GPUs for SNN is not the ultimate goal (which I believe will not bring significant advantages in terms of efficiency), running on specific hardwares(e.g. neuromorphic chips) that fits SNN's event-driven mechanism and sparseness should eventually reveal it's high efficiency. In this case, the overall firing rate becomes crucial.", " Thanks very much for your reply and kind reminder. we are very sorry for this neglect. Reviewer jVTP presented the same concern and we has provided the response in A5 for him/her . But we are very sorry for this neglect for you again. Here, we provide the efficiency analysis with tdBN[1] which is commonly used in SNNs. We define the average spike rate as total #spikes/#neurons on the test-set. For tdBN, the avg-rate of ResNet-19 on CIFAR10 are 0.35, 0.67, 0.82 with T = 2, 4, 6. while 0.81, 1.59, 2.37 for IM-Loss. However, with only T=2, our method outperforms the tdBN with T=6 by 0.69% accuracy with a low avg-rate. IM-Loss increases the fire rate but reduces the timesteps at the same time, i.e., the proposed regularizer can achieve a better trade-off between spiking sparsity and accuracy. On the other hand, SNNs can also run on GPUs. In this situation, the fewer timesteps, the better, and our method enjoys very few timesteps.\n\n[1] H. Zheng, Y. Wu, L. Deng, Y. Hu, and G. Li. Going deeper with directly-trained larger spiking neural networks. arXiv,10 2020.", " I thank the authors for their reponse and clarifications to my questions. I still holds the concerns on maximizing the information flow in SNN will possibly reduce the sparseness the SNNs and hence largely decrease their practicability.", " Thanks for your timely response too. We are very grateful to meet such responsible reviewer. Indeed, there are some work that introduces a regularization function to encourage full precision values around binary values[1]. However, there is still work like [2] that added extra shortcut between consecutive convolutional blocks to transmit the full precision values to binary values directly. These all interesting works to reduce information loss. i.e., the methods to reduce information loss can be different.\n\nOur method and [1] are not equivalent. [1] reduces the information loss from single thought, i.e., if every neuron will reduce the quantization error, the information loss for the whole model will be reduced, thus the regularization of [1] will add to every precision value. While we reduce the information from monolithic thought. i.e., we analyze the information expressiveness ability of the SNN based on information entropy first and derived the optimal distribution of spikes. Then we add a regularization to the whole spike distribution.\nNevertheless, the accuracy increasing mainly corresponds cross-entropy-Loss reducing. 
i.e., reducing the information loss is only the means, while reducing the cross-entropy loss is the goal. Both our method and [1] add a new constraint to the network optimization.\n\nSince the regularization changes the optimization objective, you may be concerned that it will lead to a configuration far from the global optimum of the cross-entropy loss function. However, prior theoretical [3,4] and empirical [5] work has shown that a deep neural network can have many high-quality local optima. Kawaguchi proved that, under certain conditions, every local minimum is a global minimum [4]. Through experiments, Im et al. showed that the achieved local optima are very different when different optimizers are used [5]. These insights show that adding the regularization may steer the training away from the original optimum, but it can still lead to a new optimum with high accuracy. Moreover, the regularization diagnoses poor conditions of the activation flow and therefore may achieve higher accuracy. Our experimental results confirm this hypothesis.\n\n\n[1] Darabi, S., Belbahri, M., Courbariaux, M., et al. BNN+: Improved Binary Network Training. 2018.\n\n[2] Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. In ECCV, pages 722–737, 2018.\n\n[3] A. Choromanska, M. Henaff, M. Mathieu, G. B. Arous, and Y. LeCun. The loss surfaces of multilayer networks. In Artificial Intelligence and Statistics, pages 192–204, 2015.\n\n[4] K. Kawaguchi. Deep learning without poor local minima. In Advances in Neural Information Processing Systems, pages 586–594, 2016.\n\n[5] D. J. Im, M. Tao, and K. Branson. An empirical analysis of deep network loss surfaces. 2016.\n\n\nIf you have further questions, please do not hesitate to reply to us. Thanks again.", " Thanks for your timely response. \nA1 & A2. Still, I am not fully convinced. Regarding minimizing quantization error, one should let U in [-inf, threshold/2] be as close to 0 as possible, and let U in [threshold/2, +inf] be as close to the threshold as possible. Why is this equivalent to maximizing information flow? How do you evaluate the difference between these two approaches? I am willing to increase my overall rating if this question is answered.\n\nA3. Sorry for my misunderstanding. It addresses my concerns. It is surprising to learn that the distribution is not affected by the resetting mechanism.", " A1: Thanks very much for your reply. We have confirmed that the information loss for ReLU is acceptable and that ReLU dropping the negative part also plays a role of ``dropout\". Still, ReLU dropping the negative part is mainly intended to retain the nonlinear transformation of the network, not to abandon information. Unfortunately, this did not convince you. \n\nHere, we want to provide more prior works in BNNs, which also binarize the activations and even the weights, to support our statement. Liu et al. [1] argued that binary activations induce information loss and added extra shortcuts between consecutive convolutional blocks to strengthen the representational capacity of the network (see the third paragraph of page 2). [2][3] proposed to compensate to some extent for the quantization error (i.e., information loss; we also point out that quantization of the spike activity function causes information loss, see line 162) caused by the binary approximation by re-scaling the output of the binary convolution using real-valued scale factors 
(see the second paragraph of page 2 of [2]). \n\n\n[1] Zechun Liu, Baoyuan Wu, Wenhan Luo, Xin Yang, Wei Liu, and Kwang-Ting Cheng. Bi-real net: Enhancing the performance of 1-bit cnns with improved representational capability and advanced training algorithm. In ECCV, pages 722–737, 2018.\n\n[2] Bulat, A., Tzimiropoulos, G. XNOR-Net++: Improved Binary Neural Networks. 2019.\n\n[3] Rastegari, Mohammad, et al. \"XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks.\" Springer, Cham, 2016.\n\n\nFrom another view, the activation function is usually fixed in DNNs, without being learned in the network. A fixed form is not easily designed to play the role of compressing out non-relevant information while leaving useful information. \n\nThe firing function of the SNN is an irreversible transformation with serious information loss, since it forces all information into only two values. If we considered such information loss acceptable, an extreme example would be forcing all information to a single value; obviously, the network would then completely lose its classification ability.\n\n---\n\nA2: Ideally, the spike tensor O should reflect the information of the membrane potential tensor U as much as possible. From the view of information flow, to maximize the information flow from the full-precision tensor U to the binary tensor O, the mutual information I(U; O) of the random variables U and O should be maximized. Since O is deterministic given U, I(U; O) equals the binary entropy H(O) = -p log p - (1-p) log(1-p), which peaks at p(0) = p(1) = 0.5; this is how the optimum is derived from information entropy theory.\n\n---\n\nA3: Sorry for this confusion; the firing threshold is 0.5 in the paper (see Figure 3 or line 11 of the layer file in our provided code). The resulting distribution has a mean value around 0.5 too.", " Thanks very much for your reply. Static image datasets are also commonly used for efficiency analysis [1,2]. Still, we will run the experiments on CIFAR10-DVS and provide the efficiency analysis later. Furthermore, the proposed regularizer did increase accuracy by increasing the spiking rate; however, the spiking rate is derived from our theory. Obviously, increasing the spiking rate to 100% would decrease accuracy.\n\n[1] Rathi, N., Roy, K. DIET-SNN: A Low-Latency Spiking Neural Network with Direct Input Encoding & Leakage and Threshold Optimization. IEEE, 2021.\n\n[2] Deng, S., Li, Y., Zhang, S., et al. Temporal Efficient Training of Spiking Neural Network via Gradient Re-weighting. ICLR 2022.", " Q5. CIFAR10 is a static image dataset, so no temporal dependency is required. Even T=1 is sufficient to achieve successful training and high accuracy. In my view, the results, on the contrary, prove that the proposed regularizer increased accuracy simply by increasing the spiking rate.\n\nQ6. Q7. See my response above.\n\nQ8. Thanks for your reply. The heuristically designed ESG function has a clean intuitive understanding. The discussion is nice and should be added to the manuscript.", " Thanks to the authors for taking the effort to address my concerns. However, I still hold my view.\n\n1. The authors admit that information loss is necessary, but then suggest that information loss in activation layers is, on the contrary, undesirable. This is quite confusing. It is also controversial whether PReLU or leaky ReLU always outperforms ReLU (the most prevalent choice in ANN training).\n\n2. The authors suggest that temporal dependency will not change the fact that the information flow is maximized when p(0)=p(1)=0.5. 
But they do not answer the reason why. The response only mentions that considering temporal dependency is very complex and difficult.\n\n3. Thanks for your additional figure. However, it further confirmed my view: the mean value of the membrane potential is *not* free to change. Despite the regularization term the authors proposed to drive the mean value towards the threshold 1, the resulting distribution has a mean value of 0.5. The resetting mechanism of the LIF neurons makes it hard to have half of the membrane potentials lie above the threshold as expected. Therefore, the information flow is also not maximized as the authors expected.", " Dear Chairs and Reviewers,\n\nWe are writing this letter to express our hope of having discussions with you. We received a borderline-accept average rating in the pre-rebuttal period, so a further in-depth discussion will help the AC make a fair judgment. \n\nWe appreciate all your precious time in reviewing our work. There are two major concerns. 1) For information loss, we focus on the information loss in the activation layer, which, unlike the convolution/fully-connected layer, does not compress out non-relevant information and leave useful information by learning. The information loss in the activation layer of SNNs should be avoided. 2) For the Evolutionary Surrogate Gradient, it is very efficient, while DSpike is very time-consuming. We give detailed proofs and explanations in our responses. \n\nCould you please take a little bit of time to read our responses and discuss with us? Thank you all!\n\nBest regards.", " Thanks for your time in reviewing our paper. From your assessments and questions, we can see your professionalism and the effort you put into checking our work and other related work carefully. Your assessments and questions also give us many insights into the SNN field. Here, we want to provide some different perspectives for further discussion. \n\nAbout Novelty.\n\nQ1: Adding homeostatic mechanisms to regularize spiking neurons' firing activity is not a new trick. Previously, [1] added a regularizer to make SNNs' firing activity sparser to save energy. However, in this work, the proposed regularizer drives the membrane potential near the threshold, which potentially increases the firing activity and leads to poorer energy efficiency. \n\nA1: Indeed, adding homeostatic mechanisms to regularize spiking neurons' firing activity is not a new trick, and we admire the work of [1] in this direction. However, our motivation comes from the issue of information loss that we noticed in SNNs. We then provide a new perspective/idea for handling this issue by maximizing the information flow in SNNs. To achieve this goal, we derive the optimal case of SNN information expression based on information theory. Finally, we resort to a regularization loss to find the optimum. Our motivation, idea, perspective, and line of thought are different, and even our regularization method is much different from that of [1]. From this perspective, we think our novelty is sufficient. We will provide more soundness about the information loss of SNNs and an efficiency comparison in later questions.\n\n---\n\nQ2: Updating the shape of the surrogate function is also not new. [2] proposed to automatically adjust the temperature of the surrogate activation function in order to achieve the best training performance. Compared to this work, [2] is more flexible.\n\nA2: Indeed, updating the shape of the surrogate function is also not new. 
However, to realize this idea, [2] and our work adopt different methods. Here, we want to point out some discrepancies between the two methods. Our method is designed manually, and in our experiments, the SNN with ESG is better than the vanilla SNN while inducing no extra computation during training. In contrast, Dspike [2] is computed by the finite difference method; however, evaluating finite differences can be time-consuming, since for each single weight the model must run twice to evaluate the difference of the loss, and a model can have more than ten million parameters (e.g., 11,200,000 parameters for ResNet18), which greatly slows down the training process. To reduce the computation, Dspike [2] chooses to compute the finite difference only in the first layer to represent the surrogate gradients of the whole model. However, this is still very time-consuming. Take ResNet20 on CIFAR10, as introduced in [2], with a batch size of 128: computing the finite difference in the first layer once is equal to ResNet20 inferring for about 4.5 epochs on the training set. To sum up, Dspike performs remarkably, but its training is very time-consuming. Our method is more efficient with relatively good performance. Hence, we think the ESG function and the Dspike function are both meaningful for the SNN field. This is somewhat like how the SGD optimizer is designed manually from experience, while the Meta-optimizer [6] is learned during training. Though the Meta-optimizer [6] performs better than the SGD optimizer, it is time-consuming and more complex to use, and it cannot be concluded that the Meta-optimizer is better and more flexible than the SGD optimizer.\n\n[6] Andrychowicz, M., Denil, M., Gomez, S., et al. Learning to learn by gradient descent by gradient descent. In Proceedings of the 30th International Conference on Neural Information Processing Systems, 2016.", " About soundness.\n\nQ3: The theoretical reasons the authors proposed that lead to the IM-loss are questionable. First, information loss is necessary for classification tasks. Therefore we do not know whether maximizing information flow benefits the performance. Second, the measurement of information flow neglects the important temporal dependency of LIF neurons, so the result that p(0)=p(1)=0.5 maximizes the information flow based on elementary information theory is oversimplified. Third, the membrane potential actually does not follow a Gaussian distribution with a free-to-change mean value. The paper cites [3] here to back up this claim, yet [3] only mentioned that the membrane potential distribution approximately follows a zero-mean Gaussian distribution.\n\nThanks for your time in reviewing our paper. From your assessments and questions, we can see your professionalism and the effort you put into checking our work and other related work carefully. Your assessments and questions also give us many insights into the SNN field. Here, we want to further elaborate on our work.\n\nFirst, information loss is indeed necessary for classification tasks. However, information loss in the activation layer is undesirable. Information loss means the information cannot be fully reconstructed. There are two important modules in networks: the convolution/fully-connected layer and the activation layer. The convolution layer is an irreversible transformation, so its input cannot be reconstructed; it usually compresses out non-relevant information and leaves useful information by learning. The activation layer, however, usually plays the role of a nonlinear transformation that makes the network more complex. 
The deeper the network, the more complex it is. At the same time, information loss in the activation layer is undesirable. A strong piece of evidence is that PReLU [7] performs better than ReLU, where PReLU is a reversible transformation (thus without information loss), while ReLU is an irreversible transformation (thus with information loss). The form of PReLU is learned during training; it could in principle be learned to become ReLU, but this does not happen in experiments. This suggests that an activation layer without information loss is better. From another view, the activation function is usually fixed in DNNs, without being learned in the network; a fixed form is not easily designed to play the role of compressing out non-relevant information while leaving useful information. Indeed, ReLU introduces information loss but is still widely used. However, the information loss for ReLU is acceptable, and ReLU dropping the negative part also plays a role of ``dropout\". Still, ReLU dropping the negative part is mainly intended to retain the nonlinear transformation of the network, not to abandon information. The firing function of the SNN, however, is an irreversible transformation with serious information loss, since it forces all information into only two values. In this work, we focus on solving the information loss in the activation layer. This does not conflict with your view that information loss is necessary for classification tasks, since the information loss you refer to mainly comes from the convolution layer, not the activation layer. \n\nSecond, we also agree with your view that the information flow is temporally dependent. However, this still does not change the fact that the information flow is maximized when p(0)=p(1)=0.5 based on the information theory we adopted. Though p(0)=p(1)=0.5 may seem simple, it is the result derived from the theory, not the manner of implementing the theory. Since the temporal dependency of the LIF neuron is very complex and difficult to express explicitly and uniformly, we choose to optimize the spiking rate only, based on information theory, and do not design any explicit temporal-dependency module in the IM-Loss. In this way, the dependency can be learned freely and implicitly during training instead of being manually designed beforehand. This is like performing image classification with learned CNNs instead of handcrafted descriptors. Furthermore, this design is very simple and general, and thus easy to apply to other models. On the other hand, we do not mean that embedding the temporal dependency explicitly in the regularization is bad, but learning it implicitly and simply is also a feasible scheme.\n\nThird, we are sorry for this neglect of experimental proof. In our experiments, the membrane potential indeed follows a Gaussian-like distribution with a free mean value. We provide this experimental proof in the appendix of the revised manuscript for your inspection. tdBN [3] also follows a Gaussian-like distribution with a free mean value in our experiments and in other work (see Fig. 2 in [8]). \n\n[7] He, K., et al. \"Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification.\" CVPR, IEEE Computer Society, 2015.\n\n[8] Guo, Y., Tong, X., et al. Recdis-snn: Rectifying membrane potential distribution for directly training spiking neural networks. In: CVPR 2022.", " Q4: The heuristic description of the ESG is also questionable. How do the authors evaluate the \"weight updating ability\"? 
I understand that too sharp a surrogate gradient function cannot sustain training, but the function is also not \"the wider the stronger\"; too wide a function is also harmful for training. Besides, it is also dubious to claim that adjusting the function to be more and more accurate benefits the training of SNNs. Please refer to discussions in [2] [4] [5].\n\nA4: Sorry for this confusion. We presented in the original paper that most prior works adopted fixed surrogate gradients (SG) to overcome the non-differentiability challenge (see line 148). It is difficult to find a suitable fixed SG, which is also verified by [4] in Section 2.3: ``Surrogate Gradient Learning Is Sensitive to the Scale of the Surrogate Derivative.'' We noticed that SG methods are also adopted in Quantized Neural Networks (QNNs), which also suffer from non-differentiability, and changeable surrogate gradient methods that achieve better performance by changing the SG during training have been proposed in QNNs (see lines 150-153). We then supposed that designing a suitable ESG could also increase accuracy in SNNs (see line 220). This is our motivation, which is much different from that of [2], [4], and [5]. \n\n From experiments, we find that a wide surrogate gradient function enjoys fast convergence (see Figure 4), i.e., a strong weight updating ability. From Figure 3 we can also see that a wide surrogate gradient function keeps a relatively large derivative over a relatively large range, which means most parameters can be updated sufficiently in the backward phase, i.e., a strong weight updating ability. On the contrary, a sharp surrogate gradient function loses the ability to update most parameters, since most parameters receive a very small gradient in this situation and thus cannot be updated sufficiently. An extreme example is using the real gradients of the firing function, where the network completely loses the ability to update parameters. Hence, we say that a wide surrogate gradient function enjoys a strong weight updating ability. On the other hand, this does not mean ``the wider the better'', since the wider the function, the more inaccurate the gradient. It is like how, in DNNs, a large learning rate is helpful for fast convergence, but too large a learning rate is harmful for training.\n\n Also, we do not mean that a sharp surrogate gradient function can ensure high accuracy. As analyzed above, a sharp surrogate gradient function loses the strong weight updating ability, which is also important for accuracy; the experiments in Figure 4 also verify this phenomenon. Nevertheless, the problem we focus on is not whether a sharp or a wide surrogate gradient function is better for training SNNs, but how to balance the two important factors in training, i.e., strong weight updating ability and accurate gradients, since it is not easy to find a fixed surrogate gradient function that balances these two factors well. Just as it is not easy to find a fixed suitable learning rate for DNNs: too large or too small are both bad, but adopting a dynamic strategy, with a relatively large learning rate at the beginning and a relatively small learning rate at the end, is a more suitable solution.\n\nObviously, the strong weight updating ability is more important at the beginning, so that all parameters can be updated sufficiently, while the accurate gradient is more important at the end, so that the network can obtain the accurate optimization direction. 
In the same way, we adopt the dynamic strategy of a relatively wide surrogate gradient function at the beginning and a relatively sharp surrogate gradient function at the end to make SNN training more suitable, since in this way both the strong weight-updating ability at the beginning and the accurate gradient at the end are kept. \n\nOur contribution is to provide a dynamic strategy for the surrogate gradient function; in our experiments it performs better and is easier to obtain than a fixed surrogate gradient function. Furthermore, the method is very simple and easy to embed in other works. From our analysis and experiments, we think our claim that a dynamic strategy is better than a fixed strategy is convincing. Our experimental observation that, provided the parameters are learned sufficiently, adjusting the function toward a more accurate gradient benefits the training of SNNs is also convincing.\n", " Here, we still wish to point out some discrepancies between our method and the finite difference method [2][5]. First, the finite difference method is very time-consuming, which has been reported in [5] and can be deduced from the description in [2], as we analyzed in A2. Second, the derivation of the finite difference can be traced back to the directional derivative and difference in mathematics. Such a finite difference is reasonable for approximating the instantaneous derivative at a certain point in ANNs, which have continuous and differentiable activations. The highly similar curves in experiment 1 in [2] verify this conjecture. However, the finite difference technique is not suitable for estimating the derivative of Hölder-continuous and non-differentiable functions like SNNs, which have discontinuous and non-differentiable activations. There may be a large gap between the instantaneous and average derivatives. Thus, the $\\epsilon$ should be selected elaborately, which is also verified in figure 2 in [2]. Third, whether we can use a small part of the parameter gradients to represent all parameter gradients, as [2] does to find the optimal gradient, may also need further discussion. Nevertheless, we are not denying the soundness and novelty of the finite difference method [2][5]; we mean only that scientific research needs different views and arguments.\n\nWe can understand that you may wish to find the optimal changing strategy and that our present strategy is not optimal in your opinion. It is true that we cannot ensure our strategy is optimal. However, neither [2] nor [5] can ensure that their strategies are optimal, and they require much more computation, as analyzed above. Both methods provide a possible scheme toward the optimal strategy, and both are relatively suitable methods according to experiments. We think that providing a simple and relatively good scheme for designing SGs is also an important contribution to the SNN field.", " Other questions.\n\n---\n\nQ5: Can you prove that the proposed regularizer can achieve a better trade-off between spiking sparsity and accuracy?\n\nA5: Thanks for your constructive advice. Here, we provide some efficiency analysis. We define the average spike rate as total #spikes / #neurons on the test set. For tdBN, the avg-rates of ResNet-19 on CIFAR10 are 0.35, 0.67, and 0.82 with T = 2, 4, 6, while they are 0.81, 1.59, and 2.37 for IM-Loss. However, with only T=2, our method outperforms tdBN with T=6 by 0.69\\% accuracy at a low avg-rate. IM-Loss increases the firing rate but reduces the number of timesteps at the same time, i.e., the proposed regularizer achieves a better trade-off between spiking sparsity and accuracy. On the other hand, SNNs can also run on GPUs, in which case the fewer the timesteps the better, and our method uses very few timesteps.
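\n\nFor reference, here is a rough sketch of the regularizer under discussion, under our reading of it: the $l_2$ distance between the membrane potential and the firing threshold, accumulated over layers as in Eq. 11. The names `layer_potentials` and `v_th` are illustrative, not the paper's exact notation.

```python
def im_loss(layer_potentials, v_th=1.0):
    # PyTorch-style sketch: accumulate, over layers, the squared distance
    # between membrane potential tensors and the firing threshold;
    # minimizing it pushes pre-activations toward the threshold, i.e.,
    # p(spike) toward 0.5.
    loss = 0.0
    for u in layer_potentials:  # one membrane potential tensor per layer
        loss = loss + ((u - v_th) ** 2).mean()
    return loss
```

The total objective would then be the task loss plus a weighted copy of this term.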
\n\n---\n\nQ6: If one includes the temporal dependency into consideration, will the results of information flow maximization change?\n\nA6: Thanks for your constructive advice. We agree with your view that the information flow is temporally dependent. However, this still does not change the fact that the information flow is maximized when p(0)=p(1)=0.5 under the information theory we adopted. Though p(0)=p(1)=0.5 may look simple, it is the result derived from the theory, not the way the theory is implemented. Since the temporal dependency of the LIF neuron is very complex and difficult to express explicitly and uniformly, we choose to optimize only the spiking rate, based on information theory, and do not design any explicit temporal dependency module in the IM-Loss. In this way, the dependency can be learned freely and implicitly during training instead of being manually designed beforehand. This is analogous to doing image classification with learned CNNs instead of handcrafted descriptors. Furthermore, this design is very simple and general, and thus easy to apply to other models.\n\n---\n\nQ7: What does the membrane potential distribution look like in your experiment? Does it approximately follow the Gaussian distribution with a free-to-change mean value?\n\nA7: We apologize for omitting this experimental evidence. In our experiments, the membrane potential indeed follows a Gaussian-like distribution with a free mean value. We provide this evidence in the appendix of the revised manuscript for your checking. tdBN [3] also follows a Gaussian-like distribution with a free mean value in our experiments and in other work (see Fig. 2 in [8]). \n\n[8] Guo, Y., Tong, X., et al. Recdis-snn: Rectifying membrane potential distribution for directly training spiking neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022). pp. 326-335 (June 2022)\n\n---\n\nQ8: How do you evaluate the difference between ESG and [2] mentioned above?\n\nA8: Different methods are adopted in our work and in [2]. Here, we want to point out some discrepancies between the two methods. Our method is designed manually; in our experiments, the SNN with ESG is better than the vanilla SNN while inducing no extra computation in training. Dspike [2], in contrast, is computed by the finite difference method; however, evaluating the finite difference can be time-consuming, since for each single weight the model must run twice to evaluate the difference in the loss, and a model can have more than ten million parameters (e.g., 11,200,000 parameters for ResNet18), which greatly slows down the training process. To reduce the computation, Dspike [2] chooses to compute the finite difference only in the first layer to represent the surrogate gradients of the whole model. However, this is still very time-consuming. Take ResNet20 on CIFAR10, as introduced in [2], for example: if we set the batch size to 128, computing the finite difference in the first layer once is equal to ResNet20 inferring for about 4.5 epochs on the training set. To sum up, Dspike performs remarkably but is very time-consuming. 
Our method is more efficient with relatively good performance. Hence, we think the ESG function and the Dspike function are both meaningful for the SNN field. This is somewhat like the SGD optimizer being designed manually from experience while a meta-optimizer is learned by learning [6] toward an optimal scheme. Though the meta-optimizer [6] performs better than the SGD optimizer, it is time-consuming and more complex to use. Hence it cannot be concluded that the meta-optimizer is better and more flexible than the SGD optimizer.", " Thanks for your efforts in reviewing our paper and for your recognition of our idea to maximize mutual information and of the soundness of the IM-Loss approach. We also want to explain some other points of confusion one by one, as follows.\n\n---\n\nQ1: The ESG function is similar to the Dspike function (NeurIPS 2021, Yuhang Li, et al.), which should be cited.\n\nA1: Thanks for your kind reminder. We will cite (NeurIPS 2021, Yuhang Li, et al.) in the revised version. Here, we want to point out some discrepancies between these two methods. The ESG function is designed manually, and in our experiments the SNN with ESG is better than the vanilla SNN while inducing no extra computation in training. Dspike, in contrast, is computed by the finite difference method; however, evaluating the finite difference can be time-consuming, since for each single weight the model must run twice to evaluate the difference in the loss, and a model can have more than ten million parameters (e.g., 11,200,000 parameters for ResNet18), which greatly slows down the training process. To reduce the computation, Dspike chooses to compute the finite difference only in the first layer to infer the surrogate gradients of the whole model. However, this is still very time-consuming. To sum up, Dspike performs remarkably but is very time-consuming. Taking ResNet20 on CIFAR10 (NeurIPS 2021, Yuhang Li, et al.) as an example, if we set the batch size to 128, computing the finite difference in the first layer once is equal to ResNet20 inferring for about 4.5 epochs on the training set. Our method is more efficient with relatively good performance. Hence, we think the ESG function and the Dspike function are both meaningful for the SNN field.\n\n---\n\nQ2: In the experiment, except CIFAR10, the approach is outperformed by current SOTA SNNs (e.g., Dspike (2021 NeurIPS) and TET (2022 ICLR) have higher accuracies on CIFAR100, ImageNet and CIFAR10-DVS, though not listed in the table), leaving the general usage of the method questionable.\n\nA2: Thanks for your very kind reminder. We will cite and compare the papers you mentioned in the revised version. Dspike indeed performs better than our method on most datasets; however, it is much more time-consuming than our method. As for TET, our method actually performs better on most datasets; the confusion comes from the fact that different model architectures were used in the original papers. Here, we provide the comparison with the same architectures below. For CIFAR-100, we used the same ResNet19 backbone as TET. It can be seen that our model achieves 78.51\\% top-1 accuracy with 6 timesteps, outperforming its TET counterpart by 3.79\\%. For the ImageNet dataset, the results reported in the original paper show that our Spiking ResNet34 achieves a 2.64\\% improvement over TET Spiking ResNet34. The accuracy of our ResNet34 does not exceed that of TET SEW ResNet34. However, TET SEW ResNet34 transmits information with integers, which is not a typical SNN. 
\n\n|Dataset|Method|Architecture|Timestep|Spike form|Accuracy|\n|---|---|---|---|---|---|\n|CIFAR100|TET|ResNet-19|6|Binary|74.72%|\n|CIFAR100|TET|ResNet-19|4|Binary|74.47%|\n|CIFAR100|TET|ResNet-19|2|Binary|72.87%|\n|CIFAR100|Our method|ResNet-19|6|Binary|78.51%|\n|CIFAR100|Our method|ResNet-19|4|Binary|77.42%|\n|CIFAR100|Our method|ResNet-19|2|Binary|74.56%|\n|ImageNet|TET|Spiking-ResNet34|6|Binary|64.79%|\n|ImageNet|TET|SEW-ResNet34|4|Integer|68.00%|\n|ImageNet|Our method|Spiking-ResNet34|6|Binary|67.43%|", " Q3: The ESG method seems quite intuitive and empirical; is there any theoretical justification that can help to support it?\n\nA3: Indeed, the ESG method design is intuitive and empirical, like the learning rate schedules in DNNs. It also comes from our experiments and understanding of the SNN field. However, we designed the form of the ESG method meticulously and carefully. From a large number of experiments and much thought, we found two rules for designing K(i). First, it should have a growing trend. As explained in Section 4.3, using an EvAF with a smaller k results in an SNN with strong weight-updating ability, while a larger k results in accurate gradients. To obtain both weight-updating ability in the early stage of training and an accurate backward gradient at the end of training, K(i) should have a growing trend. Second, it should enjoy long-term maintenance of the weight-updating ability. As shown in Fig. 4 in the paper, due to the stronger weight-updating ability, SNNs with an EvAF using a fixed smaller k converge to better results much more easily than those using a fixed larger k. This means it is better for the EvAF with smaller k values to take up more of the training time. According to these rules, we choose K(i) with exponential growth rather than linear or logarithmic growth.\n\n---\n\nQ4: It would be helpful if the potential negative impact of moving the membrane potential towards the spiking threshold can be discussed.\n\nA4: Thanks for your constructive advice. In all our experiments, driving the mean membrane potential near the threshold is better than doing nothing. However, as you said, driving the mean membrane potential near the threshold becomes an inductive bias, which could be suboptimal for network performance in certain situations. Though this suboptimum is still better than the vanilla results, a better choice may be driving the mean membrane potential to a range. This supposition needs sufficient theoretical and experimental support. Nevertheless, this work still provides a new perspective for understanding and solving the information loss of SNNs and two simple and solid methods in this field.", " Thanks for your time in reviewing our paper; we are grateful for your recognition of our work and experiments. Your constructive advice on introducing and analyzing the prior excellent works comprehensively will make our paper more objective and substantial. We had actually noticed other excellent work with extraordinary contributions to the SNN field, such as [11], and prepared to acknowledge it in the camera-ready. Thanks again for your constructive suggestion; we agree that these important works should be comprehensively introduced. This suggestion will benefit not only this revised version but also our future papers.\n\nRegarding the relationship and difference between BNTT [10] and IM-Loss: by adjusting the pre-activation, both can make the SNN deeper and more accurate. 
However, BNTT starts from finding a more suitable BN for SNNs, while IM-Loss starts from reducing the information loss of SNNs based on information theory. Hence their implementation manners are very different. BNTT performs better than the original BN in SNNs, while IM-Loss plays part of the role of BN without introducing any multiply-and-accumulate (MAC) operations in the inference phase, as BN does.\n\nConsidering your recognition of our work, despite the evaluation that ``This work is very derivative and incremental.'', and that the neglect of introducing and analyzing the prior excellent works comprehensively can be remedied in the revised version following your rich suggestions, we would really appreciate it if you could kindly re-check our insights, ideas, methods, and contributions and re-consider your rating.\n\n[11] Lottery Ticket Hypothesis for Spiking Neural Networks, Youngeun Kim, Yuhang Li, Hyoungseob Park, Yeshwanth Venkatesha, Ruokai Yin, Priyadarshini Panda, ECCV2022.", " Thanks for your precious time reviewing our work and for recognizing our ideas for the loss design and the surrogate gradient design. Here we first respond to your comments one by one and then provide deeper thoughts about the methods with respect to your concerns about our implementation.\n\n---\n\nQ1: Section 4.2 is confusing. If the spikes cannot reach the output as shown in figure 2, how can the SNNs be trained at all?\n\nA1: Sorry for this confusion. An SNN without any normalization technique or our IM-Loss is difficult to train, since the spikes cannot reach the output. With IM-Loss, the firing rate can be adjusted layer by layer until the spikes reach the output. More specifically, if the spikes could only reach some middle layer at the beginning, the firing rate of that middle layer would be rather low and would drop to 0 in the next layer. Since the IM-Loss accumulates the information loss of all the layers (Eq. 11 in the original paper), it can directly optimize each layer's firing rate in the backward phase instead of gradually propagating back from the last layer to the first layer as the cross-entropy loss does. When the IM-Loss adjusts the middle layer's firing rate to a certain extent, the next layer will receive sufficient spikes and then fire spikes to the later layers. This behavior continues until the spikes reach the output layer. Similarly, normalization techniques can also change the firing rate of each layer directly instead of gradually propagating back from the last layer to the first layer.\n\n---\n\nQ2: The dependence of results on the integration time of outputs is stated only for selected numbers (lines 283-291). It would be helpful to instead plot them as in figure 4. \n\nA2: Thanks for your constructive advice. More detailed results are as follows, and we will plot them in the revised paper.\n\n| Method | Architecture | Timestep | Accuracy |\n|----|----|----|----|\n|Diet-SNN|VGG-16|5|92.70%|\n|STBP-tdBN|ResNet-19|6|93.16%|\n|STBP-tdBN|ResNet-19|4|92.92%|\n|STBP-tdBN|ResNet-19|2|92.34%|\n|Our method|VGG-16|6|94.01%|\n|Our method|VGG-16|5|93.85%|\n|Our method|VGG-16|4|93.52%|\n|Our method|ResNet-19|6|95.49%|\n|Our method|ResNet-19|5|95.50%|\n|Our method|ResNet-19|4|95.40%|\n|Our method|ResNet-19|3|94.96%|\n|Our method|ResNet-19|2|93.85%|\n|Our method|ResNet-19|1|92.01%|\n\n---\n\nFor the concern about “implementation novelty”, we want to explain and clarify more in the following few aspects:\n\nIn terms of the method design principle, we want to design a simple but effective method, as in our other works. 
1) We think that with an effective but simpler implementation/method, the correctness and validity of our ideas can be verified more convincingly. With correct and meaningful ideas, more follow-up works can be inspired and more useful methods can be provided. This is also one of our original intentions and contributions. 2) The simpler, the more versatile. We wish our methods to be easily reproduced and applied to other methods without adding any burden. Then our methods may have a chance to become a followable benchmark for future works. Based on this principle, we designed these two simple but effective methods.\n\nIn terms of the technical implementation: for IM-Loss, we agree with your viewpoint that the network outputs are dependent. However, the dependency is difficult to express explicitly and uniformly, hence we choose to optimize only the spiking rate, based on information theory. In this way, the dependency can be learned freely during training instead of being manually designed beforehand. This is analogous to doing image classification with learned CNNs instead of handcrafted descriptors. On the other hand, this design is very simple and general, and thus easy to embed in other work. For ESG, we had also been thinking about how to find a training-aware manner; however, it is not easy. The EvAF is only used in the backward pass, not in the forward pass. It is not involved in the loss calculation (and thus cannot be trained). So we turned to finding a simple but effective manual scheme. From methodological and experimental analysis, we found two rules for designing K(i). First, it should have a growing trend. As explained in Section 4.3, using an EvAF with a smaller k results in an SNN with strong weight-updating ability, while a larger k results in accurate gradients. To obtain both weight-updating ability in the early stage of training and an accurate backward gradient at the end of training, K(i) should have a growing trend. Second, it should enjoy long-term maintenance of the weight-updating ability. As shown in Fig. 4 in the paper, due to the stronger weight-updating ability, SNNs with an EvAF using a fixed smaller k converge to better results much more easily than those using a fixed larger k. It means that it is better for the EvAF with smaller k values to take up more of the training time. According to these rules, we choose K(i) with exponential growth rather than linear or logarithmic growth.\n", " The manuscript tackles the problem of supervised learning in spiking neural nets (SNNs). The authors claim that SNNs lose information in the feed-forward pass because of the binary nature of their responses and propose an information-maximizing loss function to reduce such loss. \nFor gradient back-propagation they propose to extend the popular surrogate gradient approach with a surrogate gradient whose spike activity function changes during training. In multiple experiments the authors demonstrate advantages of their approach. Strengths. The authors propose two interesting extensions of the existing surrogate backprop learning method for SNNs. The first one, an infomax-inspired loss, maximizes the entropy of individual neuronal outputs under the assumption of independent neuronal responses. This is a valuable insight that the SNN community may find of interest. The second one, using a schedule to adjust the activation function, is quite innovative. The experiments with two SNN architectures and ablations provide compelling evidence that the proposed approach has advantages over previous work.\nWeaknesses. 
While the aforementioned ideas are somewhat innovative, the implementation is quite limited. For example, the infomax assumes that the network outputs are independent, thus optimizing only the spiking rate. Moreover, the \"Evolutionary Surrogate Gradients\" is essentially a manually set schedule for adjusting a (hyper)parameter, the gradient width, with training time. The form of the ESG change with time is hand-picked, and no experiments are offered to shed light on why this form is preferred and how it compares with alternative schedules.\nI also found the term \"evolutionary\" highly confusing, as the approach has nothing to do with evolutionary optimization and only refers to the aforementioned manually selected dependence of the gradient width on time. A much better term would be \"adjustable, train-time-dependent\" or the like.\n\n Section 4.2 is confusing. If the spikes cannot reach the output as shown in figure 2, how can the SNNs be trained at all?\n\nThe dependence of results on the integration time of outputs is stated only for selected numbers (lines 283-291). It would be helpful to instead plot them as in figure 4. Without these plots the results seem cherry-picked.\n\n The authors did not address limitations of their approach.", " The paper proposes an IM-loss-based training method for SNNs that shows high accuracy at low latency. +The paper's experiments show that the IM loss trains SNNs with SOTA results compared to the works cited by the authors.\n-The paper presents a direct training method using BP for SNNs. This work is very derivative and incremental. There is a lot of work from Priya Panda's group at Yale, Emre Neftci's group, and many others with regard to SNN training. The authors have failed to acknowledge the most recent works, and the method they are proposing is very incremental in the context of those works. Further, many recent works on SNNs have targeted larger datasets, including video segmentation with direct training. \n-Recent works like [5] show SNN training at very low latency with novel architectures. [10] uses a BN rule to train temporal SNNs. Is there a relationship between the implicit IM-loss normalization and the explicit normalization employed in [10]?\n\nBelow is a list of publications (not exhaustive) that the authors should check:\n\n[1] Towards spike-based machine intelligence with neuromorphic computing K Roy, A Jaiswal, P Panda Nature 575 (7784), 607-617\n\n[2] Enabling spike-based backpropagation for training deep neural network architectures C Lee, SS Sarwar, P Panda, G Srinivasan, K Roy Frontiers in neuroscience, 119\n\n[3] Rate Coding Or Direct Coding: Which One Is Better For Accurate, Robust, And Energy-Efficient Spiking Neural Networks? 
Y Kim, H Park, A Moitra, A Bhattacharjee, Y Venkatesha, P Panda ICASSP 2022\n\n[4] Neuromorphic Data Augmentation for Training Spiking Neural Networks Y Li, Y Kim, H Park, T Geller, P Panda arXiv preprint arXiv:2203.06145\n\n[5] Neural architecture search for spiking neural networks Y Kim, Y Li, H Park, Y Venkatesha, P Panda arXiv preprint arXiv:2201.10355\n\n[6] Optimizing deeper spiking neural networks for dynamic vision sensing Y Kim, P Panda Neural Networks 144, 686-698\n\n[7] Federated Learning with Spiking Neural Networks Y Venkatesha, Y Kim, L Tassiulas, P Panda IEEE Transactions on Signal Processing 2021\n\n[8] Beyond classification: directly training spiking neural networks for semantic segmentation Y Kim, J Chough, P Panda arXiv preprint arXiv:2110.07742\n\n[9] Visual explanations from spiking neural networks using interspike intervals Y Kim, P Panda Scientific Reports 11, Article number: 19037 (2021)\n\n[10] Revisiting batch normalization for training low-latency deep spiking neural networks from scratch Y Kim, P Panda Frontiers in neuroscience, 1638 Please see my above comments on weaknesses and clarify the technical novelty of the work. Please see the weakness section comments.", " The paper proposed the information maximization (IM) loss for training deep SNNs, which maximizes the information flow of the network and indirectly provides normalization during training. The IM loss is constructed to strengthen the mutual information between the membrane potential and the following spiking activity. In addition, the ESG method is proposed to improve network training. Experiments on benchmark image classification tasks show that the method improves the accuracy of SNNs, and on CIFAR10 it surpasses current SOTA SNNs. The work proposes the IM loss to reduce the information loss from membrane potential to spike, which moves the mean membrane potential towards the spiking threshold during training. The idea is new for the training of SNNs, and the soundness of the approach is sufficient, based on the derived equality between the mutual information (MI) of the membrane potential and the spike and the entropy of the spiking distribution. However, driving the mean membrane potential near the threshold becomes an inductive bias, which could be suboptimal for network performance in certain situations. Another outcome of the approach is that it increases the activation of the resulting network, which impairs the sparse-computation advantage of SNNs.\nThe ESG function is similar to the Dspike function (NeurIPS 2021, Yuhang Li, et al.), which should be cited.\nIn the experiment, except CIFAR10, the approach is outperformed by current SOTA SNNs (e.g., Dspike (2021 NeurIPS) and TET (2022 ICLR) have higher accuracies on CIFAR100, ImageNet and CIFAR10-DVS, though not listed in the table), leaving the general usage of the method questionable.\n The ESG method seems quite intuitive and empirical; is there any theoretical justification that can help to support it?\n It would be helpful if the potential negative impact of moving the membrane potential towards the spiking threshold can be discussed.\nAll SOTA SNN works should be listed in the table to avoid being misleading.", " The paper focuses on improving spiking neural networks' performance and presents two major contributions:\n\n### 1. IM-loss\n\nThe information maximization loss (IM-loss) is evaluated by the $l_2$ distance between the membrane potential and the neuron threshold. 
This form is based on three assumptions:\n\n- Maximizing the information flow between the membrane potential and the output spike train can benefit SNNs' accuracy.\n\n- The information flow is maximized when the spike train's probabilities of 0 and 1 are equal.\n\n- The membrane potential follows a Gaussian distribution, which implies the distribution's mean value is roughly equal to the distribution's half-probability point (cumulative distribution function = 0.5).\n\n### 2. ESG\n\nThe evolutionary surrogate gradient (ESG) adjusts the shape of the surrogate gradient function along the training process. The strategy is based on two assumptions:\n\n- The wider the shape of the surrogate gradient of the activation function, the stronger the weight-updating ability.\n\n- The more accurate (sharp) the gradient, the better the training accuracy that can be reached.\n\n ### Strengths\n\nThe paper is well drafted and presents the ideas clearly. The ideas are well evaluated experimentally.\n\n### Weaknesses\n\n1. Novelty. \n\n- Adding homeostatic mechanisms to regularize a spiking neuron's firing activity is not a new trick. Previously, [1] added a regularizer to drive SNNs' firing activity to be more sparse to save energy. In this work, however, the authors' proposed regularizer drives the membrane potential near the threshold, which potentially increases the firing activity and leads to poorer energy efficiency. \n\n- Updating the shape of the surrogate function is also not new. [2] proposed to automatically adjust the temperature of the surrogate activation function in order to achieve the best training performance. Compared to this work, [2] is more flexible.\n\n2. Soundness\n\n- The theoretical reasons (see the summary above) the authors propose that lead to the IM-loss are questionable. First, information loss is necessary for classification tasks (the whole image is compressed through a neural network, leaving only its class information in the output layer). Therefore we do not know whether maximizing information flow benefits the performance. Second, the measurement of information flow neglects the important temporal dependency of LIF neurons, so the result that p(0)=p(1)=0.5 maximizes the information flow, based on elementary information theory, is oversimplified. Third, the membrane potential actually does not follow a Gaussian distribution with a free-to-change mean value. The paper cites [3] here to back up this claim, yet [3] only mentions that the membrane potential distribution approximately follows a *zero-mean* Gaussian distribution.\n\n- The heuristic description of the ESG is also questionable. How do the authors evaluate the \"weight updating ability\"? I understand that too sharp a surrogate gradient function cannot sustain training, but the function is also not \"the wider the stronger\". Too wide a function is also harmful for training. Besides, it is also dubious to claim that adjusting the function to be more and more accurate benefits the training of SNNs. Please refer to the discussions in [2] [4] [5].\n\n\n[1] Zenke F, Ganguli S. Superspike: Supervised learning in multilayer spiking neural networks[J]. Neural computation, 2018, 30(6): 1514-1541.\n\n[2] Li Y, Guo Y, Zhang S, et al. Differentiable spike: Rethinking gradient-descent for training spiking neural networks[J]. Advances in Neural Information Processing Systems, 2021, 34: 23426-23439.\n\n[3] Zheng H, Wu Y, Deng L, et al. Going deeper with directly-trained larger spiking neural networks[C]//Proceedings of the AAAI Conference on Artificial Intelligence. 
2021, 35(12): 11062-11070.\n\n[4] Zenke F, Vogels T P. The remarkable robustness of surrogate gradient learning for instilling complex function in spiking neural networks[J]. Neural computation, 2021, 33(4): 899-925.\n\n[5] Yang Y, Zhang W, Li P. Backpropagated neighborhood aggregation for accurate training of spiking neural networks[C]//International Conference on Machine Learning. PMLR, 2021: 11852-11862. 1. Can you prove that the proposed regularizer can achieve a better trade-off between spiking sparsity and accuracy?\n\n2. If one includes the temporal dependency into consideration, will the results of information flow maximization change?\n\n3. What does the membrane potential distribution look like in your experiment? Does it approximately follow the Gaussian distribution with a free-to-change mean value?\n\n4. How do you evaluate the difference between ESG and [2] mentioned above?\n Yes." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "3-I7LZjAjik", "PdSoinzYLbG", "TE8RbsU9iJS", "Ri6eIzCfVlq", "SF8BTqRXGSa", "20G-bbdeD9y", "vGWUB9a7mhX", "KN1Zj48w4AK", "VooeZWLQioH", "-7Wn4C4Qlw", "nips_2022_Jw34v_84m2b", "u72L0FBrqi3", "u72L0FBrqi3", "u72L0FBrqi3", "u72L0FBrqi3", "u72L0FBrqi3", "nIKOaMSCJX8", "nIKOaMSCJX8", "gaC8d6L9np2", "gD3KBnidxGt", "nips_2022_Jw34v_84m2b", "nips_2022_Jw34v_84m2b", "nips_2022_Jw34v_84m2b", "nips_2022_Jw34v_84m2b" ]
nips_2022_70bBDacSpNn
Operator-Discretized Representation for Temporal Neural Networks
This paper proposes a new representation of artificial neural networks to efficiently track their temporal dynamics as sequences of operator-discretized events. Our approach takes advantage of diagrammatic notions in category theory and operator algebra, which are known mathematical frameworks to abstract and discretize high-dimensional quantum systems, and adjusts the state space for classical signal activation in neural systems. The states for nonstationary neural signals are prepared at presynaptic systems with ingress creation operators and are transformed via synaptic weights to attenuated superpositions. The outcomes at postsynaptic systems are observed as the effects with egress annihilation operators (each adjoint to the corresponding creation operator) for efficient coarse-grained detection. The follow-on signals are generated at neurons via individual activation functions for amplitude and timing. The proposed representation attributes the different generations of neural networks, such as analog neural networks (ANNs) and spiking neural networks (SNNs), to the different choices of operators and signal encoding. As a result, temporally-coded SNNs can be emulated at competitive accuracy and throughput by exploiting proven models and toolchains for ANNs.
Reject
Reviewers agree that the manuscript presents a fresh attempt, but also that it is lacking in several aspects. The writing has a lot of room for improvement and is not suitable for the NeurIPS community. Its neuroscientific claims are controversial or rely on non-mainstream arguments without appropriate justification. The theoretical and experimental results are limited.
train
[ "3ajulTFQFJ6", "mYYXLBUGwB", "rEhu_OYZXW", "Ykgulll0q-x", "pObWdspEpz1", "s3JZv3E8hvS", "ySFuI50QNk5", "KZKwlttBnh9", "D62fmi35e2Q", "OemuTBnfJt", "poGT58ZLp0V", "xXLca0vOeuA", "0PA5i0usmj" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your clarification. I see your point better. We will update it with this focus. If you have other points for us to clarify, please let us know ASAP. ", " First of all, as the title says, our paper is on a new representation (or language in your wording) for temporal neural networks. We do not have any objection in claiming that our representation covers tSNN/TTFS in a certain condition (like proposition 2) and event-driven approaches for more unrestricted x_i in Eq. 30. \n\nThen the bottom-line question is what for? Presently it is to prove out scalable performance competitive to the mainstream ANNs, which we believe is an essential condition for spike-based (or more generally even-driven) systems to remain a viable option for future AI systems. We have so far found no midsize benchmark results in the literature, such as CIFAR10/100 or higher as we demonstrated in sec 4.2. We only found various MNIST results for tSNN/NEST in the literature. For example, tSNN/TTFS for our ef. 12 reproduced in Figure 3 as tSNN(Euler), or NEST in the paper entitled Evaluation of the Effect of the Dynamic Behavior and Topology Co-Learning of Neurons and Synapses on the Small-Sample Learning Ability of Spiking Neural Network Xu Yang * , Yunlin Lei, Mengxing Wang, Jian Cai, Miao Wang , Ziyi Huan and Xialv Lin, file:///C:/Users/084566760/Downloads/Evaluation_of_the_Effect_of_the_Dynamic_Behavior_a.pdf. \nThose MNIST results are behind ANNs in accuracy and/or throughput. \n\nOnce we validate our approach for existing models and workloads, it should be quite interesting to investigate new functionalities maybe in between tSNN/TTFS and event-driven approaches as you pointed out. We should go step by step cautiously since it may not be a good practice to build a house on shaky ground. Since intelligence only appears in an above-threshold number of neurons, a scalable representation is MUST. Our intention is to present our new scalable representation to facilitate this kind of research in the community. \n\nWe come up with this new scalable representation by exploiting physics language. As discussed in our appendix, other methods such as kernel methods might do similar for the logical layer, but may not work as well when combined with the physical layer. The reason why we decided to adopt the more physics-oriented representation such as with spike creation/annihilation operators is to provide us a representation more transparent to spike physical dynamics, such as X_i = x_i f(t_i) in line 180 for the specific f in Eq. 27. \n\nHope you see our point. ", " Ok, so you simply consider a repetitive application of TTFS networks for each period of some theta-oscillation (say), while completely resetting the neurons with each period? If so, this would be a trivial extension. You seem to go beyond when you say you consider multiple spike interactions within a period. It is not clear, how you map this to ANNs, and what the role of the period is. Is the point that you consider only a finite number of spikes within a period, and the size of the ANN you map the tSNN depends on the number of spikes? \n\nIf there is no upper bound on the number of spikes within a period (I don’t see such an upper bound in your proposals), then you seem to simply describe event-based implementation of spiking neural networks, as most spiking network simulators do it, see e.g. the NEST simulator that is itself based on many papers. 
In this case you should show that your simulations are faster than those classical event-based simulations.\n\nSince you stress the global clock so much, there must be something in between time-to-first-spike and event-based spike simulations. You should be able to describe this "more" in a language that does not use your particle physics. Can you do that? If not, I fear you are just replicating in another language what others did (but I'm happy to be taught differently). ", " Thank you for the detailed reply. I saw these references originally and tried to comb through them; however, not having a strong background in this area made it difficult. While I do not think you should have to blindly duplicate information, I believe it is important that time be spent communicating these ideas in a way that maximizes the number of people able to read the paper and walk away with new insights. For example, even the bra-ket notation, which I am slightly familiar with, can be slightly confusing since it is not something I see very often; as another example, equation 16 seems very abrupt and did not bring me many additional insights. While the working details of general relativity are not simple, I am sure that there exists a layman explanation that many people can grasp. ", " We think your last statement in weakness 2: \"we can intuitively get the ultimate model (Eq.33) without the complex linear algebra.\" is incorrect. What you mentioned may be true if X_i = (x_i, t_i), i.e., just naively representing the spatiotemporal coordinates (let us ignore the layer index (n) for simplicity here). \n\nHowever, we derived the specific relationship X_i = x_i f(t_i), as written in line 180, with f(t_i) in Eq. 27. This specific choice cannot be obtained without incorporating the actual spike physical dynamics discussed earlier in Sections 3.1-3.4. \n", " Thank you so much for spending your time on the review. \n\nFirst of all, we are glad to find that the motivation, framework, and benefits of our paper have on the whole been correctly captured. Also, we appreciate your sincere understanding of our challenge of writing a multidisciplinary paper. \n\nHere we would like to ask for additional clarification to improve our paper. Frankly, we are a bit lost on how to handle your following comments: \n \n\"I would be okay with this if the references provided at least served as good introductions to these concepts and were presented in a way that I would be at least able to follow high level ideas presented in the paper.\" \n\n\"Greatly expanding the appendix in this case would not just be a benefit to the common reader, but a necessity in this case.\"\n\nWe thought we had already provided some excellent textbooks and review papers (such as refs. 24-27 and 38) in the field for a deep dive into each topic, and we are planning to add some more (such as P. Dirac's textbook for the original bra/ket notation). However, we do not see much reason to just blindly duplicate the content of those textbooks and reviews in our appendix. \n\nSo, could you be more specific about which part you have difficulties with? That kind of specific suggestion would greatly help us improve the presentation of our paper for the NeurIPS audience. Otherwise, it may be like asking Einstein to explain his general relativity theory starting with differential geometry from scratch without identifying which part is difficult to understand. 
\n\nFor your first question: in short, both time scales are important, but the larger time scale can be handled with other techniques, such as time/positional embedding. Therefore, the focus of the present paper is to show how to systematically handle the smaller time scale. We will clarify this point better in the update. \n\nFor your second question: in case you are not familiar with distributed RC vs. LC transmission lines, please refer to the literature on the telegrapher's equations (e.g., https://en.wikipedia.org/wiki/Telegrapher%27s_equations, lossy transmission line). RC and LC play fundamentally different roles, since they correspond to the diffusion equation and the wave equation, respectively. Operators for RC inherently require including dissipative loss via an imaginary part in the frequency. Thus, when forward signals are diminishing, backward signals could be exploding, which can make the system unstable, similar to the well-known gradient vanishing/explosion issue.
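\n\nTo spell out the distinction (these are the standard limiting cases of the telegrapher's equations, restated here for convenience):\n\n$$ \\partial_x^2 V = RC\\,\\partial_t V \\quad (\\text{RC line: diffusion}), \\qquad \\partial_x^2 V = LC\\,\\partial_t^2 V \\quad (\\text{LC line: lossless wave}), $$\n\nso only the LC line supports dispersionless propagation at constant velocity, while the RC line smears and attenuates the signal.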
 \n\nFinally, we consider that the cubit (we should name it!) should be appropriately defined to bind the discrete spike physical signal to the neural information entity in a state space different from that of qubits (that is why we use double bra/kets):\n half or full => whether to include inhibitory signals\n normalized or unnormalized => whether to normalize algorithmically\nIf appropriate, we may move this to the appendix for a more complete explanation. \n\nWe appreciate your further advice to improve the readability and presentation of our paper for the NeurIPS audience. ", " Thank you for spending your time reviewing the paper. \n\nYou mentioned that the most important weakness of our paper is its limited demonstration, i.e., that the paper only contains an MNIST demonstration. However, there is a serious misunderstanding. The fact is that our paper also contains results on accuracy and throughput for CIFAR10&100 (with 10 different seeds), as given in Section 4.2 as well as Fig. 4. \n\n", " Thank you very much for spending time reviewing our paper. \n\nFirst of all, as is written in the first sentence of the abstract, the paper is on the artificial brain, not on the biological brain. We proposed a new representation and clarified the condition (not a restriction) in Theorem 1 such that tSNNs can more efficiently (in energy, etc.) process existing ANN benchmarks/workloads. So it does not matter much whether our formulation reflects 100% of what the biological brain does. We consider our simpler formulation, such as the detection without the gap, not a limitation but instead a desired feature to start with when integrating billions of artificial neurons. \n\nThus, we consider weakness (1), from a neuroscience perspective, to be rather irrelevant. Setting the global clock T_c \"closer\" to our behavioral time scale is \"inspired\" by well-known neuroscience literature (place cells, the 2014 Nobel prize, our ref. 17), which we believe is newly incorporated into tSNNs for the above benefit. \n\nThen, let us move on to weakness (2). You correctly identify that our present formulation is missing the layer dependency of w, so this typo has to be corrected in the revision. Indeed, as demonstrated in the evaluation section, the w's in different layers have correctly been taken differently for the benchmarks to work. \n\nPlease kindly let us know whether this explanation makes sense or if you still have issues. \n\nWe do appreciate your time spent on this matter. ", " Thank you very much for spending time reviewing our paper. \nPlease let us confirm your points to facilitate our understanding of your questions and the overall review process:\n\n(1) One of our main motivations is to clarify the condition (not a restriction) such that a tSNN becomes equivalent to an ANN, as proven in Theorem 1 via our new representation. Furthermore, we actually try to overcome the restrictions of TTFS you pointed out (no multiple spikes in a single overall global clock period, limited temporal correlation/control) and go beyond them. Indeed, our model does not have those restrictions, thanks to the wave-based superposable signal model in "analogy" to the boson model, as written in lines 105-106, and more complex delta_t correlations, for example via Eq. 27. \n\nWe are planning to update our paper to include specific comparisons to TTFS. Does this sound reasonable, or do we miss any of your points?\n\n(2-3) Could you clarify specifically whether you are requesting us to augment the descriptions of already-cited literature (lines 26-29) or to include new literature we might be missing? In the latter case, could you point us to the specific state-of-the-art literature you have in mind, to facilitate the review process? We are aware that event-based simulation has a long tradition; some of the prior works and their limitations have been briefly described in lines 26-29. However, we have so far not found any event-driven simulation literature for tSNNs that has been proven to run standard benchmarks competitively (in both throughput and accuracy) with ANNs. \n\n(4) The use of a slow global clock in the biological brain should be well known in the neuroscience literature, for example, in ref. 17 (place cells), whose discovery won the Nobel prize in 2014. So we consider that the hint you asked for is already there. On the other hand, as you pointed out, the detailed microscopic mechanisms of the biological brain, such as how the field potential controls individual neurons, may not be fully understood. However, they should not be considered a serious showstopper when building an artificial brain, just as the detailed biological mechanism of how birds fly was irrelevant to building airplanes. So, we consider that your question has already been answered to an appropriate level. Could you clarify your question further in case you disagree? \n\nWe do appreciate the further time you spend on this. ", " The paper applies a formalism from quantum theory to ANNs and SNNs that offers to deal with time-discrete and asynchronous events. The theory is applied to temporal spiking neuronal networks (tSNNs), which are shown to be equivalent under certain conditions to ANNs. The coding is applied to CIFAR 10 & 100, and it is shown that the suggested operator-discretized tSNNs have a better “throughput” than the Euler discretization of tSNNs, and that operator-discretized tSNNs are comparable in “throughput” with ANNs. It is certainly always interesting to transfer successful methods from other fields. Yet, in the current example, it does not become clear what one really gains over existing techniques in simulating ANNs, or tSNNs transformed to ANNs. (1)\tThe equivalence between tSNNs and ANNs is only shown with some restrictions that are not so clearly spelled out. In general, tSNNs are treated as ANNs in the literature mainly for time-to-first-spike (TTFS) coding, assuming an overall clock that resets the state of the system at discrete times, and from there on considers time as the analog variable of the ANN. 
This special case does not include the processing of multiple spiking signals, emitted from each single neuron, in real and continuous time, but never synchronized with an external clock or with other neurons of the network. Does your theorem contribute to this more general case? In what sense does it go beyond the well-studied TTFS coding?\n\n(2)\tEvent-based simulations of spiking neurons have a long tradition and are shown in many papers and simulation platforms to dramatically outperform naive Euler-discretized simulations of spiking networks. In what sense does your approach go beyond these event-based simulations of SNNs?\n\n(3)\tIf the aim is to contribute to the simulation of tSNNs, then the comparison in the simulations should be done with state-of-the-art methods for simulating tSNNs. Can you show such results?\n\n(4)\tSo far the paper simply shows that the performance is comparable to ANNs, supporting the theorem, but begging the question of whether the technique can really be used for the analysis of SNNs on behavioral time scales, as explained in the Introduction. It is said that at present the networks are not fully asynchronous, and this is justified by the slow waves in the brain. But slow waves are seen at the level of field potentials, and are far away from globally imposing spike times on individual neurons, as is assumed in the paper. Can you give a hint as to how the technique will overcome this?\n yes", " The authors used the linear algebra that is well established in quantum theory to rewrite the transmission line model, which is as fine-grained as the cable model. By adding some assumptions, they built the tSNN (operator) model, which is like a rate model including time delay information. The keys to simplifying the cable-level model to the rate-level model are the constant-velocity assumption, which eliminates the distance x, and the inner-product-based detection, which eliminates the time t. Strengths:\n1) The work makes a connection between a cable-level model and a rate-level model, though there are some unclear issues (see below).\n\nWeaknesses:\n1) From the perspective of neuroscience, from the start, the motivation of setting the global clock T_c closer to our behavioral time scale is wrong. At the circuit level (not the function level), the neural dynamics is continuous. The signal detection process (Eq.16) is far from reality. There is a gap between the aggregated signal (Eq.29) and the event firing time (Eq.30), which needs to be clarified by the authors. This gap leads to another question: all neurons in the model share the same time clock (Eq.32), which is not true in reality. \n2) From the perspective of artificial intelligence, the authors gave a model (Eq.30 or Eq.33) which is equivalent to a feedforward network in some cases. However, it seems that the connection matrix in the equivalent feedforward network has to remain the same between layers. Though the authors claim that their tSNN can encode time delay information, the model (Eq.33) has just two variables, which is not interesting for AI. In fact, we can intuitively get the ultimate model (Eq.33) without the complex linear algebra.\n Overall, to me, this work does not bring any really new contribution to neuroscience or AI (see the above). Please clarify it. 
The authors should address the limitations of their model in detail, in particular those assumptions that are not biologically plausible.", " The authors propose a mathematical framework based on methods in quantum physics and category theory to interpret analog and spiking neural networks. I did not understand the motivation behind the study nor its contribution to the field. I am entering a brief review with a low confidence score because the techniques employed in this paper are well outside my area of expertise. I will not comment on the technical details of this paper -- I have asked that the area chairs consider replacing me with another reviewer with more domain knowledge in quantum physics.\n\nNevertheless, I will comment briefly on the appeal of this paper to a general machine learning audience. In short, the paper is not well written to appeal to this audience, and on these grounds alone it may be justified to reject this paper. The abstract and introduction do not explain the basic ideas that will be employed and instead drop hints and jargon for those with a strong physics background (\"It is tempting for those with some physics background to apply techniques...\" and \"operator algebra has been applied to Hopfield networks [18]\"... without defining operator algebra). I expect the technical details of this paper would not be easily digested by the average NeurIPS reader (e.g., bra-ket notation is used without any explanation).\n\nPerhaps most importantly for a NeurIPS audience, the practical demonstrations of this work feel weak and lacking. The authors only show a brief proof of principle on MNIST. If this result is surprising or significant, it is lost on me and should be better explained.\n\nFinally, I also want to note that several comments in the paper referring to biological networks strike me as concerning. For instance, \"the biological brain operates with low-frequency brain waves closer to our behavioral time scale\" is a dubious motivation for this study since (a) other neural oscillations are much faster than the theta frequency cited by the authors here, and (b) the biological function of all neural oscillations is still poorly understood, and it is particularly controversial to suggest that these oscillations operate like a \"clock\". None. I don't believe there are any negative societal impacts or ethical concerns. My concerns regarding limitations are summarized above.", " The authors propose an alternative representation of artificial neural networks through the use of machinery usually used to analyze complex quantum systems. Through formalisms from operator algebra, they are able to link analog neural networks (ANNs) and spiking neural networks (SNNs) through different choices of operators. For example, operator representations typically applied to quantum wave packets are ported to artificial neural networks under the formalism outlined by the authors. Their proposed framework also allows for more advanced detection strategies in comparison to the simple threshold detection commonly used. They also show the practical benefits of their representation. \n\nPast this, they develop physical representations for the creation and annihilation operators for neural networks by considering transmission line models commonly used in quantum electrodynamics. 
Their operator formalism then also allows them to consider the case of non-stationary neural signals, defining appropriate time-dependent operators through re-use of the operators they defined in the stationary case. The main strength of the paper in my opinion is attempting to come up with a general framework to describe neural computation using notions from quantum computing. As the authors note, this kind of theory goes both ways, i.e., it can aid in certain aspects of quantum computing, but it may also provide another means to understand neural computation. In addition to that, the authors also try to show the practical consequences/benefits of their framework by showing how it facilitates the conversion from temporally coded SNNs to ANNs. As hardware implementations of neural computing advance in parallel to quantum computing over time, I believe that drawing these analogies and coming up with a cohesive framework can be of great benefit.\n\n\nHowever, I find much of the paper difficult to read both conceptually and theoretically. In my opinion the paper assumes working knowledge of many different fields/concepts that a great deal of readers would struggle to grasp, i.e., quantum electrodynamics, categorical theories, operator theories, and so on. I would be okay with this if the references provided at least served as good introductions to these concepts and were presented in a way that I would be at least able to follow high level ideas presented in the paper. Greatly expanding the appendix in this case would not just be a benefit to the common reader, but a necessity in this case. I appreciate what the authors are trying to do, but the presentation in my opinion needs a great deal of work.\n Could the authors clarify: when reading the introduction of the paper, the explanation of clock timings and the comparison of them to membrane time constants makes me think the point being made there was that it is not the dynamics on much smaller time scales that is important, correct? But rather the dynamics on much larger time scales? If this is the case, what I am failing to see after that analogy is how the proposed representation takes advantage of this fact; I think it would be extremely helpful to hammer away at this relation throughout the paper since it seems like such an important part of the introduction. \n\nI also think more clarification related to Fig. 2 is required. I do not think I have the requisite knowledge to fully understand the LC model for signals traveling between axons/dendrites, but there is a great deal going on in the figure. I am curious as well: although the LC model has the advantage, as you say, that it can better transmit information, does this mean it is not possible to create an equivalent RC model? The greatest limitation of the paper in my opinion is the fact that it requires a great deal of requisite knowledge that many will not have. Fleshing out the appendix here would help a lot, and I think the authors really have to guide the reader a great deal more. For example, even jumping into the modified bra and ket notation for the newly introduced 'cubits' was a bit overwhelming and, I think, not necessarily well motivated. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 2, 1, 2 ]
[ "Ykgulll0q-x", "rEhu_OYZXW", "D62fmi35e2Q", "s3JZv3E8hvS", "poGT58ZLp0V", "0PA5i0usmj", "xXLca0vOeuA", "poGT58ZLp0V", "OemuTBnfJt", "nips_2022_70bBDacSpNn", "nips_2022_70bBDacSpNn", "nips_2022_70bBDacSpNn", "nips_2022_70bBDacSpNn" ]
nips_2022__iXQPM6AsQD
Could Giant Pre-trained Image Models Extract Universal Representations?
Frozen pretrained models have become a viable alternative to the pretraining-then-finetuning paradigm for transfer learning. However, with frozen models there are relatively few parameters available for adapting to downstream tasks, which is problematic in computer vision where tasks vary significantly in input/output format and the type of information that is of value. In this paper, we present a study of frozen pretrained models when applied to diverse and representative computer vision tasks, including object detection, semantic segmentation and video action recognition. From this empirical analysis, our work answers the questions of what pretraining task fits best with this frozen setting, how to make the frozen setting more flexible to various downstream tasks, and the effect of larger model sizes. We additionally examine the upper bound of performance using a giant frozen pretrained model with 3 billion parameters (SwinV2-G) and find that it reaches competitive performance on a varied set of major benchmarks with only one shared frozen base network: 60.0 box mAP and 52.2 mask mAP on COCO object detection test-dev, 57.6 val mIoU on ADE20K semantic segmentation, and 81.7 top-1 accuracy on Kinetics-400 action recognition. With this work, we hope to bring greater attention to this promising path of freezing pretrained image models.
Accept
This paper presents a study of how well pre-trained and frozen large models work across several downstream computer vision tasks. The paper initially received mixed reviews, with two borderline accepts and one borderline reject. The reviewers shared concerns about the novelty of the investigation and its impact, along with some additional questions about the setup. The authors provided a rebuttal that addressed some of the reviewers' concerns. Two out of three reviewers updated their reviews in the post-rebuttal phase. Reviewers generally agree that the paper should be accepted but still have concerns regarding the novelty. Due to the comprehensive empirical analysis, the AC recommends acceptance and urges the authors to look at the reviewers' feedback and incorporate their comments into the camera-ready.
train
[ "2_Ywy2vvMA", "fuola9hp4KV", "u70Efa5UZS", "V5TtjoDMxex", "rlHsA_KHNza", "3yWjJC6xgSw", "6xX6p-lHsyUP", "rppWfLBpEQQ", "oA2ypPb1gg", "Zn5LitvYKpc", "A1sBSDPtWVw", "5pJM3jGEC_b" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your valuable and thoughtful comments. Do you have more comments or questions about this submission and our response? We can prepare answers or more results for them.", " We sincerely thank the reviewer for the constructive comments, which help us improve the paper. We would add the results and discussions accordingly in our revision.", " Thanks a lot for helping us improve this paper. We would add the additional results and discussions accordingly in the revision.", " I have looked through the authors' rebuttal. It's good to see that proposed approach leads to saving in train time/GPU memory, and also seems to generalize somewhat to ViTs. Authors should add the additional results into the final manuscript. I plan to keep my original rating.", " The authors have addressed my questions well by showing the experiments on ADE20K, K400, MAE pretraining, and conducting studies on training time/resource. It's interesting to see in Q1 the gap between frozen and finetuned models can be bridged by pretraining on a larger dataset. The studies in Table 2 and 3 of appendix also show that the decoder head matters to the gap between frozen/finetuned models. I'd recommend a score of 'accept' for this paper.", " Thanks for the careful and valuable comments.\n\n**Q1 Novelty**\n\nWe would first refer the reviewer to the **Technical Novelty** in our general response. In the general response, we discuss that this paper aims to provide a simple and effective solution for the less-studied problem of the frozen models. The studies on different pretraining tasks, data scales, model sizes and adding tunable parameters are interesting to share with the community.\n\nPlease note that adding more tunable parameters is intuitive but not trivial. We provide further intuition that both the number of tunable and well-placed parameters is the key to making frozen settings work well. Moreover, we hope our work would serve as a strong start for studies on frozen models and would bring more attention to this promising topic. \n\n**Q2 Analysis of training speed/memory usage**\n\nWe have listed the detailed training speed and memory usage in the **Speed/Memory analysis** of the general response. Tables in the general response show that compared to full finetuning, adding tunable parameters to the frozen backbone would significantly improve speed and reduce memory cost. This would enable users with limited resources to leverage large SOTA models.\n\n**Q3 \"What is the precise design of the \"global\" and \"temporal\" blocks added in Table 4? Also, how is the aggregation over the full video done?\"**\n\n\"Global\" indicates we compute global attention with all tokens across different frames and 2D locations, e.g., TxHxW tokens for input with shape (T, H, W).\n\n\"Temporal\" indicates we only compute attention on the dimension \"T\", e.g., T tokens of different frames at the same 2D location (x, y) for input with shape (T, H, W).\n\nFor frozen models with linear heads, the aggregation is simply performed by an average pooling operation over all frames and spatial locations on top of the backbone. For frozen models with global blocks in the head network, aggregation can be achieved by global attention.\n\n**Q4 \"...similar analysis on ViTs would make the paper stronger.\"**\n\nThanks for this suggestion. We have added the ViT results on K400 video action recognition in the **Analysis on ViTs** of the general response. It is observed that results are similar to those using Swin. 
\n\n**Q5 \"Paper's impact can be increased by adding few more computer vision tasks, such as vision-language tasks (VQA, captioning, etc), 3D/RGBD recognition tasks, etc.\"**\n\nThanks for the suggestion. This is an interesting direction to explore, and we leave this as future work.", " Thanks for the valuable comments.\n\n**Q1 \"Will these results change with different pretraining data scales?\" \"For example, adding experiments using the CLIP pretrained model, to see how that scale of pretraining data effects the results.\"**\n\nCLIP is developed with ViT, and since careful modifications should be made for ViT to apply to various downstream tasks, such as object detection and semantic segmentation, we did not consider ViT for study at the beginning. As it is a great suggestion to use CLIP models, we have conducted an experiment on K400 video action recognition.\n\nAs shown below, when using the CLIP-400M pretrained ViT-B/16 model with a head network containing 4x Global Blocks, the top-1 accuracy on K400 of the frozen setting is even better than the full finetuning setting (79.9 vs. 78.0), which further demonstrates the benefit of freezing the backbone, that the model can better utilize the knowledge learned in pretraining while avoiding the forgetting problem.\n\n In addition, the performance gap between the frozen setting and full finetuning is significantly reduced by using a better pretrained model with a larger-scale dataset: -4.6 for Swin-B-1K vs. -2.1 for Swin-B-22K vs. 1.9 for ViT-B-CLIP-400M. This validates that our results in Table 1 and Table 5 could successfully generalize with larger data scales.\n\n|Head Network| Frozen acc@1 | Full Ft. acc@1 |\n|--- | --- | --- |\nLinear | 70.9 | 77.6 |\n4x Global Blocks | 79.9 | 78.0 |\n\n**Q2 \"Are these methods really the key to using frozen image models? Are there other methods that could be better?\" \"Related to that, it would be interesting to compare adding trainable layers vs. making parts of the pretrained model trainable. E.g., instead of adding 10 new layers, only add 1, but make the last 9 of the pretrained model trainable.\"**\n\nWe would like to first refer the reviewer to **Technical Novelty** of the general response. As discussed in the general response, this paper aims to provide a simple and feasible solution to the problem of using frozen pretrained models. This problem is important but less-studied before. The simple solution of adding tunable and well-placed parameters is intuitive but not trivial. We hope it would be a good start for studies on frozen models.\n\nMoreover, we conduct experiments of partial finetuning with controlled trainable parameters, which has been mentioned by the reviewer. Specifically, we experiment with three settings: finetuning the last block, finetuning the last layer, and finetuing the last layer and the last nine blocks of the penultimate layer, resulting in 33.3M / 45.9M / 76.5M tunable parameters. As shown below, compared to partial finetuning, our solution is more efficient and effective, with fewer trainable parameters and better performance.\n\n| method | #param | Box mAP | Mask mAP |\n| --- | --- | --- |--- |\n| Frozen backbone | 22.9M | 51.7 |45.5|\n| Finetune last block | 33.3M | 49.0 | 44.0 |\n| Finetune last layer | 45.9M | 49.3 | 44.2 |\n| Finetune last layer and last nine blocks of penultimate layer | 76.5M | 50.3 | 45.1 |\n| Full ft.| 109.2M | 51.9 |45.7|\n\n**Q3 \"If this paper is trying to find the key to using frozen models, why are the results lower than finetuned? 
Is this an inherent limitation of using frozen models? Or has the method to properly utilize pretrained models not been found?\"**\n\nPhilosophically speaking, the frozen model has unique advantages and disadvantages. \n\nFor the advantages:\n- It can avoid catastrophic forgetting, that is, completely retaining the information obtained from pretraining.\n- It can also alleviate over-fitting. As shown in Table 1 in the appendix, the frozen setting surpasses full finetuning on COCO when using 1\% and 10\% of the data.\n- It can also reduce the computation and memory overhead of full finetuning (as shown in the general response), making it affordable for low-resource institutions to use large models.\n- It can also reduce the overhead of serving models, as we just need to serve a shared backbone to solve various vision problems.\n\nFor the disadvantages, the frozen setting is highly demanding on pretraining. If the pretraining procedure does not cover the main information of the downstream tasks, there may be a performance loss compared to finetuning, e.g., SUP-1K (-4.6) and SUP-22K (-2.1) on K400 video action recognition as shown in Table 5. But if the pretrained model is sufficiently strong, then the performance gap between the frozen and full finetuning settings can be greatly reduced, e.g., CLIP-400M (+1.9) on K400, and SUP-22K (+0.2) on COCO object detection. \n", " Thanks for the valuable comments.\n\n**Q1 What causes the gap on ADE20K semantic segmentation and K400 video action recognition?**\n\nFor ADE20K semantic segmentation, we agree that it's somewhat strange that the trend on ADE20K is inconsistent with COCO. To verify whether this is related to the model architecture, we have also conducted a similar comparison using the UPerNet framework, as shown in Table 2 and Table 3 in the appendix. Surprisingly, the gap between the frozen setting and the full finetuning setting almost disappears. The reason behind this phenomenon may be that the pixel decoders and Transformer decoders used in Mask2Former are hard to optimize when they get deeper (models in both the frozen and the full finetuning setting will crash when the depth of the pixel decoder is larger than 6), while this optimization issue doesn't exist in UPerNet.\n\nFor K400 video action recognition, the gap between the frozen and full finetuning settings is due to the pretraining task and dataset, and this can be significantly reduced by using a better pretrained model with a larger-scale dataset: -4.6 for Swin-B-1K vs. -2.1 for Swin-B-22K vs. +1.9 for ViT-B-CLIP-400M. Specifically, as shown below, when using the officially released *CLIP* pretrained ViT-B/16 model with a head network of 4x Global Blocks, the top-1 accuracy on K400 of the frozen setting is further improved over supervised pretraining with ImageNet-1K and 22K. Note that this frozen setting works even better than the full finetuning setting (79.9 vs. 78.0), which also demonstrates the benefit of freezing the backbone: the model can better utilize the knowledge learned in pretraining while avoiding the forgetting and over-fitting issues.\n\n| Head Network | Frozen acc@1 | Full Ft. acc@1 |\n| --- | --- | --- |\nLinear | 70.9 | 77.6 |\n4x Global Blocks | 79.9 | 78.0 |\n\n**Q2 \"For video action recognition, I'm wondering if using a frozen model pretrained on videos (rather than images) would help bridge the gap with full ft.\"**\n\nWe adopted VideoMAE pretrained models to explore the effect of pretraining on videos. 
As VideoMAE used global attention in the backbone while the main results in our work used spatial attention, we report results with both types of attention. As shown in the following table, there is still a performance gap of -4.7 (73.7 vs. 78.4) between the frozen setting and full finetuning. This may be caused by the pretraining task of masked video modeling, which has difficulty capturing high-level semantics.\nAlthough the VideoMAE pretraining does not bridge the gap, the results of CLIP models in **Q1** validate that the frozen setting with a strong pretrained model can even surpass full finetuning. \n\n| Backbone Attention | Head Network | Frozen acc@1 | Full Ft. acc@1 |\n| --- | --- | --- | --- |\nSpatial | Linear |20.8 | 73.3 |\nSpatial | 4x Global Blocks | 69.7 | 76.8 |\nGlobal | Linear | 27.6 | 76.2 |\nGlobal | 4x Global Blocks | 73.7 | 78.4 |\n\n**Q3 It might be helpful to show the benefits of frozen models by comparing training time or resources used.**\n\nThanks for the great suggestion, which helps us improve this paper. We would like to refer the reviewer to the **Speed/Memory analysis** part of our general response. In the general response, we show that training with a frozen backbone can significantly improve speed and reduce memory cost, thus enabling more users with limited resources to take advantage of large models.\n\n**Q4 \"In Table 1, it may be interesting to add MAE (masked autoencoder) style self-supervised learning and see if that shows any difference between frozen and full ft.\"**\n\nAs MAE is originally developed with ViT, we compare it with other pretrained ViT models in the **Analysis on ViTs** of our general response. It is observed that MAE has low accuracy in the frozen setting, and this is very similar to SimMIM. This result may not be surprising, as MAE and SimMIM share the masked image modeling paradigm.", " First, we sincerely thank the reviewers for their constructive feedback.\n\n**Q1 Analysis on ViTs**\n\nSince it is not very easy to apply ViT to various downstream tasks, such as object detection and semantic segmentation, we did not consider ViT for study at the beginning. But adding ViT results is a good idea, so we conducted experiments with ViT-B on K400 video action recognition. Here, we test with five pretrained ViT-B models, including supervised pretraining on ImageNet-1K (SUP-1K), supervised pretraining on ImageNet-22K (SUP-22K), masked image modeling of MAE on ImageNet-1K (MAE-1K), masked video modeling of VideoMAE on Kinetics-400 (VideoMAE-K400) and vision-language pretraining of CLIP on 400M <image, text> pairs (CLIP-400M). \n\nFollowing the setting of Table 1 in the paper, we used a spatial-only transformer backbone and a linear head for K400 video action recognition. As shown in the following table, MAE and VideoMAE both perform poorly in the frozen setting, which is similar to SimMIM. This result could be foreseen as they share the masked image modeling paradigm. 
We can also observe that the CLIP model performs best and the SUP-22K model outperforms SUP-1K by a large margin, which indicates the substantial benefit that data scaling brings to the frozen setting.\n\n| Approach | Frozen (linear) | Full ft (linear) |\n| --- | --- | --- |\nSUP-1K | 58.8 | 75.6|\nSUP-22K | 64.0 | 77.1 |\nMAE-1K | 28.3 | 73.9 |\nVideoMAE-K400 | 20.8 | 73.3 |\nCLIP-400M | 70.9 | 77.6 |\n\n**Q2 Speed/Memory analysis**\n\nWe carefully analyzed the training speed and memory usage of **the frozen backbone with additional tunable parameters** and **full finetuning** on COCO object detection, ADE20K semantic segmentation and K400 video action recognition using various sized models from SwinV2-T to SwinV2-G. As shown below, even with more trainable parameters in the head network, training with a frozen backbone can significantly improve speed and reduce memory cost, especially for large-scale models. Moreover, freezing the backbone reduces the memory consumption of a billion-level model to less than 32G, which makes it possible to run on regular GPUs, thus helping institutions with limited resources to take advantage of such large models. \n\n| Model | Batch Size | COCO Frozen (5x BiFPN) | COCO Full Ft. (FPN) |\n| --- | --- | --- | --- |\nSwinV2-T | 16 | 0.44s / 9.74G | 0.46s / 9.78G |\nSwinV2-B | 16 | 0.50s / 10.38G | 0.65s / 17.29G |\nSwinV2-L | 16 | 0.57s / 10.56G | 0.84s / 25.81G |\nSwinV2-G | 8 | 1.2s / 31.05G | - / > 80G |\n\n\n| Model | Batch Size | ADE Frozen (6x Pixel Decoders) | ADE Full Ft. (1x Pixel Decoders) |\n| --- | --- | --- | --- |\nSwinV2-T | 16 | 0.28s / 4.24G | 0.34s / 5.04G |\nSwinV2-B | 16 | 0.31s / 4.42G | 0.41s / 7.84G |\nSwinV2-L | 16 | 0.34s / 4.84G | 0.48s / 11.77G |\nSwinV2-G | 8 | 1.23s / 23.84G| 3.08s / 78.77G |\n\n\n| Model | Batch Size | K400 (4x Global Blocks) | K400 Full Ft. (Linear) |\n| --- | --- | --- | --- |\nSwinV2-T | 64 | 0.31s / 10.43G | 0.51s / 15.87G |\nSwinV2-B | 64 | 0.52s / 13.46G | 0.96s / 31.47G |\nSwinV2-L | 32 | 0.45s / 12.80G | 0.78s / 27.05G |\nSwinV2-G | 16 |1.14s / 30.54G | - / > 80G |\n\n\n**Q3 Technical Novelty**\n\nThe main purpose of this paper is to design a feasible solution for frozen models on diverse computer vision tasks. For this less-studied problem, we conducted a careful empirical analysis on several key questions including what pretraining task fits best with this frozen setting, how to make the frozen setting more flexible to various downstream tasks, and the effect of larger model sizes. \n\nPlease note that adding more parameters when freezing the backbone is intuitive but not trivial. For example, for object detection, adding more parameters in FPN helps to bridge the gap, but adding in the head network does not. We provide further intuition that both the number of tunable and well-placed parameters is the key to making frozen settings work well.\n\nAlthough there may be other ways to handle this problem, our study provides a strong start with a simple, practical and feasible solution. With this work, we hope to bring greater attention to this promising direction of freezing pretrained image models.", " The paper presents a comprehensive analysis of the use of pretrained image models for some major vision tasks. The paper shows the importance of well-placed tunable parameters to bridge the gap between frozen and finetuning settings, and presents an analysis of feature activation to further showcase this. 
Strength\n* The paper presents an insightful analysis of the properties of frozen pretrained models for downstream tasks\n* On detection tasks, the gap between frozen model and finetuning is negligible, while the gap is still there for semantic segmentation and video action recognition.\n* The authors also analyze different sizes of base network and different pre-training strategies and report their properties on various downstream tasks. It's interesting that larger models require fewer tunable parameters and achieve better performance without increasing trainable parameters.\n\nWeakness:\n* The results in Table 5 suggest that for some tasks frozen models are good enough (e.g. detection), but for others fine tuning still provides clear benefits (e.g. semantic segmentation, action recognition). It would be interesting to delve deeper into what causes the gap, e.g. pretraining methods, model design, task itself, or specific dataset. For example, it's interesting that the AP mask on COCO shows no gap between frozen vs full ft, but there's a 2 point gap on ADE20K mIoU. I would expect the trend between AP mask and mIoU to be similar as both involve pixelwise localization. For video action recognition, I'm wondering if using a frozen model pretrained on videos (rather than images) would help bridge the gap with full ft.\n* It might be helpful to show the benefits of frozen models by comparing training time or resources used.\n* In Table 1, it may be interesting to add MAE (masked autoencoder) style self-supervised learning and see if that shows any difference between frozen and full ft.\n See weakness.\n Not adequately. There are fairness risks around using frozen pretrained models depending on the datasets the frozen models are trained on.", " This paper studies ways to use large, pretrained image models for many different tasks. The approach pre-trains a Swin transformer model using either supervised learning, contrastive learning or masked image modeling. This model is then frozen, and new layers are added and trained for the specific task. This paper finds that supervised pretraining transfers best and adding more layers for the finetuning is generally beneficial. The paper is well written, clear and easy to follow. The topic is quite relevant as training large models is becoming common, and finding the right ways to use the pretrained models is quite important.\n\nHowever, there are a few weaknesses that should be addressed:\n\n(1) Table 1 results are only on small-ish pretraining datasets of ImageNet-1k and -21k. This could be limiting, as a main benefit of the contrastive training, e.g., CLIP and ALIGN, has come from the massive dataset sizes (e.g., hundreds of millions to billions). It has also been shown that large image transformer models (e.g., ViT) benefit more from large supervised pretraining (e.g., billions of samples). These observations make it hard to know if the results in table 1, Fig 4, table 5 will generalize, or if they are specific to this setting. For example, adding experiments using the CLIP pretrained model, to see how that scale of pretraining data affects the results.\n\n(2) The paper doesn't really propose anything new. It is a useful study, however, it is limited by the few settings compared. E.g., adding more layers and BiFPN for object detection, adding more layers for segmentation/action classification. These aren't especially interesting insights. 
One of the questions this paper sets out to address is:\n\"What is the key to making the frozen setting work well when the downstream tasks are significantly different from the pretraining task?\" \nAnd I don't think the current experiments really answer that question, unless the key is simply adding more tuneable layers. Related to that, it would be interesting to compare adding trainable layers vs. making parts of the pre-trained model trainable. E.g., instead of adding 10 new layers, only add 1, but make the last 9 of the pre-trained model trainable.\n\nThere are many other options not explored in this work, which limits the insight gained.\n\nOverall, the paper has limited originality and significance due to those weaknesses. Please address the weaknesses above. Specifically,\n\n(1) Will these results change with different pretraining data scales?\n(2) Are these methods really the key to using frozen image models? Are there other methods that could be better?\n(3) If this paper is trying to find the key to using frozen models, why are the results lower than finetuned? Is this an inherent limitation of using frozen models? Or has the method to properly utilize pretrained models not been found? There was no impact section and the only limitations mentioned were that the approach does not perform as well as full finetuning.", " Authors present a relatively comprehensive study of how far frozen pre-trained models can get us for multiple downstream computer vision tasks: object detection, semantic segmentation and action classification. They specifically explore 1) which pretraining task leads to the most useful representations (out of supervised, self-supervised and masked image modeling, supervised on In22K is best); 2) how the performance is affected by adding more tunable parameters (it helps significantly, but less so for larger frozen models -- similar to trends observed for large language models) and 3) what are the best results one can achieve with a very large frozen model (Swin-V2-G, 3B parameters). They obtain some good results on all the 3 tasks considered. ## Strengths\n\n1. [Significance] Impressive results with a frozen backbone: The results reported without finetuning the backbone are quite impressive and to my knowledge not previously reported. For instance 79.6% on Kinetics-400 with a frame-level feature extractor + a few temporal/global aggregation blocks is comparable to recent work like TimeSformer, which is finetuned end-to-end. However it is obtained by adding additional tunable parameters on top of the frozen model, so it's not clear how favorably it compares in training compute cost to full finetuning.\n\n2. [Clarity] The paper is well written and easy to follow. The sections are organized well and flow naturally, making the paper an easy and pleasant read.\n\n## Weaknesses\n\n1. [Novelty] The results aren't particularly novel: While they haven't been reported before to my knowledge and hence are interesting for the community to know, they aren't surprising or unexpected. ImageNet-22K models have been well known to generalize better to downstream tasks; and masked-image-modeling based models have not generalized well without full finetuning. Adding more tunable parameters is expected to improve performance. \n\n2. [Quality] Analysis of training speed/memory usage: What does training with a frozen backbone enable? One important gain would be making state of the art models accessible to users with limited resources. 
However, the paper doesn't show an explicit comparison on this axis. How does the training time of frozen + tunable parameter models compare to finetuning the frozen model without tunable parameters? Does it lead to significant training time or GPU memory savings that enable SOTA models to be trained on very small GPUs? Or do the additional trainable parameters somewhat nullify the gains of freezing the backbone?\n\n3. [Clarity] Some parts of the model enhancements are not clear. What is the precise design of the \"global\" and \"temporal\" blocks added in Table 4? Also, how is the aggregation over the full video done?\n\n4. [Quality] Additional analysis: While the paper has a decent amount of analysis, it is limited to Swin Transformer-based architectures; similar analysis on ViTs would make the paper stronger. Also, the set of tasks is somewhat limited -- object detection and semantic segmentation are quite similar (and authors also observe similar trends for both). Paper's impact can be increased by adding a few more computer vision tasks, such as vision-language tasks (VQA, captioning etc), 3D/RGBD recognition tasks etc.\n\n## Overall\nThe paper is generally well written and motivated. The results are interesting, however not totally novel or surprising. More analysis of the computational savings would make future versions of the paper stronger. Hence, I am borderline, however leaning towards acceptance given the range of results shown across object detection, segmentation and video classification, with different sized Swin models etc. I would be interested in seeing more analysis on computational savings (i.e. training time and GPU memory) of adding additional tunable parameters on a frozen backbone vs. finetuning the frozen backbone. If that improves accessibility of SOTA models to users with limited resources, that would increase the significance of this work. N/A" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5 ]
[ "6xX6p-lHsyUP", "rlHsA_KHNza", "V5TtjoDMxex", "3yWjJC6xgSw", "rppWfLBpEQQ", "5pJM3jGEC_b", "A1sBSDPtWVw", "Zn5LitvYKpc", "nips_2022__iXQPM6AsQD", "nips_2022__iXQPM6AsQD", "nips_2022__iXQPM6AsQD", "nips_2022__iXQPM6AsQD" ]
nips_2022_IlYS1pLa9y
Searching for Better Spatio-temporal Alignment in Few-Shot Action Recognition
Spatio-temporal feature matching and alignment are essential for few-shot action recognition as they determine the coherence and effectiveness of the temporal patterns. Nevertheless, this process can be unreliable, especially when dealing with complex video scenarios. In this paper, we propose to improve matching and alignment through the end-to-end design of models. Our solution is two-fold. First, we enhance the spatio-temporal representations extracted from few-shot videos from the architecture perspective. To this end, we propose a specialized transformer search method for videos, so that spatial and temporal attention can be well-organized and optimized for stronger feature representations. Second, we design an efficient non-parametric spatio-temporal prototype alignment strategy to better handle the high variability of motion. In particular, a query-specific class prototype is generated for each query sample and category, which better matches query sequences against all support sequences. With these designs, our method SST shows significant superiority on the benchmark UCF101 and HMDB51 datasets. For example, with no pretraining, our method achieves a 17.1\% Top-1 accuracy improvement over the baseline TRX in the UCF101 5-way 1-shot setting while using 3x fewer FLOPs.
Accept
All three reviewers lean towards acceptance of the paper. The reviewers believe the rebuttal has addressed their concerns. The AC recommends acceptance of the paper, and suggests that the authors include the materials and discussion they promised in the rebuttal in the final version of the paper.
test
[ "KdE3VJQ7jly", "xGWamRe-1XK", "CkXbZpyPmnN", "3JX4sDyDwElX", "pYZuMd8u8ys4", "6bG4bp3mTyof", "cQAH6Ym3vwo", "ocAphxfGLlz", "ARTI-w73XDm", "h1tgp9aGXs", "NFKorCJFun4", "dgc0Adr96_K", "zP1n3dVwbo", "KkAFflc2KBx", "uUw3NH4O7Yk", "GXq9UzQR2Ix", "gIVItTuSEb1", "XGAPSWZyRnC", "3JumZEX0Pk", "vdveXOdNY5", "2l4RgRUa9e5" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the effort made by the authors to provide additional experimental results. The response addressed all of my concerns. Therefore, I raise my rating to weak accept.", " Dear reviewer 4Tcp:\n\nWe sincerely thank you for your careful reviews and insightful comments. We have tried our best to respond to the questions raised and add corresponding experimental results to the updated manuscript and supplementary. Please let us know if you still have any unclear parts of our work. We will still try our best to respond and improve. Thanks again for your efforts to review our work.\n\nBest,", " Thank you for your support. I will add the relevant contents to the supplementary later. Thanks again for your patience.", " Thanks for stating the details on how search space will grow. I agree that there needs to be a compromise of selected axes. Please add this discussion to supplementary and put things in perspective.", " Thanks for your valuable advice. Our initial design goal is that the search space can accommodate both video understanding and feature extraction ability. And selecting the # head is mainly based on the consideration of search complexity. We hope to answer the reasons from the following folds:\n\n\n(1) The actual performance of NAS is determined by the space complexity and search efficiency. Considering too many dimensions may affect the performance of the searched model. In some previous Transformer NAS methods (e.g. AutoFormer [6], ViTAS [33]), it is mentioned that # head and channel dimension are indeed important for model design. But comparatively speaking, the search for # head is not easy to lead to huge search space. This is one of its advantages. In this work, the size of our search space is $4.74 \\times {10} ^{18}$. If we follow the setting of ViTAS [33] to search the token embedding dimension, the search space may grows to $4.74 \\times {10} ^{42}$. If we follow the setting of AutoFormer [6] to search the embedding dimension and MLP ratio (the ratio of hidden dimension to the embedding dimension in the multi-layer perceptron), the search space will become larger (even $6.32 \\times {10} ^{51}$). Therefore, we finally referred to the settings of TimeSformer [2] in the channel dimension without conducting channel search.\n\n\n(2) For spatio-temporal resolution, we divide the model into three stages with different resolutions, giving the model some freedom to choose spatio-temporal resolution. We also found that, when fewer input frames are fed in, more temporal attention blocks will be placed in the third stage to improve the action modeling capability.\n\n\n(3) For parallel/multi-path connections and components beyond self-attention, these are indeed not considered in this work. Because the search space in this paper is mainly inspired by Transformer-based video understanding and image classification models. Our original intention is to design a relatively simple and smart search space.\n\nThanks again for your patience here. Although some of the axes you mentioned are not directly studied in this paper, it is an exciting direction for us. Searching for channel search, parallel/multi-path connections and more other operations are expected to build a more unified and diverse search space and further excavate the potential of NAS in neural network design. We really hope to do this in our future work.\n\n\n", " Q1: I think it would be meaningful to include the differences with TRX alignment (mentioned above), in the main paper. 
It would help a reader to choose between TRX and the proposed method.\n\nA1: Thanks for your constructive advice. We have added relevant expressions to Section 4.3.3 in the main paper, from Line 273 to Line 286.\n\nQ2: I still do not agree with the term 'spatio-temporal alignment'. Although the input representation is spatio-temporal, the alignment process itself is only temporal. Renaming the term would be better than adding a clarification, but either would be okay.\n\nA2: Thanks for your valuable advice. According to your suggestion, we have renamed this term “Temporal Alignment (TA)” to avoid confusion, and we have updated all relevant descriptions in the main paper and supplementary materials. We hope that our improved manuscript can address your concerns.\n", " Thanks for your valuable advice. According to your suggestions, we have added the pretrained results to Section 4.3.4 in the main paper, from Line 301 to Line 307.", " The clarification provided by authors in A7 does not answer my question Q7. I agree that the mentioned prior work has shown the benefits of considering different spatio-temporal resolutions, divided space-time attention and so on. In your search space, the selection of spatial and temporal components is reasonable. Among other options for search (eg: spatio-temporal resolution, channel-expansion, parallel/multi-path connections, components beyond self-attention), why do you select the number of heads as the only other dimension of search? Why this is the optimal (or reasonable-with-budget) search space is still not well-motivated.", " Thanks for the clarifications.\n\n- I think it would be meaningful to include the differences with TRX alignment (mentioned above), in the main paper. It would help a reader to choose between TRX and the proposed method.\n\n- I still do not agree with the term 'spatio-temporal alignment'. Although the input representation is spatio-temporal, the alignment process itself is only temporal. Renaming the term would be better than adding a clarification, but either would be okay.", " I thank the authors for taking time to run the requested experiments. I am satisfied with these results. Regarding the final version, I suggest the authors find a way to include the pretrained results in the main paper (maybe in the same table). Having the ablations under the same budget in the supplementary would be okay.", " Dear reviewer 4Tcp:\n\nWe thank you for your precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if any parts of our work are still unclear.\n\nBest,", " Dear reviewer kc2E:\n\nWe thank you for your precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if any parts of our work are still unclear.\n\nBest,", " Q3: This paper only provides the experiments without pre-trained weights. I am curious about the results of the pre-trained weights with the ImageNet dataset. With this setting, can this algorithm still achieve a promising result?\n\nA3: Thanks for your valuable advice. We have supplemented relevant experiments in the rebuttal.\nBecause the backbone of our model learns spatio-temporal features jointly, it is a video-understanding-type model. 
We can only use video datasets to train it. We report the performance of our model after pretraining on the large-scale Kinetics-400 dataset. Similarly, all models here use 8 frames as input. Through pretraining on the Kinetics dataset, all models are compared under a fair pretraining condition. After that, the few-shot dataset UCF101 is used for fine-tuning and testing, and the results are shown in the following table. As shown in Table 12, our proposed model still has the best performance after pretraining on a large-scale dataset. With only 1/3 of the FLOPs, our method can surpass TRX by 2.5% and 10.8% on the UCF101 and HMDB51 datasets, respectively.\n\n| Method | Pretraining | | UCF101 | | | HMDB51 | |\n|------------------------|---------------------|-------------|---------------|---------------|-------------|---------------|---------------|\n| | | Acc | Params | FLOPs | Acc | Params | FLOPs |\n| TimeSformer [2] | - | 63.0 | 40.7M | 73.35G | 41.7 | 40.7M | 73.35G |\n| TimeSformer [2] | Kinetics-400 | 80.5 | 40.7M | 73.35G | 54.2 | 40.7M | 73.35G |\n| TRX [25] | - | 67.0 | 25.6M | 41.43G | 46.4 | 25.6M | 41.43G |\n| TRX [25] | Kinetics-400 | 85.1 | 25.6M | 41.43G | 60.7 | 25.6M | 41.43G |\n| Ours | - | 69.7 | 8.84M | 13.76G | 60.4 | 8.91M | 13.65G |\n| Ours | Kinetics-400 | 87.6 | 8.73M | 13.61G | 71.5 | 8.75M | 13.52G |\n\nDue to page limitations of the manuscript, these experimental results are added to the Supplementary Materials, from Line 625 to Line 631. We are very sorry for this late experiment.\n\nQ4: Fig 4 and Fig 5 analyze the effect of search space shrinking from the aspect of the supernet. I notice that the test loss in Fig 5 starts from 2.5 epochs. Why not show the results from epoch 0? Besides, I do not think there is a direct correspondence between the training, testing, and removing useless operations. The supernet may get better training because of the smaller search space rather than removing redundant ops.\n\nA4: Thanks for your valuable advice. This is because the loss changes dramatically in the initial stage of training. Within the first few epochs, the loss often drops severalfold within a short time, e.g., 8.7->1.5. However, this figure is meant to highlight the differences between models after training gradually stabilizes. So we omit the performance of the first two epochs to highlight the overall trend afterwards.", " Q6: Specific hyperparameter settings are missing, especially in the shrinkage algorithm, which is probably important to include/discuss.\n\nA6: Thanks for your advice. To implement the shrinking strategy, we maintain an operation mask for all operations to control which operations the supernet can select. The initial value of the mask is set to 1, indicating that the operation can be selected. During training, the loss value and structure of each subnet are recorded. We calculate the score of all candidate operations every two epochs according to Eq. 6. Then, all operations are ranked according to their scores, and the top 10% of operations are discarded. The mask values corresponding to the discarded operations are set to 0, indicating that these operations are in an unselectable state in the subsequent search. After that, the following search process can be carried out more efficiently on this shrunk space. A minimal sketch of this procedure is given below.\nWe have made corresponding modifications in the manuscript to promote the understanding of the relevant explanations, and updated the ‘B. Details of Transformer Space Shrinking’ subsection in the Supplementary Materials.
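For concreteness, the shrinking step described above can be sketched as follows. This is our illustrative pseudocode, not the released implementation; `score_fn` abstracts the Eq. 6 computation from the recorded (subnet, loss) pairs, and all names are hypothetical:

```python
def shrink_step(mask, score_fn, drop_ratio=0.1):
    """One shrinking step, run every two epochs of supernet training:
    score each still-selectable operation (a higher score means a worse
    operation under Eq. 6) and mask out the worst drop_ratio fraction."""
    active = [op for op, m in mask.items() if m == 1]
    scores = {op: score_fn(op) for op in active}
    n_drop = max(1, int(len(active) * drop_ratio))
    for op in sorted(active, key=scores.get, reverse=True)[:n_drop]:
        mask[op] = 0  # 0 = unselectable in the shrunk search space
    return mask

# Hypothetical usage: candidate operations keyed by (layer index, block type).
num_layers = 12
mask = {(i, blk): 1 for i in range(num_layers) for blk in ("SAB", "TAB")}
mask = shrink_step(mask, score_fn=lambda op: float(op[0]))  # dummy scores
```

In this sketch the mask only gates which operations later subnet sampling may pick; the supernet weights themselves are untouched, matching the description above.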
\n\nQ7: Why is this the best search space? This is not well-motivated. I believe the authors should discuss why these dimensions (spatial/temporal/heads) are the ones to search in and whether it is sufficient. There are many other aspects such as spatio-temporal resolution, channel-expansion, parallel/multi-path connections, components beyond self-attention. I agree that including all these axes will be practically impossible, but the authors' choices should be well-motivated compared to other options.\n\nA7: Thanks for your advice. As you state, there are many possible operations and hyper-parameters for the model to choose from. In designing the search space, we drew on many excellent prior works. First, in terms of the overall structure of the model, [13] shows that video understanding models place different emphasis on temporal and feature-map resolution at different stages, and the manually designed X3D model has achieved great success in the video understanding task. Few-shot action recognition places high demands on the quality of the video representation, which motivated us to utilize NAS to explore the model structure. We hope our model can spontaneously choose and focus on different types of information at different stages to obtain better representations. Second, in terms of video understanding through NAS, [21] and [41] explored searching 3D CNNs, and both achieved good performance. Considering the natural advantages of Transformers in sequence analysis, we designed our search space based on Transformers. Third, through the comparison of various space-time modules, the manually designed TimeSformer confirms the effectiveness of the Divided Space-Time Attention module. 
Finally, the symbol $∪$ indicates that the selected operations of each layer are combined to form a complete network. We have modified related expressions in revised manuscript, from Line 133 to Line 135.", " Q2: The proposed Prototype Alignment module is well-motivated and a main contribution of the paper. However, I wonder if this is truely novel. It has similarities with the proposal in TRX which also presents a query-specific class prototype for computing distance.\n\nA2: Thanks for your advice. We politely disagree that our idea is not novel compared with TRX for the following three reasons.\n\nFirst, our video representation method is more reasonable and efficient. TRX constructs exhaustive pairs/triplets of sampled spatial features to generate the video representations. However, our searched model naturally generates frame-level spatio-temporal representation, which already contains rich temporal information, so the laborious combination operation is omitted. The disadvantage of TRX method is obvious: the complexity will increase rapidly with the increase of the number of frames. Here we list a table to compare the differences between two generated prototypes when dealing with different input frame numbers. It can be seen from the table that the number of combination and the size of attention map will increase rapidly with the increase of the number of input frames, which is not convenient for the possible application of longer videos in the future.\n\n| # Input Frames | TRX [28] | TRX [28] | Ours |\n|:---------------------:|:---------------:|:----------------:|:-------------------------:|\n| | #Pairs | #Triplets | Temporal Dimension |\n| 4 | 6 | 4 | 4 |\n| 6 | 15 | 20 | 6 |\n| 8 | 28 | 56 | 8 |\n| 12 | 66 | 220 | 12 |\n| 16 | 120 | 560 | 16 |\n\nSecond, proposed spatio-temporal prototype alignment is a nonparametric and concise solution suitable for few-shot video understanding, which can generate high-quality frame-level attention map without relying on complex subnetworks. TRX relies on a CrossTransformer subnetwork to perform alignment operations.\n\nThird, our proposed method greatly alleviates the difficulty of long video modeling, and makes it possible to align and match two complex actions. The pairs/triplets based methods have limited video understanding ability for complex actions (some complex human actions cannot be represented by only 2 or 3 sparse sampled frames). A simple example is that pairs/triplets-based approaches cannot distinguish whether a person hits the desk 3 times or 4 times. Because the maximum number of sampling frames is only 3, and it is difficult for the model to understand this repetitive action beyond triplet.\n\nQ3: Also, the authors call this module 'spatio-temporal alignment' (Section 3.4), when in fact, there is no spatial information at the input to the module. This only performs temporal alignment.\n\nA3: The reason why we named the proposed module ‘spatio-temporal prototype alignment’ is that the objective of the alignment operation is the spatio-temporal representation. Our model does generate a frame-level attention map with a dimension of TxT in the inference process, rather than spatial alignment. We have clarified relevant statements in the revised manuscript to avoid ambiguity for readers.\n\nQ4: Some information missing regarding the shrinkage strategy.\n\nA4: Thanks for your advice. In our shrinking strategy, the score of each operation is calculated according to Eq. 6. 
The main body of the score is based on the loss function, so the greater the loss, the worse the operation, that is, the greater the score, the worse the operation that should be discarded. We also made corresponding modifications in the manuscript to promote the understanding of relevant explanations, and updated the ‘B. Details of Transformer Space Shrinking’ subsection in Supplementary Materials. Thank you for your helpful suggestions once again.\n\nQ5: Writing in the paper is sometimes missing critical information and hard to follow. It can be improved. Make sure the paper is self-contained (eg: L113: 'as in [15]').\n\nA5: Thanks for your advice. We provide more detailed description for the ‘Spatial Downsampling Block’ in the revised manuscript. We added the content “To avoid the model being overly complex, We incorporate the dimension downsample block, named ''Spatial Downsampling Block (SDB)'' in Table 1 as in [15]. The core operation of this block is to use the down sampling in spatial self-attention, which maps an input tensor of size $(C,W,H )$ to an output tensor of size $\\left(C^{\\prime}, W / 2, H / 2\\right)$ with $C^{\\prime}>C$. Due to the change in scale, we can easily control the model complexity in different stages.” to the revised manuscript from Line 116 to Line 120.", " Thanks for your positive support and constructive opinions. The following answers will be revised accordingly in the final version. And we will discuss related papers (e.g., PAL and TA2N) in our final version.\n\nQ1: I understand that there may be some contradictions between pre-training and few-shot learning. However, since almost all of the state-of-the-art methods are based on pre-trained weights instead of random initialization, the authors should also report performances with pre-trained weights to ensure comparability among them.\n\nA1: Thanks for your valuable advice. We have supplemented relevant experiments in rebuttal.\nBecause the backbone of our model synchronously learns spatio-temporal features, it is a model of video understanding type. We can only use video datasets to train it. We report the performance of our model after pretraining on large-scale Kinetics-400 dataset. Similarly, all models here use 8 frames as input. Through pretraining on Kinetics dataset, all models are compared under a fair pretraining condition. After that, the few-shot dataset UCF101 is used for fine-tuning and testing, and the results are shown in the following table. As shown in Table 12, our proposed model still has the best performance after pretraining on large-scale dataset. With only 1/3 FLOPs, our method can surpass TRX by 2.5% and 10.8% on the UCF101 and HMDB51 dataset, respectively.\n\n| Method | Pretraining | | UCF101 | | | HMDB51 | |\n|------------------------|---------------------|-------------|---------------|---------------|-------------|---------------|---------------|\n| | | Acc | Params | FLOPs | Acc | Params | FLOPs |\n| TimeSformer [2] | - | 63.0 | 40.7M | 73.35G | 41.7 | 40.7M | 73.35G |\n| TimeSformer [2] | Kinetics-400 | 80.5 | 40.7M | 73.35G | 54.2 | 40.7M | 73.35G |\n| TRX [25] | - | 67.0 | 25.6M | 41.43G | 46.4 | 25.6M | 41.43G |\n| TRX [25] | Kinetics-400 | 85.1 | 25.6M | 41.43G | 60.7 | 25.6M | 41.43G |\n| Ours | - | 69.7 | 8.84M | 13.76G | 60.4 | 8.91M | 13.65G |\n| Ours | Kinetics-400 | 87.6 | 8.73M | 13.61G | 71.5 | 8.75M | 13.52G |\n\nDue to page limitations of manuscript, these experiment results are updated to the Supplementary Materials, from Line 625 to Line 631. 
We are very sorry for these late experiments.\n\nQ2: To demonstrate the superiority and generality of the proposed method, experimental evaluations should be performed on large-scale datasets of action recognition, such as Kinetics and Something-Something, just like the competitors did.\n\nA2:Thanks for your valuable advice. We supplemented the experiment of pre-training with large-scale Kinetics dataset, and proved the superiority of our method under the same pre-training conditions. In addition, on the two classic FSL datasets, UCF101 and HMDB51, we conducted a large number of fair experiments to prove the effectiveness of the method, including using and not using pre training weights. We also follow the suggestions of other reviewers to supplement the comparative experiment of training similar compute budget models from scratch to reduce the risk that large models are easier to overfit.\n\nQ3: Some closely related works, such as [a, b], should also be introduced and compared in the paper.\n\nA3: Thanks for your valuable advice. We are sorry for missing related references. According to your suggestion, we have revised the RELATED WORK accordingly, together with related references. Two relevant references are added into this subsection:\n\n[43]\tZhu, X., Toisoul, A., Perez-Rua, J.M., Zhang, L., Martinez, B., Xiang, T.: Few-shot action recognition with prototype-centered attentive learning. arXiv preprint arXiv:2101.08085 (2021)\n\n[23]\tLi, S., Liu, H., Qian, R., Li, Y., See, J., Fei, M., Yu, X., Lin, W.: Ta2n: Two-stage action alignment network for few-shot action recognition. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 36, pp. 1404–1411 (2022)\n\nAnd, we added the content “[43] proposes a prototype-centered contrastive learning loss and a hybrid attentive learning mechanism to minimize the negative impacts of outliers and promote class separation in few-shot action recognition task. [23] designs temporal transform module to handle action misalignment, which consists of two parts: a localization network and a temporal affine transformation.” to the revised manuscript, from Line 75 to Line 79.\n", " Thanks for your positive support and constructive opinion. We will enhance our writing, and the following answers will be revised accordingly in the final version.\n\nQ1: The formulation of transformer space shrinking is good. It is interesting and reasonable that this paper leverages the expectation of subnets and budgets for evaluating different operations. However, this method is a little complicated. Do authors have any plan to open source the code for the contribution of the community?\n\nA1: We do have plans to open source. The code will be available in GitHub after camera-ready, and all experimental details and procedures will be disclosed.\n\nQ2: The authors propose a spatio-temporal prototype alignment method. I am considering the efficiency of this new method. Does the proposed method have the same computation budget as the old way?\n\nA2: Thanks for your advice. Our method is more efficient than the old way. For example, TRX aggregates the temporal information through the arrangement and combination of spatial information pairs/triplets. The complexity will increase rapidly with the number increase of the input frames. 
Moreover, this combination of sparse frames is not suitable for processing long videos, and its recognition ability for complex actions will also be limited (some complex human actions cannot be represented by only 2 or 3 sparse sampled frames). A simple example is that pairs/triplets-based approaches cannot distinguish whether a person hits the desk 3 times or 4 times. Because the maximum number of sampling frames is only 3, and it is difficult for the model to understand this repetitive action beyond triplet. Our method is to directly generate frame level spatio-temporal representation, which already contains rich temporal information, so the laborious combination operation is omitted. It can be considered as a more concise and general video feature alignment method. Here we make a table to show the combinatorial complexity explosion faced by the alignment method in TRX.\n\n| # Input Frames | TRX [28] | TRX [28] | Ours |\n|:---------------------:|:---------------:|:----------------:|:-------------------------:|\n| | #Pairs | #Triplets | Temporal Dimension |\n| 4 | 6 | 4 | 4 |\n| 6 | 15 | 20 | 6 |\n| 8 | 28 | 56 | 8 |\n| 12 | 66 | 220 | 12 |\n| 16 | 120 | 560 | 16 |\n", " Thanks for your positive support and constructive opinion. We will enhance our writing, and the following answers will be revised accordingly in the final version.\n\nQ1: It would be interesting to see the performance w/o any pretraining at a similar compute budget.\n\nA1: Thanks for your valuable advice. We have supplemented relevant experiments in rebuttal.\nFirst, we compared the performance of our method and the previous methods under the condition of similar compute budget on UCF101 dataset. We increase the number of searchable layers of our model to three times of the original version stage by stage, making its Params and FLOPs similar to that of TRX. Then we evaluate the performance of extended model in 5-way 5-shot setting. All methods here use 8 frames as input. As shown in Table 13, the extended model still surpasses TRX and TimeSformer and is even slightly better than the original version. This verifies that our method is still better than the previous method at similar compute budget. Increasing the scale of the proposed model does not lead to the drop of performance, which proves that it is feasible to train such a large-scale model on this dataset.\n\n| Method | Acc | Params | FLOPs |\n|:----------------------------------:|:-----------:|:-------------:|:-------------:|\n| TimeSformer [2] | 63.0 | 40.7M | 73.35G |\n| TRX [28] | 67.0 | 25.6M | 41.43G |\n| Ours | 69.7 | 8.84M | 13.76G |\n| Ours (Searchable layers x3) | 70.2 | 26.1M | 41.19G |\n\nIn addition, we also have relevant plans for the pretraining experiments on large-scale datasets that other reviewers are concerned about. We report the performance of our model after pretraining on large-scale Kinetics-400 dataset. Similarly, all models here use 8 frames as input. Through pretraining on Kinetics dataset, all models are compared under a fair pretraining condition. After that, the few-shot dataset UCF101 is used for fine-tuning and testing, and the results are shown in the following table. As shown in Table 12, our proposed model still has the best performance after pretraining on large-scale dataset. 
", " Thanks for your positive support and constructive opinion. We will enhance our writing, and the following answers will be reflected accordingly in the final version.\n\nQ1: It would be interesting to see the performance w/o any pretraining at a similar compute budget.\n\nA1: Thanks for your valuable advice. We have supplemented the relevant experiments in the rebuttal.\nFirst, we compared the performance of our method and the previous methods under a similar compute budget on the UCF101 dataset. We increase the number of searchable layers of our model to three times that of the original version, stage by stage, making its Params and FLOPs similar to those of TRX. Then we evaluate the performance of the extended model in the 5-way 5-shot setting. All methods here use 8 frames as input. As shown in Table 13, the extended model still surpasses TRX and TimeSformer and is even slightly better than the original version. This verifies that our method remains better than the previous methods at a similar compute budget. Increasing the scale of the proposed model does not lead to a drop in performance, which shows that it is feasible to train such a large-scale model on this dataset.\n\n| Method | Acc | Params | FLOPs |\n|:----------------------------------:|:-----------:|:-------------:|:-------------:|\n| TimeSformer [2] | 63.0 | 40.7M | 73.35G |\n| TRX [28] | 67.0 | 25.6M | 41.43G |\n| Ours | 69.7 | 8.84M | 13.76G |\n| Ours (Searchable layers x3) | 70.2 | 26.1M | 41.19G |\n\nIn addition, we also have relevant plans for the pretraining experiments on large-scale datasets that other reviewers are concerned about. We report the performance of our model after pretraining on the large-scale Kinetics-400 dataset. Similarly, all models here use 8 frames as input. Through pretraining on the Kinetics dataset, all models are compared under a fair pretraining condition. After that, the few-shot datasets UCF101 and HMDB51 are used for fine-tuning and testing, and the results are shown in the following table. As shown in Table 12, our proposed model still has the best performance after pretraining on a large-scale dataset. With only about 1/3 of the FLOPs, our method surpasses TRX by 2.5% and 10.8% on the UCF101 and HMDB51 datasets, respectively.\n\n| Method | Pretraining | | UCF101 | | | HMDB51 | |\n|------------------------|---------------------|-------------|---------------|---------------|-------------|---------------|---------------|\n| | | Acc | Params | FLOPs | Acc | Params | FLOPs |\n| TimeSformer [2] | - | 63.0 | 40.7M | 73.35G | 41.7 | 40.7M | 73.35G |\n| TimeSformer [2] | Kinetics-400 | 80.5 | 40.7M | 73.35G | 54.2 | 40.7M | 73.35G |\n| TRX [28] | - | 67.0 | 25.6M | 41.43G | 46.4 | 25.6M | 41.43G |\n| TRX [28] | Kinetics-400 | 85.1 | 25.6M | 41.43G | 60.7 | 25.6M | 41.43G |\n| Ours | - | 69.7 | 8.84M | 13.76G | 60.4 | 8.91M | 13.65G |\n| Ours | Kinetics-400 | 87.6 | 8.73M | 13.61G | 71.5 | 8.75M | 13.52G |\n\nDue to the page limit of the manuscript, we have added the above two sets of experiments to the Supplementary Materials, from Line 624 to Line 638. We are very sorry for these two late sets of experiments.\n\n", " This paper presents a video model for few-shot action recognition. It has two main contributions: (1) NAS to search for model components from a Transformer search space w/ divided space-time attention (space: selecting Spatial/Temporal attention blocks, selecting attention heads in {6,12,16}), and (2) a Prototype Alignment module to compute similarity scores between query and support sets (similar to TRX [25]). The Transformer search space is reduced during training based on a shrinkage strategy. The corresponding shrinkage score has the following intuition to determine the strength of an operator (i.e., S/T attention or different number of heads): if the expected loss of a subnet with a certain operator deviates (beyond a threshold) from the expected loss of subnets with the same compute budget, then the corresponding operator is dropped from the search space. Simply put, an operator is useless if it performs worse at the same budget. The authors validate their method with few-shot settings in UCF101 and HMDB51, but mainly consider models initialized from scratch (w/o any ImageNet pretraining) in contrast to previous work. Ablation studies further provide insights into different components of the proposed design. Strengths:\n\n- This paper presents a well-motivated idea and sound arguments in general. The selection of spatial/temporal components makes the design simple, and the search more convenient. It makes sense to vary the number of temporal components in settings w/ different numbers of input frames.\n\n- The claims of the paper are thoroughly validated in the experiments. Also, the provided ablations give insights into each component in the proposed design.\n\n\nWeaknesses:\n\n- The pretraining setup that the authors follow seems to be unfair. In fact, the authors initialize the models from scratch rather than from ImageNet-pretrained weights (in contrast to previous work), citing a possible information leak from the pretraining data to the few-shot classes. I do agree with this comment. However, the baseline methods that the authors compare against have a significantly larger budget (params/FLOPs) and may be at a disadvantage compared to the proposed method if not pretrained on a large-enough dataset, because large models tend to overfit more on small training datasets. It would be interesting to see the performance w/o any pretraining at a similar compute budget. 
In Table 7, the authors pretrain on UCF101, which is still a smaller dataset.\n\n- The proposed Prototype Alignment module is well-motivated and a main contribution of the paper. However, I wonder if this is truly novel. It has similarities with the proposal in TRX, which also presents a query-specific class prototype for computing distance. Also, the authors call this module 'spatio-temporal alignment' (Section 3.4), when in fact there is no spatial information at the input to the module. This only performs temporal alignment.\n\n- Some information is missing regarding the shrinkage strategy (this can be addressed with another equation). It says that an operator is discarded if its score 'deviates' from the expected score at a similar budget. The score could be significantly lower/higher in both directions beyond the threshold, and a lower score should be better as far as I can see. Is that not the case? This should be clarified.\n\n- The writing in the paper is sometimes missing critical information and hard to follow. It can be improved. Make sure the paper is self-contained (e.g., L113: 'as in [15]').\n\n- Specific hyperparameter settings are missing, especially in the shrinkage algorithm, which is probably important to include/discuss.\n\n\n[Post-rebuttal]\n- The authors have responded to all my concerns and I am satisfied with their answers. - Why is this the best search space? This is not well-motivated. I believe the authors should discuss why these dimensions (spatial/temporal/heads) are the ones to search in and whether they are sufficient. There are many other aspects such as spatio-temporal resolution, channel expansion, parallel/multi-path connections, and components beyond self-attention. I agree that including all these axes would be practically impossible, but the authors' choices should be well-motivated compared to other options.\n\n- Is the inequality in Eq. 5 correct? I think one should be reversed. Is it not true that when the budget is higher the loss is lower in general?\n\n- I do not fully understand the definition of a subnet based on its operators (in L125). I see the motivation, but the notation seems off. Why use summation for each operator within a layer but union across layers? Also, what do you mean by saying the sum of all indicator functions in a layer equals 1? Please clarify this and represent it better in the paper.\n\n[Post-rebuttal]\n- The authors have clarified my questions and I am satisfied with their answers. No major negative societal impact.", " This paper proposes a spatio-temporal feature matching and alignment method for the task of few-shot action recognition with an end-to-end model design. To achieve this aim, the authors construct a transformer space with spatial and temporal attention elements and search for the optimal spatio-temporal representations of few-shot videos from the architecture perspective. In detail, this paper introduces a transformer space shrinking strategy to adaptively evolve the transformer space and speed up the architecture search. Besides, this paper proposes a more efficient and general spatio-temporal prototype alignment method, which can be conveniently adapted to arbitrary-length video matching. This paper is novel, and the promising results compared to baseline methods prove the effectiveness of the proposed method. + The formulation of transformer space shrinking is good. It is interesting and reasonable that this paper leverages the expectation of subnets and budgets for evaluating different operations. 
However, this method is a little complicated. Do the authors have any plan to open-source the code for the benefit of the community?\n \n+ The authors propose a spatio-temporal prototype alignment method. I am curious about the efficiency of this new method. Does the proposed method have the same computation budget as the old way?\n \n+ The visualization of the searched architectures is very impressive, and the analysis of the searched architectures seems promising.\n \n+ With much smaller FLOPs and fewer parameters, this algorithm achieves relatively good performance compared to the baseline methods.\n \n- This paper only provides the experiments without pre-trained weights. I am curious about the results with weights pre-trained on the ImageNet dataset. With this setting, can this algorithm still achieve a promising result?\n \n- Fig 4 and Fig 5 analyze the effect of search space shrinking from the perspective of the supernet. I notice that the test loss in Fig 5 starts from 2.5 epochs. Why not show the results from epoch 0? Besides, I do not think there is a direct correspondence between the training, testing, and removal of useless operations. The supernet may get better training because of the smaller search space rather than because of removing redundant ops. As mentioned in the main review, I am more curious about the code, computation budget, and some ablation studies of this algorithm.\n Yes, the authors adequately discussed the limitations of this paper.", " This paper presents a neural architecture search method for few-shot action recognition. The main contribution is the proposal of a transformer space shrinking strategy and spatio-temporal prototype alignment. Experiments are conducted on HMDB51 and UCF101. Strengths:\n1. Ablation studies are conducted to evaluate the effectiveness of each component.\n2. The proposed transformer space shrinking strategy reduces the time and cost of architecture search.\nWeaknesses:\n1. I understand that there may be some contradictions between pre-training and few-shot learning. However, since almost all of the state-of-the-art methods are based on pre-trained weights instead of random initialization, the authors should also report performance with pre-trained weights to ensure comparability among them.\n2. To demonstrate the superiority and generality of the proposed method, experimental evaluations should be performed on large-scale datasets of action recognition, such as Kinetics and Something-Something, just like the competitors did.\n3. Some closely related works, such as [a, b], should also be introduced and compared in the paper.\n[a] Xiatian Zhu, Antoine Toisoul, Juan-Manuel Perez-Rua, Li Zhang, Brais Martinez, and Tao Xiang. "Few-shot action recognition with prototype-centered attentive learning." In BMVC, 2021.\n[b] Shuyuan Li, Huabin Liu, Rui Qian, Yuxi Li, John See, Mengjuan Fei, Xiaoyuan Yu, and Weiyao Lin. "TA2N: Two-Stage Action Alignment Network for Few-Shot Action Recognition." In AAAI, 2022.\n 1. Performance comparison with the state-of-the-art methods when using pre-trained weights\n2. Experimental results on large-scale datasets See above" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "GXq9UzQR2Ix", "2l4RgRUa9e5", "3JX4sDyDwElX", "pYZuMd8u8ys4", "ocAphxfGLlz", "ARTI-w73XDm", "h1tgp9aGXs", "KkAFflc2KBx", "uUw3NH4O7Yk", "XGAPSWZyRnC", "2l4RgRUa9e5", "3JumZEX0Pk", "vdveXOdNY5", "3JumZEX0Pk", "3JumZEX0Pk", "2l4RgRUa9e5", "vdveXOdNY5", "3JumZEX0Pk", "nips_2022_IlYS1pLa9y", "nips_2022_IlYS1pLa9y", "nips_2022_IlYS1pLa9y" ]
nips_2022_dIUQ5haSOI
Relation-Constrained Decoding for Text Generation
The dominant paradigm for neural text generation nowadays is seq2seq learning with large-scale pretrained language models. However, it is usually difficult to manually constrain the generation process of these models. Prior studies have introduced Lexically Constrained Decoding (LCD) to ensure the presence of pre-specified words or phrases in the output. However, simply applying lexical constraints has no guarantee of the grammatical or semantic relations between words. Thus, more elaborate constraints are needed. To this end, we first propose a new constrained decoding scenario named Relation-Constrained Decoding (RCD), which requires the model's output to contain several given word pairs with respect to the given relations between them. For this scenario, we present a novel plug-and-play decoding algorithm named RElation-guided probability Surgery and bEam ALlocation (RESEAL), which can handle different categories of relations, e.g., syntactical relations or factual relations. Moreover, RESEAL can adaptively "reseal" the relations to form a high-quality sentence, which can be applied to the inference stage of any autoregressive text generation model. To evaluate our method, we first construct an RCD benchmark based on dependency relations from treebanks with annotated dependencies. Experimental results demonstrate that our approach can achieve better preservation of the input dependency relations compared to previous methods. To further illustrate the effectiveness of RESEAL, we apply our method to three downstream tasks: sentence summarization, fact-based text editing, and data-to-text generation. We observe an improvement in generation quality. The source code is available at https://github.com/CasparSwift/RESEAL.
Accept
The paper describes a model for text generation, based on target dependency relations that should be in the output. The word-level output probabilities are modified to increase the likelihood of generating words that match the target relation. Evaluation is performed on several datasets, formulating the task as text generation based on dependency relations. The empirical gains are OK but not particularly large. What I find more compelling is the ability to control the output of the model, which is currently lacking in most approaches. The reviewer scores straddle the decision boundary, and it was unfortunately not possible to get the reviewers to engage in a discussion, but the authors did a good job addressing all initial comments/questions.
train
[ "-ZKlSYokhP", "GdqK_1kdGG2", "Yalb0vZsy7A", "hSgXY5q8pPja", "6LcqpiyIJ0x", "Cy_8tZ2E44", "Z2ocI_woalM", "a9Ymn4yK3Rf", "GZho3ImOGUh", "lYqR0LrN364", "1gE7hJE7VJg", "IZYnIEVJgfI", "mofdM99Lvk_", "2kmjN6Byaiz", "szJ7d0e99Sm", "Rgv-zMfw3nP", "OjxYfxOZGao" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We really appreciate all the reviewers for their efforts in reviewing our paper. We believe that we have responded to most of the concerns below. Since the discussion deadline is approaching, here we'd like to briefly summarize the updates we have made to the revised version of the paper:\n\n- We have clarified the external dependence issue in the introduction and added an appendix (Sec. E) to further discuss it, as suggested by the reviewers.\n- We have explained why RCD can help the downstream tasks in line 242-244, as suggested by Reviewer KEQA.\n- We have modified the social impact part, as suggested by Reviewer KEQA.\n- We have fixed some typos, as suggested by Reviewer brwW.\n- We have polished Sec. 3 (Methodology) and remove the python codes in Algorithm 2, as suggested by Reviewer bPF5.\n- We have noted that \"Every BLEU reported in this paper is BLEU-4\" in the paper, as suggested by Reviewer hFXg.\n- We have modified the caption of Table 2 to make it more clear, as suggested by Reviewer brwW.\n- Other experimental results (BLEU 1-4, BERTScore, BLEURT, human evaluation, data-to-text baselines) will be added with more pages available.\n\nAll the modified parts are marked as red color.", " Dear reviewer bPF5:\n\nWe sincerely appreciate your comprehensive and constructive comments. Since the end of the discussion is approaching, it would be great to let us know whether our responses can address your concerns. Could you please also let us know if there are any other concerns that we should address? We would be pleased to clarify them and revise our paper by the response deadline.\n\nBest regards, The Authors", " Dear Reviewers,\n\nThere are less than 48 hours until the end of the discussion phase (09 Aug). Could you please go over our responses and the revision so we can have more discussions? We have responded to your comments and faithfully reflected them in the revised version. We are wondering whether your concerns have been properly addressed. \n\nWe sincerely thank you for your time and efforts in reviewing our paper, and for your insightful and constructive comments.\n\nBest regards, The Authors", " Dear reviewer bPF5:\n\nWe sincerely thank you for the review and comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest regards, The Authors\n\n", " Dear reviewer KEQA:\n\nWe sincerely thank you for the review and comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest regards, The Authors", " We thank the reviewers for their constructive comments. The external dependence issue is the common concern of the reviewers, therefore it should be first answered in this section.\n\nWe agree with the reviewers that our RESEAL relies on the external relation identifier (or say \"relation predictor\"). If the relation identifier is poor, it would somewhat mislead the generation process and hurt the overall generation quality anyway. 
\n\n**However, at least in our experiments, the relation identifier is of high quality, and we believe that for the tasks in this work, it is not so difficult to train a relatively \"strong relation predictor\".** Specifically:\n- For the dependency placement and sentence summarization tasks, we use the left-to-right dependency parser [1] as the relation identifier. Training the parser is easier and faster than training a language model. The training data is from the English Universal Dependencies Treebank, which is publicly available. **The parser achieves 90.93 UAS and 88.99 LAS on the test set of this benchmark, which shows its effectiveness for predicting dependency relations.**\n- For the fact-based text editing and data-to-text tasks, we use a biaffine attention model [2] as the relation identifier (a code sketch is given after the references below). We use a single-layer unidirectional LSTM to extract features. We use two separate MLPs with one hidden layer to extract the features of heads and tails, respectively. Finally, we use a biaffine attention module to classify the relation types. Training this model is also easier and faster than training a language model. The training data can be heuristically constructed from the training sets of these two tasks: we take the target sentences as the inputs and then use the given fact triples (head, relation, tail) to match the words in the target sentences. **The relation identifier reaches a 93.14 F1 score on the test set, which shows its effectiveness for predicting factual relations.**\n\nThus we believe that all the relation identifiers used in our work are strong enough to produce high-quality results. **Training a stronger relation identifier would definitely enhance the performance, but that is outside the scope of our paper, because our paper focuses on text generation, not the relation extraction task.**\n\n[1] Fernández-González, Daniel, and Carlos Gómez-Rodríguez. \"Left-to-Right Dependency Parsing with Pointer Networks.\" NAACL-HLT (1). 2019.\n\n[2] Dozat, Timothy, and Christopher D. Manning. \"Deep Biaffine Attention for Neural Dependency Parsing.\" ICLR. 2016.
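\n\nTo make the architecture of this identifier concrete, here is a minimal PyTorch-style sketch (the hidden sizes and variable names are illustrative, not the exact configuration used in our experiments):\n\n```python
import torch
import torch.nn as nn

class BiaffineRelationIdentifier(nn.Module):
    """Classify the relation type between a head word and a tail word.

    A single-layer, single-direction LSTM encodes the sentence; two MLPs
    produce head/tail features; a biaffine layer scores each relation type.
    Hyper-parameter values here are illustrative only.
    """

    def __init__(self, vocab_size, emb_dim=100, hidden_dim=200, num_relations=10):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, num_layers=1,
                            batch_first=True, bidirectional=False)
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        self.tail_mlp = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
        # One (H+1) x (H+1) bilinear form per relation type; the extra
        # dimension (appended ones) acts as a bias term.
        self.biaffine = nn.Parameter(
            torch.zeros(num_relations, hidden_dim + 1, hidden_dim + 1))

    def forward(self, tokens, head_idx, tail_idx):
        # tokens: (batch, seq_len); head_idx / tail_idx: (batch,)
        states, _ = self.lstm(self.embed(tokens))                 # (B, T, H)
        batch = torch.arange(tokens.size(0), device=tokens.device)
        h = self.head_mlp(states[batch, head_idx])                # (B, H)
        t = self.tail_mlp(states[batch, tail_idx])                # (B, H)
        ones = torch.ones(tokens.size(0), 1, device=tokens.device)
        h = torch.cat([h, ones], dim=-1)                          # (B, H+1)
        t = torch.cat([t, ones], dim=-1)                          # (B, H+1)
        # scores[b, r] = h_b^T W_r t_b for every relation type r
        scores = torch.einsum('bi,rij,bj->br', h, self.biaffine, t)
        return scores  # feed to cross-entropy over relation types
```\n\nAt decoding time, the (normalized) scores of such an identifier are what supply the relation probabilities that RESEAL consumes.\n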
\"Would your relation-constrained decoder produce these words in the wrong order but still labeled with the right head-child dependency relation (due to a poor relation predictor)\"\n\nThis external dependence issue is also the concern of Reviewer brwW and Reviewer bPF5, so we add an official comment titled **\"Response to External Dependence Issue\"** on top of this webpage to explain this issue in detail. Please refer to that for more information, thanks!\n\n> 3. \"For the summarization problem, why do relations help? Perhaps the Data-to-Text and Fact-based Text editing tasks are easier to motivate, but the paper doesn't dedicate enough prose explaining these tasks.\"\n\n**For the sentence summarization, relations may help because some dependency relations in the source may be important. These important relations should be preserved in the summary.**\n\nThere are some examples from GigaWord dataset (# denotes a specific number after preprocessing):\n\n(1) **[Source]** Vietnam forecasts to gain export revenues of #.# billion US dollars **in** the two remaining **months** of this year , a year-on-year increase of ##.# percent .\n\n**[Summary]** Vietnam to post higher export earnings **in** next # **months**\n\n**[Relation Constraints]** (months, 'case', in)\n\n(2) **[Source]** As the indonesian people 's consultative assembly finished drafting decrees relating to president abdurrahman wahid 's accountability , some **large** political **parties** in the country would hold meeting to narrow possible differences **on** the **agenda** of the special mpr session scheduled for august # .\n\n**[Summary]** Indonesian **large parties** to meet **on** assembly session **agenda**\n\n**[Relation Constraints]** (parties, 'amod', large), (agenda, 'case', on)\n\nIn these cases, the key information of the source sentence can be expressed by several relation constraints. If we can preserve these relations, we can produce an informative summary. Our proposed RESEAL enables the model to focus on the important dependency relations explicitly, which helps to improve the summarization quality.\n\nApart from that, the Data-to-Text and Fact-based Text editing tasks use the relations more directly. We will add some contents explaining these downstream tasks in our revised version.", " > 4. \"The proposed method is difficult to understand. I think I got it after several readings, but there may be ways to improve especially the understanding of the general purpose of line 4 and 5 in Algo 1. One suggestion is to elaborate on the left part of Figure 1 (which explains line 4 at least).\"\n\nThanks for your suggestion. Actually, in the left part of Figure 1, we have shown the detailed processes of line 4 in Algo 1. In Figure 1, we showcase how our RESEAL works with an example of RCD. In this case, the given relation constraints are (phone, amod, fancy). At decoding time step 4, let us consider two candidates in the beam search, i.e., \"What a fancy\" (denoted as $Y_1$) and \"This phone is\" (denoted as $Y_2$). In standard decoding manner, model will output a next-token probability $p_{\\text{vocab}}$ according to the context. The next-token probability for $Y_1$ is shown in the top of Figure 1 (denoted as $p_{\\text{vocab}}^{(1)}$), and the next-token probability for $Y_2$ is shown in the bottom of Figure 1 (denoted as $p_{\\text{vocab}}^{(2)}$). Note that $p_{\\text{vocab}}^{(2)}$ includes the probability of the gray part. 
\n\nIn our work, RESEAL operates on the produced probability distributions according to the result $p_{rel}$ of a relation identifier. Since the token \"phone\" can form an \"amod\" relation with the token \"fancy\" in $Y_1$, which satisfies the given relation constraints, the relation identifier will predict a high $p_{rel}$ for \"phone\" in $p_{\text{vocab}}^{(1)}$. On the contrary, the token \"fancy\" would form a wrong relation \"nsubj\" with the token \"phone\" in $Y_2$. Therefore, the $p_{rel}$ for \"fancy\" in $p_{\text{vocab}}^{(2)}$ will be relatively low. Then, according to Equation 5, when calculating the augmented distribution $\tilde{p}$, RESEAL will preserve the probability of \"phone\" in $p_{\text{vocab}}^{(1)}$, while it will reduce the probability of \"fancy\" in $p_{\text{vocab}}^{(2)}$ (as shown in Figure 1, the probability of \"fancy\" in $p_{\text{vocab}}^{(2)}$ is cut down). This is what line 4 in Algo 1 does.\n\nWe will add these explanations in the revised version.\n\n \n> 5. \"In Table 1, it seems that some of the lexically-constrained decoding methods (e.g. DBA) are worse than Base. Is that expected?\"\n\nWe believe that is expected. We list some reasons for your reference:\n- The \"Base\" is a very strong baseline, because it exploits the strong pre-trained language model BART.\n- Some lexically-constrained decoding methods (DBA, DDBA) **forcibly add the unsatisfied lexical constraints into the candidate word set.** This is the key to ensuring the presence of lexical constraints. However, doing this fails to consider the relations between words and thus makes the generated sentence less fluent. \n\n\n> 6. \"Are there speed comparisons between the proposed method and the simple Rerank baseline?\"\n\nYes. We discuss the speed of our proposed method in Appendix D.2.\n\nLike most lexically-constrained decoding methods (DBA, NEUROLOGIC), RESEAL is usually slower than standard beam search and the Rerank baseline, but the overall runtime is within an acceptable range. We summarize the time complexity as follows:\n\n- For a sentence of length $N$, standard beam search takes $N$ forward passes.\n- For the Rerank baseline, it takes an extra $k$ forward passes to parse the $k$ candidates. Overall, it takes $N+k$ forward passes.\n- For DBA, it takes $N$ forward passes. However, DBA needs some extra time to handle the candidate set.\n- For RESEAL, let $|C|$ denote the number of relation constraints. There are at most $2|C|$ candidate sentences to be parsed at time step $t$, so RESEAL takes at most $N(1 + 2|C|)$ forward passes, which is still $O(N)$ time complexity. Moreover, there are many implementation tricks that can be applied to optimize the actual decoding speed. 
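As a concrete illustration of this bound (the numbers are hypothetical, chosen only for the arithmetic): for $N=20$ and $|C|=4$, RESEAL performs at most $20\times(1+2\times 4)=180$ forward passes, versus $20$ for standard beam search and $20+k$ for Rerank; the overhead is a constant factor of $1+2|C|$, so the complexity stays linear in $N$.\n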
", " > 7. \"Can you provide some example sentences of your decoder vs DBA in the main part of the paper? \"\n\nSure. We have provided some examples in Appendix D.3. We will move them to the main part of the paper in the revised version when more pages are available.\n\nWe list some examples for your reference:\n\n(1) **[Relation Constraints]** (thought, 'ccomp', ’s), (’s, 'nsubj', charges)\n\n**[DBA]** I **thought** the **charges** would be $ 10,000, but it **’s** $ 20,000.\n\n**[RESEAL]** I **thought** there **’s** some **charges**, but I was wrong.\n\n**[Reference]** I always **thought** there **’s** no custom **charges** for gifts.\n\n(2) **[Relation Constraints]** (was, 'nsubj', clients), (rushed, 'obl', time), (rushed, ‘advcl’, was), (clients, ‘nmod’, facility)\n\n**[DBA]** The **clients** of this **facility** were **rushed** out of the building by the **time** I got there and there **was** no one there to help me .\n\n**[RESEAL]** When there **was clients** in **facility** , they **rushed** out of the building at the same **time** as the clients were leaving .\n\n**[Reference]** Overpriced and the doctor acted arrogant and **rushed** at a **time** when there **was** very few **clients** in the **facility** .\n\nIn the first example, DBA misses both the (thought, 'ccomp', ’s) and (’s, 'nsubj', charges) constraints. In the second example, DBA misses the (was, 'nsubj', clients) constraint. DBA can satisfy all the lexical constraints, but it often fails to handle the relations between them. RESEAL can produce fluent sentences that are close to the grammatical structure of the references. \n\n> 8. \"It seems to me that the relations in the decoder are always predicted. It is a hidden variable, so it is somewhat different in nature to the words in the lexical constraints, which are observed. If you have a poor relation predictor, can it fool your decoder into believing that the constraints are satisfied?\"\n\nWe agree that many existing works view the relations as a hidden variable. This is an **implicit** way of predicting relations. However, that is exactly the main difference between our work and these existing works. We propose an **explicit** way of handling the relations, which can improve the **interpretability and controllability** of the models.\n\nAbout the \"poor relation predictor\", we have addressed this issue in the official comment at the top.\n\n> 9. About the social impact of our method\n\nWe're sorry for this unclear section. We have modified Appendix E as follows:\n\n\"Our work has introduced a generic decoding method for Relation-Constrained Decoding (RCD). Similar to most text generation techniques, our proposed RESEAL has a potential risk of being deployed to generate human-like fake text. We suggest that users or programmers can utilize the relation constraints to carefully control the generation, e.g., to avoid generating wrong facts. Therefore, we still believe that the societal impacts of RESEAL are limited and under control. \"", " We sincerely appreciate your constructive comments. We address your concerns below:\n\n> 1. \"It does not sufficiently address existing work on data-to-text generation ... Therefore, work in this area should be covered under related research and relevant baselines ... is used as a baseline.\"\n\nWe are sorry that, due to the page limit, we included the evaluation on the data-to-text task in such a minimal section. Thanks for your suggestion; we have additionally included the task-specific baselines in the following table:\n\n| Models | BLEU-4 | \n| ------------ | --------- |\n| [1] Castro Ferreira et al. (2019) | 51.68 | \n| [2] Moryossef et al. 
(2019) | 47.24 | \n| [3] Zhao et al. (2020a) | 52.78 | \n| [4] Harkous et al. (2020) | 52.90 | \n| [5] Nan et al. (2021) | 45.89 | \n| T5-small | 56.34 | \n| T5-small+RESEAL | 56.87 | \n| T5-base | 59.17 | \n| T5-base+RESEAL | **59.59** | \n\nIt is worth mentioning that our proposed RESEAL can be adapted to any model as a plug-and-play decoding algorithm. In our experiments, we adapt RESEAL to the current SOTA model, i.e., T5, and observe a further improvement. This speaks volumes for the effectiveness of our approach. We will add these results in the revised version when more pages are available.\n\n> 2. There is a nice section on dependency-guided generation, which has the same goal as the proposed task. However, none of the listed previous work seem to be reported as a baseline.\n\nAs you have mentioned, our dependency placement is a completely novel task, so there are actually no existing baselines directly designed for this task. We have listed some works related to dependency-guided generation to illustrate the effect of dependencies on text generation. Some of them are designed for specific tasks, and others cannot be directly used for the dependency placement task. Moreover, it is worth noting that SemSum [6], which is mentioned in the \"dependency-guided generation\" section, is reported as one of the baselines for sentence summarization.\n\n> 3. BLEU 1-4 should be reported, along with METEOR. One BLEU is currently reported but it is unclear which one it is. BLEU-1 would not be sufficient to measure sentence quality.\n\n**All the BLEUs in our paper are BLEU-4**. We will clarify this in the revised version.\n\nWe also agree that more evaluation metrics should be reported. Thanks for your suggestions. We report them in the following table:\n\n| Models | BLEU-1 | BLEU-2 | BLEU-3 | BLEU-4 | METEOR | \n| ------------ | --------- | --------- | --------- | --------- | --------- |\n| Base | 40.09 | 25.78 | 17.40 | 11.92 | 20.12 |\n| Rerank ($k=20$) | 39.58 | 25.38 | 17.09 | 11.66 | 20.18 |\n| CGMH | 23.04 | 8.30 | 3.30 | 1.47 | 14.50 |\n| X-MCMC | 30.62 | 15.03 | 8.20 | 4.62 | 17.04 |\n| X-MCMC-C | 32.28 | 17.50 | 10.48 | 6.39 | 17.65 |\n| DBA | 39.93 | 25.33 | 16.93 | 11.47 | 20.12 | \n| DDBA | 41.08 | 26.34 | 17.75 | 12.22 | 20.12 |\n| NEUROLOGIC | 41.52 | 26.75 | 17.98 | 12.23 | 20.13 |\n| RESEAL (this work) | **43.79** | **27.81** | **18.64** | **12.62** | **20.40** |\n\n> 4. It is currently unclear how many dependencies are given as input to the model. And if they are not all the dependencies, then how are they chosen.\n\nOur Table 6 (please refer to Appendix B) reports how many dependencies are given as input to the model:\n\n| | train | dev | test | \n| ------------ | --------- | --------- | --------- |\n| Avg number of dependencies | 2.43 | 2.04 | 2.04 |\n| Max number of dependencies | 18 | 13 | 11 |\n\nThey are not all the dependencies of the sentences. We select them by random sampling (at a ratio of 40%). Moreover, we only consider the dependency relations whose modifier is an adjective, noun, or verb. \n\nFor more details about the dataset construction process, please refer to Appendix B. Thanks!\n\n> 5. Table 8 in the appendix seems to have some issues.\n\nWe're sorry for this mistake, and we have fixed it in our revised version.\n\n> 6. A much more thorough discussion on the limitations\n\nThank you for your suggestions. We have added a new section in the Appendix discussing the limitations of our work.\n\n\n[1] Castro Ferreira et al. 
\"Neural data-to-text generation: A comparison between pipeline and end-to-end architectures\". EMNLP/IJCNLP (1) 2019: 552-562\n\n[2] Moryossef et al. \"Step-by-Step: Separating Planning from Realization in Neural Data-to-Text Generation\". NAACL-HLT (1) 2019: 2267-2277\n\n[3] Zhao et al. \"Bridging the Structural Gap Between Encoding and Decoding for Data-To-Text Generation\". ACL 2020: 2481-2491\n\n[4] Harkous et al. \"Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation with Semantic Fidelity\". COLING 2020: 2410-2424\n\n[5] Nan et al. \"DART: Open-Domain Structured Data Record to Text Generation\". NAACL-HLT 2021: 432-447\n\n[6] Jin et al. \"Semsum: Semantic dependency guided neural abstractive summarization\". AAAI 2020.", " We sincerely appreciate your constructive comments. We respond to your main concerns below:\n\n> 1. \"the writing of Sec. 3 needs to be polished. Also please don't use python codes in your Algorithm 2\"\n\nThanks for pointing out this. We improve our writing of Section 3 and pseudo codes in Algorithm 2. To check more details, please refer to our rebuttal revision version of paper.\n\n> 2. The model seems to be incremental to DBA / No big difference between the proposed model and DBA except that the former applies external knowledge to change the emission probabilities of the decoder, which is a bit trivial\n\n(1) First of all, our work has defined a completely novel decoding scenario, i.e., Relation-Constrained Decoding (RCD). As you have mentioned, RCD is an important problem that is worth investigation. However, the original DBA is designed specifically for lexically constrained decoding, which can not solve the problem of RCD properly. Therefore, it is actually very important to make these seemingly incremental improvements to tackle the problem of RCD. That is, to the best of our knowledge, the first of this kind of contribution.\n\n(2) As Reviewer KEQA have noted, our work has presented a novel technique that can introduce more control of the decoding process by incorporating relations. This can not be achieved by DBA, and is a novel contribution.\n\n(3) Moreover, we thoughtfully design the RESEAL (refer to our response for your Question 6). RESEAL achieves promising improvement on three down stream tasks (e.g., +1.27 ROUGE-L on average for sentence summarization, +4.59 SARI for fact-based text editing, +0.42 BLEU over T5-base for data-to-text generation), without any changes in the training stage. We only modify the decoding stage and achieve improvements on the same base model. That is, an important contribution of this work.\n\nThat we could make those non-trivial efforts seem effortless, we’ll take this as a compliment:)\n\n> 3. \"The reliance of the introduced model on external knowledge.\" / \"I also think the authors should even clarify the external dependence issue in the introduction, which is the main problem we are facing in the NLP industry.\"\n\nThis external dependence issue is also the concern of Reviewer brwW and Reviewer KEQA, so we add an official comment titled **\"Response to External Dependence Issue\"** on top of this webpage to explain this issue in detail. Please refer to that for more information, thanks!\n\nWe have modified our introduction part to include such content. Thank you for your advice!\n\n\n> 4. \"May you put some case studies to show how your model works in practice?\"\n\nSure. 
We believe that our model has many (potential) applications in practical scenarios, e.g., controllable text generation, improving the factual consistency in summarization or dialogue generation, or some data-to-text applications.\n\nWe have also provided some examples in Appendix D.3. We will move them to the main part of the paper in the revised version when more pages are available. We list some examples for your reference:\n\n- **[Relation Constraints]** (thought, 'ccomp', ’s), (’s, 'nsubj', charges)\n\n**[DBA]** I **thought** the **charges** would be $ 10,000, but it **’s** $ 20,000.\n\n**[RESEAL]** I **thought** there **’s** some **charges**, but I was wrong.\n\n**[Reference]** I always **thought** there **’s** no custom **charges** for gifts.\n\n- **[Relation Constraints]** (was, 'nsubj', clients), (rushed, 'obl', time), (rushed, ‘advcl’, was), (clients, ‘nmod’, facility)\n\n**[DBA]** The **clients** of this **facility** were **rushed** out of the building by the **time** I got there and there **was** no one there to help me .\n\n**[RESEAL]** When there **was clients** in **facility** , they **rushed** out of the building at the same **time** as the clients were leaving .\n\n**[Reference]** Overpriced and the doctor acted arrogant and **rushed** at a **time** when there **was** very few **clients** in the **facility** .\n\nIn the first example, DBA misses both the (thought, 'ccomp', ’s) and (’s, 'nsubj', charges) constraints. In the second example, DBA misses the (was, 'nsubj', clients) constraint. DBA can satisfy all the lexical constraints, but it often fails to handle the relations between them. RESEAL can produce fluent sentences that are close to the grammatical structure of the references. ", " > 5. \"Eq. (3) only considers past information; What about the future? Does this scoring style have a bias in this sense?\"\n\nThanks for raising this interesting concern. In our work, we focus on past information, since our method is designed for models with **autoregressive** decoders that decode in a left-to-right manner. **Our method can be extended to consider future information.** We could look ahead at the decoding results of the next several steps and calculate the $p_{rel}$ by Eq.(3). However, in that case, the decoding speed would be slower and the computational cost would be higher. Thus, we mainly focus on the previously generated sequence when doing probability surgery, to reach a good trade-off between generation efficiency and quality. You have raised a good point, and we will explore this in future work.\n\n> 6. \"It seems that applying p_{rel} in every decoding step is not reasonable. Could you explain more about this, especially the intuition of g?\"\n\n**First of all, $p_{rel}(w|y_{<t},C)$ indicates the probability that the token $w$ satisfies the relation constraints $C$ given the previous decoding result $y_{<t}=(y_1,...,y_{t-1})$**. The $p_{rel}$ serves as an external signal, so the model can be aware of whether the relation constraints are satisfied.\n\n**We aim to design an approach that combines $p_{rel}$ and $p_{vocab}$ (the next-token probability predicted by the model) to produce an augmented distribution $\widetilde{p}$.** There are many ways to combine these two distributions. In this paper, we use the form $\widetilde{p}\propto g(p_{rel})\cdot p_{vocab}$ (Eq.(4)), where $g$ is a gate function (Eq.(5)) ranging from 0 to 1. \n\n**Regarding the intuition of $g$**, we have introduced it in Section 3 (lines 132-142) and Appendix A. 
We summarize them as follows:\n- If adding $w$ at the end of $y_{<t}$ violates the relation constraints, the $p_{vocab}$ of $w$ should be reduced. If adding $w$ at the end of $y_{<t}$ satisfies the relation constraints, or $w$ is not included in the constraints, the $p_{vocab}$ of $w$ should stay unchanged. (An alternative is to directly increase $p_{vocab}$ when the constraints are satisfied. However, our experiments show that this alternative makes all the relation constraints be satisfied at the very beginning, which hurts the overall generation quality.)\n- The function $g$ should not be a monotonically decreasing function, because we want the weight $g(p_{rel})$ to increase when $p_{rel}$ increases.\n- The output of the function $g$ should not be zero. Outputting zero would result in a negatively infinite log-likelihood.\n- A threshold mechanism can be introduced to make the function form more flexible. If $p_{rel}$ is larger than the threshold $\rho$, the relation constraints can be considered satisfied.\n\nTherefore, we find that the form of $g$ introduced in our paper meets all the above-mentioned requirements. We adopt this form and find it very effective for our proposed RESEAL. A small sketch of this probability surgery is given at the end of this response.\n\nWith the above-mentioned definitions, **we can explain why we can apply $p_{rel}$ in every decoding step to solve the RCD problem**. Specifically:\n\n(1) Applying $p_{rel}$ in every step serves as a stronger signal to guide the relation-constrained generation. This enables the model to dynamically adjust the word probabilities during the generation process.\n\n(2) Applying $p_{rel}$ in every step can \"early stop\" candidates which violate the relation constraints. For example, if a candidate sentence has unexpected relations, or there is no relation between two specific words, $p_{rel}$ will be close to zero. This candidate sentence with a low $p_{rel}$ will then have a lower final score according to our Eq.(4). It is more likely to be filtered out at an early stage of decoding, which is more effective.
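\n\nTo make the probability surgery concrete, here is a minimal sketch with one simple gate that satisfies the requirements above (this gate is illustrative only, not necessarily the exact form of Eq.(5) in the paper):\n\n```python
import torch

def gate(p_rel: torch.Tensor, rho: float = 0.5, eps: float = 1e-3) -> torch.Tensor:
    """An illustrative gate with the required properties:
    non-decreasing in p_rel, never exactly zero, and saturating at 1
    once p_rel exceeds the threshold rho."""
    return torch.clamp(p_rel / rho, min=eps, max=1.0)

def probability_surgery(p_vocab, p_rel, constrained_mask):
    """Augmented distribution tilde_p proportional to g(p_rel) * p_vocab (Eq.(4)).

    p_vocab:          (vocab,) next-token distribution from the decoder
    p_rel:            (vocab,) prob. that appending each token satisfies C
    constrained_mask: (vocab,) bool, True for tokens that appear in C
    Tokens outside the constraints keep their original probability.
    """
    weight = torch.where(constrained_mask, gate(p_rel), torch.ones_like(p_rel))
    tilde_p = weight * p_vocab
    return tilde_p / tilde_p.sum()  # renormalize
```\n\nBeam candidates are then scored with the log of this augmented distribution, so candidates whose constrained tokens have a low $p_{rel}$ fall behind and are pruned early, which is exactly the \"early stop\" effect in point (2) above.\n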
", " We sincerely appreciate your constructive comments. We respond to your main concerns below:\n\n> 1. \"Dependence upon a suitable relation identifier is a prerequisite. In most cases, it won't be easy to obtain/train. Thereby limiting the applicability of RCD to very simplistic tasks. In most cases, a lexically constrained decoding paradigm is sufficient.\"\n\nThis external dependence issue is also a concern of Reviewer KEQA and Reviewer bPF5, so we have added an official comment titled **\"Response to External Dependence Issue\"** on top of this webpage to explain this issue in detail. Please refer to that for more information, thanks!\n\nRegarding the applicability of RCD, thank you again for pointing this out. We believe that adding more complex constraints (not limited to lexical constraints) is helpful for better controlling text generation, and this would be a promising direction. We will explore more potential applications of RCD in future work.\n\n> 2. \"Evaluation on newer and robust NLG evaluators like BERTScore, and BLEURT is missing. No human evaluation of results.\"\n\nWe agree that these new metrics and human evaluation are helpful. We will add them in our revised version.\n\n> 3. Based on the formulation the candidates are just getting re-ranked. How is the signal guaranteeing the presence of relation constraint?\n\nThe signal comes through beam allocation (please refer to lines 10-12 of Algorithm 2). We propose to use **the number of correct relation constraints** of the $i$-th candidate (denoted as $n_i$) to divide the banks. The banks determine the way of beam expansion; a larger $n_i$ means less beam expansion.\n\nThe overall effect of this operation (which is similar to the original DBA) is that: \n- A candidate sentence with a small $n_i$ gets more chances to continue decoding, and therefore has more chances to satisfy the rest of the relation constraints.\n- A candidate sentence with a large $n_i$ gets fewer chances to continue decoding; since there are fewer relation constraints left to be satisfied, it does not need as many beams to continue decoding.\n\n(A small sketch of this bank-based allocation is given after Q6 below.)\n\n> 4. Why is 2 needed in the denominator?\n\nIt is needed because we have 2 kinds of probabilities, $p_{trans}$ and $p_{type}$, in Eq (3).\n\n> 5. Shouldn't w/o prob = DBA? Why is the result different from DBA (Table 1)?\n\nIn Table 2, \"w/o prob\" indicates the variant without probability surgery but still with RG-Top-K. Sorry for the potential confusion. We have fixed this in our revised version.\n\n> 6. What is the word+rel strategy?\n\nThe \"word+rel\" strategy changes the meaning of the $n_i$ mentioned above. For \"word+rel\", $n_i$ denotes the number of satisfied lexical constraints plus the number of satisfied (dependency) relation constraints. DBA only includes the former, while our RESEAL only includes the latter.\n
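As referenced in the answer to Q3, here is a minimal sketch of the bank-based beam allocation (illustrative only; the exact procedure is lines 10-12 of Algorithm 2, and the round-robin order here is one possible choice):\n\n```python
from collections import defaultdict

def allocate_beams(candidates, beam_size):
    """Bank-based beam allocation, in the spirit of the answer to Q3.

    candidates: list of (score, n_satisfied) pairs, where n_satisfied is
                the number of relation constraints a candidate satisfies.
    Candidates are grouped into banks by n_satisfied, and the beam budget
    is spread across banks so that candidates with fewer satisfied
    constraints keep getting chances to satisfy the remaining ones.
    """
    banks = defaultdict(list)
    for cand in candidates:
        banks[cand[1]].append(cand)
    for bank in banks.values():
        bank.sort(key=lambda c: c[0], reverse=True)  # best-scoring first

    # Round-robin over banks (fewest satisfied constraints first),
    # taking the locally best candidate each time, until the beam is full.
    selected = []
    while len(selected) < beam_size and any(banks.values()):
        for n in sorted(banks):
            if banks[n]:
                selected.append(banks[n].pop(0))
                if len(selected) == beam_size:
                    break
    return selected
```\n\nThis mirrors the DBA-style grouping: candidates with fewer satisfied constraints keep receiving beam slots, while nearly-finished candidates need fewer.\n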
", " The paper describes a model for text generation, based on target dependency relations that should be in the output.\nThe word-level output probabilities are modified to increase the likelihood of generating words that match the target relation.\nDuring beam decoding, the candidate construction method also takes the target relations into account.\nEvaluation is performed on several datasets, formulating the task as text generation based on dependency relations. The model is interesting.\nThe paper is well written and clear.\nThe specific method of modifying the output probabilities based on the given relations can be useful.\nThe results demonstrate the benefit of the proposed model over the chosen baselines.\n\nThe work is presented as a completely novel task. However, it does not sufficiently address existing work on data-to-text generation. \nIn data-to-text, the task is very similar - generating text based on relation tuples. \nWhile data-to-text systems do not typically restrict themselves to only dependency relations, they could definitely be applied to this task.\nTherefore, work in this area should be covered under related research and relevant baselines should be used for comparison.\nThere is currently a very minimal section for evaluation on a data-to-text dataset, but only a very generic model (not designed for data-to-text) is used as a baseline.\n\nThere is a nice section on dependency-guided generation, which has the same goal as the proposed task. However, none of the listed previous work seems to be reported as a baseline.\n\nThe proposed evaluation metrics are interesting for diagnostics but overall not very convincing.\nIt is claimed that overlap-based metrics are not appropriate and parser-based metrics are used instead. \nThis doesn't really reflect how readable the sentence is at all. \nFor a high score, the model could just linearise the triplets that are provided as input, without producing a coherent sentence. \nThe UC and LC metrics only measure recall, not precision, so there is no penalty for generating loads of different relations in the output.\nSimilarly, the Word% metric could just copy the model input over into the output in order to get a 100% score.\nBLEU 1-4 should be reported, along with METEOR. One BLEU is currently reported, but it is unclear which one it is. BLEU-1 would not be sufficient to measure sentence quality. It is currently unclear how many dependencies are given as input to the model. And if they are not all the dependencies, then how are they chosen?\n\nTable 8 in the appendix seems to have some issues. The dependencies in the first row do not match the sentences.\n There is a very brief discussion of computational complexity in the appendix. There could be a much more thorough discussion on the limitations of the system, the task, the evaluation metrics, and text generation based on dependencies in general.", " This paper proposes a new paradigm in constrained text generation called relation-constrained decoding (RCD). RCD ensures that the tokens in the constraint set are adhered to (lexically constrained decoding) while decoding from NLG models, without compromising on the underlying \"relation\" between the tokens. To address RCD, the paper presents a decoding-time algorithm called RESEAL that can handle different types of relations (syntactic, factual, or entity-based). On top of the standard evaluation, the paper also highlights results on 3 NLG tasks that help understand the RCD paradigm's applicability. \n\nThe implementation of RESEAL involves two main strategies: probability surgery and relation-guided lexical decoding. Strengths:\n1. The paper is well written and easy to understand.\n2. Comprehensive evaluation on a variety of NLG tasks.\n3. Good results against competitive baselines.\n4. Fact-based editing is an elegant application of this paradigm.\n\nWeaknesses:\n1. Dependence upon a suitable relation identifier is a prerequisite. In most cases, it won't be easy to obtain/train, thereby limiting the applicability of RCD to very simplistic tasks. In most cases, a lexically constrained decoding paradigm is sufficient.\n2. Evaluation on newer and robust NLG evaluators like BERTScore and BLEURT is missing.\n3. No human evaluation of results.\n 1. Line 102: Based on the formulation, the candidates are just getting re-ranked. How is the signal guaranteeing the presence of the relation constraint?\n2. Line 130: Why is 2 needed in the denominator?\n3. Table 2: Shouldn't w/o prob = DBA? Why is the result different from DBA (Table 1)?\n4. Line 225: What is the word+rel strategy?\n\nTypo: \nLine 111: Line 1-7 -> Line 1-6 The paper highlights some of the limitations in section 5.1 (results) and societal impacts in the appendix.\n\nLimitation: The model's performance depends on access to a good relation identifier. The errors in the relation identifier propagate into the model. ", " This paper presents a way to incorporate \"relation constraints\" in decoding. For example, given the constraint triplet: (phone, amod, fancy), the decoding process will attempt to generate a sentence that includes the words \"phone\" and \"fancy\" while also satisfying an \"amod\" relation based on a relation predictor (e.g. \"This is a fancy phone\"). This is an extension of the lexically-constrained decoding problem, though it seems that the relation constraints here are only satisfied probabilistically. \nStrength\n- This is an interesting problem. 
Techniques that give us more control of the decoding process are generally a nice contribution to the field. I am not aware of previous work that incorporates relations. \n\nWeakness: \n- While relation constraints are interesting, it is not immediately clear in what applications they are helpful. Various tasks and results are shown, but I don't see intuitively why the relation constraint is important. For example, given the constraint triplet: (phone, amod, fancy), would a simple lexically-constrained decoder that uses (phone, fancy) produce those words in the right order anyway (due to a strong language model)? In contrast, would your relation-constrained decoder produce these words in the wrong order but still labeled with the right head-child dependency relation (due to a poor relation predictor)? Similarly, for the summarization problem, why do relations help? Perhaps the Data-to-Text and Fact-based Text editing tasks are easier to motivate, but the paper doesn't dedicate enough prose explaining these tasks. \n\n- The proposed method is difficult to understand. I think I got it after several readings, but there may be ways to improve especially the understanding of the general purpose of lines 4 and 5 in Algo 1. One suggestion is to elaborate on the left part of Figure 1 (which explains line 4 at least). - In Table 1, it seems that some of the lexically-constrained decoding methods (e.g. DBA) are worse than Base. Is that expected? \n- Are there speed comparisons between the proposed method and the simple Rerank baseline? \n- Can you provide some example sentences of your decoder vs DBA in the main part of the paper? This may help motivate the use of relations. \n- It seems to me that the relations in the decoder are always predicted. It is a hidden variable, so it is somewhat different in nature to the words in the lexical constraints, which are observed. If you have a poor relation predictor, can it fool your decoder into believing that the constraints are satisfied? Appendix E writes: \"... our proposed methods may be used to generate more human-like fake text. However, the impacts are more apparent when considering deployed applications, while our proposed methods as the methodologies can not have any direct negative societal impacts.\" \n\nI am actually not sure what you are saying here. Do you mean that as a general technique, this does not have a direct negative social impact, because the fault is with the application that uses this technique? I guess I see your point partly, but I think the paragraph can be modified to say things in a better way. If your method is so good that it makes it more difficult to detect fake text, isn't that a direct negative impact compared to other methods? ", " The paper is under the theme of constrained decoding, which incorporates predefined rules into a generated sentence. Compared with previous approaches, which are mainly applied to words or phrases, the authors focus on the relations among them, which is called Relation-Constrained Decoding (RCD). For this setting, they introduce a model, RESEAL, that contains two special modules, RElation-guided probability Surgery and bEam ALlocation, which respectively modify the emission probabilities from the decoder and utilize the constraint information to guide the candidate selection of beam search. Pros: \n1) well-defined problem that is worth investigating;\n2) experiments on many tasks, which seem solid.\n\nCons:\n1) the writing of Sec. 3 needs to be polished. 
Also, please don't use Python code in your Algorithm 2;\n2) the model seems to be incremental to DBA and is not flexible since it resorts to external knowledge (e.g., a well-trained dependency parser); I also think the authors should clarify the external dependence issue in the introduction, which is the main problem we are facing in the NLP industry.\n 1, May you put some case studies to show how your model works in practice?\n2, Eq. (3) only considers past information; what about the future? Does this scoring style have a bias in this sense?\n3, It seems that applying p_{rel} in every decoding step is not reasonable. Could you explain more about this, especially the intuition of g? I've mentioned these previously in 'Strengths And Weaknesses': 1) no big difference between the proposed model and DBA except that the former applies external knowledge to change the emission probabilities of the decoder, which is a bit trivial; 2) the reliance of the introduced model on external knowledge.\n\nI will improve my score if the authors can address my concerns.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 4, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 5 ]
[ "nips_2022_dIUQ5haSOI", "OjxYfxOZGao", "nips_2022_dIUQ5haSOI", "OjxYfxOZGao", "Rgv-zMfw3nP", "nips_2022_dIUQ5haSOI", "Rgv-zMfw3nP", "Rgv-zMfw3nP", "Rgv-zMfw3nP", "2kmjN6Byaiz", "OjxYfxOZGao", "OjxYfxOZGao", "szJ7d0e99Sm", "nips_2022_dIUQ5haSOI", "nips_2022_dIUQ5haSOI", "nips_2022_dIUQ5haSOI", "nips_2022_dIUQ5haSOI" ]
nips_2022_hyc27bDixNR
Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation
Few-shot class-incremental learning (FSCIL) is designed to incrementally recognize novel classes with only a few training samples after the (pre-)training on base classes with sufficient samples, which focuses on both base-class performance and novel-class generalization. A well-known modification to the base-class training is to apply a margin to the base-class classification. However, a dilemma exists: we can hardly achieve both good base-class performance and novel-class generalization simultaneously by applying the margin during the base-class training, which is still underexplored. In this paper, we study the cause of this dilemma for FSCIL. We first interpret this dilemma as a class-level overfitting (CO) problem from the perspective of pattern learning, and then find that its cause lies in the easily-satisfied constraint of learning margin-based patterns. Based on the analysis, we propose a novel margin-based FSCIL method to mitigate the CO problem by providing the pattern learning process with an extra constraint from the margin-based patterns themselves. Extensive experiments on CIFAR100, Caltech-UCSD Birds-200-2011 (CUB200), and miniImageNet demonstrate that the proposed method effectively mitigates the CO problem and achieves state-of-the-art performance.
Accept
This work studied few-shot class-incremental learning in the margin-based classification setting. It presented a deeper analysis of the dilemma between base-class and novel-class performance, from the perspective of positive and negative patterns corresponding to positive and negative margins. Although this dilemma had been observed and analyzed in some previous works, the analysis in this work is deeper and novel. The provided method is inspired by the analysis; although simple, it is reasonable and its effectiveness is verified in experiments. The authors also provide convincing responses to most concerns. In summary, I think this work is well motivated and well written. It is a professional piece of work and could be accepted to NeurIPS.
train
[ "Fmxt1UeJXVT", "GR31jK81Ulv", "CL7YUAV-Jak", "e2_a-WLTRDx", "7YgcTK6hBeY", "pG6U43rH0rb", "pa9SPLtWJx", "6yRPDVcBmyt", "C1ewEXcbgRg", "Eln98Q3D1oM", "y_gtP-LhC3L", "x__z3CGl4xd", "WD35hHxuo3J", "xcXJKxRrtce" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for reading our response! May I know if our response have addressed your questions? Please feel free to let us know if you have any question. We are very much looking forward to having the opportunity to discuss with you.", " Thank you very much for reading our response! May I know if our response have addressed your questions? Please feel free to let us know if you have any question. We are very much looking forward to having the opportunity to discuss with you.", " Thank you very much for reading our response. May I know if our response have addressed your questions? We would like to report our method implemented on face datasets as follows, where we follow (Wen et al. 2022) to train on VGGFace2 and test on other datasets.\n\n| |LFW|AgeDB-30|CA-LFW|CP-LFW|Combination|\n|-----|------|------|-------|-------|------|\n|baseline|99.46|92.56|92.88|90.90|93.67|\n| + Ours|99.60|94.03|93.36|91.35|94.39|\n\nFrom these results, we can see improvements compared with the baseline method.\nPlease feel free to let us know if you have any question. We are very much looking forward to having the opportunity to discuss with you!\n\n## Reference\nWen, Yandong, et al. \"SphereFace2: Binary Classification is All You Need for Deep Face Recognition.\" International Conference on Learning Representations. 2022.", " We really appreciate your constructive review and your precious time! We have added the new analysis and experiments to the revision of the appendix (Please see appendix section D), and we would make them concise to add into the paper. If you have any additional concerns please let us know and we would be happy to follow up. Thanks! \n", " I appreciate the explanations and analysis made by the authors, which are encouraged to be added to the paper to justify the claim. ", " # 2. Comparison with methods in face recognition\n\nWe implemented some methods in the face recognition community that might be relevant to our work, including\n\n(1) Adaptive-margin-based methods (CurricularFace (Huang et al. 2020), AdaCos (Zhang et al. 2019), ElasticFace (Zhang et al. 2022), AMR-Loss (Zhang et al. 2021)), such as adjusting the margin value according to the intra/inter-class angles. However, these methods always rely more on the angles across the training time than on angles across all training classes, which differs from our class-relation-based mapping mechanism which relies more on angles across all training classes.\n\n(2) Relational-margin-based methods (TRAML (Li et al. 2020)), which maps the class relationship to the margin. However, this work takes the semantic embedding (such as attributes) of each class as input, and utilizes a network to learn the relational margin, which differs from our work in that we don’t need further training a network nor the attributes to obtain the relational margin. Moreover, we specifically design the search space of hyper-parameters of the linear mapping to allow similar classes to have a margin with a larger absolute value, so that the mapping is more interpretable than mapping by a learnable black-box neural network.\n\nBelow we empirically compare our method with the above face recognition methods on CIFAR100 following settings in our paper. 
The aims of our experiments include (1) verifying whether these methods can solve the dilemma between base-class performance and novel-class generalization and (2) verifying whether they can effectively capture the relationship between classes.\n \n| CIFAR100 | overall | novel | base |\n|---|---|---|---|\n| baseline | 47.02 | 37.4 | 72.32 |\n| baseline + CurricularFace | 47.22 | 35.77 | 72.67 |\n| baseline + AdaCos | 44.48 | 26.07 | 72.55 |\n| baseline + TRAML | 47.31 | 36.32 | 73.15 |\n| baseline + ElasticFace | 47.09 | 36.60 | 72.40 |\n| baseline + AMR-Loss | 46.66 | 33.37 | 72.58 |\n| baseline + NM/PM (ours) | **49.21** | **40.22** | **73.72** |\n| baseline + NM/PM + CurricularFace | 49.48 | 40.07 | 74.10 |\n| baseline + NM/PM + AdaCos | 45.24 | 27.20 | 73.83 |\n| baseline + NM/PM + TRAML | 48.18 | 39.35 | 73.73 |\n| baseline + NM/PM + ElasticFace | 49.43 | 40.70 | 74.09 |\n| baseline + NM/PM + AMR-Loss | 49.22 | 38.92 | 74.15 |\n| baseline + NM/PM + relation (ours) | **50.25** | **41.17** | **74.20** |\n \nFrom this table, we can see that (1) these methods cannot solve the dilemma when added directly to the baseline method, since none of them achieves higher base and novel accuracy simultaneously compared with the baseline performance, while ours (baseline + NM/PM) does; (2) with the NM/PM architecture design, these methods can hardly capture the relationship between classes, since they could not achieve performance significantly higher than the baseline + NM/PM one, which verifies the effectiveness of our method under the few-shot class-incremental learning task for general data recognition. We add this experiment to the revision (see Appendix section E).\n\n# 3. Axis meaning\n\nThe horizontal axis denotes the adopted margin value, and the vertical axis represents the accuracy (left, L92-93), L1 norm (red, mid, L107-110), activation value (blue, mid, L124-125), other-class activation (red, right, L133), and cosine similarity (blue, right, L146-147). We add this illustration to the revision (see Fig. 2).\n \n# 4. Experiments on face recognition datasets\n\nDue to the time limit, we do not have enough time to prepare, train, and tune hyper-parameters for the face recognition datasets from scratch. We will further tune the hyper-parameters and add this experiment to the final version.\n \n \n# Reference\n\nHuang, Yuge, et al. \"CurricularFace: Adaptive curriculum learning loss for deep face recognition.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\n\nZhang, Xiao, et al. \"AdaCos: Adaptively scaling cosine logits for effectively learning deep face representations.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.\n\nBoutros, Fadi, et al. \"ElasticFace: Elastic margin loss for deep face recognition.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\nZhang, Zhemin, Xun Gong, and Junzhou Chen. \"Face recognition based on adaptive margin and diversity regularization constraints.\" IET Image Processing 15.5 (2021): 1105-1114.\n\nLi, Aoxue, et al. \"Boosting few-shot learning with adaptive margin loss.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\n\n", " We thank the reviewer for the valuable advice. We address each point below.\n \n# 1. 
Novelty compared with face recognition\n\nWe would like to emphasize that although both face recognition and our work utilize the concept of margin-based classification, our work differs significantly from face recognition in the following aspects:\n\n(1) Different data distribution: face recognition datasets can be viewed as among the finest-grained classification datasets, where the classification relies on very subtle differences between two images. But our few-shot class-incremental learning (FSCIL) task handles data that are more general, such as ImageNet classes, where larger semantic gaps and natural class relations (such as WordNet hierarchical relations) exist between classes. This intrinsically makes the design concepts different for these two tasks. For example, for face recognition datasets, patterns (such as eyes, nose, ears) are more easily shared between classes, so that a large positive margin benefits the model, even for novel face identities. But for our general-class classification task, we need to consider the case where no shared patterns can be learned between classes due to the semantic gap; therefore a negative margin is needed.\n\n(2) Different task: we consider both the base-class performance and novel-class generalization, which is the core of this work.\n\nSpecifically, in terms of our method, our differences with current face recognition methods are in\n\n(1) Different view of analysis: our analysis is from the aspect of pattern learning, and finds that the class relationship is the source of learning class-shared or class-specific patterns.\n\n(2) Different architecture design: we put the negative margin (NM) in the lower layers and the positive margin (PM) in the higher layer. In the relation-mapping module, based on our findings on the relevance between class relationship and pattern learning, for the NM (PM) branch, we specifically search for smaller negative margins (larger positive margins) for similar classes, encouraging the model to learn more class-shared (class-specific) patterns, which has novelty in the observation, analysis, architecture design, mapping mechanism design, and hyper-parameter search space.\n\nA more detailed comparison is provided below.", " # 4. Relation to supervision collapse\n\nThe class-level overfitting problem in our paper is to some extent relevant to the supervision collapse observed in (Doersch et al. 2020), in that we both observe that the patterns learned on base classes tend to represent only the base classes. Moreover, by applying a large margin, this problem is even exacerbated. (Doersch et al. 2020) handles this problem by integrating self-supervised learning and classifying through comparing object parts, so as to reduce the reliance on class labels. Compared with (Doersch et al. 2020), we treat this problem from a different perspective: our work specifically targets the problem caused by applying margins, provides deeper analysis from the aspect of pattern learning, and finally handles this problem by building positive-margin-based patterns from negative-margin-based patterns. In summary, our observations are relevant, but our insights and solutions are different.\n \n# References\n\nKornblith, Simon, et al. \"Similarity of neural network representations revisited.\" International Conference on Machine Learning. PMLR, 2019.\n\nYosinski, Jason, et al. \"Understanding neural networks through deep visualization.\" arXiv preprint arXiv:1506.06579 (2015).\n\nDoersch, Carl, Ankush Gupta, and Andrew Zisserman. 
\"Crosstransformers: spatially-aware few-shot transfer.\" Advances in Neural Information Processing Systems 33 (2020): 21981-21993.\n", " By applying our method, for the negative-margin-based (NM) patterns, we set $f_{target}$ to be the output of the backbone network (i.e., $f(\\cdot)$ in Fig.5) and set the positive margin to 0.3, and report the CKA below.\n\n| Margin | -0.5 | -0.4 | -0.3 | -0.2 | -0.1 | 0.0 |\n| ----------------|------------|------------|------------|------------|------------|------------|\n| CKA (baseline) |0.2432 | 0.2245 | 0.2010 | 0.1661 | 0.1510 | 0.1306 |\n| CKA (NM) | 0.1867 | 0.1779 | 0.1724 | 0.1638 | 0.1552 | 0.1427 |\n\nWe can see the CKA similarity decreases clearly compared with the baseline method with small margins, which validates that the negative-margin-based patterns learned by our method are more complex and less similar to edges and corners than the baseline method, **verifying the mitigation of the easily-satisfied constraint problem by extra supervision from the learning of positive-margin-based patterns**.\n\nAlso, we report the CKA for the positive-margin-based (PM) patterns by setting the $f_{target}$ to the output of the $F(\\cdot)$ in Fig.5, and set the negative margin to -0.4 below.\n\n|Margin | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 |\n| --- | --- | --- | --- | --- | --- | \n|CKA (baseline) | 0.1306 | 0.1149 | 0.0837 | 0.0642 | 0.0576 |\n|CKA (PM) |0.1476 | 0.1439 | 0.1430 | 0.1214 | 0.1100 |\n\nWe can see that the CKA is larger than that of the baseline method with large margins, which validates that the positive-margin-based patterns learned by our method is more similar to the simplest patterns such as edges or corners than the baseline method, which makes it less overfitting the base classes, **verifying the mitigation of the easily-satisfied constraint problem by extra supervision from the learning of negative-margin-based patterns**.\n\nIn summary, the above experiments validate that the baseline model tends to learn simple patterns that are easily shared between classes given negative patterns, while it tends to learn complex patterns that are easily overfitting a base class given positive margins. Moreover, by applying our method, such easily-satisfied constraint problems are mitigated. We add this experiment in the revision (See appendix section D).\n \n# 2. Determination and sensitivity of $m_{ave}$ and $m_{upper}$\nWe would like to point out that the setting of the hyper-parameters is illustrated in the paper L241-242. Specifically, we first set $m_{ave}$, without applying the relation mapping module, to the ordinary cosine margin applied in Eq. 5, because the average of class relations reflects a global margin that is effective for most classes (L238). Then, we fix $m_{ave}$ and search for the $m_{upper}$. Suppose the search spaces for $m_{ave}$ and $m_{upper}$ are $S_{ave}$ and $S_{upper}$, the complexity of searching for optimal $m_{ave}$ and $m_{upper}$ is O(|$S_{ave}$| + |$S_{upper}$|) rather than O(|$S_{ave}$| * |$S_{upper}$|) of the ordinary grid search, which means the determination of these two hyper-parameters is much easier than the ordinary grid search.\n\nWe studied the sensitivity of both parameters in Fig. 7, 8 and Fig.9 (sub-figures denoted as All Classes). Specifically, in Fig. 7, we can see that negative $m_{ave}$ for negative-margin-based patterns gives a stable performance improvement when margin is in [-0.4, -0.1]. Similarly, $m_{ave}$ for the positive-margin-based patterns in Fig. 
8 shows a stable performance improvement when the margin is in the range [0.1, 0.3], $m_{upper}$ consistently improves the performance when in [-0.6, -0.3], and $m^P_{upper}$ stably improves the performance when in [0.4, 0.8] in Fig. 9. These relatively wide ranges of hyper-parameters indicate that the improvements of our method are not sensitive to the two hyper-parameters.\n \n# 3. Baseline performance\n\nWe would like to point out that our implementation is directly based on the code released by CEC [26] and will be released (L265). For CIFAR100 and miniImageNet, the baseline performance is identical to previous works. For CUB200, our baseline overall performance is not higher than [4]. On CUB200, since pre-training of the backbone is adopted following [28, 26], we scale the learning rate of the backbone network to 10% of the global learning rate (L266-267), which is a well-known practice in finetuning (e.g., Stanford CS231n: cs231n.github.io/transfer-learning).\nNote that the improvements of our method are not fully attributable to the baseline performance; our analysis and solution to the dilemma work on CIFAR100 and miniImageNet and give state-of-the-art performance, which validates the effectiveness of our insight and method.\n", " We thank the reviewer for the valuable advice. We address each point below.\n \n# 1. Verification of easily-satisfied constraints\n\nWe would like to first clarify that the easily-satisfied constraint refers to easily learning simple patterns shared between classes given negative margins, and we just use corners or edges as an example. Also, given positive margins, the easily-satisfied constraint refers to easily learning complex patterns specific only to a certain class, and the complex texture is just an example.\n\nTo justify that the dilemma results from the easily-satisfied constraint, we need to analyze how simple or how complex the learned patterns are. However, qualitative experiments like visualization of patterns are subjective to some extent, and due to the small input image size (e.g., 32 $\times$ 32 for CIFAR100), small visual areas such as textures are hard to see. Instead of qualitative verification by visualization, we are inspired by (Kornblith et al. 2019) to quantitatively measure the simplicity/complexity of patterns by the similarity between the extracted feature and the simplest feature (e.g., corner or edge features).\n\nWe first use the baseline model trained on CIFAR100 and use its first convolution layer as the simplest feature extractor (denoted as $f_{simple}$), since many works (e.g., (Yosinski et al. 2015)) have shown that the first convolution layer tends to capture corners or edges. Then, we train models with and without our proposed methods with different margins, and use the backbone network for feature extraction (denoted as $f_{target}$). After that, we extract $f_{simple}$ and $f_{target}$ features from all images in the base classes. Finally, we compare the CKA similarity (Kornblith et al. 
2019) for measuring the similarity between $f_{simple}$ and $f_{target}$.\n\nAs a sanity check, we first report the similarity between different layers within the baseline model.\n\n| $f_{target}$ | CKA |\n|---|---|\n| Conv1-output | 1.0 |\n| Stage1-output | 0.8876 |\n| Stage2-output | 0.5664 |\n| second-last-Conv | 0.2097 |\n| backbone-output | 0.1306 |\n\nWe can see that the shallower the layer is, the higher the CKA similarity would be, which means the more similar it is to $f_{simple}$, i.e., the simpler the pattern would be (Yosinski et al. 2015), and the more transferable but less discriminative the feature would be [25]. Then, we report the comparison of the CKA similarity between $f_{simple}$ and $f_{target}$ (backbone feature) for the baseline model trained with margins.\n\n| Margin | -0.5 | -0.4 | -0.3 | -0.2 | -0.1 | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 |\n|---|---|---|---|---|---|---|---|---|---|---|\n| CKA | 0.2432 | 0.2245 | 0.2010 | 0.1661 | 0.1510 | 0.1306 | 0.1149 | 0.0837 | 0.0642 | 0.0576 |\n\nWe can see that by applying a negative margin, the CKA similarity clearly increases (even larger than that of the second-last convolution layer when the margin < -0.3), which means the backbone network captures patterns more similar to the simplest ones such as edges or corners. **This is the verification that the model tends to learn simple patterns that are easily shared between classes**. When applying a positive margin, the captured patterns grow more complex and tend to overfit base classes, making the CKA much smaller than the baseline, **which is the verification that the model tends to learn complex patterns that easily become specific to a given base class**.", " We thank the reviewer for the constructive comments. We address each point below.\n \n# 1. Based on existing findings [15], though much more sophisticated\n\nWe would like to emphasize that our work significantly differs from [15] in that\n\n(1) Although inspired by the findings in [15], we further analyze the cause of the dilemma between negative and positive margins from the aspect of pattern learning.\n\n(2) Our analysis and interpretation of the dilemma can be directly utilized to develop methods for handling the dilemma (e.g., building positive-margin-based patterns from negative-margin-based patterns), while [15] did not address this problem.\n\n(3) Based on the observation of the relevance between class relationships and pattern learning, we further mitigate the dilemma by designing a mapping from class relations to margins.\n\n(4) We focus on both the base-class performance and novel-class generalization, while [15] only emphasizes the novel-class generalization.\n\nMoreover, our solution is not complicated to implement. Therefore, although inspired by [15], our work still provides remarkable novel insights and contributions to the research community.\n \n# 2. 
Verify the proposed margin positions\n\nWe report the experiments with all possible combinations of margins on CIFAR100 below, where the horizontal axis represents the margin attached to the higher layer, and the vertical axis denotes the margin for the lower layer.\n\n| margin | -0.5 | -0.4 | -0.3 | -0.2 | -0.1 | 0.0 | 0.1 | 0.2 | 0.3 | 0.4 |\n|---|---|---|---|---|---|---|---|---|---|---|\n| -0.5 | 41.16 | 42.51 | 43.30 | 45.68 | 47.93 | 49.33 | 49.25 | 49.27 | 48.53 | 48.01 |\n| -0.4 | 42.24 | 42.37 | 44.37 | 45.96 | 48.17 | 48.93 | 48.94 | 48.76 | 49.01 | 47.15 |\n| -0.3 | 43.99 | 44.45 | 45.06 | 45.85 | 47.51 | 48.90 | 49.38 | 49.09 | 48.87 | 48.08 |\n| -0.2 | 45.05 | 44.71 | 45.61 | 45.85 | 48.2 | 48.08 | 49.6 | 48.65 | 48.53 | 48.08 |\n| -0.1 | 47.50 | 47.11 | 46.71 | 46.69 | 47.99 | 48.87 | 48.92 | 49.21 | 48.55 | 47.99 |\n| 0.0 | 47.38 | 47.70 | 47.90 | 47.94 | 47.43 | 48.48 | 48.67 | 49.14 | 47.96 | 47.87 |\n| 0.1 | 47.32 | 48.14 | 48.01 | 47.96 | 47.95 | 48.37 | 47.62 | 48.02 | 48.35 | 47.32 |\n| 0.2 | 47.44 | 46.66 | 47.70 | 47.05 | 47.27 | 47.21 | 47.4 | 47.55 | 47.64 | 46.65 |\n| 0.3 | 46.61 | 46.23 | 46.29 | 46.21 | 47.01 | 47.11 | 46.73 | 46.37 | 46.55 | 45.68 |\n| 0.4 | 45.37 | 45.35 | 45.81 | 45.70 | 46.26 | 46.69 | 45.79 | 46.51 | 46.08 | 45.56 |\n\nWe can see that the margins adopted in the paper (i.e., positive margin on the higher layer + negative margin on the lower layer, the top-right area) show clear improvements over the baseline (i.e., margins on both layers are 0). In contrast, other combinations of margins (e.g., negative margin on the higher layer + positive margin on the lower layer, the bottom-left area) show much lower performance compared with the baseline, which verifies our choice of hyper-parameters and our insight: build positive-margin-based patterns from negative-margin-based patterns. We add this experiment in the revised version (see Appendix section C).\n \n# 3. Dataset for Sec. 4.5\n\nThe dataset for Fig. 7, 8, and 9 ($m_{upper}$) is CIFAR100, while that for Fig. 9 ($m^P_{upper}$) is CUB200. We add this illustration in the revised version (see Fig. 7, 8, 9 captions).\n", " This paper tackles the Few-shot Class-Incremental Learning (FSCIL) problem, which is designed to incrementally recognize novel classes with only a few training samples after the (pre-)training on base classes. A dilemma exists in that we can hardly achieve both good base-class performance and novel-class generalization simultaneously. \n\nThis paper analyzes the effects of positive- and negative-margin losses on the novel and base classes and designs a new network in which a negative-margin loss is attached to the lower layer and a positive-margin loss is attached to the top layer (a minimal illustrative sketch of this two-branch design is given after the strengths below). A further improvement method based on the adjacency matrix is also proposed. Extensive experiments on three datasets show the effectiveness of the proposed method. Strengths\n- Enhancing novel-class generalization while keeping the discriminative features for the base model with positive and negative margin losses, which are attached to the lower and upper layers, is an elegant solution. \n- A further improvement method based on the adjacency matrix of all classes is proposed. \n- Detailed analysis of the dilemma of FSCIL is shown. \n- Experimental results show that the proposed method outperforms SOTA methods on FSCIL. 
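To make the two-branch margin design summarized above concrete, a minimal hedged sketch follows. This is an editorial illustration under stated assumptions rather than the paper's code: the helper name `cosine_margin_logits`, the feature shapes, the scale, and the example margin values are hypothetical; only the placement of a negative margin on lower-layer features and a positive margin on top-layer features comes from the paper.

```python
import torch
import torch.nn.functional as F

def cosine_margin_logits(features, weights, labels, margin, scale=16.0):
    # Cosine-similarity classifier with an additive margin on the target class.
    cos = F.normalize(features, dim=1) @ F.normalize(weights, dim=1).t()
    cos[torch.arange(len(labels)), labels] -= margin
    return scale * cos

# Two-branch sketch: negative margin (NM) on lower-layer features,
# positive margin (PM) on top-layer features; shapes are assumptions.
f_low, f_top = torch.randn(8, 64), torch.randn(8, 512)
w_low, w_top = torch.randn(100, 64), torch.randn(100, 512)
y = torch.randint(0, 100, (8,))
loss = (F.cross_entropy(cosine_margin_logits(f_low, w_low, y, margin=-0.3), y)
        + F.cross_entropy(cosine_margin_logits(f_top, w_top, y, margin=0.2), y))
```

The sign of the margin carries the design: subtracting a negative margin loosens the target-class constraint (encouraging class-shared patterns), while subtracting a positive margin tightens it (encouraging class-specific patterns).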
\n\nWeaknesses\n- The proposed method is based on existing findings from the negative-margin-matters paper [15]. (Though the method and analysis of this paper are more sophisticated.)\n - For the ablation study, how about the case where the positive-margin loss is attached to the lower layer and the negative-margin loss is attached to the higher layer? Is the proposed design of the loss-function positions really better? \n\n- For the experiment of Sec. 4.5, which dataset is used? \n The authors did not address the potentially negative societal impact of this work. ", " This paper studies margin-based few-shot class-incremental learning (FSCIL), which is motivated by an observation that positive (negative) margins adopted in base training can negatively influence the transferability (discriminability) of the representations. The authors explained this dilemma as a class-level overfitting problem from the perspective of pattern learning and claimed that the cause of such class-level overfitting is the fact that the constraint of learning shared or class-specific patterns can be easily satisfied. To address this problem, the authors proposed to simultaneously adopt positive and negative margins and learn positive-margin features from negative-margin features. Experiments on a set of FSCIL datasets show the proposal can achieve SOTA performance. Paper Strength\n\n+ The connection between margins and pattern learning is interesting. It is an interesting observation that a positive margin pushes the sparsity and fitness of patterns but harms the transferability of the patterns. This observation provides insight into understanding the generalisation of deep neural networks. \n+ Thorough experiments are conducted to evaluate the effectiveness of the proposed method. The results reveal that the method can simultaneously improve the performance of base and novel classes and thus is empirically proven to be able to overcome the dilemma of margin-based classification. \n\nWeakness\n- Evidence is lacking to support some important claims made in this paper. For example, the authors claimed the cause of the class-level overfitting problem lies in the easily-satisfied constraint of learning shared (e.g., corner or edge features) or class-specific patterns (complex class-specific features that cannot generalise). But how can this be verified? Additional visualization seems necessary. \n- It makes sense to incorporate the class relations into the class-specific margin. But the necessity of setting two key hyper-parameters $m_{upper}$ and $m_{ave}$ manually makes the method less elegant. How are these two parameters determined? Is the method sensitive to such parameters?\n- From Table 3 it can be seen that the baseline method performs very well, even better than most of the compared methods. A more detailed explanation of the baseline and its performance should be added. \n 1. How can the claim be justified that the dilemma comes from the easily-satisfied constraint of learning shared (e.g., corner or edge features) or class-specific patterns (complex class-specific features that cannot generalise)?\n2. How are $m_{upper}$ and $m_{ave}$ set?\n3. Why does the baseline method perform so well?\n4. The authors find a large margin can lead to sparse patterns. Is this observation to some extent relevant to the supervision collapse observed in [1], which describes losing any information that is not necessary for performing the training task, including information that may be necessary for transfer to new tasks or domains? 
Yes", " This paper deals with the few-shot incremental learning task. The authors identify the inferior trade-off between base- and novel-class performance as a class-level overfitting problem during the margin-based training, and propose a new margin-based learning method with an extra constraint to overcome the problem. The experiments show the advantage of the proposed method on certain commonly used benchmarks. Originality\nPros: The two branches for the positive and negative margin settings, respectively, are new and interesting to me.\nCons: The class relation in the inter-class logit is not new, but is studied by many methods in the face recognition community.\n\nQuality\nPros: The overall quality of this paper is good. The method is well formulated, based on solid observation and analysis. The experiments clearly show the effectiveness.\nCons: It lacks a comparison with the similar methods in the face recognition community.\n\nClarity\nPros: The method and the experiments are well presented.\nCons: Missing definition of the horizontal and vertical axes in Fig. 2.\n\nSignificance\nPros: The proposed method will be useful for practical applications of the incremental learning task.\nCons: -\n It would be better if the paper included extra experiments on face recognition datasets.\n\n No obvious negative societal impact observed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 5 ]
[ "x__z3CGl4xd", "xcXJKxRrtce", "pG6U43rH0rb", "7YgcTK6hBeY", "Eln98Q3D1oM", "pa9SPLtWJx", "xcXJKxRrtce", "C1ewEXcbgRg", "Eln98Q3D1oM", "WD35hHxuo3J", "x__z3CGl4xd", "nips_2022_hyc27bDixNR", "nips_2022_hyc27bDixNR", "nips_2022_hyc27bDixNR" ]
nips_2022_Setj8nJ-YB8
Zeroth-Order Negative Curvature Finding: Escaping Saddle Points without Gradients
We consider escaping saddle points of nonconvex problems where only the function evaluations can be accessed. Although a variety of works have been proposed, the majority of them require either second or first-order information, and only a few of them have exploited zeroth-order methods, particularly the technique of negative curvature finding with zeroth-order methods which has been proven to be the most efficient method for escaping saddle points. To fill this gap, in this paper, we propose two zeroth-order negative curvature finding frameworks that can replace Hessian-vector product computations without increasing the iteration complexity. We apply the proposed frameworks to ZO-GD, ZO-SGD, ZO-SCSG, ZO-SPIDER and prove that these ZO algorithms can converge to $(\epsilon,\delta)$-approximate second-order stationary points with less query complexity compared with prior zeroth-order works for finding local minima.
Accept
This paper designs new algorithms for finding second-order stationary points using only function-value queries (zeroth-order information). The main novelty is in designing two approaches for negative curvature finding. The new subroutines can be used in a wide range of algorithms for finding second-order stationary points (most using first-order information) and result in new zeroth-order algorithms with reasonable guarantees. The reviewers had some concerns, but most were addressed in the response. In general, the reviewers agree that this is a solid contribution to nonconvex optimization.
val
[ "6OUK9F20sbP", "SQOcKIb-xzi", "RxeUxU3kCVk", "C7DYHQnGL0W", "OG1B0EdZoAn", "DCoNl-jm3gA9", "DKWLtMkkfhm", "8f5c7AtqXGf", "Ejb36Zj9hO0", "_kbIiP8150", "4L3td_u4RPN", "aeWa9D645tN", "jT8eA9ylkr8", "LjReTs9H3M5", "f8_yqDtieT_", "fdoOzmEenrB", "UvLwbREd76n", "cQ1Hudikrp" ]
[ "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for reading our response and thanks for raising the score.", " Thanks again for your further feedback and thanks for raising the score!", " Thank you for the detailed replies to my questions and comments. I especially appreciate the detailed explanation on iteration complexity and the addition of empirical comparisons with Neon2+GD. Thank you for explaining the tradeoff between approximation error and iteration complexity, and that the dimensionality of the problem is included in the oracle call (function evaluation) itself. My point, however, is that iteration complexity itself may not capture the true cost of zero-order methods. And that the low approximation error of coordinate gradient estimate (vs that of random gradient estimator) is not free.\n\nNevertheless, I overlooked the experiments that were in the appendix. It's much more clear now that the experiments are in the main body, and the practical advantage of the algorithm is also clearer. Fixing some typos and the general reorganization of the paper (e.g. adding a conclusion) makes the paper stronger. Thus, I increase my score from 6 to 7.", " Dear Authors,\n\nThank you for the thoughtful reply, and my apologies for the late response.\n\nI appreciate adding the experimental section into the paper (I somehow must have glossed over them in the appendix). The results look great (including the cubic regularization and regularized non-linear least squares examples in the appendix). Due to the inclusion of the experiments and the revised presentation, I will raise my score to a 7.", " In short, it seems that momentum with Chebyshev polynomial cannot improve convergence rate and complexity of stochastic optimization. \nThanks for your clarification. ", " Thank you for pointing this out. Our Algorithm 1 can be seen as a zeroth-order version of the Oja's method, which is one of the most popular algorithm for PCA in stochastic setting and is theoretically optimal up to some log factors https://arxiv.org/pdf/1701.01722.pdf. A natural way to accelerate the Oja's method is to utilize the momentum scheme, which has a strong connection with the Chebyshev polynomial. An interesting work http://proceedings.mlr.press/v84/xu18a/xu18a.pdf studied the relationship of the momentum scheme and Chebyshev polynomial and concluded that \"that adding momentum to a stochastic method like Oja’s does not always result in acceleration.\" Another recent work http://proceedings.mlr.press/v108/kim20e/kim20e.pdf also applied the variance reduced technique and heavy ball technique (momentum) to the Oja's method and achieved similar oracle complexity. In order not to increase the complexity of this paper, we don't consider these acceleration techiniques.\n\n", " Reviewer keP3 agrees with the authors' reply and keeps rating 6.\nFor Q11, I am curious why stochastic Algorithms 1-2 cannot use Chebyshev polynomial. \nThank you. ", " We thank all the reviewers for their constructive feedback! In our new submission, we made the following main revisions according to the reviewers' feedback:\n\n- We have fixed all minor issues and typos in the revised version.\n\n- We rewrite the pseudocode of the Algorithm 4, 5 & 7 according to suggestions from Reviewer keP3, which does not change the logic and complexity, just to increase the readability of the algorithms. \n\n- We reorganized section 4. 
Specifically, we summarize the application of the zeroth-order negative curvature finding framework to the algorithms ZO-SCSG and ZO-SPIDER in Section 4.2, and only keep the description of the algorithms and the informal theorem results. We defer the specific algorithms and formal theorems to the appendix.\n\n- Two reviewers mentioned that they didn't see the experimental results (actually, due to space limitations, the experiments were already contained in Appendix G in our first submission), and one reviewer suggested that the experimental results should be presented somewhere in the main article. Thus, we added a numerical experiments section in the revised version. In this section, we compare our ZO-GD-NCF method with ZPSGD, PAGD, and RSPI on the octopus function with growing dimensions. The experimental results show the effectiveness and efficiency of our method in escaping saddle points.\n\n- Finally, we added a conclusion section to summarize this work and discuss future research directions.", " >Q4 typos/ minor issues\n\nThanks for pointing these issues out.\n- In the revised version, we have defined $\rho$ and $\mu$ earlier (Line 28 and Line 126 in the revised version). \n- $C_0$ in Algorithm 1 and $C_1$ in Algorithm 3 are sufficiently large constants. \"Proper choice of $r$\" means that as long as the value of $r$ is not too large, the approximation error $\mathcal{O}(\sqrt{d}r^2)$ of the zeroth-order Hessian-vector estimator can be efficiently bounded. In the revised version, we have rewritten this sentence as \"with our choice of $r$ in Algorithm 1, the error term is still efficiently upper bounded.\"\n- Yes, it should be \"factor of d\". \n\n>Q5: For example, which part of the theoretical analysis required different techniques than that used in the Neon2 paper?\n\nThe novel analysis techniques are as follows: \n- In Appendix A.1, we analyse the properties of the zeroth-order gradient estimators $\nabla_{coord} f(x)$ and $\nabla_{rand} f(x)$ under the $\rho$-Hessian Lipschitz condition. Previous work only exploited the $\ell$-smoothness property of the gradient of $f$. We believe this is useful for studying second-order convergence properties with zeroth-order methods. Also, we study the approximation error of the zeroth-order Hessian-vector product under the $\rho$-Hessian Lipschitz condition.\n- In the applications of the zeroth-order negative curvature finding frameworks, since we need to verify whether the norm of the current gradient is small enough, we propose Proposition 1. By using a batch of coordinate-wise gradient estimators with a proper choice of the smoothing parameter, we can achieve this with high probability in the stochastic setting.\n- We propose to use two different kinds of zeroth-order gradient estimators in ZO-SGD and ZO-SCSG for finding second-order stationary points, which need novel and different analyses. To apply the ZO negative curvature finding algorithm to the SPIDER-based algorithm, we first analyse the high-probability convergence property of the zeroth-order SPIDER method for finding first-order stationary points in Appendix F.1.", " Dear Reviewer, thank you for taking the time to review our submission. We sincerely appreciate your valuable comments. Please find detailed responses to your questions below.\n\n>Q1: Are there empirical results that their gradient-free methods outperform existing methods in these situations?\n\nYes, such empirical results can be found in the references we cite (Line 55-56). 
There are several application scenarios in machine learning where gradient-free methods are very useful, especially when the explicit gradient form is unavailable. Take adversarial attacks as an example: in the black-box scenario, the attacker has no access to the parameters or the explicit formula of the model (like a DNN), yet would like to maximize the loss by adding an imperceptible perturbation to a clean sample. Thus, zeroth-order optimization is a good tool for attacking black-box neural networks, since we can approximate the gradient through the output of the model.\n\n>Q2: Experiments would also help to strengthen the claim that iteration complexity does not significantly increase.\n\nIn the revised version, we add an experiment on the octopus function to compare the iteration performance between ZO-GD-NCF and Neon2+GD. The results clearly show that our ZO-GD-NCF method will not significantly increase the iteration complexity compared to Neon2+GD.\n\n>Q3: Is this dependence on d included in the iteration complexity of the framework?\n\nThe dimension d is not included in the iteration complexity (number of iterations) of the framework, but it is included in the oracle complexity (number of function queries). This is because the coordinate-wise gradient estimator can approximate the gradient with high accuracy (please refer to Lemma 8 in Appendix A.1). Then the ZO NCF framework can be seen as the FO NCF framework with minor perturbations and will not increase the number of iterations. In the zeroth-order optimization literature, we mainly focus on the function query complexity. Since in each iteration we need to calculate the coordinate-wise gradient estimator, d will be included in the final function query complexity.\n\n>Q3: For example, Remark 2 says \"CoordGradEst has a lower approximation error and thus can reduce the iteration complexity by a fact[or] of d\". Does this include the summation within the calculation of CoordGradEst? Similar question for Theorem 4 in the comparison of the complexity of CoordGradEst and that of RandCoordEst: does the O(d/epsilon2) vs O(d2/epsilon2) include the computation of CoordGradEst as well?\n\n- Thanks for pointing this out. We will start by explaining the relationship between approximation error and iteration complexity. In Lemma 8 and Lemma 9 in Appendix A.1, we derive that $\| \nabla_{coord} f(x) - \nabla f(x) \|^2 \le \frac{\rho^2 d \mu^4}{36}$ and $\mathbb{E} \| \nabla_{rand} f(x) - \nabla f_{\mu} (x) \|^2 \le \mathbb{E} \|\hat{\nabla}_{rand} f(x)\|^2 \le d \|\nabla f(x)\|^2 + \frac{\rho^2 d^2 \mu^4}{36}$, respectively. This means that the coordinate-wise gradient estimator always has a low approximation error while needing $\mathcal{O}(d)$ times more function queries in each iteration. On the other hand, we see that the random gradient estimator suffers from a high variance when the true gradient is large, due to the term $d \|\nabla f(x)\|^2$. The results in Algorithms 4 & 5 require $\mathcal{O}(d)$ times more iterations to ensure convergence for Option II (detailed proofs can be found in Appendix D.1 and D.2). As a result, the total function queries for both Option I and Option II are almost the same in Line 5 and 6 of Algorithm ZO-SGD-NCF in the revised version (or Line 3 and 4 in ZO-GD-NCF). A minimal sketch of the two estimators is given below for reference. 
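For readers who want the two estimators above to be concrete, here is a minimal NumPy sketch consistent with the formulas in this reply. The function names and the Gaussian-then-normalize sampling of sphere directions are our own illustrative choices, not the paper's implementation.

```python
import numpy as np

def coord_grad_est(f, x, mu):
    """Coordinate-wise estimator: a central difference along each basis
    direction, i.e. 2d queries per call, with error O(rho * sqrt(d) * mu^2)."""
    d = x.size
    grad = np.zeros(d)
    for i in range(d):
        e = np.zeros(d)
        e[i] = mu
        grad[i] = (f(x + e) - f(x - e)) / (2.0 * mu)
    return grad

def rand_grad_est(f, x, mu, q=1, rng=None):
    """Averaged random estimator (d/q) * sum_i [f(x + mu*u_i) - f(x - mu*u_i)]
    / (2*mu) * u_i with u_i uniform on the unit sphere: only 2q queries per
    call, but variance on the order of (d/q) * ||grad f(x)||^2."""
    rng = rng or np.random.default_rng(0)
    d = x.size
    grad = np.zeros(d)
    for _ in range(q):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)
        grad += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return (d / q) * grad

# Sanity check on f(z) = ||z||^2, whose true gradient at x is 2x.
f = lambda z: float(np.sum(z ** 2))
x = np.ones(5)
print(coord_grad_est(f, x, mu=1e-4))        # close to [2, 2, 2, 2, 2]
print(rand_grad_est(f, x, mu=1e-4, q=50))   # noisier; improves with larger q
```

The trade-off visible here is exactly the one in the reply: the coordinate estimator pays a factor of d in queries per iteration, while the random estimator pays it back in variance, and hence in iterations.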
\n\n- In terms of the total function query complexity in Theorem 4, for both Option I and Option II, we need to evaluate the magnitude of the gradient by using the coordinate-wise gradient estimator in each iteration (Line 3 in ZO-SGD-NCF or Line 2 of ZO-GD-NCF, revised version). This results in $O(d/\epsilon^2)$ (Line 2 + Line 3) function query complexity for Option I and $O(d^2/\epsilon^2)$ (Line 2 + Line 4) function query complexity for Option II. To tackle this problem, a core idea is to reduce the frequency of evaluating the magnitude of the gradient. For example, in the stochastic setting, we can use a variance-reduction-based algorithm like ZO-SCSG, since we only need to evaluate the magnitude once in each epoch (i.e., the outer loop). Another way is to use the average random gradient estimator $\frac{d}{q}\sum_{i=1}^q \frac{f(x+\mu u_i) - f(x - \mu u_i)}{2\mu} u_i$ from https://arxiv.org/abs/1805.10367. Then the term $d\|\nabla f(x)\|^2$ in the bound on the variance of the random estimator will be reduced to $\frac{d}{q} \|\nabla f(x)\|^2$, and thus we can reduce the iteration complexity.", " Re to Q12, Q15: Thank you for your suggestions. We will fix them in the revised version.\n\nRe to Q13: Yes, $\mathcal{O}\left( \left(\frac{128\sigma^2}{\epsilon^2}+1 \right) \log \frac{1}{p} \right)$ means the batch size. With our choice of the smoothing parameter $\mu$, we have $\| \nabla f(x) - \nabla_{coord} f(x) \| \le \frac{\epsilon}{4}$. So $\|\hat{\nabla}_{coord} f(x)\| \ge \frac{3\epsilon}{4}$ implies $\|\nabla f(x)\| \ge \frac{\epsilon}{2}$, and $\|\hat{\nabla}_{coord} f(x)\| \le \frac{3\epsilon}{4}$ implies $\|\nabla f(x)\| \le \epsilon$.\n\nRe to Q14: Yes, the query complexity means the number of queries of $f(x)$ in the deterministic setting and of $f_i(x)$ in the stochastic setting.\n\nRe to Q16: Thanks for your suggestion; in the revised version, we will add some descriptions of SCSG and SPIDER.\n\nRe to Q17: We don't agree with this statement, because we move along the negative curvature direction $w_2$ by splitting it into $\delta/(\rho \eta)$ equal-length mini-steps with size $\eta$, and in each mini-step the direction $w_2$ cannot be changed. So this step should be put in the outer loop.\n\nRe to Q18: The novel analysis techniques are as follows: \n- In Appendix A.1, we analyse the properties of the zeroth-order gradient estimators $\nabla_{coord} f(x)$ and $\nabla_{rand} f(x)$ under the $\rho$-Hessian Lipschitz condition. Previous work only exploited the $\ell$-smoothness property of the gradient of $f$. We believe this is useful for studying second-order convergence properties with zeroth-order methods. Also, we study the approximation error of the zeroth-order Hessian-vector product under the $\rho$-Hessian Lipschitz condition.\n- In the applications of the zeroth-order negative curvature finding frameworks, since we need to verify whether the norm of the current gradient is small enough, we propose Proposition 1. By using a batch of coordinate-wise gradient estimators with a proper choice of the smoothing parameter, we can achieve this with high probability in the stochastic setting.\n- We propose to use two different kinds of zeroth-order gradient estimators in ZO-SGD and ZO-SCSG for finding second-order stationary points, which need novel and different analyses. 
To apply the ZO negative curvature finding algorithm to the SPIDER-based algorithm, we first analyse the high-probability convergence property of the zeroth-order SPIDER method for finding first-order stationary points in Appendix F.1.", " Dear reviewer, thank you for taking the time to review our submission. We sincerely appreciate your valuable comments. Please find detailed responses to your questions below.\n\nRe to Q1: Thanks for pointing this out. Yes, Line 5 of Algorithm 4 and Line 4 of Algorithm 7 mean that we can use a batch of coordinate-wise gradient estimators to verify whether $\|\nabla f(x)\| \ge \epsilon/2$. In the revised version, we have rewritten these components to avoid misunderstandings.\n\nRe to Q2: Thanks for pointing this out. We can boost the confidence of Theorems 5 and 6 to $1-p$ by running $\log\frac{1}{p}$ copies of ZO-SCSG-NCF and ZO-SPIDER-NCF. So the total function query complexity of Theorems 5 and 6 will not change much, since we use the notation $\tilde{\mathcal{O}}$ to hide the log factors. In the revised version, we will add the corresponding remarks to explain this.\n\nRe to Q3: We are sorry we didn't include the limitations in the first submission. In the revised version, we have added the limitations of this work in the conclusion. The main limitation of this work, we think, is that we don't give a unified analysis of how to apply the zeroth-order negative curvature finding frameworks to an arbitrary first-order-stationary-point-finding ZO algorithm. This is still an open question, and we may explore it in future work.\n\nRe to Q4: Yes, when it comes to the oracle complexity in zeroth-order optimization, we focus on the function query complexity; when it comes to the oracle complexity in first-order optimization, we focus on the gradient query complexity. A direct comparison between the two oracles is unfair, since the problem settings and accessible information are different. Thus, we can focus on the iteration complexity, which is related to the convergence rate of the optimization algorithm. Our motivation is to design zeroth-order algorithms with convergence rates comparable to the first-order methods without increasing the iteration complexity. \n\nRe to Q5: Sorry, it should be A.1 here.\n\nRe to Q6: Yes, in theory we should set $\mu$ as small as possible to reduce the error term. However, this is not applicable in practice due to numerical errors. To fill this gap, $\mu$ should be tuned empirically such that the error term is not too large, but large enough to avoid numerical errors.\n\nRe to Q7: Thanks for your suggestions on writing; we will consider them seriously in the revised version. Other than replacing $\nabla f(x+v) - \nabla f(x)$ with $\mathcal{H}_f(x)v$, we also dynamically set the smoothing parameter $\mu_t$ to be $\|x_t - x_0\|$. By doing so, the smoothing parameter is not always too small and will grow rapidly, so numerical stability is guaranteed to some extent.\n\nRe to Q8: In the revised version, we have rewritten this sentence as \"Note that, although the error bound is poorer by a factor of $\mathcal{O}(\sqrt{d})$ as compared to $Neon^{online}_{weak}$ in [4], which used the difference of two gradients to approximate the Hessian-vector product and achieved an approximation error of up to $\mathcal{O}(r^2)$, with our choice of $r$ in Algorithm 1, the error term is still efficiently upper bounded.\"\n\nRe to Q9: Thank you for pointing this out. 
In the revised version, we have fixed it.\n\nRe to Q10: Since $f$ is $\ell$-smooth, all eigenvalues satisfy $\lambda_i(\nabla^2 f(x_0)) \in [-\ell, \ell]$. Since $\lambda_i(M) = -\frac{1}{\ell}\lambda_i(\nabla^2 f(x_0)) + (1 - \frac{3\delta}{4\ell})$, $\lambda_i(\nabla^2 f(x_0)) \in [\frac{-3\delta}{4}, \ell]$ implies $\lambda_i(M) \in [\frac{-3\delta}{4 \ell}, 1] \subset [-1,1]$.\n\nRe to Q11: The goal of Line 6 in Algorithm 3 is to stably compute the approximate matrix Chebyshev polynomial $\mathcal{T}_t(M)$, since we only have access to zeroth-order information; this is known as the inexact backward recurrence for stable computation of matrix Chebyshev polynomials (Section 6 in https://arxiv.org/pdf/1608.04773.pdf, Appendix B.1 in https://arxiv.org/pdf/1711.06673.pdf). In our method, we prove that the zeroth-order Hessian-vector estimator also satisfies the precondition of the stable Chebyshev sum theorem (Theorem 6.4 in https://arxiv.org/pdf/1608.04773.pdf). \n\n> why not directly applying Algorithms 1 and 2 to the deterministic case by replacing $\mathcal{H}_{f_i}$ by $\mathcal{H}_f$\n\nThis is because if we did it that way, the algorithm would reduce to the classical power iteration method, and the total iteration complexity would be $\Omega(\frac{\ell}{\delta})$, which is worse than the $\mathcal{O}(\sqrt{\frac{\ell}{\delta}})$ achieved by computing the approximate matrix Chebyshev polynomial. We have discussed this point below the algorithm.", " Dear Reviewer, thank you for taking the time to review our submission. Please find detailed responses to your questions below. We hope that they will improve your opinion of our work, and we kindly ask you to consider the possibility of raising your score.\n\n> I felt that the work seems a bit thrown together.\n\nThank you for pointing this out. In the revised version, we have made some adjustments to the main paper based on your comments. 
Please find detailed responses to your questions below.\n\n> the main article has not focus on this unified framework. \n\nThanks for pointing this out. Actually, how to apply the negative curvature finding frameworks to any FOSPs finding algorithm to turn it into a SOSPs finding algorithm is still an open question. The main difficulty is due to the fact that different first-order stationary point finding algorithms have different forms of descent lemmas. For example, in ZO-SGD with coordinate-wise gradient estimator we have $f(x_t) - \\mathbb{E}[f(x_{t+1})] \\ge \\Omega(\\|\\nabla f(x)\\|^2 - \\frac{\\epsilon^2 }{8})$ while in ZO-SCSG with coordinate-wise gradient estimator we have $f(x_t) - \\mathbb{E}[f(x_{t+1})] \\ge \\Omega(\\mathbb{E}\\|\\nabla f(x)\\|^2 - \\frac{\\epsilon^2 }{8})$. Therefore, we need to analyze each algorithm separately. The study of the unified framework is beyond the scope of this paper, but we will explore this interesting research topic in future work.\n\n> The experimental results should have been presented somewhere in the main article.\n\nThank you for your suggestion. In the revised version, we put the experiment of the octopus function in the section on numerical experiments.\n\n> I would like to see the technical challenges of this paper compared to [47]. Part of the difficulty should come from the diversity and complexity of the algorithms considered in this paper, but anything else?\n\n- [47] proposed the perturbed approximate gradient descent (PAGD) method for escaping saddle points, which approximates the gradient by the forward finite difference. The approximation error of this gradient estimator is bounded by $\\|\\hat{\\nabla}_{coord} f(x) - \\nabla f(x)\\| \\le \\mathcal{O}(\\ell \\sqrt{d} \\mu)$ when $f$ is $\\ell$-smooth. In contrast, we approximate the gradient through the central finite difference and prove that the approximation error is bounded by $\\mathcal{O}(\\rho \\sqrt{d} \\mu^2)$ when $f$ is $\\rho$-Hessian Lipschitz, which is a basic assumption in literature of analyzing the second-order convergence properties.\n\n- The main technique of PAGD for escaping saddle points is utilizing random perturbations. Specifically, when the norm of the current gradient is small, the algorithm adds a random perturbation in the region of a small ball $B_{\\tilde{x}}(r)$ with radius r centered at current point $\\tilde{x}$. It is proved in [1] that when $\\tilde{x}$ is a saddle point, then by adding a random perturbation that is not in a small stuck region in $B_{\\tilde{x}}(r)$, the function value will get further decrease after a number of steps. In contrast, the task of negative curvature finding is to directly find an approximate minimum eigenvector direction.\n\n[1] Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, and Michael I. Jordan, How to escape saddle points efficiently.", " The paper addresses the problem of finding stationary points in non-convex settings that are not saddle points (i.e. satisfies second order optimality). It proposes two negative curvature finding (NCF) frameworks that only use zero-order oracles: one offline deterministic and the other online stochastic. The NCF frameworks provide two options of gradient estimation: a coordinate-based estimator and a randomized estimator. The paper then applies this framework to four algorithms and show that these algorithms converge to second-order stationary points. Convergence analysis gives complexity in terms of function queries. 
Strengths:\n - Paper is well-written and logically organized. The notation is generally clear and easy to follow.\n - The paper addresses an important topic within optimization: escaping saddle points. \n\nWeaknesses:\n - No experimental results.\n - Would like clarification on claim that their method turns first-order NCF methods into zero-order NCF methods without increasing the iteration complexity. 1. The authors describe situations when gradient calculations are expensive or infeasible (lines 55-56). Are there empirical results that their gradient-free methods outperform existing methods in these situations?\n2. Experiments would also help to strengthen the claim that iteration complexity does not significantly increase. It would be good to have empirical results on the accuracy versus number of iterations needed, as well as accuracy versus training time, benchmarked against state of the art methods.\n3. CoordEstGrad is calculated by summing across d finite differences calculation (where d is the dimensionality of the problem). This means that there is a dependency on d embedded within CoordEstGrad itself. Is this dependence on d included in the iteration complexity of the framework? For example, Remark 2 says \"CoordGradEst has a lower approximation error and thus can reduce the iteration complexity by a fact[or] of d\". Does this include the summation within the calculation of CoordGradEst? Similar question for Theorem 4 in the comparison of the complexity of CoordGradEst and that of RandCoordEst: does the O(d/epsilon2) vs O(d2/epsilon2) include the computation of CoordGradEst as well?\n4. Some typos/ minor issues\n - Line 30. rho is undefined until line 108. It may be useful to have rho defined earlier.\n - Line 124. Although clear from context, mu is not defined until line 142.\n - Algorithm 1. What is C\\_0? And what does ``with proper choice of r'' (line 171) mean? r is a calculated value in the algorithm based on d, C\\_0 and sigma.\n - Algorithm 3. What is C\\_1?\n - Line 244. Typo? Should be \"factor of d\" instead of \"fact of d\".\n5. Because of the similarity to the Neon2 paper, it could be useful to explicitly discuss the paper's contribution beyond what was done in Neon2. For example, which part of the theoretical analysis required different techniques than that used in the Neon2 paper? This work does not appear to me to have negative social impact.", " This paper investigates escaping saddle points of nonconvex problems with zeroth order gradient descent method. A general framework is proposed to capture zeroth-order GD, SGD, and some other closely related algorithms. The main result is the convergence to approximate second order stationarities of these algorithms. The main idea is believed to be inspired by [18] and [47], but the results, to my best knowledge, is novel. Strength: The problem this paper addresses is well motivated, the writeup is easy to follow, and the novelty and contribution are easy to identified.\n\nWeaknesses: As highlighted in the beginning of the paper, the main technique is to introduce a framework to capture differential ZO methods, but the main article has not focus on this unified framework. Instead of elaborating all the algorithms, the main idea and novelty should have been treated in the main to some extent.\n\n\nThe experimental results should have been presented somewhere in the main article. 
Reasons for doing this: the main difficulty of using perturbed methods (which have been proven to escape saddle points) is the setting of parameters; the experiments apparently offer some hints on the effects of the parameters. Moreover, it is good to see experiments on the Octopus function since it is an important motivation for using perturbed methods to escape saddle points efficiently instead of much simpler algorithms like GD (and many first-order methods), which have been proven not to converge to saddle points either. A brief discussion of these results will make the paper more attractive.\n As far as I have understood, a very related former work is [47]. Even though the escaping-saddle-points results hold for algorithms that are not studied in [47], I would like to see the technical challenges of this paper compared to [47]. Part of the difficulty should come from the diversity and complexity of the algorithms considered in this paper, but anything else? I might have missed some points while reading the paper; it would be great if the authors are willing to provide some guidance. Considering the choice of hyperparameters and the assumption of a Lipschitz Hessian, the current experiments are not sufficient for wide application of the algorithms.", " This work proposes a zeroth-order negative curvature framework (NCF) that escapes saddle points and converges to a second-order stationary point (SOSP). This framework can be applied to previous zeroth-order algorithms in order to converge to SOSPs. The theoretical results show that ZO-GD achieves similar convergence results to other deterministic ZO methods (with maybe a slightly better dependence on dimension $d$) when $\\delta = \\mathcal{O}(\\sqrt{\\epsilon})$. The stochastic results also achieve similar convergence results as other stochastic ZO methods except in the case $\\delta = \\mathcal{O}(\\epsilon^{2/3})$. To the best of my knowledge, this work seems to be quite novel. I have not seen any zeroth-order NCF methods, especially ones that obtain similar convergence results as their ZO peer methods. This is a big strength, and the literature review is in depth and helpful for the reader.\n\nThe theoretical analysis seems to be sound. I was unable to carefully examine all the proofs in the appendix. The paper seems to be only a theoretical contribution, which is fine, but experimental results would be a boost to the overall work to show that this method is feasible and effective in practice (more on this later). \n\nOne major issue with this work is its cohesion: I felt that the work seems a bit thrown together. The theoretical results are lengthy, with many theorems/lemmas/corollaries/propositions not including detailed, descriptive, or informative remarks. The theoretical results should be described in a compact and easily digestible manner. In the current state, the results are quite sprawled in Section 4. The same is true for the algorithms. There are no detailed descriptions of Algorithms 4-8. This is problematic. Finally, there is no Conclusion section or any real end of the work. This makes the work feel like a work in progress.\n\nThe last issue is with the (lack of) experimental results. Many of the other ZO methods obtain similar convergence results and showcase their performance in practice. It would be beneficial to show if your ZO NCF method can perform better than a ZO random perturbation method. In my opinion this is something that is critically lacking in the current work.\n\nMinor Issue:\n1. Is Remark 2 supposed to come after Theorem 4? 
In the remark, it states that the \"dominant term of the function query complexity in Option I is $\\tilde{\\mathcal{O}}(d/\\epsilon^2)$\". However, while true in Theorem 4, the dominant term seems to be $\\tilde{\\mathcal{O}}(d/\\epsilon^2 \\delta^3)$ in Theorem 3.\n\nOverall, I feel that the development of a ZO-NCF method with rigorous theoretical analysis is a great contribution. However, I believe a lot of work still needs to be done to experimentally back up the convergence claims and make the paper cohesive.\n\nI thank the authors for their submission and am looking forward to their response!\n\n===========================================\nUpdate after author rebuttal:\n\nI had missed the experiments in the appendix, but the authors moved a single experiment into the main section (Section 5). Other experiments remain in the appendix. The authors also did a nice job in cleaning up the presentation of the work. For this, I have bumped up my overall score from a 5 to a 7 and presentation from a 2 to a 3. Great job!\n\n**[R1]** Emmanouil-Vasileios Vlatakis-Gkaragkounis, Lampros Flokas, and Georgios Piliouras. Efficiently avoiding saddle points with zero order methods: No gradients required. Advances in Neural Information Processing Systems, 32, 2019. All of my questions are listed in the strengths and weaknesses above. N/A", " This work proposes zeroth-order negative curvature finding (NCF) algorithms for both deterministic and stochastic nonconvex minimization problems that find an $\\epsilon$-SOSP with the same iteration complexity as the state of the art for finding an $\\epsilon$-FOSP, and improves the query complexity of existing algorithms for finding an $\\epsilon$-SOSP. Originality: This work is a novel combination of zeroth-order algorithms with NCF algorithms. The related work looks adequate and clear to me. \n\t\nQuality: The submission is a complete work. The claims are well supported by theorems and experiments. I did not find where the limitations of this work are explicitly mentioned. \n\t\nClarity: The introduction is clearly written and well organized. There are many unclear points about algorithms and theorems, as shown in my questions. \n\t\nSignificance: The results are important since they demonstrate that the proposed zeroth-order algorithms can escape saddle points faster than previous zeroth-order algorithms. (1) (Important) Line 5 of Algorithm 4 and Line 4 of Algorithm 7 use $\\|\\nabla f(x_t)\\|\\ge\\frac{\\epsilon}{2}$ while line 4 of Algorithm 5 uses $\\|\\widehat{\\nabla}_{coord}f(x_t)\\|\\ge\\frac{3\\epsilon}{4}$; why do they differ? Due to Proposition 1, I think you mean CoordGradEst can verify whether $\\|\\nabla f(x_t)\\|\\ge\\frac{\\epsilon}{2}$ with probability $1-p$, so we can always use CoordGradEst as in line 4 of Algorithm 5, yes? If not, the computation of the full gradient $\\nabla f(x_t)$ in Algorithms 4 and 7 involves $n$ function queries and thus the complexity should involve $n$, which seems to eliminate the advantage of SGD for large $n$. \n\t\n(2) (Important) Your Theorems 3 and 4 have probability at least $1-p$. Could you also guarantee that for Theorems 5 and 6, which only have probability at least 2/3 and 3/4 respectively? (e.g. maybe by repeating algorithms multiple times or changing hyper-parameters) How will you accordingly adjust the complexity? This is to guarantee that the complexity comparison is fair. \n\n(3) (important) The checklist said you have listed limitations. Where are they? 
\n\n(4) In the research question in line 87, should we not focus on query complexity instead of iteration complexity? \n\n(5) Lines 130-131 said \"Central Difference vs. Forward Difference (please refer to Appendix B.1)\"; where is this Appendix B.1? \n\n(6) Will the small $\\mu$ in the denominator cause numerical error? If so, how can it be avoided?\n\n(7) I did not fully understand your Algorithms 1 and 2 until I read Theorem 1. I think it is better to make the big picture clearer at the beginning of Section 3. For example, you might first describe the goal of the NCF problem: to find the negative curvature direction $v$ at point $x_0$ defined as $v^{\\top}\\nabla^2 f(x_0)v\\le ??$ such that when $x$ is a nearly saddle point, going along this direction $v$ will escape the saddle point $x$. Is my understanding correct? Also, do your Algorithms 1 and 2 differ from [4] only in that you use $\\mathcal{H}_f(x)v$ instead of $\\nabla f(x+v)-\\nabla f(x)$ in [4]? You might make this more clear in the paper. \n\n(8) In line 168, \"Note that, although the error bound is poor by ... up to $\\mathcal{O}(r^2)$\". This sentence seems incomplete since the whole sentence lies in \"although\". For example, do you mean \"although ..., our algorithm only requires function values while [4] requires both function values and gradient values.\"? Also, it is better to use \"poorer\" than \"poor\".\n\t\n(9) In line 6 of Algorithm 2, you might use $\\sum_{k=1}^m$ and $f_{i_k}$ to differentiate from $j$ in $z_j$.\n\t\n(10) In line 196, $\\lambda_i(M)=-\\frac{1}{\\ell}\\lambda_i\\big[\\nabla^2 f(x_0)\\big]+\\big(1-\\frac{3\\delta}{4\\ell}\\big)\\in [-1,1]$ implies $\\lambda_i\\big[\\nabla^2 f(x_0)\\big]\\in \\big[-\\frac{3\\delta}{4},2\\ell-\\frac{3\\delta}{4}\\big]$, yes? (A short worked check of this arithmetic is included after this list.)\n\t\t\n(11) The intuition of Algorithm 3 looks unclear to me. For example, does $T_t(M)\\xi$ correspond to $y_t$? How does $x_{t+1}=x_0+y_{t+1}-\\mathcal{M}(y_t)$ in line 215 correspond to line 7 of Algorithm 3? Also, why not directly apply Algorithms 1 and 2 to the deterministic case ($n=1$) by replacing $\\mathcal{H}_{f_i}$ with $\\mathcal{H}_f$? \n\n(12) In Algorithms 4 and 5, the two options can be moved to inside \"if\" since we do not need to compute them when \"else\" holds. \n\n(13) In Proposition 1, does $\\mathcal{O}\\left(\\left(\\frac{128 \\sigma^{2}}{\\epsilon^{2}}+1\\right) \\log \\frac{1}{p}\\right)$ mean the batch size? Also, the statement \"$\\|\\nabla f(x)\\|\\ge \\epsilon/2$ or $\\|\\nabla f(x)\\|\\le \\epsilon$\" always holds. So I wonder: do you actually mean that we can verify whether $\\|\\nabla f(x)\\|\\ge \\epsilon/2$, and similarly we can verify whether $\\|\\nabla f(x)\\|\\le \\epsilon$? It is better to make this more clear. \n\t\n(14) In all the Theorems, do the query complexities mean the number of queries for $f(x)$, or for $f_i(x)$? \n\t\n(15) Line 3 of Algorithm 7 can be moved into \"if $\\|\\nabla f(x_t)\\|\\ge \\frac{\\epsilon}{2}$\" since we do not need to compute ZO-SCSG when \"else\" holds. Also, I think it will be more concise if you remove \"for $j=1,...,T$ do\" in Algorithm 6 since we only need to run it for 1 epoch. \n \t\n(16) You might briefly introduce what SCSG and SPIDER are at the beginning of Sections 4.2 and 4.3. For example, you might say they are variance reduction techniques for non-convex finite-sum problems, and SPIDER has the near-optimal sample complexity, etc. \n \t\n(17) In Algorithm 8, line 3 can be moved into \"if $w_1\\neq \\bot$\". \n\n(18) What novel analysis techniques are used in your proof? 
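A quick worked check of the arithmetic in point (10), added here for clarity (it relies only on the expression quoted above from line 196 of the paper): starting from $-1 \\le -\\frac{1}{\\ell}\\lambda_i\\big[\\nabla^2 f(x_0)\\big] + \\big(1-\\frac{3\\delta}{4\\ell}\\big) \\le 1$, subtract $1-\\frac{3\\delta}{4\\ell}$ from all sides to get $-2+\\frac{3\\delta}{4\\ell} \\le -\\frac{1}{\\ell}\\lambda_i\\big[\\nabla^2 f(x_0)\\big] \\le \\frac{3\\delta}{4\\ell}$, then multiply by $-\\ell$ (which flips the inequalities) to get $-\\frac{3\\delta}{4} \\le \\lambda_i\\big[\\nabla^2 f(x_0)\\big] \\le 2\\ell-\\frac{3\\delta}{4}$ — exactly the interval $\\big[-\\frac{3\\delta}{4},\\,2\\ell-\\frac{3\\delta}{4}\\big]$ claimed in (10).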
The checklist said you have listed limitations. I did not find them. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "C7DYHQnGL0W", "RxeUxU3kCVk", "_kbIiP8150", "jT8eA9ylkr8", "DCoNl-jm3gA9", "DKWLtMkkfhm", "4L3td_u4RPN", "nips_2022_Setj8nJ-YB8", "_kbIiP8150", "f8_yqDtieT_", "aeWa9D645tN", "cQ1Hudikrp", "UvLwbREd76n", "fdoOzmEenrB", "nips_2022_Setj8nJ-YB8", "nips_2022_Setj8nJ-YB8", "nips_2022_Setj8nJ-YB8", "nips_2022_Setj8nJ-YB8" ]
nips_2022_JRAlT8ZstmH
Latency-aware Spatial-wise Dynamic Networks
Spatial-wise dynamic convolution has become a promising approach to improving the inference efficiency of deep networks. By allocating more computation to the most informative pixels, such an adaptive inference paradigm reduces the spatial redundancy in image features and saves a considerable amount of unnecessary computation. However, the theoretical efficiency achieved by previous methods can hardly translate into a realistic speedup, especially on the multi-core processors (e.g. GPUs). The key challenge is that the existing literature has only focused on designing algorithms with minimal computation, ignoring the fact that the practical latency can also be influenced by scheduling strategies and hardware properties. To bridge the gap between theoretical computation and practical efficiency, we propose a latency-aware spatial-wise dynamic network (LASNet), which performs coarse-grained spatially adaptive inference under the guidance of a novel latency prediction model. The latency prediction model can efficiently estimate the inference latency of dynamic networks by simultaneously considering algorithms, scheduling strategies, and hardware properties. We use the latency predictor to guide both the algorithm design and the scheduling optimization on various hardware platforms. Experiments on image classification, object detection and instance segmentation demonstrate that the proposed framework significantly improves the practical inference efficiency of deep networks. For example, the average latency of a ResNet-101 on the ImageNet validation set could be reduced by 36% and 46% on a server GPU (Nvidia Tesla-V100) and an edge device (Nvidia Jetson TX2 GPU) respectively without sacrificing the accuracy. Code is available at https://github.com/LeapLabTHU/LASNet.
Accept
The paper proposes latency-aware spatial-wise dynamic neural networks under the guidance of a latency prediction model. Reviewers arrived at a consensus to accept the paper.
train
[ "gR79srqyTk", "mkiq_dx3bGh", "EZfOuGqbanv", "iNvtjdMj8ra", "Xl3cgaDHkHY", "_znD4W2JmHq", "mA_3B-eQ1oo", "ZeRzlERlxl4", "iA9EoCGz0K", "pGhm1vB_uEY", "7fWrBv6caS", "TtnqGqrV1nv", "oq7oqhMKml", "MgtixVVmfpp", "SqzOlA5e9g", "L1DNLhxnDdK", "yGC04AvrBF4", "E5r8zz_pc-y", "OG1T3DO0pWf" ]
[ "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nWe wanted to reach out to see if our most recent reply and paper revisions have addressed the concerns. With only ~17 hours left in the rebuttal period, we were hoping that the reviewer could confirm that the revisions have addressed the concerns about the problem statement, further clarify the concerns, and/or provide actionable suggestions for further improving the paper. We would be happy to further revise the paper to clarify any concerns or continue the discussion regarding any remaining concerns.\n\nWe thank the reviewer for all their help so far in improving the paper!", " Thanks for your appreciation, we will polish the code and release it upon acceptance.", " Thanks authors for providing detailed answers. I will tend to keep my initial score. It will be critical to release the code as I think community will benefit from latency predictor even for full layers with no spatial adaptivity. ", " Thanks again for your time!\n\nWe kindly ask the reviewer to reassess the paper in light of this comment if it clears things up. We are happy to answer more if you have any remaining concerns or questions.", " Thanks for your appreciation!", " The response addressed most of my concerns. I will raise my rating if no major issue is found by other reviewers.", " Thanks for replying!\n\n- First, we would like to clarify that the most significant difference between our predictor and the mentioned estimators is that:\n - The mentioned estimators are built based on the latency data **tested** on targeted hardware devices. The estimators are implemented either as **a lookup table or a trainable neural network**. These approaches are practical only for **static** networks, as all the static operations are well supported by existing libraries (e.g., cuDNN). We assume it is more appropriate to call it \"**testing and querying/approximating** the latency\" by treating the hardware as a **black box** in the NAS community.\n - In contrast, we predict the latency by treating hardware as a **white box** and **modeling its detailed behaviors and properties**.\n - **Why**? Currently, no software library provides the efficient implementation of our **dynamic** operators. An efficient implementation of the dynamic operators requires laborious code optimization for each granularity (S), each activation rate (r), and each hardware device. It would be quite **time-consuming and engineering** if we deploy the dynamic operators on specific devices and test the latency as in the NAS community.\n - **How**? We are able to accurately predict the latency because we comprehensively consider the cost of both **data movement** and **parallel computation** in the latency prediction procedure. We model a hardware device with a three-level architecture (Fig. 3 of the paper) and estimate the latency based on detailed hardware parameters, such as the frequency, the number of processing engines, the bandwidth, etc., as mentioned in the supplementary material.\n- The advantages of our predictor include the following aspects:\n - **Dynamic operators are supported for the first time**. As mentioned before, the current libraries do not support dynamic operators yet. Our latency prediction model enables the analysis of dynamic operators for the first time;\n - **Efficiency**. We can conveniently compare the performance of different granularity settings on arbitrary hardware devices **without a laborious deployment** (especially for dynamic operators). 
Furthermore, we only need to adjust the hardware parameters to obtain the performance of an operator on different hardware devices;\n - **Our predictor can serve as an analytic tool**. We can easily analyze the key factor in the latency of each operator **(e.g., memory bounded or computation bounded)**. However, the mentioned estimators in the NAS community can only provide the overall latency result and tell us which static operator is faster. Our predictor can **guide** the efficient implementation (e.g., the operator fusion) of dynamic operators. For example, when the memory access (data movement) contributes more to the overall latency of the masker and conv1, we can merge them to reduce the memory access cost. In this way, the overall latency can be reduced even if the computation is increased.\n- Of course, our latency prediction model **can also be easily used in NAS**.\n- Moreover, inspired by your comment, we believe that our proposed predictor can facilitate the automatic searching of dynamic networks (**NAS for dynamic networks**). For example, we could include dynamic operators in the search space. This could further expand our work since we only focus on the granularity of spatially adaptive computation in this paper. More types of dynamic operators (e.g., channel skipping) will be supported in the future.\n\n", " Thank you for the thorough rebuttal and useful insights provided.\n\nGiven the updated description you provided on the latency predictor, can you elaborate more on its benefits compared to other estimators, mostly examined in the NAS community, e.g.:\n\n[1] Cai, Han, Ligeng Zhu, and Song Han. \"Proxylessnas: Direct neural architecture search on target task and hardware.\", ICLR 2019\n\n[2] Dai, Xiaoliang, et al. \"Chamnet: Towards efficient network design through platform-aware model adaptation.\", CVPR 2019\n\n[3] Li, Chaojian, et al. \"Hw-nas-bench: Hardware-aware neural architecture search benchmark.\", ICLR 2021\n\n[4] Wu, Bichen, et al. \"Fbnet: Hardware-aware efficient convnet design via differentiable neural architecture search.\", CVPR 2019", " We thank all reviewers for their valuable comments! We are encouraged that the reviewers appreciate that:\n\n1. Our work is well-motivated, which tries to solve a valuable and practical problem (all reviewers);\n2. The proposed latency prediction model will be useful to the community (Reviewer 3HuP);\n3. Our experimental results validate the effectiveness of our method (Reviewer BmLK), and the empirical analysis is insightful (Reviewer MLaG).\n\nWe have addressed the raised concerns as follows:\n\n1. A more detailed description of our latency prediction model is provided in the paper (Section 3.3) and the supplementary material;\n2. The detailed experimental settings and the speed testing environments are presented in the supplementary material;\n3. Experiment results on the instance segmentation task are included in the supplementary material;\n4. The language and the presentation of Figure 2 are improved.\n\nNext, we address each reviewer's detailed concerns and questions point by point. We hope we have addressed all your concerns. Discussions are always open. Thank you!", " Thanks for your constructive comments. The mentioned weaknesses and questions are addressed as follows:\n\n### Weaknesses\n\n> The novelty of the proposed approach is limited, as it directly generalises a widely studied problem, going from pixel-level to block-level. 
Similar generalisation in the context of early-exit networks has recently been studied in the literature:\n>\n> - Liu, Zhuang, et al. \"Anytime Dense Prediction with Confidence Adaptivity.\" International Conference on Learning Representations. 2022.\n\n- Actually, our work significantly differs from the mentioned reference (and many previous works) in the following aspects:\n - **The studied problems and the motivations are totally different:** the mentioned reference proposes a **hardware-agnostic** algorithm for semantic segmentation. In contrast, we explore a problem which is rarely studied by previous researchers: improving the **practical** efficiency of dynamic backbone networks by **simultaneously considering the algorithm, the scheduling strategy and the hardware properties**.\n - **The research approaches are different:** the mentioned work performs early exiting in segmentation networks, while our LASNet is more flexible and general, and can be implemented in arbitrary backbone architectures. \n - Moreover, we propose a **latency prediction model** to efficiently estimate the real latency of dynamic operators, and the algorithm design is guided by this latency predictor rather than the theoretical computation as done in previous works. To the best of our knowledge, we are the first to propose a latency prediction model which supports dynamic operators.\n - **The products of our works are different:** like many previous works, the mentioned paper proposes an algorithm/network architecture. However, our main contribution is the **latency-aware co-designing framework**, which could be widely used by other researchers for designing various types of static/dynamic networks in the future.\n\n\n\n> The description of the latency prediction model (Sec 3.3) is very high-level, making it hard to justify its contribution.\n\n- Because the latency prediction model considers many hardware details (please also refer to our response to Question 1), we did not describe these details in the paper for simplicity. We have **updated the paper (Sec 3.3) and the supplementary material to describe more details of the latency prediction model**, and the code will be released upon acceptance.\n- To summarize, **the latency prediction model estimates the latency of a dynamic operator by considering both the data movement latency and the computation latency.** This latency prediction model enables us to **accurately estimate the real latency of dynamic operators, which has not been explored by other researchers as far as we know**. The predictor plays an important role in our latency-aware co-design framework, as **it guides both our algorithm design and the scheduling optimization**.\n\n\n\n> Some interesting aspects of this work (e.g. the generic formulation combining spatial computation skipping and layer skipping - Sec 3.2) are only theoretically mentioned, but not experimentally studied.\n\n- We omitted the detailed discussion for S=56-28-14-7 (the coarsest granularity) in the paper because spatially adaptive inference reduces to a layer skipping operation in this situation. However, the main purpose of this paper is to explore the practical efficiency of the former paradigm. \n- We empirically compare our models with the mentioned variant (with similar computational cost). We observed that although this variant is slightly faster (13ms) than our LAS-ResNet-50 (S=4-4-2-1, 16ms), its accuracy is significantly degraded from **76.6%** to **76.1%**. 
\n- Moreover, we have observed that when consuming similar computational cost and achieving similar accuracy (~78.8%) on ImageNet, our LAS-ResNet101 with S=4-2-2-1 (the proposed coarse-grained spatially adaptive computation, t=0.5, AP^box: **41.0%**, AP^mask: **37.0%**) significantly outperforms the variant with S=56-28-14-7 (layer skipping, t=0.5, AP^box: **40.4%**, AP^mask: **36.5%**) on COCO instance segmentation. When achieving comparable performance, the layer skipping scheme (S=56-28-14-7, t=0.6, AP^box=40.9%, AP^mask=36.8%) will run slower than our model (S=4-2-2-1, t=0.5, AP^box=**41.0%**, AP^mask=**37.0%**) by ~24.4% on TX2 (**535.6 ms** vs **404.7 ms**).\n- Based on the above experiments, we conjecture that **the extreme situation of dropping an entire feature map may be too aggressive, which will degrade the network performance.** In contrast, our spatially dynamic computation extracts better dense features, which is beneficial to both classification and downstream tasks. Therefore, we mainly focus on exploring the granularity settings in the spatial-wise dynamic computation paradigm.", " \n\n> Some of the claimed contributions and ablation studies mostly comprise technical/implementation aspects (Sec 3.4).\n\n- As mentioned before, we tackle an unexplored (yet very important) problem in dynamic neural networks. **The main contribution is not just designing a new network architecture, but proposing our latency-aware co-designing framework.** This has also been appreciated by the other two reviewers. Moreover, we have proposed **a new analytic method (latency prediction model) to efficiently and accurately evaluate the real latency of dynamic operators**. This model **guides** our implementation improvements (Sec. 3.4) and experiment design (Sec. 4.2). We believe that the overall framework can not only provide insights for the research community, but also facilitate the deployment of dynamic deep networks in practice.\n\n\n\n> Evaluation is focused on Classification/detection, but it is unclear how the proposed methodology can be applied to more dense CV tasks, such as instance/semantic segmentation. The reviewer believes that this mapping won't be intuitive and this should be discussed in the limitations of the proposed approach.\n\n- First, it has been proved by many previous works (e.g., references [7,27,31] in the paper) that spatial-wise dynamic networks can be easily applied to dense prediction tasks. Our method can also work on segmentation tasks, as it is very convenient to substitute the backbone network in popular frameworks (such as Mask R-CNN) with our LASNets. Since our main contribution is not designing a new powerful backbone, we did not report the experiment results on dense prediction tasks in the original submission.\n- We have tested our method in the Mask R-CNN framework on COCO instance segmentation. The results are listed in the table below, and are included in the supplementary material (Table 3). It can be observed that our LAS-ResNet101 significantly outperforms the baseline model. For example, to achieve comparable performance in AP^mask and AP^box, our LAS-ResNet101 (S=4-4-2-1, t=0.4) runs **~51% faster** than the static ResNet101 on TX2. 
Moreover, LAS-ResNet101 (S=4-4-2-1, t=0.5) can improve the AP^box and AP^mask of the baseline by 1.0% (40.0% to **41.0%**) and 0.9% (36.1% to **37.0%**) while still running faster on GPUs.\n\n| Segmentation Framework | Backbone | Backbone FLOPs (G) | Backbone Latency (ms) on V100 | Backbone Latency (ms) on GTX1080 | Backbone Latency (ms) on TX2 | AP^mask (%) | AP^box (%) |\n| :--------------------: | -------------------------------------- | ------------------ | ----------------------------- | -------------------------------- | ---------------------------- | ----------- | ---------- |\n| | ResNet-101 (Baseline) | 141.2 | 39.2 | 118.0 | 720.7 | 36.1 | 40.0 |\n| | LAS-ResNet-101 (S_net =4-4-2-1, t=0.4) | 69.2 | 42.4 | 81.2 | **352.6** | 36.1 | 40.0 |\n| Mask R-CNN | LAS-ResNet-101 (S_net =4-4-2-1, t=0.5) | 80.5 | 48.5 | 93.3 | 404.7 | **37.0** | **41.0** |\n| | LAS-ResNet-101 (S_net =4-4-7-1, t=0.4) | **68.8** | **33.1** | **76.1** | 391.6 | 36.2 | 40.0 |\n| | LAS-ResNet-101 (S_net =4-4-7-1, t=0.5) | 82.0 | 38.0 | 88.3 | 457.4 | 36.6 | 40.3 |\n\n\n\n> Presentation issues\n\n\n- We have revised the paper and the supplementary material to include more details. The language will be further polished.", " ### Questions\n\n> How is the latency predictor implemented? Is it trainable, or based on an analytical performance model / cycle-accurate simulator? Can it capture the effect of more complex operations, such as dilated or separable convs, skip connections, etc.?\n\n- The latency predictor is an analytical performance model and is not trainable. We model the behavior of the accelerators and we define each operation using nested for-loops. We map the operation to hardware using various schedules, evaluate the latencies, and choose the optimal schedule. The model considers the number of PEs, the off-chip memory bandwidth, the on-chip memory bandwidth, and the throughput of FP32 computation of a PE to calculate the data transfer time and the computation time. **We omitted these details in the main paper for simplicity, and have updated the paper and the supplementary material to include a more detailed description of our latency prediction model.**\n- We currently only implement the operations required by our experiments. It supports the group convolution in RegNets (and separable convolutions, of course). Skip connections are supported. Dilated convs will be included in the future.\n\n\n\n> The case where a large S can emulate layer skipping should be considered in the ablation.\n\n- Please see our response to Weakness 3.\n\n\n\n> How would the proposed approach be applied to dense tasks, such as semantic segmentation, dense prediction, etc.?\n\n- Please see our response to Weakness 5.\n\n\n\n> Is there any benefit in allowing overlapping pixel blocks to be considered during masking? Qualitative results indicate that such a setting may allow for larger values of S.\n\n- An interesting question. As far as we know, this paradigm has not been studied by researchers in this field. To our understanding, the proposed formulation will bring more flexibility in spatially adaptive inference, but may also have some implementation issues, since the computation of a pixel will be determined by multiple elements in a mask. This might raise considerable challenges in searching for the optimal scheduling during inference. Moreover, the difficulty of training the dynamic model will be increased, as the current Gumbel Softmax reparameterization technique might be impractical in this situation. 
Moreover, as our results in Fig 7 show, it's not always beneficial to increase S, especially on less powerful devices such as TX2 and Nvidia Nano. All in all, we believe this is a topic worth exploring in the future.\n\n\n\n### Limitations\n\n> Some limitations are discussed in the conclusion section. The authors are encouraged to enhance this discussion.\n\n- A more detailed analysis is updated in the supplementary material.", " Thanks for the positive comments. Our responses to your mentioned weaknesses are as follows:\n\n\n\n### Weaknesses\n\n> The framework can only be applied to models with ResNet blocks whose major computations are located in the 3x3 conv of the network blocks. I don’t think it works well for other efficient models such as MobileNet and ShuffleNet that do not have heavy 3x3 convs.\n\n- We would like to clarify that the effectiveness of our method is **not affected by the shape of convolution**, because **our method makes use of the spatial redundancy in feature maps, rather than the structure redundancy in convolutional kernels**. Whatever the convolutional kernel size is, the redundant computation on less informative regions can be skipped. In other words, the computation of both 3x3 convs and 1x1 convs can be reduced during inference.\n\n- Actually, the **RegNets that we experimented on do not have heavy 3x3 convolutions**. We compute the FLOPs that each layer consumes in a RegNetY-800M, and the results show that the 3x3 convs only take ~**20%** of the computation in most blocks (and the 1x1 convs take ~80%). Our results in Fig. 7 (b) validate that **our method works well on such efficient models without heavy 3x3 convs, and the LAS-RegNets outperform MobileNets in both theoretical and practical efficiency**.\n\n- We also tested our method on MobileNet-v2. When achieving similar accuracy (~72.1%) on ImageNet, **our conclusion that coarse-grained spatially adaptive inference (S>1) achieves better efficiency than the previous methods (S=1) still holds**. For example, the inference latencies of our LAS-MobileNet-v2 with S=8-4-7-1 and S=1-1-1-1 are 5ms and 7ms, respectively (the former is 28.5% faster), on TX2 GPU. \n\n\n\n> Only GPU platforms are tested. Does this approach generalize to other types of hardware such as CPU?\n\n- We would like to point out that it has been validated by many previous works (references [7,27,31] in our paper) that the realistic speedup of spatially adaptive computation on CPUs **can already be achieved**, because there is a strong correlation between latency and FLOPs on CPUs. In contrast, **the realistic speedup on GPUs remains a more challenging problem and is rarely explored by researchers**. Therefore, we mainly focus on the GPU platform in this paper. We have updated the introduction section (line 41-43) to clarify this.\n- Of course, our models can achieve realistic speedup on CPUs. For example, we have conducted speed tests on an Intel(R) Xeon(R) CPU (E5-2698 v4 @ 2.20GHz). The results suggest that the **practical speedup ratio of our coarse-grained spatially adaptive computation (LASNets with S>=2) generally matches the theoretical speedup ratio**. In contrast, the speedup ratio of the finest granularity (S=1) lags behind the theoretical results by 30%~60%. A more comprehensive empirical study will be included in the paper.\n\n\n\n> For the speed comparison, what kind of CUDA runtime environment are you using for the experiments? What are the CUDA version and cuDNN version? Are you using TensorRT? 
These details are not given in the main text.\n\n- The inference code is implemented in C++/CUDA. The CUDA version is 11.6. Because **cuDNN and TensorRT do not yet support some operations required by dynamic inference** (patch-level scatter-add and gather-conv3x3 operations), we implement the dynamic kernels without cuDNN or TensorRT. The baseline (static) models used for the speed comparison are also tested in our own implementation. According to our experiments, **our implementation of static operators runs ~16% faster than cuDNN-implemented static operators.** Therefore, the advantage of our dynamic operators over the cuDNN-implemented static operators is more significant.\n- These details have been included in our updated supplementary material, and we will release the code upon acceptance. ", " Thanks for your detailed comments. The mentioned questions are addressed as follows.\n\n### Weaknesses\n\n> Knowledge distillation.\n\n- First, we would like to clarify that we obtained the baseline results from our reference [27] (dynConv). Moreover, the dynConv method could be seen as one variant (S=1-1-1-1, which is trained with exactly the same strategy as the other S settings) in our framework. Therefore, we believe that the comparison (especially among our models with different granularity settings) in terms of the trade-off between accuracy and efficiency is fair.\n- Second, we have validated that under the same setting, our models are still more efficient while achieving comparable accuracy. We finetuned the original pre-trained ResNet-101 with the same teacher for distillation. After being finetuned for 100 epochs, the original ResNet-101 achieves **78.9%** Top-1 accuracy while consuming ~**7.8G** FLOPs (**38ms** on TX2). In contrast, our dynamic model (S=4-4-2-1) achieves **78.8%** accuracy while consuming ~**5.6G** FLOPs (**28ms** on TX2). It can be observed that even with the same training strategy, our dynamic models can significantly improve the inference efficiency of the original model when yielding comparable accuracy.\n\n> Training recipe.\n\n- Actually, the training pipelines of spatial-wise dynamic models adopted by previous works vary a lot. There are mainly two lines: one is the pretrain-finetune paradigm adopted in SACT [5] and dynConv [27], and the other trains the models from scratch, but for more (200) epochs for better convergence [31, 7]. As our model architectures are most similar to those in dynConv [27], **we follow the training setup in [27]** to avoid the \"dead residual problem\". We conjecture that the training of spatial-wise dynamic models is generally more difficult than that of the layer skipping scheme in Conv-AIG, since the networks are required to make more complex decisions.\n- We use the standard data augmentation (normalization, RandomResizedCrop, and RandomHorizontalFlip), which is the same as in most compared methods, and adopt a cosine-shaped decay scheduler for the learning rate. We have updated the supplementary material to clarify this.\n- Finally, we would like to emphasize that **all our models with different \"granularity\" settings are trained with the same recipe**. Therefore, the analysis in the experiment section and **our main conclusion that a properly \"coarse granularity\" (S>1) leads to better efficiency on GPUs is not affected by the training strategy**. \n\n> There are questions in the section before that are not clear. 
Understanding them will help to evaluate the paper better.\n\n- We have updated the supplementary material (Section B.2) to include the aforementioned training details.\n\n\n\n### Questions\n\n> What does the hardware parameter vector H look like?\n\n- The considered **hardware properties** include the number of processing engines (PEs), the off-chip memory bandwidth, the on-chip global memory bandwidth, and the throughput of FP32 computation of a PE. We omitted these details in the main text due to the page limit. **We have updated the paper (Section 3.3) and the supplementary material to include a more detailed description of our latency prediction model.** 
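To make the role of these four parameters concrete, here is a minimal roofline-style sketch of how an analytical predictor could combine them. This is an illustration added for clarity, not the authors' actual implementation: the variable names, the example numbers, and the simple max(compute, memory) latency model are assumptions, while the real predictor described above additionally searches over schedules and models a three-level memory hierarchy.

```python
from dataclasses import dataclass

@dataclass
class Hardware:
    num_pe: int          # number of processing engines (PEs)
    pe_flops: float      # FP32 throughput of one PE, in FLOP/s
    offchip_bw: float    # off-chip memory bandwidth, in bytes/s
    onchip_bw: float     # on-chip global memory bandwidth, in bytes/s

def predict_latency(flops, offchip_bytes, onchip_bytes, hw):
    """Roofline-style bound: an operator is limited either by parallel
    computation or by data movement, whichever takes longer."""
    t_compute = flops / (hw.num_pe * hw.pe_flops)
    t_memory = max(offchip_bytes / hw.offchip_bw, onchip_bytes / hw.onchip_bw)
    return max(t_compute, t_memory)

# Illustrative V100-like numbers (assumptions, not measured values).
hw = Hardware(num_pe=80, pe_flops=2.0e11, offchip_bw=9.0e11, onchip_bw=3.0e12)
r = 0.5  # activation rate: fraction of spatial positions kept by the masker
print(predict_latency(r * 7.4e9, 5.0e7, 2.0e8, hw))
```

As r shrinks, the compute term falls linearly while the memory terms do not, so sufficiently sparse blocks become memory-bounded — matching the earlier observation that those are exactly the operators worth fusing.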
When achieving similar ImageNet accuracy (78.8%), LAS-ResNet101 (S=4-4-2-1, AP^box: **41.0%**, AP^mask: **37.0%**) significantly outperforms the layer skipping variant (S=56-28-14-7, AP^box: **40.4%**, AP^mask: **36.5%**).\n- Based on the above analysis, we conjecture that **the extreme situation of dropping an entire feature map may be too aggressive. Therefore, we mainly explore the granularity settings in the spatial-wise dynamic computation paradigm.**\n\n\n\n> Explain how sparsity is set of different layers and why S is different for different models and layers in section 4.3.1\n\n- As mentioned in our response to Question 2, **the sparsity is automatically calculated** based on the output of the maskers (please also refer to Sec 3.1, line 116 of our paper). We use a **regularization item in the loss function** (line 203 of the paper) to **control the overall computational cost of a network**, which leads to different sparsity in different layers.\n- As we describe in Section 4.2 (line 244-250), the **speedup ratios of different S settings are analyzed based on our latency prediction model**. We plot the relationship between speedup ratio and S for different blocks in Fig 6 of the paper, and a proper S is chosen for different blocks and models to achieve a better trade-off between latency and flexibility.\n\n\n\n> Implementation of the 2 branch model in Figure 2 is not clear. If the mask is adaptive then there might be more than 1 candidate with sparse convolution, how would Gumbel softmax sample them? How M is regularized?\n\n- **We have updated the presentation of Fig.2 in the paper**. In fact, our model is not a two-branch architecture. The two \"branches\" in the original Fig. 2 denote the training-time masking scheme and the test-time operator fusion, respectively.\n- In our case, **each element of a mask makes a binary decision** on whether to allocate computation to a pixel/patch on the feature map. A masker takes the input feature as input and adaptively decides which locations should be computed. Gumbel Softmax is a commonly used technique that facilitates the end-to-end training of such \"discrete\" decisions. \n- As we mentioned above, a regularization item is used in the loss function to control the overall computational cost of a network.", " > How is latency computed in Figure 7? Was the setup the same (operator fusion etc) for all models considered? Those questions raise because skipping the entire block as in Convolutional-AIG should be more efficient in terms of latency reduction.\n\n- The overall latency is obtained by **summing up the latency of all the blocks in a network**. The latency of each block is estimated using the latency prediction model by considering the latency of both data movement and computation. More detailed description of our latency prediction model is included in the updated paper and supplementary material.\n- **The setup of operator fusion is decided based on the averaged sparsity of a block.** For example, when the sparsity is high (very few pixels are selected), the latency bottleneck would be memory access rather than computation. In this situation, it would be more efficient to conduct operator fusion. We calculate the averaged sparsity of each block on the ImageNet validation set and decide whether to fuse some operations. 
This is practical thanks to our proposed latency prediction model, which helps us to efficiently analyze the latency bottleneck.\n- Although skipping the entire block as in Conv-AIG (or our coarsest granularity S=56-28-14-7) is easier to implement for fast inference, it might degrade the network performance (please also refer to our response to Question 5). Note that in our experiments for the variant of S=56-28-14-7 (which is similar to Conv-AIG), the operator fusion is considered in the same way as other granularity settings.\n\n\n\n> Where was the framework implemented? If it was C++ then comparing with cuDNN and TesnorRT would be valuable as there is additional model optimization.\n\n- The latency predictor is implemented in Python and the inference code is implemented in C++/CUDA. Because **cuDNN and TensorRT have not supported the dynamic operators in our method**, we can only conduct comparisons in our framework. Our results have shown that **the implemented dynamic operators run faster than the static operators**.\n- We have also compared the static operators implemented in our framework with cuDNN. The results show that **our implementation of static operators is also more efficient than the cuDNN library**. For example, our implementation of a 3x3 convolution layer in the first stage of a ResNet runs faster than a cuDNN-implemented layer by ~16%. \n- Based on the above analysis, the conclusion is that **the dynamic operators (our implementation) outperform the static operators (our implementation), and the later is faster than cuDNN-implemented static operations**. Therefore, the advantage of our dynamic operators over the cuDNN-implemented static operators is actually more significant.\n\n\n\n> Will the code be released? The latency prediction model might be useful to community for future research.\n\n- Thanks for your appreciation. We will release the code upon acceptance.\n\n\n\n> In the first section, the coarse granularity is mentioned. It would be helpful to explain it once it appears in the text.\n\n- Thanks, we have updated the paper (line 57-59) to explain this.\n\n\n\n### Limitations\n\n> Limitations are not listed by authors\n\n- Actually, we discussed the limitation at the end of the paper. A more detailed analysis is included in the updated supplementary material.\n\n\n\n> There are seem to be a possible a set of hyper-parameters that need to be tuned like S etc.\n\n- As mentioned in our response to Question 6, **S is decided based on our latency analysis experiments in Section 4.2. This can be achieved thanks to our proposed latency prediction model.** Other hyper-parameters are quickly decided in our early experiments.\n\n\n\n> Some details are missing and clarifying them will be helpful.\n\n- Thanks, we have updated the paper and the supplementary to include more details. The presentation will be further polished.\n\n\n\n> It is not clear how different convolution implementations (like Winograd) will benefit from the work.\n\n- Winograd can benefit from our coarse-grained dynamic inference. For 3x3 convolution, we can use the Winograd algorithm when the input of a patch has 4x4 or more pixels (S>=2). Because the normal implementation of convolution is the most commonly used, we now only experiment on the normal implementation of convolution. 
Acceleration in other implementations is one of the interesting research directions which could be explored in the future.", " Paper tackles the problem of dynamic inference where the forward graph will depend on the input data. It focuses on real speedups instead of theoretical one, this makes the work more valuable. There is a latency prediction model that can estimate the latency by considering the algorithms, scheduling and hardware properties. Experiments are performed on image classification and demonstrate latency reduction of 23 to 45% depending on the hardware superiority. Strengths:\n\n+ Dynamic inference is a field of great interest. Naturally, people spent different effort to perform vision tasks so should deep network. \n+ The method considers the real latency and not FLOPs. Additionally, demonstrating real speed-up is greatly appreciated. \n+ Multiple paths are considered during training, and a single path is executed during inference. \n+ Pare is well written and the content comes smoothly. \n\nWeaknesses:\n\n- Using distillation during fine-tuning will increase accuracy and might be not fair comparing to the original model.\n- When compared to the previous work like Conv-AIG, the training recipes might be different (authors start from trained model and do 100 more epochs). The over training pipeline is not described as well, things like augmentations, learning rate scheduler etc should be mentioned. If those are not the same (as number of epochs) then authors should perform a fait comparison with those being the same. \n- There are questions in the section before that are not clear. Understanding them will help to evaluate paper better. - How does hardware parameter vector H look like?\n- The activation factor $r$, is it preselected or computed automatically? What would be con\\pros of both solutions?\n- What is the architecture of masker? \n- Is the mask the same for all input/output channels? Or it does vary? \n- Why the case of S=56 is not considered? If my understanding is correct then the entire feature map will be removed which is helpful to reduce latency. \n- Explain how sparsity is set of different layers and why S is different for different models and layers in section 4.3.1 \n- Implementation of the 2 branch model in Figure 2 is not clear. If the mask is adaptive then there might be more than 1 candidate with sparse convolution, how would Gumbel softmax sample them? How M is regularized? \n- How is latency computed in Figure 7? Was the setup the same (operator fusion etc) for all models considered? Those questions raise because skipping the entire block as in Convolutional-AIG should be more efficient in terms of latency reduction. \n- Where was the framework implemented? If it was C++ then comparing with cuDNN and TesnorRT would be valuable as there is additional model optimization. \n- Will the code be released? The latency prediction model might be useful to community for future research. \n\nMinor:\n\n- In the first section, the coarse granularity is mentioned. It would be helpful to explain it once it appears in the text. - Limitations are not listed by authors\n- There are seem to be a possible a set of hyper-parameters that need to be tuned like S etc. \n- Some details are missing and clarifying them will be helpful.\n- It is not clear how different convolution implementations (like Winograd) will benefit from the work.\n", " This paper presents a dynamic inference paradigm based on selective inference of convolutions on the spatial dimension. 
Specifically, for each convolutional block, it uses a masker layer to predict the masked region of the feature map and only conduct convolution on the masked region. Such convolution has less computation than a full convolution and improves efficiency of the model. Moreover, it utilizes a latency prediction model to estimate the latency of different mask operations. With a differentiable training framework, the model can be optimized according to configurations different hardware platform. Strength:\n-\tAlthough masked inference is not a new idea, this work targets at a more practical inference speed optimization compared to existing dynamic-masking approaches. Several critical modifications for the real efficiency of this approach are adopted in the inference pipeline: fused operations, granularity choices of the mask etc.\n-\tExperiments with ResNet and RegNet validates the effectiveness of this method on several tasks: ImageNet classification and object detection. It reduces inference time by 23% and 45% on V100 and TX2 platform respectively without significant drop on accuracy.\nWeaknesses:\n-\tThe framework can only be applied to models with ResNet block whose major computations are located in the 3x3 conv of the network blocks. I don’t think it work well for other efficient models such as MobileNet and ShuffleNet that does not have heavy 3x3 convs.\n-\tOnly GPU platforms are tested. Does this approach generalize to other types of hardware such as CPU? \n-\tFor the speed comparison, what kind of cuda runtime environment are you using for the experiments? What are the cuda version, cudnn version? are you using TensorRT? These details are not given in the main text.\n See above. Yes", " This submission introduces a spatially dynamic neural network approach, developed in a latency-aware manner that can generate realistic speed-ups during inference. The proposed methodology allows to dynamically skip computation at different spatial regions of the feature maps at a coarser granularity (of blocks of pixels), directly generalising the widely studied pixel-level approaches from the literature. This leads to more regular computation pattern and reduces the overheads, to maximise the attainable inference speed gains. Additionally, the granularity of the pixel blocks is tuned in a latency-aware manner to balance the speed-accuracy trade-off. Results indicate that the proposed approach is effectively improving efficiency, across hardware platforms and models for image classification and object detection tasks. Strengths:\n- The paper addresses a very important real-world issue, which is well-motivated and demonstrated experimentally.\n- The proposed approach, although rather simple, is able to generate realistic speed-ups with no effect in accuracy, showcasing improved efficiency compared to baselines.\n- Useful insights are provided in the experiments analysis.\n\nWeaknesses (see questions for more details):\n- The novelty of the proposed approach is limited, as it directly generalises a widely studied problem, going from pixel-level to block-level. Similar generalisation in the context of early-exit networks has recently been studied in the literature:\n - Liu, Zhuang, et al. \"Anytime Dense Prediction with Confidence Adaptivity.\" International Conference on Learning Representations. 2022.\n- The description of the latency prediction model (Sec 3.3) is very high-level, making it hard to justify its contribution . 
\n- Some interesting aspects of this work (eg the generic formulation combining spatial computation skipping and layer skipping - Sec 3.2) are only theoretically mentioned, but not experimentally studied.\n- Some of the claimed contributions and ablation studies are mostly comprise technical/implementation aspects (Sec 3.4).\n- Evaluation is focused on Classification/detection, but it is unclear how the proposed methodology can be applied to more dense CV tasks, such as instance/semantic segmentation. The reviewer believes that this mapping won't be intuitive and this should be discussed in the limitations of the proposed approach.\n\nPresentation\n- Syntax/grammatical errors exist sporadically across the manuscript, mostly in Sec1, 3.3 and 4.5\n- Some aspects of the proposed methodology are described in rather high-level, making it impossible to reproduce. 1. How is the latency predictor implemented? Is it trainable, or based on an analytical performance model/ cycle accurate simulator? Can it capture the effect of more complex operations, such as dilated or separable convs, skip connections etc?\n2. The case where a large S can emulate layer skipping should be considered in the ablation.\n3. How would the proposed approach be applied to dense tasks, such as semantic segmentation, dense prediction etc ?\n4. Is there any benefit on allowing overlapping pixel blocks to be considered during masking? Qualitative results indicate that such a setting may allow for larger values of S. Some limitations are discussed in the conclusion section. The authors are encouraged to enhance this discussion." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "OG1T3DO0pWf", "EZfOuGqbanv", "L1DNLhxnDdK", "ZeRzlERlxl4", "_znD4W2JmHq", "oq7oqhMKml", "ZeRzlERlxl4", "TtnqGqrV1nv", "nips_2022_JRAlT8ZstmH", "OG1T3DO0pWf", "OG1T3DO0pWf", "OG1T3DO0pWf", "E5r8zz_pc-y", "yGC04AvrBF4", "yGC04AvrBF4", "yGC04AvrBF4", "nips_2022_JRAlT8ZstmH", "nips_2022_JRAlT8ZstmH", "nips_2022_JRAlT8ZstmH" ]
nips_2022_sexfswCc7B
Learning to Break the Loop: Analyzing and Mitigating Repetitions for Neural Text Generation
While large-scale neural language models, such as GPT2 and BART, have achieved impressive results on various text generation tasks, they tend to get stuck in undesirable sentence-level loops with maximization-based decoding algorithms (\textit{e.g.}, greedy search). This phenomenon is counter-intuitive since there are few consecutive sentence-level repetitions in the human corpus (e.g., 0.02\% in Wikitext-103). To investigate the underlying reasons for generating consecutive sentence-level repetitions, we study the relationship between the probability of repetitive tokens and their previous repetitions in context. Through our quantitative experiments, we find that 1) Models have a preference to repeat the previous sentence; 2) The sentence-level repetitions have a \textit{self-reinforcement effect}: the more times a sentence is repeated in the context, the higher the probability of continuing to generate that sentence; 3) The sentences with higher initial probabilities usually have a stronger self-reinforcement effect. Motivated by our findings, we propose a simple and effective training method \textbf{DITTO} (Pseu\underline{D}o-Repet\underline{IT}ion Penaliza\underline{T}i\underline{O}n), where the model learns to penalize probabilities of sentence-level repetitions from synthetic repetitive data. Although our method is motivated by mitigating repetitions, our experiments show that DITTO not only mitigates the repetition issue without sacrificing perplexity, but also achieves better generation quality. Extensive experiments on open-ended text generation (Wikitext-103) and text summarization (CNN/DailyMail) demonstrate the generality and effectiveness of our method.
Accept
This paper investigates the source of repetition in text generation from a language model and presents a training method to mitigate this problem. Their experiments show the proposed method not only reduces repetition, but also improves generation quality. I think this is a good paper. All reviewers agreed with me so I recommend acceptance to NeurIPS.
train
[ "nIGNs_iSwAd", "hXN3tX2Ynat", "ItGMbRDqQUL", "CAGIFK8EsKj", "xLIJRg4uXfc", "ZjFAghnwJvH", "szhBPP31hZ", "2kas10y2IOs", "IAquLGyuuF3" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Reviewers have addressed all questions I had. I have increased the rating.", " We thank the reviewers for their insightful and constructive reviews. We first provide responses to several points raised by multiple reviewers. Responses to individual reviewers are provided below. **The Appendix that contains additional experiments is attached in the Supplementary Materials as a separate pdf file.**\n\n**Design of the loss function (Reviewers 95eQ and HDLP)**\n\nThanks for the insightful question. Indeed, we have tried other loss functions\nsuch as the MSE loss and the margin loss, and the experiments of other loss functions\ncan be found in Appendix C.\nIn particular, we have explained why the margin loss achieves inferior results. When trained with margin loss, -log( 1 - max(cur - alpha * prev, 0 )), the model is encouraged to learn a smaller probability than that of the last repetition. As shown in Figure 8, in order to minimize the margin loss, the model learns a ‘cheap’ solution, in which the probability of repeating the previous sentence is directly reduced to almost 0,\nregardless of the probability of the previous repetition. This clearly causes over-penalization: the model quickly reduces TP to close to 0 even if there is only one sentence repetition. Hence, the margin loss leads to worse performance than our loss function. \n\nThe motivation of our loss function (Eqn.1) is two-fold: 1) cur should not be much larger than alpha * prev to avoid the self-reinforcement effect; 2) cur should not be much smaller than alpha * prev to avoid over-penalization that directly reduces cur to an extremely low value as shown in Figure 8. Another possible choice is to restrict the value of cur to be in [lower\\_bound, pre*cur] to avoid over-penalization, it introduces an extra hyper-parameter\nlower\\_bound. Empirically, we find that the loss function in Eqn.1 works the best.\n\n**Regarding our explanation for why the sentence repetition occurs (Reviewer HDLP and tE1F)**\n\nIn this manuscript, we view the language model $P_{\\theta}(\\cdot|\\mathbf{x}_{<t})$ as a black box, investigate the relationship between the probability of the token and the number of repetitions in context, and propose a training-based method, DITTO, to avoid self-reinforcement effect and mitigate repetitions. \nOur main observations, and our explanation for why the sentence repetition occurs\ncan be summarized as follows:\n\n * Through our quantitative experiments across various corpus, we have several interesting observations: \n - ob1: The model tends to raise the probability of repeating the previous sentence. 
Specifically, even if there is only one sentence-level context repetition, the probability of repetition at the current step increases in most cases.\n - ob2: Self-reinforcement effect: the probability of repetition increases almost monotonically with the number of historical repetitions.\n - ob3: Sentences with higher initial probabilities usually have a stronger self-reinforcement effect.\n* Furthermore, combined with maximization-based decoding algorithms such as greedy decoding, we provide explanations about the reasons why sentence repetition occurs:\n - Since previous sentences have a high likelihood (generated by greedy search), they have more potential to be repeated according to ob1 and ob3.\n - Once the model repeats the sentence several times, the model would get stuck in the sentence loop due to the self-reinforcement effect according to ob2.\n\nWe admit that there must be deeper reasons for why the model raises the probability of repeating tokens from the perspective of model embeddings, neural network architecture or intrinsic characteristics of language. Our current analysis has not touched these aspects and hence we do not claim to have a deep and comprehensive explanation of this phenomenon. Several such directions are still under exploration and we leave them as important future work. To avoid over-claiming, we will discuss it in Sec.6.", " 1 **[Design of the loss function]**\n\nWe thank the reviewer for the suggestion of modifying the loss function. We refer the reviewer to the general response above, where we argue why our loss function is more suitable than other losses such as the margin loss.\n\n2 **[Coverage of the experiments]**\n\nWe thank the reviewer for the suggestion. In this paper, we follow the most relevant previous works, such as unlikelihood training (Welleck et al., 2019) and Straight to the Gradient (Lin et al., 2021), to conduct experiments on Wikitext-103 and CNN/DM for comparison, and perform detailed studies on hyper-parameters and different decoding lengths. Due to the limited time for rebuttal, we leave more experiments on other tasks as future work.\n\n3 **[Two-hop sentence repetition as pseudo data and more data augmentation techniques for constructing pseudo data]**\n\n In principle, we can view the two sentences as a group and then construct pseudo data by repeating the group for training. Following this advice, we conduct experiments and present the results in Appendix D.2 and Table 2. Although there is no significant difference in the results of Table 2, we believe that constructing more effective pseudo samples for training is an exciting and promising direction to further improve the learning ability and generalizability of language models, which is still under exploration.", " 7 **[Line 63-65: justification of the statement]**\n\nTo show that sentences with higher initial probabilities usually have a stronger self-reinforcement effect, we further conduct experiments to divide $D_{wiki}$ into 5 groups according to their initial probabilities and compare their results, as shown in Appendix D.1 and Figure 9. From the figure, it can be clearly seen that sentences with higher initial probabilities reach higher TP and WR as the number of repetitions increases, meaning that these sentences have higher probabilities of being repetitively generated (a stronger self-reinforcement effect). 
For decoding scenarios, if maximization-based decoding algorithms such as greedy decoding are employed, previously generated sentences have a higher initial likelihood (since these sentences themselves are selected with the maximization criterion) and thus have higher probabilities of being repeated.\n\n8 **[Line 89: reword 'statistically investigate', Line 123: reword 'surprise']**\n\nThanks for your comments. We have modified the word 'statistically' to 'quantitatively' and removed the phrase 'To our surprise' for a more precise expression.\n\n9 **[Lines 125-132: answer to the question 'Why do sentence repetitions occur?', Lines 133-137: answer to 'Why does the model get stuck in a sentence-level loop?', Lines 144-153: problems with the analyses]**\n\nIn this paper, we mainly explain why sentence repetition occurs from the probabilities of repeating previous sentences. However, we do not provide experiments to explain why the model raises the probability of repeating from the perspective of model embeddings or architecture. We refer the reviewer to the general response for a detailed discussion about it and about why the model gets stuck in the sentence-level loop. We believe that understanding why the model raises probabilities for repetition from the model and architecture perspective is an important research topic, which we are currently working on but which is out of the scope of this paper. To make our claim clear to readers, we will add more discussion in Sec.6 about the limitations of our current analysis and important future directions.\n\n\n10 **[Line 190-193: for UL-seq it is not true that it penalizes any repetitions]**\n\nWe have modified 'unlikelihood' to 'token-level unlikelihood training'. Thanks!", " 1 **[Related Work]**\n\nHe et al. [1] find that, when given the ground-truth context as the prefix or repetitive sentences generated by the model itself as the prefix, the LM can generate continuations with similar quality, which shows that the model has a self-recovery ability.\nHowever, during decoding, they adopt ancestral sampling to generate continuations rather than maximization-based decoding methods. When the model generates tokens by ancestral sampling such that the current sentence structure is different from previous sentences, the model stops repeating the sentence since their sentence-level contexts are different. Different from their findings, our analyses reveal that the probability of generating repetitive tokens will increase if they share the same sentence-level context. Thanks for the suggestion. We will cite the work and other relevant works mentioned in the comments, and discuss them in the related work section of the revised version.\n\n[1] Tianxing He, Jingzhao Zhang, Zhiming Zhou, James R. Glass. Exposure Bias versus Self-Recovery: Are Distortions Really Incremental for Autoregressive Text Generation? EMNLP 2021.\n\n2 **[Comparison with UL-seq with short input pseudo data]**\n\nUL (Welleck et al., 2019) is the first work on mitigating repetition during the training phase, and our work is also motivated by it. In our experiments, we exactly follow the original implementations of UL-seq. Following your advice, we re-train DITTO by shortening the repetitive sequence to 150 tokens and study whether long-sequence penalization is necessary to overcome the self-reinforcement effect. The results are shown in Appendix D.2 and Figure 10. From the results, we can find that long-sequence penalization is actually necessary. 
However, compared with UL-token+seq, DITTO still enjoys two benefits: 1) DITTO can directly feed longer sequences for penalization training without significantly increasing the computational cost, while UL-token+seq needs to auto-regressively generate sequences for penalization; 2) with a similar penalization length (150 tokens), DITTO is more effective at overcoming the self-reinforcement effect. We will add this discussion to the experimental sections of the revised paper. Thanks for your constructive suggestion!\n\n3 **[Loss Function]**\n\nWe thank the reviewer for the suggestion of modifying the loss function. We refer the reviewer to the general response above, where we argue why our loss function is more suitable than other losses such as the margin loss.\n\n4 **[SimCTG is a training-based method. Comparison with SimCTG.]**\n\nThanks for your advice. SimCTG is a contrastive framework to mitigate repetitions from both training and decoding. We have modified the description of SimCTG. To further resolve your concern, we conduct experiments to compare DITTO with SimCTG. Note that the results of SimCTG in their paper are not comparable since they use BPE-level tokens and implement their methods on the huggingface codebase. We reproduce their methods based on their public code on the fairseq codebase and run the experiments at the word level. The results are shown in Appendix D.3 and Table 4. With most decoding algorithms, including greedy decoding, top-k and nucleus sampling, DITTO achieves\nsuperior performance. In particular, for the top-k and nucleus decoding algorithms, DITTO achieves the highest MAUVE score of 0.96 with higher accuracy and lower perplexity, demonstrating the effectiveness of our method. \n\n5 **[Line 40-41: Difference with Holtzman et al.'s work]**\n\nHoltzman et al. present cases where the probability of a repeated phrase increases with each repetition. Since we find that the model has a stronger preference for consecutive sentence-level repetitions, this paper mainly focuses on the sentence repetition issue. Compared to their work, our analyses have several differences: 1) we conduct quantitative experiments\nto show that self-reinforcement holds across various sentences; 2) since we compare the self-reinforcement in various sentences, we find that sentences with higher initial probabilities have a stronger self-reinforcement effect; 3) combined with maximization-based decoding algorithms, we explain why sentence-level repetition occurs (see also the general response to all reviewers). In sum, we provide a more systematic and quantitative study of the repetition problem and provide an explanation of repetition in the decoding phase based on our analyses.\n\n6 **[Line 52-56: evidence of \"The cause ... may be that ...\", Line 61, 63, 64: using 'can' and 'may']**\n\nThanks for your advice. We will re-phrase these sentences in the revised version.", " 1 **[Exact explanation of why sentence repetition occurs and discussion of limitations]**\n\nWe thank the reviewer for the insightful comments. In this paper, we explain why sentence repetition occurs by investigating the relationship between the probability of the token and the number of repetitions in context. However, we admit that there must be deeper reasons for why the model raises the probability of repeating tokens from the perspective of model embeddings or neural network architecture. We refer the reviewer to the general response for a more detailed discussion. 
To avoid over-claiming, we will also discuss it in Sec.6.\n\n2 **[Appending the previous sentence to the repetitive sentences]**\n\nWe conduct experiments by appending the previous context to the repetitive sentences, as shown in Appendix D.2 and Table 3. From the table, we find that appending context as the prefix is slightly better in terms of repetition metrics. We will adopt it as the default method to construct repetitive sentences in the revised paper. Thanks for your advice.\n\n3 **[Relationship between Table 1 and Figure 4]**\n\nTable 1 presents the results of generating the next 100 tokens. The average length of sentences is 11.4, so the results of 100 tokens correspond to the results of TP, IP, and WR of 8.77 repetitive sentences. For 8.77 repetitive sentences, TP(UL-token+seq) < TP(UL-token) $\approx$ TP(SG), which is consistent with the ranking of repetition metrics in Table 1. However, we need to point out that the results of Table 1 and Figure 4 are from two different settings, where the former is obtained with auto-regressive decoding and the latter is obtained with teacher-forcing decoding by feeding pseudo repetitive sentences. Thus, their rankings may not be strictly the same.", " The paper investigates the repetition issue in natural text generation. The described loop issue is caused by the maximization-based decoding algorithms we use. Machine-generated text has a higher chance of being repetitive compared to human corpora. The paper starts with analyses of individual cases and then some more scalable experiments with a few standard metrics. It introduces a “self-reinforcement effect” where the model will be more likely to repeat if a similar context has appeared before. The more previous occurrences there are, the more likely it is to repeat. To tackle the problem, the paper proposes a simple and intuitive training objective, DITTO, to reduce the repetition. The paper conducts experiments on open-ended text generation and text summarization on CNNDM. \n The paper tackles a core challenge in NLG. The “loop” of the paper is complete and convincing. The proposed technique is inspired by experiments and analysis. The idea of the paper is generally clear and easy to follow. \nThe proposed technique is relatively simple to implement and use. \n\nIt compares with some prior work and shows its effectiveness. Human evaluation is also provided to validate the proposed technique. \n\nThe empirical results are good. The proposed method is competitive compared to prior work including unlikelihood training and StraightGradient.\n\nWeakness\n\nI have a major question regarding the function of DITTO (Equation 1).\nEquation 1: \n-math.log( 1 - abs(cur - alpha * prev))\nLet’s set alpha = 1 for now. \nIf the model is well-trained to avoid repetition, and the current probability for a target token v, P(x_{n,l} = v), is lower than the probability when it appeared before, P(x_{n-1,l} = v), then this is an ideal scenario in which the model did great. However, the equation actually penalizes this situation. \nHere is my experiment:\n(Good case) If cur=0.1, prev=0.5, alpha=1, the loss is 0.51.\n cur=0.1;prev=0.5;alpha=1\n-math.log( 1 - abs(cur - alpha * prev ))\n0.5108256237659907\n\n(Bad case) If cur=0.9, prev=0.5, alpha=1, the loss is still 0.51.\ncur=0.9;prev=0.5;alpha=1\n-math.log( 1 - abs(cur - alpha * prev ))\n0.5108256237659907\n\nIt doesn’t make sense to me that the loss is the same for both cases. In the “good” case, the model decreases the chance of repetition, so it should be rewarded rather than penalized. 
\n\nIf the alpha is set to be <1, it’s better than the case described when alpha=1, but I still think it’s unfair to penalize a model for not being repetitive. \ncur=0.1;prev=0.5;alpha=0.5\n-math.log( 1 - abs(cur - alpha * prev ))\n0.16251892949777494\ncur=0.9;prev=0.5;alpha=0.5\n-math.log( 1 - abs(cur - alpha * prev ))\n1.0498221244986778\n\n\nIf I understand the equation correctly, should it be max(_, 0) instead of abs(_)?\n\ncur=0.1;prev=0.5;alpha=1\n-math.log( 1 - max(cur - alpha * prev,0 ))\n-0.0\n\ncur=0.9;prev=0.5;alpha=1\n-math.log( 1 - max(cur - alpha * prev,0 ))\n0.5108256237659907\nTo conclude, the designed equation penalizes the model even when the current prediction has less “overlap” with the previous prediction. I am not sure if there is some misunderstanding of the equation. I apologize if there is, and I am open to discussion. Authors and reviewers are welcome to comment on this. \n\n\nThe coverage of the experiments could be further improved. There are only two datasets used in this paper, Wiki103 for LM and CNNDM for summarization. It would be better if the paper could provide more evidence on more datasets. \n Equation 1 is designed to penalize “one-hop” similarity. Let’s say you have a sequence: The orange is blue. The orange is red. The orange is blue. The orange is red. [repetition] \nIn this case, every two sentences are different, but 2-sentence repetition is not a problem here. Is it possible to boost the equation so you can cover all the previous sentences? The DITTO equation tackles a limited scenario of repetition. Some data augmentation or contrastive learning techniques can be applied to make the training less “synthetic”. \n", " This work proposes a novel way of mitigating sentence-level repetition in autoregressive sequence generation: they form an artificial set of negative samples by repeating the same sentence from the training corpus up to the max length of prediction. After that, they use a loss function to penalize those instances from being produced during fine-tuning. They compared their method with multiple variants from previous work and concluded the superior quality of generated sequences. They have done both automatic and human evaluation for open-ended generation and automatic evaluation for directed generation. # Originality\n\n## Strengths\n\n* Authors proposed a novel way of viewing inter-sentence repetition which is a more global view compared to n-gram repetition (which can also be inter-sentence for a large enough n-gram length)\n\n## Weaknesses\n\n* Many claims around idea justification are written in a vague way, which I will touch on later in the questions section.\n\n# Quality\n\n## Strengths\n\n* Authors uploaded a well-done code base with experiments and instructions on how to run it. The code is based on fairseq, which makes it not hard to use in future work by other researchers.\n\n* Authors composed a comparison with other methods that avoid this type of degeneracy, in addition to the comparison with the baseline method.\n\n## Weaknesses\n\n* The related work section could be improved. For instance, there is a recent work (not related to myself in any way) about the ability of the LM to break the reinforcing loop: https://arxiv.org/pdf/1905.10617.pdf . In other words, there exists previous work directly related and opposite to the observations and claims of this work. 
I want to stress that this is totally fine, but the authors should consider discussing such results and providing some explanation of how the setup is different, etc.\n\n* Experiment comparison is not very transparent. For instance, sequence-level UL models are trained to penalize continuations generated by up to 150 tokens. It was observed earlier that after such 150 tokens it may tend to repeat again (looks like this is exactly what happens in Fig.4). In other words, the length of generation influences the ability of the model to not repeat up to such an extent. In DITTO the model is being penalized by the max length of generation. You may have done the experiment where DITTO would be trained on negative sentences of similar length to match the UL training protocol. If you are not going to do that, then at least clarifying this is necessary, because it addresses your claim that UL does not break the reinforcing loop. Because it may do so if you train the UL model in a similar fashion (with longer generated seqs).\n\n# Clarity\n\n## Strengths \n\n* The experimental protocol is clear.\n\n## Weaknesses\n\n* The section I was very curious about, 2.2, is extremely vague and does not provide any answers (I will touch on this later in the questions)\n\n# Significance\n\n## Strengths\n\n* I believe that this general method provides a significant contribution for future work beyond this specific use case: using an external set of negative samples which are easy to form and optimize. \n\n## Weaknesses\n\n* On the other hand from the strong point #1, the loss function is not well justified (I touch on this again in the questions), as in why such a construction (with the absolute difference) is better than, e.g., the margin-based version.\n\n* Even though the Contrastive framework for text generation was discussed (it was mistakenly categorized as a decoding-based method while in fact it is training-based too), it was not used for experimental comparison. Here I will write both my comments and questions:\n\nLine 30: [30] is also a training-based method.\n\nLine 40-41: How does your analysis differ from the one in Holtzman et al.'s work? I can see the introduction of several metrics to measure the rate of the self-reinforcing loop, but other than that it doesn't look different. If you want to stress the point about analyzing sentence-level repeats, then this is a good place to do so.\n\nLine 52-56: When you say \"The cause ... may be that ...\", the overall claim does not sound strong at all. If you do not have sufficient evidence (e.g., by showing how this effect does not hold on some other test sets without repeats), then I am not sure how this is different from all other existing observations about n-gram-level repeats.\n\nLine 61, 63, 64: Please avoid using \"can\" or \"may\". If it is not always the case, then quantify how often this happens. After rewriting these sentences the overall reading will be improved.\n\nLine 63-65: Where is the justification for this statement? It sounds incomplete without any reasoning afterwards.\n\nLine 89: What does it mean to \"statistically investigate\"? Other wording would improve this sentence.\n\nLine 123: \"To our surprise\": why is it surprising? All previous work was showing that this is indeed a large problem. I agree that random-token sentence repeats are more surprising, but the authors don't say that.\n\nLines 125-132: I do not see the answer to the question \"Why do sentence repetitions occur?\". I see observations, but no verified hypothesis of the reason behind them. 
For instance, the Contrastive Generation framework paper [30] specifically introduces the hypothesis of very similar embeddings of repeating tokens, which they further verified. I do not see any similar process here. Perhaps the authors had some valid answer, but it is not presented here.\n\nLines 133-137: Where is the answer to the question? The observation of the self-reinforcing loop on an artificial corpus is not an answer. Moreover, there is previous work showing that the LM has the ability to break such a loop: https://arxiv.org/pdf/1905.10617.pdf . I would be happy to see the authors elaborate more on this.\n\nLines 144-153: Due to the unclear claims in Section 2.2, the statements in the \"Analyses\" paragraph do not sound strong.\n\nLine 167: Why did you choose the loss function to look like this? It would be interesting to see if there is some theoretical benefit of using such a construction of log(1-|p1-p2|). For instance, the gradient of the loss may show some benefits. Alternatively you could have optimized the difference between log-probabilities: max(margin, logp(x_n) - logp(x_{n-1})) . Some justification around the loss function design would help readers to understand the motivation better.\n\nLines 190-193: At least for UL-seq it is not true that it penalizes any repetitions. In fact, it is specified by the length of the n-gram. In other words, if n=15, then it will only penalize those sentence-like n-grams. \n\nLines 256-258: DITTO is specifically designed to mitigate repeats up to max generation lengths, while the other methods you compared to were not. In other words, other methods could have a chance to show better performance if they matched the design of negative samples similarly to DITTO. And the same holds in the opposite direction, I believe: if DITTO were trained for e.g. 150 tokens only, then it might fall into the same repetition loop again after generating that number of tokens. I think this is a very important point to mention in this section.\n\nLines 305-207: As I said earlier, this can be a result of a not-100%-fair comparison between training and evaluation protocols. Relevant related work which is missing (sorry if it is there and I missed it):\n\nhttps://arxiv.org/pdf/1905.10617.pdf\n\nhttps://arxiv.org/abs/2007.14966\n\nhttps://www.semanticscholar.org/paper/High-probability-or-low-information-The-paradox-in-Meister-Wiher/49a7515b1c0250a12c0e6b6fa0a8b6d46fc42adf\n\nhttps://arxiv.org/abs/2002.02492", " This paper presents an empirical study showing the problem of repetitions in text generation (especially sentence repetitions), and proposes a training method called DITTO to address this problem, where they add a training objective to penalize the probabilities of sentence-level repetitions for the synthetically created examples (repeated sentences). Authors evaluated their method on different tasks and datasets, open-domain text generation and summarization, and showed their method has much less repetition while still achieving competitive performance on other metrics. \n Strength: \nThe paper presents a good empirical analysis of the repetition problem in text generation. Authors defined a few metrics, and evaluated on different datasets demonstrating the problem. Though the community is aware of such problems, this is the first time I have seen such an analysis systematically showing the empirical results. Based on the repetition patterns the authors identified (higher probabilities when repetitive sentences are included in the context), the authors proposed a training method to penalize such repetition. 
The proposed training objective is well motivated. \n\n\nWeakness: \n\nFor the repetition problem, though the empirical analysis is good, I feel the authors are overclaiming slightly. They do show that sentence repetition happens and that models have a self-reinforcement effect. However, they say that's the reason that sentence repetition occurs (i.e., the model has seen the previous sentence in the context). I feel this is not exactly the explanation yet. The underlying cause of this phenomenon still needs additional analysis, probably more inside the neural networks. \n\n In sec 3:\nAuthors mentioned \"We find that, appending another sentence sampled from the training corpus randomly as the prefix to the pseudo repetitive data x can achieve better and more stable performance.\" \nCan you clarify this? Is this providing some context for the repetitive sentences? But if so, why do you need to randomly sample another sentence, rather than just the previous sentence? \n\nFig 4: 
I’m a bit confused by the results for UL. \nEarlier tables showed it has a much lower repetition rate than other methods, which seems to be inconsistent with the results presented here. \n\n\n Authors didn't discuss this. " ]
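For readers following the Equation 1 debate in the reviews above, here is a minimal, hedged sketch of the quoted loss, -log(1 - |cur - alpha * prev|), next to the margin variant the authors compare it against in their general response. The batched tensor form and the choice to stop gradients through `prev` are illustrative assumptions, not the paper's reference implementation.

```python
import torch

def ditto_loss(cur: torch.Tensor, prev: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Eq. 1 as quoted in the review: -log(1 - |cur - alpha * prev|).

    `cur` / `prev` hold the probability of the ground-truth token at the same
    position of the current and the previous (repeated) sentence. Treating
    `prev` as a constant target (detach) is an assumption for this sketch.
    """
    target = (alpha * prev).detach()
    return -torch.log(1.0 - (cur - target).abs() + 1e-9).mean()

def margin_loss(cur: torch.Tensor, prev: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    # The max(_, 0) alternative from the discussion: it only penalizes
    # cur > alpha * prev, which the authors report drives cur toward 0
    # (the over-penalization shown in their Figure 8).
    target = (alpha * prev).detach()
    return -torch.log(1.0 - torch.clamp(cur - target, min=0.0) + 1e-9).mean()
```

Plugging in the reviewer's numbers (cur=0.1 or 0.9, prev=0.5, alpha=1) reproduces the symmetric ~0.51 loss for the absolute-value form, while the clamped form returns ~0 for the "good" case — exactly the trade-off between symmetry and over-penalization that the rebuttal argues about.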
[ -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "CAGIFK8EsKj", "nips_2022_sexfswCc7B", "szhBPP31hZ", "xLIJRg4uXfc", "2kas10y2IOs", "IAquLGyuuF3", "nips_2022_sexfswCc7B", "nips_2022_sexfswCc7B", "nips_2022_sexfswCc7B" ]
nips_2022_Y6A4-R_Hgsw
Toward a realistic model of speech processing in the brain with self-supervised learning
Several deep neural networks have recently been shown to generate activations similar to those of the brain in response to the same input. These algorithms, however, remain largely implausible: they require (1) extraordinarily large amounts of data, (2) unobtainable supervised labels, (3) textual rather than raw sensory input, and / or (4) implausibly large memory (e.g. thousands of contextual words). These elements highlight the need to identify algorithms that, under these limitations, would suffice to account for both behavioral and brain responses. Focusing on speech processing, we here hypothesize that self-supervised algorithms trained on the raw waveform constitute a promising candidate. Specifically, we compare a recent self-supervised model, wav2vec 2.0, to the brain activity of 412 English, French, and Mandarin individuals recorded with functional Magnetic Resonance Imaging (fMRI), while they listened to approximately one hour of audio books. First, we show that this algorithm learns brain-like representations with as little as 600 hours of unlabelled speech -- a quantity comparable to what infants can be exposed to during language acquisition. Second, its functional hierarchy aligns with the cortical hierarchy of speech processing. Third, different training regimes reveal a functional specialization akin to the cortex: wav2vec 2.0 learns sound-generic, speech-specific and language-specific representations similar to those of the prefrontal and temporal cortices. Fourth, we confirm the similarity of this specialization with the behavior of 386 additional participants. These elements, resulting from the largest neuroimaging benchmark to date, show how self-supervised learning can account for a rich organization of speech processing in the brain, and thus delineate a path to identify the laws of language acquisition which shape the human brain.
Accept
This paper compares learned self-supervised speech representations to brain fMRI representations for more than 400 subjects speaking English, French, and Mandarin. Through the rebuttal period, the authors and reviewers interacted extensively to discuss the contribution, results, and analysis provided in the paper. Most of the reviewers' concerns have been addressed by improvements to the analysis and presentation of the paper. One main concern was a concurrent research work that appeared on arxiv about one week after the NeurIPS submission deadline. The novelty of this paper should not be impacted by that other work, given the timing of both papers.
train
[ "J3Zq1tZktp", "Vy8rBhMEgVf", "i_Bce2-0BMy", "eu-4WyxfUuS", "Ky7cHltP8dG", "4Xmysa_vpG", "fX52mL4Rqcw", "Nf_1GjZCFmG", "pYX72_a132T", "ayqpTPN2Wy4", "nGGiIj1CSz", "s8Emrgt3HY", "sbwaY_CFz5", "7C2odCAKXdn", "y9mJfbXz522", "vMPCQiMFbiT", "j_CycJwcqXK", "Vi_oPIs20Si", "0bmDGO6cagc", "CrO6ujDjud", "FqFnp5wxrTS", "vanVvzNspxW", "yZ4pOmOvCu", "0R1K4tAz12", "dKdhPqll7a8", "HAI7Bi4lwH", "ojfzNkBnzLH", "3d2uEI_1P7b", "KS0XGH7Zg6H", "c87EsO3F3J3", "IYqZ2Q_T__w", "xZaau6dfjNH", "7f3bddenKKb" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response. I am impressed by the new analyses and thorough responses made by the authors and would like to increase my score by 1 more point. However, I urge the authors to include some of the discussion we had here in the paper especially regarding the perils of over-interpretation like in my second & third points above. I believe they have several different methods and results now with the only limiting factor being how to interpret all of them together which perhaps is beyond the scope of the current stage.\n\nI look forward to the revised manuscript!", " We thank Reviewer 2 for this 3rd review [(summary of the 2nd round here)](https://openreview.net/forum?id=Y6A4-R_Hgsw&noteId=pYX72_a132T)\n\n1\\. Regarding the low effect size some brain areas. We thank R2 for these elements. As indicated in our previous response, we agree to report an additional Table based on a different criterion, including significance testing. We would like to add that we agree with the factors listed by R2, and propose to integrate them as such in the discussion paragraph comparing studies. Finally, we note that these elements do not impact the validity of our methods or our conclusions.\n\n2 and 3. Regarding interpretation: We agree that these elements are interesting and remain open to future research (and indeed partially studied in the concurrent paper by Vaydia, Jain & Huth). At this stage, however, interpreting neural networks in general, and probe analyses in particular are intrinsically limited: they are here attempt to test whether the high dimensional activation vectors linearly represent the features theorized by linguists and phonologists. While these analyses may help understand what each area represents, it is doomed to fail if the theories of linguistic and phonology are suboptimal models of speech processing in the brain.\n\nWe will clarify these elements in the discussion. We thank Reviewer 2 for this thorough review, and for the multiple analyses and discussion suggestions, which helped significantly improve our manuscript.", " I greatly appreciate the authors for their detailed responses and for engaging with my questions + suggestions. However, I still have some persistent questions:\n1. I think some of my comments were misunderstood. On further thought, I think the raw correlations being reported for putative speech areas like Broca's (`r=~0.03`) are unexpected given that its fMRI, a speech perception task and speech encoding models. From literature, these values definitely seem lower than expected and I think this could stem from multiple reasons:\n- Training data: I am aware that no cross-subject models were trained. However, since prediction performance greatly varies with the amount of training data, by averaging correlations between participants that have say, 1 hour vs. 5 hours of data, can be confusing. This could artificially lower the values and affect the interpretation of trends.\n- Lack of a significance test: Without a significance test, one might be averaging correlations across both well predicted voxels and those that don't respond well to natural speech, again artificially lowering the correlations.\n- In my opinion, a better metric would be to report the number of voxels about a threshold or the min correlation of top 10% voxels.\n\n2. The probing results show no meaningful trends for non-English networks and the higher levels of abstraction like phoneme/words/sentences. What do the authors make of this?\n\n3. 
I think I am misunderstanding while trying to put all the results together. So the probing shows that the upper-middle layers care about higher levels of abstraction in the native language and the layer wise pref shows that higher-level areas in cortex pick these layers. But Figure 4 shows that a lot of these areas actually have similar performance between native, non-native...random models. What does this mean? Does it imply that the information probed from the high-level areas in native language is not what's driving performance?\n", " > 3\\. The authors do not address the large differences across models in probing performance, especially for Mel. Furthermore, other studies have shown larger gaps in layer-wise selectivity for features like Mel vs. word unlike the small range shown here (9-12).\n\nThe large differences in Table 2 are partly due to the argmax. Supplementary Figure S1 shows that the trained networks consistently encode relevant spectro-temporal representations in the first layers of the Transformer blocks (Layer 8). Yet, for the acoustic network, the performance is more distributed across layers, thus explaining the difference in Table D (best layer=12). \n\n> 4\\. I understand there were both space and time limitations but I do believe that the paper would greatly benefit from a way to link the probing results to the layer-preference maps. Currently, looking at them in tandem is not sufficient to understand what types of information each region could be representing. Especially, given the small range over which the averages vary per task.\n\nWe implemented the probing analyses requested by Reviewer 2, although we believe that they are adjacent to our goal: i.e. testing *whether* (not *why*) SSL suffices to build an efficient model of speech processing in the brain.\nWhile interesting, this novel request goes beyond the scope of the present study, whose primary goal is not to understand the representations encoded in each voxel of the brain (or each layer in the model), but to test whether SSL on small amount of speech suffices to build an efficient model of speech processing in the brain. We briefly mentioned this issue in the proposed paragraph above (question 2)\n\n> 5\\. Is TIMIT available in other language or are the authors using English TIMIT to test even the non-English models? Wouldn't that be a possible source of confound?\n\nWe restrict these analyses to the English TIMIT. The non-English models tested on the English TIMIT are used to analyze the effect of non-nativity, while the English model tested on the English TIMIT is used to test the effect of nativity. \n\nWe leave the testing the non-English models on their native corpora, and the English model on non-English corpora to future research, which will need to build multilingual corpora with comparable data and annotations across languages. \n\n> 6\\. What are the accuracies achieved by the best layers per model and task?\n\nThe accuracies are reported in Supplementary Figure S1. \n", " >>By contrast, neither Kell et al nor Millet & King study self-supervised models, and Vaidya, Jain & Huth did not evaluate how different training led the same architecture to be more-or-less similar to the brain.\n\n> I agree with the above and the important point the authors noted that one of the studies was made public concurrently with the current work. However, I still maintain my comment that given a lot of recent studies on language encoding models and Kell et al. 
+ Millet et al., is it really surprising that training deep networks leads to better brain prediction than a random network?\n\nWe agree that this particular result is expected given previous work. However, “training deep networks leads to better brain prediction than a random network” is not stated as a major contribution. Instead, we indicate: “Here, we test whether self-supervised learning on a limited amount of speech suffices to yield a model functionally equivalent to speech perception in the human brain.” \n\n> Additionally, I understand that the aim was \"to test whether (not why)\" but hasn't this question been answered indirectly by the huge body of work showing the types of information these models capture and their ability to transfer to downstream tasks? \n\nData, learning and computational constraints are extremely different between brains and algorithms. Consequently, showing that some artificial neural networks can be efficiently fine-tuned to some downstream tasks does not address the functional similarity between these models and the brain.\n \n> And I appreciate the authors efforts at doing the probing analysis. However, what about self-supervision helps? How does it impact the types of information learned and consequently, the types of information used for down-stream prediction?\n\nFollowing R2's request, our new Supplementary Figure S1 show that: \n1. SSL leads to a hierarchy of features (from MEL, to word and sentence-level representations)\n2. the type of language used for SSL primarily impact the learning high-level features (phonemes and words)\n3. word and sentence-level representations (as opposed to MEL) are encoded deeper in the supervised network (best layer=18) than in the unsupervised network (best layer=14). \n\nWhile they shed interesting light on our results, these interpretability points go beyond the scope of the present study: They give *some* clues about what makes wav2vec 2.0 similar to the brain, but does not alter our conclusion.\n \n>> Second, we discover new hierarchical gradients, e.g. within the prefrontal cortex, a region tightly linked to human cognition, learning and control and whose functional organization remains largely unknown (Fuster, 2015).\n\n> What gradients are being referred to here?\n \nThe functional gradients are revealed by the mapping of layer-wise tuning displayed in Figure 3. This figure shows spatially organized turning within the infero-frontal gyrus (IFG) and sulcus (IFS), as well as across IFS and motor areas. To clarify this isse, we propose to add a supplementary cross-section of these regions.\n \n>> we provide empirical evidence of language-specific representations of speech in the brain\n\n> What does this mean?\n \nThis sentence refers to the results of Figure 4. \nWe trained three distinct models with three distinct languages, and showed that the superior temporal sulcus and gyrus are best explained by the models trained in the language of the corresponding participants. This result suggests that these brain areas are tuned to the speech sounds specific to the language learnt by the participant. This result is long-expected established in the behavioral literature, but did not, to date, find a neural signature.\n \n> Re probing analysis: I greatly appreciate the author's efforts to run the probing analysis in such a short time. I will consequently increase my score by 1 point.\n\n> 1\\. What does the performance look like across layers? 
I believe taking the max across layers could hide lack of any significant trends across layers.\n\nWe apologize for the lack of clarity of our previous answer: the probing performance of each layer, network and task is provided in Supplementary Figure S1, added to the Appendix section of the manuscript. \n\n> 2\\. The effect of training is a clean result and imo the authors should discuss the connection between this and prediction performance changes with training explicitly.\n\nWe agree and will add: \n```“Our probing analyses show that the models trained with SSL have learnt relevant acoustic and linguistic representations (Supplementary Figure S1). This result suggests that the difference of neural predictivity observed between the random, non-native and native models (Figure 2C) may be driven by the corresponsing spectro-temporal, phonetic, word and sentence-level representations. These results are consistent with the recent in-depth analyses of Vaydia, Jain & Huth (2022) and will necessitate carefully-controlled words and speech sounds to be tested explicitly.” ```", " > I thank the reviewers for clarifying some points in the discussion and estimating the noise ceiling. Re noise ceiling: I think the authors have a potentially interesting setup here- is there a meaningful way to link the probing results across different types of models, the % of noise ceiling that's reached and layer-wise selectivity to better understand the differences between anatomical areas? I think the information is there but I'd be excited to see a more in-depth analysis of why the noise ceiling %s look so different across ROIs and how this relates to the types of information that arises in different networks. \"new gradients in PFC\": I am still not sure what the \"new\" gradients are. How are the layer wise preference maps revealing a new gradient? What does said gradient correspond to?\n\nWe implemented the noise-ceiling analyses requested by the Reviewer as well as the several probing analyses. The results strengthened the interpretability of our findings, and did not impact our original conclusions. Why some regions have relatively large noise ceilings is an open question, which highlights the necessity to go beyond the present model and learning rule.\n \nWe now respond to this point above. Specifically, we propose to add a supplementary cross-section to clarify the functional gradients identified in the prefrontal cortex.\n\n> Re the following (I believe it was misunderstood): “Furthermore, it appears that the upper-middle layers have high correlations in several regions across cortex, albeit not the highest. What do the authors make of this?”. I think this points to a possibility that the upper-middle layers encode both acoustic information as well as more abstract information like words, which allows them to predict so much of the cortex. I'd urge the authors to explore this.\n\nThis is an interesting interpretation. Note, however, that the high coverage of predictability being caused by learning words is difficult to reconcile with the fact that non-native models significantly predict most regions, even though they cannot learn the corresponding lexicon. 
As indicated above, we proposed to amend our manuscript to indicate: ```\"These results are consistent with the recent in-depth analyses of Vaydia, Jain & Huth (2022) and will necessitate carefully-controlled words and speech sounds to be tested explicitly.\"```\n", " \n> - The noise ceiling %s hide the fact that the actual differences in correlation are pretty small. For ex., 0.02 between random and native speech in HG. I'd urge the authors to test for significance!\n\nWe agree. These effect sizes were originally reported in absolute values to avoid giving a false view of our results.\n\nWhile small, the differences between the networks are highly significant: between random and native speech, $p<10^{-46}$ (Figure 1), in HG specifically: $p<10^{-28}$. We now report the p-values for each difference in representative regions: \n\n| | Avg | Top10NoiseCeil | Heschl | STG | STS | IFG | Motor |\n|:------------------------|:-----------|:-----------------|:-----------|:-----------|:-----------|:-----------|:----------|\n| Non speech - Random | $10^{-14}$ | $10^{-22}$ | $10^{-13}$ | $10^{-27}$ | $10^{-12}$ | $10^{-5}$ | n.s. |\n| Non native - Non speech | $10^{-8}$ | $10^{-14}$ | $10^{-12}$ | $10^{-19}$ | $10^{-8}$ | $10^{-6}$ | n.s. |\n| Native - Non native | $10^{-10}$ | $10^{-15}$ | $10^{-7}$ | $10^{-11}$ | $10^{-11}$ | $10^{-5}$ | n.s. |\n| Native - Random | $10^{-31}$ | $10^{-41}$ | $10^{-28}$ | $10^{-42}$ | $10^{-32}$ | $10^{-17}$ | n.s. |\n\n**Table G**: Significance of the difference in neural predictivity scores between the networks (random, non-speech, non-native and native). Significance is assessed using a two-sided Wilcoxon test provided by Scipy, testing whether the difference is different from zero. \n\n> Re diff amounts of training data: Given the very strong effect of amount of training data on encoding model performance, I am confused why the authors report results that average across differently sized datasets. This could artificially deflate the performance across cortex or differences between layers, models etc.\n\nWe do *not* fit encoding models across languages, datasets or subjects. One encoding model is fitted for each subject, on the dataset corresponding to the amount of speech to which they listen to while being recorded with fMRI. Thus, the amount of training data is comparable across encoding models. ", " > Re the following which I believe was misunderstood:\n> In figure 3, why is primary auditory (or what is possibly Heschl’s gyrus) marked in red, There is a band of \"red\" regions overlapping the \"blue\" regions. Can the authors explain this?\n\nThe thin band of red voxels in this area disappears with a slightly more conservative threshold. \nWe now propose to add: \n\n```“Some voxels around the auditory cortices seem to be tuned to the highest layers of wav2vec 2.0. Additional research focusing on a higher resolution of this region remains necessary to disentangle the role that feedback may play to explain this unexpected phenomenon.”```\n\n> In the neural predictive scores table shown in the rebuttal, I'd urge the authors to either use a correlation threshold or significance test to select voxels per ROI. the correlations for IFG look suspiciously low- why is that?\n\nThe correlations in IFG are relatively low because the level of noise is relatively high in IFG (Supplementary Figure S2). 
We are happy to add the table based on a correlation threshold.\n\n> Re individual maps: I wanted to see the individual maps to gauge how noise the circular mean is as a metric for layer wise preference.\n\nWe provide in Supplementary Figure S4 the correlation scores for each layer, *before applying the circular mean*. Figure S4 shows that the argmax layer in IFG is deeper than the argmax layer in Heschl. \n\n> Re different quantization targets across diff language models- I agree that one cannot consider the targets to be phonemes but an imperfect approximation. It is still unclear to me why Dutch models performing better than English at predicting Dutch speech elicited brain responses is surprising? Given what both the authors and I noted: Precisely, they suggest that English models better distinguish English phonemes than French ones, because the models are optimized to capture the statistics of speech signals in the respective languages, including sub-phonemic and suprasegmental properties.\n> Very interesting point, but why does this happen? how does it relate to the probing results?\n\nThe probing results confirm such observations: Supplementary Figure S1 shows that the English network has better probing performance for phoneme classification than the non-English networks. We agree that, ultimately, the goal is to build a model that automatically demonstrates these properties. The aim of this analysis is to test and quantify such emergence phenomenon, and show where the corresponding representations specifically map onto the brain.\n\n> It is also interesting to highlight that random, and non-speech models (which also have a quantization module) actually perform quite well – a result that is consistent with the fact that these models account for a fairly large portion of the brain responses to speech.\nBefore I interpret the results in Fig. 4 and tables S1,2:\n\n> - Why are the neural predictive scores so small in every region but HG?!\n\nThe absolute neural predictivity scores are small in regions as IFG because the level of noise is high in those regions (Supplementary Figure S2). The amount of explainable signal, though, is high compared to the level of noise (e.g. 23% in IFG). \n", " # Summary\nWe thank Reviewer 2 for their detailed review.\n\nWe note that the results of the analyses, namely (1) the noise ceiling, (2) the alternative models, and (3) the systematic probing (MEL, word and sentence-level representations $\\times$ each layer of the model $\\times$ training type) requested by the Reviewer all strengthened our original conclusions.\n\nThe main remaining concern of Reviewer 2 is on originality and interpretation:\n\n> I agree with the above and the important point the authors noted that one of the studies was made public concurrently with the current work. However, I still maintain my comment that given a lot of recent studies on language encoding models and Kell et al. + Millet et al., is it really surprising that training deep networks leads to better brain prediction than a random network?\n\n> Additionally, I understand that the aim was \"to test whether (not why)\" but hasn't this question been answered indirectly by the huge body of work showing the types of information these models capture and their ability to transfer to downstream tasks? \n\n> I believe the authors have a very promising dataset and methodology but the results in their current form are not unexpected. 
Unfortunately, I do not think they tell us anything new about how the brain/deep networks function. For example, the speech hierarchy across cortex is known and has been shown with deep learning-based encoding models. \n \nWe appreciate these elemlents of concern. To limit a debate of opinions about what could have been expected, we list the main fact, as follows:\n \nFirst, our study is timely: SSL has only recently proved efficient in deep learning, and has thus not been studied in speech neuroscience. This novelty explains why our study finds similar conclusions to Vaydia, Jain & Huth's strong but concurrent submission. \n \nSecond, the other studies indicated by Reviewer 2 are either (1) not focusing on spoken language (Kell et al, n=8 participants listening to snippets of various sounds) or (2) previously rejected, and thus, the very basis of the present study (Millet & King, n=102 participants).\n \nThird, while some elements of the speech hierarchy in the cortex are “known”, _the learning rules that allow the human brain to learn speech processing certainly remains one of the greatest challenges to cognitive and computational neuroscience_ (and linguistics, and A.I. for that matter). \n \nHere, we show, with unprecedentedly large amount of data (n=412 participants, 3 languages) that SSL brings us one step closer to solve this issue: with a plausible amount of speech (only 600 hours) and no supervision, this learning objective allows the learning of brain-like representations, organized according to a similar hierarchy, and tuned to the native language. None of these elements had been shown before.\n \nOur objective is not to solve and understand language in the brain, but to test whether a simple learning principle could suffice to explain its functional organization. \n\nWe now turn to the more specific questions.", " Re the following which I believe was misunderstood:\n> In figure 3, why is primary auditory (or what is possibly Heschl’s gyrus) marked in red, \n\nThere is a band of \"red\" regions overlapping the \"blue\" regions. Can the authors explain this?\n\nIn the neural predictive scores table shown in the rebuttal, I'd urge the authors to either use a correlation threshold or significance test to select voxels per ROI. the correlations for IFG look suspiciously low- why is that?\n\nRe individual maps: I wanted to see the individual maps to gauge how noise the circular mean is as a metric for layer wise preference.\n\nRe different quantization targets across diff language models- I agree that one cannot consider the targets to be phonemes but an imperfect approximation. It is still unclear to me why Dutch models performing better than English at predicting Dutch speech elicited brain responses is surprising? Given what both the authors and I noted:\n> Precisely, they suggest that English models better distinguish English phonemes than French ones, because the models are optimized to capture the statistics of speech signals in the respective languages, including sub-phonemic and suprasegmental properties.\n\nVery interesting point, but why does this happen? how does it relate to the probing results?\n> It is also interesting to highlight that random, and non-speech models (which also have a quantization module) actually perform quite well – a result that is consistent with the fact that these models account for a fairly large portion of the brain responses to speech.\n\nBefore I interpret the results in Fig. 
4 and tables S1,2:
- Why are the neural predictive scores so small in every region but HG?!
- The noise ceiling %s hide the fact that the actual differences in correlation are pretty small. For ex., 0.02 between random and native speech in HG. I'd urge the authors to test for significance!

Re diff amounts of training data: Given the very strong effect of the amount of training data on encoding model performance, I am confused why the authors report results that average across differently sized datasets. This could artificially deflate the performance across cortex or the differences between layers, models, etc.

I thank the reviewers for clarifying some points in the discussion and estimating the noise ceiling.

Re noise ceiling: I think the authors have a potentially interesting setup here - is there a meaningful way to link the probing results across different types of models, the % of noise ceiling that's reached and layer-wise selectivity to better understand the differences between anatomical areas? I think the information is there but I'd be excited to see a more in-depth analysis of why the noise ceiling %s look so different across ROIs and how this relates to the types of information that arise in different networks.

Re "new gradients in PFC": I am still not sure what the "new" gradients are. How are the layer-wise preference maps revealing a new gradient? What does said gradient correspond to?

Re the following (I believe it was misunderstood):
> “Furthermore, it appears that the upper-middle layers have high correlations in several regions across cortex, albeit not the highest. What do the authors make of this?”

I think this points to a possibility that the upper-middle layers encode both acoustic information as well as more abstract information like words, which allows them to predict so much of the cortex. I'd urge the authors to explore this.

> By contrast, neither Kell et al nor Millet & King study self-supervised models, and Vaidya, Jain & Huth did not evaluate how different training led the same architecture to be more-or-less similar to the brain.

I agree with the above and the important point the authors noted that one of the studies was made public concurrently with the current work. However, I still maintain my comment that given a lot of recent studies on language encoding models and Kell et al. + Millet et al., is it really surprising that training deep networks leads to better brain prediction than a random network?

> Self supervision leads the model to be more similar to the brain than supervision.

> Our objective is to test whether (not why) an unsupervised model of speech processing in the brain can effectively account for its functional organization as measured with fMRI.

Additionally, I understand that the aim was "to test whether (not why)" but hasn't this question been answered indirectly by the huge body of work showing the types of information these models capture and their ability to transfer to downstream tasks? And I appreciate the authors' efforts at doing the probing analysis. However, what about self-supervision helps? How does it impact the types of information learned and, consequently, the types of information used for downstream prediction?

> Second, we discover new hierarchical gradients, e.g.
within the prefrontal cortex, a region tightly linked to human cognition, learning and control and whose functional organization remains largely unknown (Fuster, 2015).

What gradients are being referred to here?

> we provide empirical evidence of language-specific representations of speech in the brain

What does this mean?

I believe the authors have a very promising dataset and methodology but the results in their current form are not unexpected. Unfortunately, I do not think they tell us anything new about how the brain/deep networks function. For example, the speech hierarchy across cortex is known and has been shown with deep learning-based encoding models. The novelty here is to show the same thing with self-supervision, which is unsurprising given other adjacent studies in both speech ML and language neuroscience.

Re probing analysis: I greatly appreciate the authors' efforts to run the probing analysis in such a short time. I will consequently increase my score by 1 point.

Follow-up questions:
1. What does the performance look like across layers? I believe taking the max across layers could hide the lack of any significant trends across layers.
2. The effect of training is a clean result and imo the authors should discuss the connection between this and prediction performance changes with training explicitly.
3. The authors do not address the large differences across models in probing performance, especially for Mel. Furthermore, other studies have shown larger gaps in layer-wise selectivity for features like Mel vs. word, unlike the small range shown here (9-12).
4. I understand there were both space and time limitations but I do believe that the paper would greatly benefit from a way to link the probing results to the layer-preference maps. Currently, looking at them in tandem is not sufficient to understand what types of information each region could be representing. Especially given the small range over which the averages vary per task.
5. Is TIMIT available in other languages, or are the authors using English TIMIT to test even the non-English models? Wouldn't that be a possible source of confound?
6. What are the accuracies achieved by the best layers per model and task?

I really appreciate the detailed and thoughtful rebuttal by the authors. Most of my concerns have been addressed. I still maintain my reservations about fMRI being ill-suited for fine-grained analysis of speech processing, however. I'm happy to increase my score and recommend acceptance based on the rebuttal.
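As a note on reproducibility for the comparison below: the pretrained checkpoints listed in Table A can be retrieved from the huggingface hub and fed into the very same encoding pipeline as our own models. The following is a minimal sketch; we assume the standard `facebook/` prefixes for the hub identifiers, and omit audio loading and the alignment of activations to the fMRI sampling rate:

```python
import torch
from transformers import Wav2Vec2Model

# Pretrained checkpoints compared in Table A (hub identifiers are assumed
# to carry the standard "facebook/" prefix).
CHECKPOINTS = [
    "facebook/wav2vec2-base-100k-voxpopuli",  # 100K hours, multilingual
    "facebook/wav2vec2-base-10k-voxpopuli",   # 10K hours, multilingual
    "facebook/wav2vec2-base",                 # 53K hours, English
    "facebook/wav2vec2-xls-r-300m",           # 436K hours, 300M parameters
    "facebook/wav2vec2-xls-r-1b",             # 436K hours, 1B parameters
]

@torch.no_grad()
def layer_activations(name: str, waveform: torch.Tensor) -> tuple:
    """Activations of every layer for a 16 kHz waveform of shape (1, T).

    Returns a tuple of (1, n_frames, dim) tensors: the output of the
    convolutional feature projection followed by each transformer layer.
    """
    model = Wav2Vec2Model.from_pretrained(name).eval()
    return model(waveform, output_hidden_states=True).hidden_states
```

The `hidden_states` tuple exposes one activation map per layer, which is what all the layer-wise analyses in this thread operate on.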
| Model | Neural Predictivity |
|:-----------------------------|:----------------------|
| MEL | 0.0101 +/- 0.0004 |
| RMS | 0.0123 +/- 0.0005 |
| Random Wav2Vec 2.0 base | 0.0184 +/- 0.0005 |
| Wav2Vec 2.0 base - pretrained on multilingual speech (100K hours from VoxPopuli) | 0.0230 +/- 0.0006 |
| Wav2Vec 2.0 base - pretrained on multilingual speech (10K hours from VoxPopuli) | 0.0233 +/- 0.0006 |
| Wav2Vec 2.0, 300M parameters - pretrained on multilingual speech (436K hours) | 0.0240 +/- 0.0006 |
| Wav2Vec 2.0 base - pretrained on English speech (53K hours) | 0.0242 +/- 0.0006 |
| Wav2Vec 2.0 base (ours) - trained on English speech (600 hours) | 0.0245 +/- 0.0005 |
| Wav2Vec 2.0, 1B parameters - pretrained on multilingual speech (436K hours) | 0.0260 +/- 0.0006 |

**Table A: Neural predictivity of self-supervised pre-trained models.** Neural predictivity scores, averaged across all voxels and participants, for the MEL spectrogram, the root mean square, a Wav2Vec 2.0 (base) architecture with random weights, Wav2Vec 2.0 (base) pre-trained with self-supervised learning on 100K hours from VoxPopuli (Wang, 2021) (`wav2vec2-base-100k-voxpopuli` from huggingface), on 10K hours from VoxPopuli (`wav2vec2-base-10k-voxpopuli`), on 53K hours of English (`wav2vec2-base`), two models pre-trained on the same corpus of 436K hours, with 300M (`wav2vec2-xls-r-300m`) and 1B parameters (`wav2vec2-xls-r-1b`), respectively, and our model trained on 600 hours of English speech. +/- refers to standard errors of the mean across participants.

We thank Reviewer 3 for their thorough review, as well as for their valuable remarks, which we will address below.

> “I believe the results are interesting to be shared, but I do not see any machine learning innovation in this paper. Basically the authors have used an already proposed self-supervised method to compare with the brain action. I believe this could be a better fit for some speech related journals.”

As specified in the [Neurips guidelines](https://nips.cc/Conferences/2022/CallForPapers), the contributions of a submission need not be a new machine learning technique, but can target neuroscience (and “neural coding” in particular) as well as “machine learning for sciences”.

We believe that this study effectively provides such a contribution: it shows that self-supervised learning can provide an effective framework to quantifiably account for the functional organization of speech processing in the human brain. This finding is important, as speech is a trait uniquely developed in the human species, and this cognitive faculty, directly dependent on our cultural environment, is learnt largely without supervision.

> “R is sometimes used as Neural predictivity score and sometimes as Pearson correlation coefficient. Please make it clear throughout the paper.”

We thank you for noticing this lack of homogeneity. We will now correct the manuscript to use a single terminology.

> “R is sometimes very small, e.g. Figure 4 B. Is small R still something to rely on and make conclusions from?”

This is a good point.
We now computed the noise ceiling on 290 subjects who listened to the very same audio stories, and can thus provide a basis for a simple “noise ceiling” analysis.

| | Avg | Top10 | Heschl | STG | STS | IFG | Motor |
|:--------------|:---------------|:-----------------|:---------------|:---------------|:---------------|:---------------|:---------------|
| Unsupervised | 0.03 +/- 0.001 | 0.09 +/- 0.002 | 0.21 +/- 0.007 | 0.09 +/- 0.003 | 0.06 +/- 0.002 | 0.03 +/- 0.001 | 0.01 +/- 0.001 |
| Supervised | 0.02 +/- 0.001 | 0.09 +/- 0.002 | 0.21 +/- 0.007 | 0.09 +/- 0.003 | 0.06 +/- 0.002 | 0.03 +/- 0.001 | 0.01 +/- 0.001 |
| Noise ceiling | 0.12 +/- 0.006 | 0.22 +/- 0.006 | 0.29 +/- 0.008 | 0.18 +/- 0.006 | 0.20 +/- 0.006 | 0.15 +/- 0.006 | 0.09 +/- 0.006 |
| Ratio | 0.19 +/- 0.006 | 0.38 +/- 0.010 | 0.74 +/- 0.025 | 0.40 +/- 0.013 | 0.31 +/- 0.011 | 0.23 +/- 0.010 | 0.14 +/- 0.014 |

These new results show that the neural predictivity scores substantially improve when taking the noise ceiling into account. We thank R3 for pointing this out, and helping us put the original finding in perspective.

> “There is some related work, e.g. https://www.science.org/doi/abs/10.1126/science.1245994, that could be mentioned in the paper.”

We thank Reviewer 3 for this suggestion. This important study should certainly have been cited. We now indicate:

```“These elements, consistent with previous electrophysiological studies (Mesgarani & Chang Science 2014), provide a coherent spectrum of evidence for the location of acquired speech representations in the brain.”```

(Note that other studies from Eddie Chang and Nima Mesgarani were cited.)

> “Although the results are promising, please make the conclusion linked to the current set up and do not strongly generalize it to all cases.”

We agree that some of our conclusions can be qualified. We now propose to add the following paragraph to the discussion:

```“It is important to stress that the scope of the present study could be broadened in several ways. First, our study focuses on adult speakers, whose cultural and educational background is not representative of the population (Henrich et al, 2010). Second, we focus on the passive listening of three languages. Third, we focus on one self-supervised learning architecture (Baevski et al. 2021), and its functional alignment with fMRI, whose temporal resolution is notoriously limited.
Generalizing the present approach to more languages (Malik-Moraleda et al, 2022), and to a larger spectrum of children and adult participants, recorded with a variety of electrophysiological and neuroimaging devices, will thus be essential to confirm, refine, and/or mitigate the present findings.”```

| | MEL | Phone | Word embedding | POS | Sentence embedding | Avg |
|:------------------------------|------:|--------:|-----------------:|--------:|---------------------:|------:|
| Random Wav2Vec 2.0 | 2 | 8.7 | 8 | 8.9 | 8.1 | 7.14 |
| Acoustic Wav2Vec 2.0 | 12.5 | 15.7 | 14 | 14.4 | 14.2 | 14.16 |
| Mandarin Wav2Vec 2.0 | 9.1 | 11.9 | 12.2 | 11.9 | 13 | 11.62 |
| French Wav2Vec 2.0 | 8 | 11 | 12.7 | 11.8 | 13 | 11.3 |
| Dutch Wav2Vec 2.0 | 18.9 | 11.4 | 12 | 12.4 | 13 | 13.54 |
| English Wav2Vec 2.0 | 8 | 15.2 | 14 | 14.4 | 14 | 13.12 |
| English Wav2Vec 2.0 (supervised) | 8 | 16.9 | 18 | 18 | 18 | 15.78 |
| Avg | 9.5 | 12.9714 | 12.9857 | 13.1143 | 13.3286 | 12.38 |

**Table D.** For each model (row) and target (column), the layer that maximizes probing performance (Figure S1), averaged across the 10 cross-validation folds. See Appendix for corresponding Figure.

> 1. “Under-explored/Limited analyses: Although the paper presents encoding model performance in every major analysis, the trends observed are not discussed further. For example, what information do different layers capture that could drive selectivity for lower layers in low-level brain regions vs. selectivity for deeper layers in high-level regions?”
> “Does this information emerge as a consequence of training?”
> “Does it emerge only when models are trained on speech?”
> “How about specificity of information for the native language?”

Our objective is to test whether (not why) an unsupervised model of speech processing in the brain can effectively account for its functional organization as measured with fMRI. However, we agree that interpretation is an interesting topic.

### Results
Following Reviewer 2's remark, we now add a layer-wise inspection of our models of spectro-temporal, phonetic and word representations. We will add the following paragraph in the paper (using the additional page authorized after the discussion period):

*“To investigate what features the different layers of Wav2Vec 2.0 encode, we perform a ridge regression on the [timit dataset](https://catalog.ldc.upenn.edu/LDC93S1) to predict five auditory and linguistic features from the activation functions of each layer and model of the present paper. We explore the following features:
- the MEL spectrogram of the audio, computed using librosa (McFee et al. 2015, d=128)
- the phonemes (categorical features). We use the transcripts and alignments provided in Timit.
- the word embedding and part-of-speech of the words. The time alignments for words are provided by Timit. We use spaCy to compute the word embedding (medium model, d=300), and their part-of-speech (categorical feature, d=19)
- the sentence embedding of each sample, provided by Laser.*

*We use a subset of 1680 samples from Timit, each sample being an audio recording of a short sentence (<10 seconds) from 24 speakers.
The model's activations were mean-pooled to the sampling rate of each feature.*

*The results show that the layers of Wav2Vec 2.0 partially follow the hierarchy predicted from neuro-linguistics (Hickok and Poeppel, 2007) (Table D): the first layers of the transformer best account for the spectro-temporal information, whereas deeper layers best account for the phonetic, word-level and sentence-level information. While all of these features emerge with training (Figure S1), only the highest-level features (phone, word and sentence-level) appear to be specific to speech and to the language with which wav2vec 2.0 was trained (Figure S1).”*

### Discussion
*“Interpreting the representations of deep learning models is notoriously difficult, and the topic of dedicated studies. For example, Pasad et al. 2021 explored the encoding of local acoustic features, phone identity, word identity and word meaning across layers. Similarly, Millet, Chitoran and Dunbar (2021) compared representations to human behavioral data to assess whether they better captured listeners’ perception of higher-level phonemic properties or of lower-level subphonemic properties of speech stimuli. Finally, Vaidya, Jain & Huth’s recent study explores filter banks, spectrograms, phonemes and words across layers. Here, we complement these analyses by showing that self-supervised learning allows wav2vec 2.0 to learn, along its hierarchy, representations of MEL spectrograms, phonetic categories and word embeddings (Figure S1).*

*“Critically, only the highest-level features, namely phonetic-, word- and sentence-level features, appeared to be specific to (1) speech and (2) the language with which it was trained (Figure S1). Interestingly, the word and sentence-level features are encoded deeper in the supervised network (best layer=18 in Table D) compared to the unsupervised network (best layer=14), which suggests that self-supervised learning generates a reservoir of representations in its middle layers, a reservoir which may partly overlap with the labels used in supervised learning. Together with our ABX tests, and layer-wise tuning of each voxel (Figure 3), these elements suggest that the representations of speech shaped by our experience are learnt and instantiated in the superior temporal gyrus and sulcus. These elements, consistent with previous electrophysiological studies (Mesgarani & Chang Science 2014), thus provide a coherent spectrum of evidence for the location of acquired speech representations in the brain.”*

We thank Reviewer 2 for helping us improve this facet of our study.

Our objective is here to investigate whether a biologically plausible learning mechanism – namely self-supervised learning on a small amount of data – suffices to account for the functional organization of speech processing learnt in the brain.

## Approach

For this, we analyze the fMRI of n=412 participants (compared to n=6 for Vaidya, Jain & Huth and n=8 for Kell et al) from n=3 different native languages (compared to n=1 in previous encoding studies). We compare these brain responses to those of different models, all trained with the same architecture but with different objectives (supervised vs. unsupervised) and different datasets (600 hours of either non-speech, non-native or native speech sounds). A minimal sketch of the voxel-wise analysis underlying these comparisons is given below.
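The sketch shows the backbone shared by these comparisons, and by the TIMIT probing behind Table D: a cross-validated ridge regression from layer activations to each target, scored with a Pearson correlation. Array shapes, the grid of alphas and the fold count are illustrative, not the exact values of our pipeline:

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def neural_predictivity(X, Y, n_splits=5):
    """Cross-validated ridge regression from activations to brain responses.

    X: (n_samples, dim) model activations, aligned to the fMRI sampling rate
    Y: (n_samples, n_voxels) BOLD responses (or, for the probing analysis of
       Table D, any stimulus feature such as the MEL spectrogram)
    Returns (n_voxels,) Pearson correlations, averaged across folds.
    """
    r = np.zeros(Y.shape[1])
    # Contiguous (unshuffled) folds, appropriate for fMRI time series.
    for train, test in KFold(n_splits=n_splits).split(X):
        model = RidgeCV(alphas=np.logspace(-3, 3, 7)).fit(X[train], Y[train])
        pred = model.predict(X[test])
        # Pearson r, computed column by column on the held-out samples.
        pz = (pred - pred.mean(0)) / (pred.std(0) + 1e-8)
        yz = (Y[test] - Y[test].mean(0)) / (Y[test].std(0) + 1e-8)
        r += (pz * yz).mean(0) / n_splits
    return r
```

The same recipe applies whether the target is a voxel's BOLD time course or a linguistic feature, which is why the encoding and probing results can be compared layer by layer.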
By contrast, neither Kell et al nor Millet & King study self-supervised models, and Vaidya, Jain & Huth did not evaluate how different training led the same architecture to be more-or-less similar to the brain.

## Contributions

The strength of this approach (considerably more data, minimal comparison between models) allows us to reveal three main results:

* Result 1: Self supervision leads the model to be more similar to the brain than supervision. Unlike Vaidya, Jain & Huth (arXiv 2022), we make this comparison with the same architecture, and the same amount of data, and can thus be confident that it is the learning objective that proved important.

* Result 2: The hierarchy of this self-supervised model is similar to the brain’s. To our knowledge, this hierarchy analysis leads to much clearer results than previous studies. First, we reveal, for each voxel of the whole cortex, its preferred layer in the model. By contrast, Kell et al (2018) only investigate the preference for middle versus deep layers within the temporal lobe; Vaidya, Jain & Huth explore a similar issue and draw similar conclusions to ours by inspecting the loadings of a PCA. Second, we discover new hierarchical gradients, e.g. within the prefrontal cortex, a region tightly linked to human cognition, learning and control and whose functional organization remains largely unknown (Fuster, 2015).

* Result 3: For the first time, we provide empirical evidence of language-specific representations of speech in the brain: i.e. our self-supervised models best align with the brain when they have been exposed to the native language of the participants. As previous studies were monolingual, this language_model x language_participants comparison could not be tested.

Overall, we believe that Vaidya, Jain & Huth’s study is strong and corroborates several of our findings. Despite the fact that this concurrent study was put on arXiv after the conference submission deadline, we agree that our manuscript would benefit from adding the above discussion.

| | Kell et al | Millet & King | Vaidya, Jain & Huth | Ours |
|-------------------------------------|-----------------|------------------|-----------------|------------------|
| Released/Accepted before submission | yes | no | no | — |
| # Subjects | 8 | 102 | 6 | 417 |
| # Languages | 1 | 1 | 1 | 3 |
| Scope | Temporal lobe | Full brain | Full brain | Full brain |
| Stimuli | Auditory scenes | Speech | Speech | Speech |
| Models | Supervised | Supervised | Self-supervised | Self-supervised |
| Comparison | Architectures | Learning schemes | Architectures | Learning schemes |

> Please cite related work appropriately. The authors have missed several relevant citations from PIs like Jack Gallant, Leila Wehbe, Alex Huth, Tom Mitchell, Christopher Honey, Jonathan Brennan, Alona Fyshe and more. For example, papers that were the first to build language encoding models or present the style of analyses reported here are missing.

We thank Reviewer 2 for pointing this out. Note that most of these authors are cited: Jack Gallant (n=3), Leila Wehbe (n=2), Alex Huth (n=2), Christopher Honey (n=2), Jonathan Brennan (n=1).

Nevertheless, we agree that several seminal papers are missing.
We thus added Mitchell et al (Science 2008), Wehbe & Mitchell et al (PLoS One 2014), Jain & Huth (Neurips 2018) and Jain, Huth et al (Neurips 2020), as well as the recent Vaidya, Jain & Huth preprint.

We thank Reviewer 2 for their thorough review, as well as for their valuable questions and remarks, which we now address below.

> Weaknesses:
> 1. Significance/Originality: The central premise of this paper lies in exploring self-supervised architectures for building encoding models. The results presented include 1) encoding model performance maps 2) layer wise selectivity maps 3) encoding performance viewed through a cascade of model selectivity (native > non-native....> chance). However, due to several reasons highlighted in detail below (like no discussion/analysis on what information is represented at different layers and how does this explain patterns of variation in layer selectivity, for example) and due to the identical results+analyses in prior work on supervised models and concurrent work on self-supervised models, the novelty of this work is not clear.
> - Kell et al., Neuron 2018 - exploring layer selectivity for diff supervised task architectures + specificity for speech vs. non-speech
> - Millet et al., arXiv 2021 - exploring many of the analyses presented here but with supervised models
> - Vaidya, Jain & Huth., ICML 2022 - exploring 4 self-supervised models, layer selectivity and probing to infer information at each layer

We agree that these three studies are relevant to the present work, and indeed devote a discussion paragraph to the supervised convnets studied in Kell et al (Neuron 2018) and Millet & King (arXiv 2021).

For context, Millet & King (arXiv 2021) is a preprint that was not accepted; we here capitalize on the criticisms it received on OpenReview to build the present study.

Furthermore, Vaidya, Jain & Huth (arXiv 2022) was released after the present submission. According to the [Neurips guidelines](https://neurips.cc/Conferences/2022/PaperInformation/NeurIPS-FAQ), this study, which shares several of our conclusions, cannot be used to discard the novelty of our work: “Papers appearing less than two months before the submission deadline are generally considered concurrent to NeurIPS submissions. Authors are not expected to compare to work that appeared only a month or two before the deadline.”

For completeness, we will nevertheless clarify how the present paper makes significant contributions beyond these three studies.

> “in lines 290-300, the authors discuss several studies on the functional specificity of different brain regions during speech processing, functional gradients in PFC etc. However, how is this being inferred for the data? I may have missed the link but iiuc this is not supported by the evidence presented.”

The functional specificity corresponds to the voxel-wise comparison between random, non-speech, non-native and native models described in Figure 4. The functional gradients correspond to the voxel-wise analysis of layer specificity displayed in Figure 3.
We now added the references to the figure in the corresponding discussion paragraph.

> “the functional hierarchy of its transformer layers aligns with the cortical hierarchy of speech in the brain, and reveals the whole-brain organisation of speech processing with an unprecedented clarity” - Why is the layer-wise selectivity important?
This result suggests that self-supervised learning suffices to generate a hierarchy of representations similar to the brain's. The layer-wise selectivity is a simple method to reveal this hierarchical organization. The concurrent submission of Vaidya, Jain & Huth (2022) reaches a similar conclusion – although with significantly noisier cortical maps – using an alternative method based on the interpretation of PCA coefficients.

Finally, and as detailed above, we now clarify the information encoded in each layer of wav2vec 2.0, and show that the features specific to language are encoded in the middle and deep layers.

> “Furthermore, it appears that the upper-middle layers have high correlations in several regions across cortex, albeit not the highest. What do the authors make of this?”

This is an interesting remark. We will add a discussion of the features encoded in each region. In particular:

*“Interestingly, the word and sentence-level features are encoded deeper in the supervised network (best layer=18 in Table D) compared to the unsupervised network (best layer=14), which suggests that self-supervised learning generates a reservoir of representations in its middle layers, a reservoir which may partly overlap with the labels used in supervised learning.”*

> - “It is interesting to compare supervised vs. unsupervised models in terms of the information they capture and consequently, their ability to predict brain data. However:
> (i) given the implausibility of the learning mechanisms employed in these networks (as noted in the paper too; Lines 301-312)
> (ii) the fact that they do not follow the same patterns of errors/do not mimic human behavior
> (iii) the fact that the paper currently does not explore the types of information captured (what makes one model perform better than the other? how is this related to the purported functions of different regions?),
> why is this a realistic model of speech perception? [...] is it fair to state that we are moving towards brain-like models? (more detailed arguments in Guest & Martin, 2021) For example, lines 259-261: “Here, we test whether self-supervised learning on a limited amount of speech suffices to yield a model functionally equivalent to speech perception in the human brain.”

We agree that the goal expressed in this last sentence may be excessive, and that our discussion on the remaining gaps between wav2vec 2.0 and the brain could be improved. We will indicate in the extended version of the revised manuscript:

*“Here, we test whether self-supervised learning on a limited amount of speech effectively accounts for the organization of speech processing in the human brain as measured with fMRI.”*

Furthermore, we propose to amend the discussion as follows:

*“Our study shows that a self-supervised model, trained on a remarkably small amount of unlabelled data, effectively accounts for cortical responses to speech. This learning scheme is considerably more realistic than the classic supervised models, and thus promises to help understand how the human brain learns and organizes speech processing.*

*However, these results do not imply that wav2vec 2.0 has learned all of the speech representations used by the brain. Indeed, several major gaps remain between self-supervised speech models like Wav2Vec 2.0 and the brain. First, its transformer layers are not temporally constrained: each layer can access all elements within the contextual window.
This differs from the necessarily recurrent nature of processing in the brain. Second, Wav2Vec 2.0 shows differences with humans in recent behavioral studies: it shows a higher sensitivity to band-pass filtering and an under-reliance on fine temporal structures (Weerts et al. 2021). It also fails to predict categorical effects on perception (Millet et al. 2021). Third, recent studies show that Wav2Vec 2.0 encodes significantly less semantic information than text-based models (Pasad et al. 2021) (Vaidya, Jain & Huth 2022). Our analyses suggest that learning allows Wav2Vec 2.0 to capture some lexical features in its deep layers (Figure S1). However, whether these layers also capture complex syntactic structures, such as recursive syntactic trees, remains an open question (Lakretz et al 2022). We speculate that these limitations may be due to the time scales of Wav2Vec 2.0 which, unlike humans, learns very short-term representations of speech.*

*Overall, these differences likely explain why the neural predictivity scores of Wav2Vec 2.0 remain substantially lower than our noise-ceiling (19% on average, and up to 74% in Heschl's gyrus, Table S1). Together with epistemological accounts (Guest and Martin 2021) and concurrent results (Vaidya, Jain & Huth 2022), our results highlight the remaining gap between current self-supervised models and the human brain.”*

> “Claims of supervision or lack thereof that match human ability:
> - Firstly, while the effect sizes are significantly different, these differences are still considerably small between the random/non-speech/non-native/native models.”

Reviewer 2 is right that the present effect sizes are relatively small.

First, the order of magnitude of the main effects is similar to those of previous studies from multiple groups (Huth et al Nature 2016, Jain & Huth Neurips 2018, Toneva & Wehbe Neurips 2019, Caucheteux & King, Nature Comm 2022). The low effect sizes are expected given that we here try to model individual voxels at the single TR level, which is notoriously noisy. For clarity, we have now performed a noise-ceiling analysis, and also report our results in proportion to what this analysis estimates to be explainable. The new results suggest that wav2vec 2.0 explains 19% of the noise ceiling on average across the whole cortex, and up to 74% in Heschl's gyrus.

Second, many of the effects reported in our study are averaged across all cortical voxels, for simplicity and to avoid cherry-picking regions of interest. Because many voxels do not respond much to speech (Malik-Moraleda et al, Nature Neuroscience 2022), these averages expectedly become very small. For clarity, we add a supplementary table for each critical comparison (non-speech vs random, non-native vs non-speech and native vs non-native). The results show that the relative improvements can reach 26%, 13% and 9% in IFG, for the non-speech, non-native and native models, respectively (Table E).

| Model A | Model B | Heschl | STG | STS | IFG |
|:-----------|:-----------|:-----------|:------------|:------------|:------------|
| Non speech | Random | 6% +/- 0.8 | 25% +/- 6.0 | 27% +/- 3.9 | 26% +/- 5.0 |
| Non native | Non speech | 3% +/- 0.4 | 14% +/- 6.0 | 9% +/- 1.7 | 13% +/- 2.4 |
| Native | Non native | 2% +/- 0.3 | 5% +/- 2.1 | 9% +/- 1.3 | 9% +/- 1.9 |

**Table E.** Relative improvement of the neural predictivity scores for each model pair.
Precisely, for each pair of models (model A, model B), we compute the relative improvement in neural predictivity scores of model A over model B ((A-B)/B). Scores are averaged across subjects, and across voxels within each region of interest.

| | Avg | Top10 | Heschl | STG | STS | IFG | Motor |
|:---------------------|:-------|:--------|:---------|:-------|:-------|:-------|:--------|
| Random Wav2Vec 2.0 | 13.9% | 29.0% | 66.9% | 32.0% | 21.8% | 15.9% | 11.9% |
| Non-Speech | 16.4% | 33.9% | 71.0% | 36.8% | 26.9% | 19.0% | 11.7% |
| Non-Native | 17.6% | 35.9% | 73.0% | 39.0% | 29.1% | 21.0% | 12.9% |
| Native, Supervised | 18.3% | 36.7% | 74.2% | 39.6% | 29.8% | 21.2% | 13.6% |
| Native, Unsupervised | 18.8% | 37.9% | 74.4% | 40.3% | 31.3% | 22.8% | 13.8% |
| Noise ceiling | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% |

**Table S1. Neural Predictivity with noise ceiling normalization.** Neural predictivity divided by the noise ceiling, for the Narrative dataset, on average across all voxels (`Avg`), for the 10% best voxels of the noise ceiling (`Top10`, Figure S2) and the voxels of five regions of interest.

| | Avg | Top10 | Heschl | STG | STS | IFG | Motor |
|:---------------------|-------:|--------:|---------:|-------:|-------:|-------:|--------:|
| Random Wav2Vec 2.0 | 0.0189 | 0.0693 | 0.1921 | 0.0712 | 0.0443 | 0.0237 | 0.0113 |
| Non-Speech | 0.0223 | 0.0804 | 0.2046 | 0.0814 | 0.0545 | 0.0283 | 0.0112 |
| Non-Native | 0.0239 | 0.0849 | 0.2106 | 0.0859 | 0.0589 | 0.0315 | 0.0123 |
| Native, Supervised | 0.0247 | 0.0865 | 0.2133 | 0.0871 | 0.0602 | 0.0318 | 0.013 |
| Native, Unsupervised | 0.0254 | 0.0893 | 0.2144 | 0.0886 | 0.0635 | 0.0343 | 0.0131 |
| Noise ceiling | 0.1174 | 0.219 | 0.2873 | 0.1808 | 0.1961 | 0.149 | 0.0938 |

**Table S2. Neural Predictivity without noise ceiling normalization.** Same as Table S1, but without dividing by the noise ceiling estimates.

> Question 6: What is the amount of data per participant in each language & stimulus set? iiuc, the English datasets have far more data per participant than French. How does the paper circumvent differences in the quality of model fits due to different amounts of data being used? (especially for cross-language comparisons)

The 48 English, 33 Mandarin and 28 French participants who listened to the Little Prince were scanned for 94, 90 and 97 minutes of effective speech, respectively.

In addition, we will specify:
*“The 303 [English] participants [from the Narrative datasets] listened to different subsets of the audio, from 7 to 98 min of fMRI data with speech (mean=26min)”*

All Wav2Vec models are trained with the same amount of data (600 hours). The mapping between a Wav2Vec 2.0 model and the brain is done at the *participant* level (i.e. independently of all other participants). The cross-language comparisons are applied within each participant: i.e., for each English subject, w2v_english is compared to two non-native models: w2v_mandarin and w2v_french; and we use second-level statistics across participants to report the summary metrics.

> Question 3: How were models trained for the ABX task? What are the specifications of the dataset and the task? Which layers were used to do this?

The ABX task does not require additional training. We agree with Reviewer 2 that the description of section 2.4 could be improved.
We will indicate in the extended revised version of the manuscript (using the extra page allowed after the discussion period):

*“To compare the phonetic representations of our models to those of humans, we use the method of Schatz et al 2016 and forced-choice discrimination judgements collected from online participants and publicly available (CC0: https://docs.cognitive-ml.fr/perceptimatic/).
Specifically, for each triplet, humans had to judge whether the stimulus X was closer to A or B. Analogously, we computed the Euclidean distance in the most discriminative layer (here transformer layer 5) to determine whether X was closer to A or B. Additional data, analyses and model-human comparisons can be found in (Millet and Dunbar 2022).”*

> Question 4: ABX task results: I imagine that the authors ran clustering for the English/French/non-speech data independently to get different quantization targets for each. Given that these targets roughly correspond to phonemes, is it surprising that the English model was better at distinguishing English phonemes than French? Similarly, is it surprising that English models outperform non-speech or untrained models that don’t have English quantization targets or have not learned how to use them respectively?

This is correct; all models are trained independently and thus end up with different quantized targets. These quantized targets, however, are much more numerous and substantially shorter than phonemes, and insufficient research has been done to establish how well-aligned they are with phonemes in general; analyses of a model trained on English by Baevski et al. 2021 (Appendix D) suggested an imperfect relationship to phonemes. Precisely, they suggest that English models better distinguish English phonemes than French ones, because the models are optimized to capture the statistics of speech signals in the respective languages, including sub-phonemic and suprasegmental properties. Here, we formally test this prediction. It is also interesting to highlight that random, and non-speech models (which also have a quantization module) actually perform quite well – a result that is consistent with the fact that these models account for a fairly large portion of the brain responses to speech.

> Question 5: In Fig. 4D, it is surprising that only auditory cortex seems to care about language/speech specific features although we know that a large part of cortex processes natural speech. Can the authors comment on why this might be and perhaps visualize the individual correlations for all 5 conditions in a few example voxels per color?

We agree that this point deserves additional comments, and thus add two supplementary tables with the individual correlations for all of the five conditions, with (Table S1) or without noise ceiling normalization (Table S2). We also propose to add the following paragraph to the Results section (in the extended version of the revised manuscript):

*“Our analyses suggest that the native speech features are represented relatively high in the auditory processing hierarchy, i.e. in the superior temporal sulcus and gyrus (STS/G). This suggests that these regions are uniquely sensitive to the fine auditory structures specific to each language, and that higher-level regions, such as the prefrontal and parietal cortices, may not be, or may be less, tuned to them.
We speculate that this new result fits the classical view of modular processing in the cortex (e.g. Fodor, 1983).
For example, the Global Workspace Theory (Dehaene & Changeux 2011) predicts that higher-level regions (PFC) would only receive the output of the sensory hierarchy, and would thus be blind to many of its lower levels.”*

> Question 1: Selectivity for deeper layers in primary AC: In figure 3, why is primary auditory (or what is possibly Heschl’s gyrus) marked in red, i.e., selective for deep layers? It would be useful to visualize the correlations for different layers in a few voxels to see if the curve is relatively flat and if this is an artifact of using a circular mean.

As indicated in Figure 3A-C, the primary auditory cortices are primarily correlated with early (not deep) layers, as expected. In supplementary materials, we now report the neural predictivity score for each layer, averaged across subjects, for two brain regions associated with acoustic processing (Heschl’s gyrus and sulcus), and three regions associated with higher-level language processing (IFG) (Figure S4, Table below). While in Heschl the voxels are mainly tuned to the first transformer layer, in IFG the voxels are mainly tuned to the seventh and eighth layers of the transformer. For clarity, we here provide these results across the main regions of interest.

| | Heschl G | Heschl S | IFG Tri | IFG Op | IFG Orb |
|:--------|:---------------------|:---------------------|:---------------------|:---------------------|:---------------------|
| Conv. 1 | 0.090 +/- 0.0034 | 0.094 +/- 0.0040 | 0.012 +/- 0.0012 | 0.013 +/- 0.0011 | 0.012 +/- 0.0013 |
| Conv. 2 | 0.103 +/- 0.0043 | 0.108 +/- 0.0049 | 0.018 +/- 0.0013 | 0.017 +/- 0.0011 | 0.015 +/- 0.0015 |
| Conv. 3 | 0.105 +/- 0.0044 | 0.110 +/- 0.0050 | 0.020 +/- 0.0014 | 0.019 +/- 0.0012 | 0.018 +/- 0.0016 |
| Conv. 4 | 0.103 +/- 0.0045 | 0.110 +/- 0.0049 | 0.019 +/- 0.0014 | 0.018 +/- 0.0011 | 0.019 +/- 0.0015 |
| Conv. 5 | 0.112 +/- 0.0043 | 0.115 +/- 0.0048 | 0.023 +/- 0.0014 | 0.021 +/- 0.0011 | 0.023 +/- 0.0014 |
| Conv. 6 | 0.093 +/- 0.0040 | 0.100 +/- 0.0044 | 0.019 +/- 0.0013 | 0.017 +/- 0.0010 | 0.019 +/- 0.0014 |
| Conv. 7 | 0.109 +/- 0.0039 | 0.113 +/- 0.0043 | 0.023 +/- 0.0014 | 0.020 +/- 0.0011 | 0.022 +/- 0.0014 |
| Tr. 1 | **0.184 +/- 0.0057** | **0.190 +/- 0.0068** | 0.029 +/- 0.0016 | 0.027 +/- 0.0013 | 0.028 +/- 0.0015 |
| Tr. 2 | 0.183 +/- 0.0056 | 0.189 +/- 0.0066 | 0.032 +/- 0.0015 | 0.027 +/- 0.0013 | 0.029 +/- 0.0016 |
| Tr. 3 | 0.182 +/- 0.0057 | 0.188 +/- 0.0067 | 0.034 +/- 0.0016 | 0.028 +/- 0.0013 | 0.030 +/- 0.0016 |
| Tr. 4 | 0.181 +/- 0.0058 | 0.186 +/- 0.0067 | 0.034 +/- 0.0016 | 0.028 +/- 0.0013 | 0.030 +/- 0.0017 |
| Tr. 5 | 0.179 +/- 0.0058 | 0.183 +/- 0.0068 | 0.035 +/- 0.0016 | 0.029 +/- 0.0013 | 0.031 +/- 0.0017 |
| Tr. 6 | 0.176 +/- 0.0056 | 0.181 +/- 0.0066 | 0.038 +/- 0.0017 | 0.031 +/- 0.0013 | 0.032 +/- 0.0016 |
| Tr. 7 | 0.174 +/- 0.0057 | 0.181 +/- 0.0067 | **0.038 +/- 0.0017** | **0.031 +/- 0.0013** | 0.033 +/- 0.0017 |
| Tr. 8 | 0.176 +/- 0.0057 | 0.183 +/- 0.0066 | 0.037 +/- 0.0017 | 0.031 +/- 0.0013 | **0.033 +/- 0.0018** |
| Tr. 9 | 0.173 +/- 0.0056 | 0.180 +/- 0.0066 | 0.036 +/- 0.0017 | 0.030 +/- 0.0013 | 0.033 +/- 0.0017 |
| Tr. 10 | 0.157 +/- 0.0053 | 0.160 +/- 0.0061 | 0.031 +/- 0.0017 | 0.027 +/- 0.0013 | 0.029 +/- 0.0017 |
| Tr. 11 | 0.120 +/- 0.0046 | 0.122 +/- 0.0054 | 0.024 +/- 0.0014 | 0.022 +/- 0.0012 | 0.023 +/- 0.0015 |
| Tr. 12 | 0.118 +/- 0.0043 | 0.119 +/- 0.0049 | 0.024 +/- 0.0015 | 0.021 +/- 0.0012 | 0.024 +/- 0.0016 |

**Table F**: Neural predictivity scores for each layer of the unsupervised Wav2Vec 2.0 model, averaged across subjects and across the voxels of five regions of interest. See Figures 3C and S4 for additional results.

> Question 2: What do the individual participant maps look like for figure 3? Is the group-averaging introducing smoothness in the gradients that is not visible in individual maps (due to the circular mean estimate being noisy)?

Reviewer 2 is correct that the population-level analysis can introduce smoothness that would not be present at the individual subject level, as indicated in our discussion: *“it should be noted, however, that our results aggregate a large cohort of individuals, which could mask a more modular organization at the individual level.”*

This consideration is not specific to the circular mean, which is here used to limit a regression-to-the-mean artifact.
We believe that the study of inter-individual variability is outside the scope of the present research, and requires the development of advanced methods based on tiling (Huth et al Nature 2016) and/or optimal transport (Thual et al, arXiv 2022).

We thank Reviewer 1 for their thorough review, as well as for their valuable questions and remarks, which we will address below.

> 600 hours is only 25 days, which seems short compared to the developmentally relevant time scales for humans, so could you please elaborate on this comparison?

We agree that this number (600 h) can be confusing, and thus propose to systematically replace it by “600 hours of effective speech”.
This quantity does represent much more than 25 days of existence, as children are mainly exposed to non-speech sounds. Note that this estimate varies across cultures and individuals and should thus be taken as a coarse approximation (see the Supplementary Table 2 of Dupoux, Cognition 2018, for more details).

> The Discussion section mentions some tasks that can seemingly be done by humans with a few hundred hours of speech, but those tasks are very low level tasks which do not represent all facets of human speech perception.

Reviewer 1 is correct that the tasks originally discussed only represent a small facet of human speech.
In particular, a few hundred hours of speech appear sufficient for children to learn complex syntactic structures (Dupoux 2018; Friedmann, Belletti & Rizzi, 2021).
To clarify this issue, we will amend the discussion as follows in the revised manuscript (using the supplementary page authorized *after* the discussion period):

*“Our study shows that a self-supervised model, trained on a remarkably small amount of unlabelled data, effectively accounts for cortical responses to speech. This learning scheme is considerably more realistic than the classic supervised models, and thus promises to help understand how the human brain learns and organizes speech processing.
However, these results do not imply that wav2vec 2.0 has learned all of the speech representations used by the brain. Indeed, several major gaps remain between self-supervised speech models like Wav2Vec 2.0 and the brain. [...] Third, recent studies show that Wav2Vec 2.0 encodes significantly less semantic information than text-based models (Pasad et al 2021, Vaidya, Jain & Huth 2022).
Our analyses suggest that learning allows Wav2Vec 2.0 to capture some lexical features in its deep layers (Figure S1, Table S4). However, whether these layers also capture complex syntactic structures, such as recursive syntactic trees, remains an open question (Lakretz et al 2022).”*

> Relatedly, what would the performance (e.g. neural predictivity etc.) look like if you used pretrained self-supervised models trained on much more data, say, even more data than in a human lifetime (thus not necessarily restricting yourself to 600 hours).

We agree that extending our analyses to models trained on larger amounts of speech data would improve the conclusions of our study. Consequently, we now tested an additional set of five self-supervised models pretrained on more speech data (from 10K to 436K hours). These new analyses show that the brain is best aligned with a large model of 1 billion parameters, trained on 436K hours of multilingual speech. Interestingly, however, this neural predictivity is remarkably close to the one obtained with our own training (106% of our model trained with only 600 hours, where the same architecture “only” reaches 70% before training; Table A, new Figure S3). Furthermore, when controlling for the number of parameters, using more data does not appear to yield a better neural predictivity score. For instance, our own model based on 600 hours of effective English speech is comparable to a model trained on 53K hours of English speech (101%, Table A). Similarly, a model trained on 10K hours of multilingual speech marginally outperforms a model trained on 100K hours at predicting brain activity (101%, Table A).

These results thus strengthen our conclusion: a small amount of effective speech suffices for Wav2Vec 2.0 to efficiently account for the functional organization of speech processing in the brain.

> supervised vs self-supervised: I do have to note phoneme recognition is a very low-level task, perhaps a higher level classification task would give better results for the supervised model (which is already pretty close to the self-supervised one, to begin with).

We agree that phoneme discrimination is a low-level task and that it would be interesting to explore higher-level tasks. Consequently, we now implemented a new supplementary analysis to explore this issue (Table C below). We propose to add the following paragraph in the Supplementary Materials:

*“We had originally chosen a relatively low-level task (automatic speech recognition, ASR) for our supervised models, because it is widely used in the speech community, and many models were developed to solve it (Lee and Hon 1989; Chan, Jaitly, Le and Vinyals 2016; Amodei et al 2016; Baevski et al 2020).
However, it is possible that higher-level classification tasks could give better neural predictivity scores. To explore this issue, we compare the neural predictivity scores of a Wav2Vec 2.0 fine-tuned for ASR to those obtained by a Wav2Vec 2.0 fine-tuned on Language Identification, and on Keyword Spotting (Yang et al. 2021). We find that ASR leads to the highest neural predictivity scores (Table C). A myriad of other high-level tasks remain to be explored, however, and include emotion recognition, prosody tracking and syntactic processing. To our knowledge, however, these features are typically evaluated on very small annotated datasets, and constitute a basis for zero-shot evaluations rather than supervised learning (e.g. Nguyen et al.
2022).”*

| Model | Training task | Neural Predictivity |
|:----------------------|:------------------------------------------|:----------------------|
| Wav2Vec 2.0 + LC | SSL + Language Classification | 0.0226 +/- 0.0007 |
| Wav2Vec 2.0 + KS | SSL + Keyword Spotting | 0.0256 +/- 0.0008 |
| Wav2Vec 2.0 + ASR (960h) | SSL + Automatic Speech Recognition (960h) | 0.0262 +/- 0.0008 |
| Wav2Vec 2.0 + ASR (100h) | SSL + Automatic Speech Recognition (100h) | 0.0264 +/- 0.0008 |

**Table C: Neural predictivity for networks trained with different supervised objectives.** Neural predictivity for networks pre-trained using SSL, and fine-tuned with Automatic Speech Recognition (ASR) on either 100 hours or 960 hours of English speech, on Keyword Spotting (KS, Yang et al. 2021) and Language Classification (LC).

> ABX accuracy of the models is much higher than human accuracy, which suggests these models might not be behaviorally adequate models of human speech processing (and possibly of human auditory processing more generally).

This is an important remark. We will add the following paragraph to the discussion in the extended version of the revision (using the additional page):

*“Several qualitative and quantitative similarities have been recently observed between the behavior of wav2vec 2.0 and humans (Millet and Dunbar 2022, Adolfi et al 2022). While our ABX results strengthen this body of evidence, the ABX accuracy of the model is significantly higher than participants’. This quantitative difference may be partially explained by the fact that participants – and online participants in particular – undergo fluctuating attention, and adopt strategies which can negatively impact performance (Humphreys, 1939). More systematic comparisons may, however, be necessary to identify and understand the core differences that remain between humans and this model (Adolfi et al 2022).”*

> I do have to note the correlations shown in Fig. 2B-C are very low (R). This may reflect the intrinsic SNR limitations of the fMRI signal. This makes me wonder if Fig. 1C is cherry-picked.

Reviewer 1 is correct that the R values are relatively low. These effect sizes are similar to those obtained in previous fMRI studies, using different datasets and different models (Huth et al Nature 2016; Toneva & Wehbe 2019; Caucheteux & King 2021; Shain & Huth 2019).
Figure 1C was designed for illustrative purposes. We had originally selected the best voxel on the test set of the first story of the Narrative dataset (“Pieman”). To provide a more representative illustration, we now changed Figure 1C to three voxels randomly selected from the 10th percentile of best voxels identified by the noise ceiling analysis. These voxels reach, on average, R=0.17, R=0.38 and R=0.21 in this story.

We thank R1 for pointing this out, as it both provides a fairer representation of our results and clarifies that the neural predictivity is computed on each voxel separately.

> Relatedly, what is the noise ceiling performance in these recordings? That is, if I tried to predict the fMRI of one participant from the fMRIs of other participants, what would the performance look like?

This is a good point. We now computed the noise ceiling on 290 subjects who listened to the same stories, by evaluating how well we can predict each subject from the average fMRI of all of the other subjects (Appendix A6). As expected, the “corrected” results provide reasonably high neural predictivity scores in most cortical areas; a minimal sketch of this estimate is given below.
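The sketch assumes array shapes chosen for illustration; the exact procedure is described in Appendix A6:

```python
import numpy as np

def noise_ceiling(bold):
    """Leave-one-subject-out noise ceiling.

    bold: (n_subjects, n_trs, n_voxels) responses to the same audio stories.
    Returns (n_subjects, n_voxels): for each subject and voxel, the Pearson
    correlation with the average response of all remaining subjects.
    """
    n_subjects = bold.shape[0]
    ceiling = np.empty((n_subjects, bold.shape[2]))
    for s in range(n_subjects):
        # Average BOLD response of every subject except the current one.
        others = bold[np.arange(n_subjects) != s].mean(axis=0)
        # Correlate each voxel's time course with this group average.
        x = (bold[s] - bold[s].mean(0)) / (bold[s].std(0) + 1e-8)
        y = (others - others.mean(0)) / (others.std(0) + 1e-8)
        ceiling[s] = (x * y).mean(axis=0)
    return ceiling
```

The `Ratio` row of the table below then simply divides each model's neural predictivity by this per-voxel ceiling.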
The following table provides the neural predictivity score obtained by Wav2Vec 2.0 (600h), versus the noise ceiling obtained by predicting each subject from the average of all other subjects.

| | Average | Top10 | Heschl | STG | STS | IFG | Motor |
|:--------------|:---------------|:-----------------|:---------------|:---------------|:---------------|:---------------|:---------------|
| Unsupervised | 0.03 +/- 0.001 | 0.09 +/- 0.002 | 0.21 +/- 0.007 | 0.09 +/- 0.003 | 0.06 +/- 0.002 | 0.03 +/- 0.001 | 0.01 +/- 0.001 |
| Supervised | 0.02 +/- 0.001 | 0.09 +/- 0.002 | 0.21 +/- 0.007 | 0.09 +/- 0.003 | 0.06 +/- 0.002 | 0.03 +/- 0.001 | 0.01 +/- 0.001 |
| Noise ceiling | 0.12 +/- 0.006 | 0.22 +/- 0.006 | 0.29 +/- 0.008 | 0.18 +/- 0.006 | 0.20 +/- 0.006 | 0.15 +/- 0.006 | 0.09 +/- 0.006 |
| Ratio | 0.19 +/- 0.006 | 0.38 +/- 0.010 | 0.74 +/- 0.025 | 0.40 +/- 0.013 | 0.31 +/- 0.011 | 0.23 +/- 0.010 | 0.14 +/- 0.014 |

**Table B: noise ceiling analysis.** Neural predictivity score averaged across all cortical voxels (`Average`), across the 10% voxels with the highest noise ceiling (`Top10`) and the voxels of different regions of interest, for the following models: the unsupervised and supervised Wav2Vec 2.0 models, the noise ceiling, and the noise ceiling ratio (for each voxel, the score of the unsupervised model, averaged across subjects, divided by that voxel's noise ceiling). Scores are averaged across all subjects and +/- refers to the SEM across subjects.

This new analysis suggests that Wav2Vec 2.0 captures 19% of the signal on average across all cortical voxels, 38% in the 10% least noisy voxels, and up to 74% in Heschl’s gyrus and sulcus (new Figure S2).
We have added the corresponding brain maps in Supplementary Figure S2, and will add this analysis to the manuscript together with the following paragraph in the extended revised version:

*“Overall, these differences likely explain why the neural predictivity scores of Wav2Vec 2.0 remain substantially lower than our noise-ceiling (19% on average, and up to 74% in Heschl’s gyrus and sulcus, Table B, Figure S2). Together with epistemological accounts (Guest and Martin 2021) and concurrent results (Vaidya, Jain & Huth 2022), our results highlight the remaining gap between current self-supervised models and the human brain.”*

> I understand that the differences in Fig 4C are significant, but they are tiny differences. Shouldn’t I interpret this as saying the non-native model, for example, is actually a pretty good model of the native auditory cortex?

We agree with Reviewer 1. These differences, while highly significant ($p<10^{-18}$ in STS, $p<10^{-7}$ in Heschl), are small, and thus suggest that the non-native model already provides a decent model of the auditory cortex. In fact, the random model and the non-speech model reach 86% and 96% of the A1/A2 neural predictivity score obtained by the native model, respectively, suggesting that they, too, provide fairly decent models of the auditory cortex.

Consequently, we propose to indicate:

*“[...] While surprising at first, this result could, in retrospect, have been expected: the auditory cortex is continuously bombarded by non-speech input. Consequently, many non-native – and indeed many non-speech – representations need to be processed and learnt by these areas.
These elements thus highlight the importance of using large amounts of neuroimaging data to effectively compare minimally different models of speech in natural settings.”*

> Whatever makes the auditory cortices of, say, Mandarin speakers different from the auditory cortices of French speakers doesn’t seem to be captured by these models (or perhaps more likely by fMRI itself).

On average in Heschl’s gyrus and sulcus, voxels *are* better modeled by the native than the non-native Wav2Vec 2.0 ($p < 10^{-7}$). It is true, however, that this comparison does not hold at the single-voxel level after correction for multiple comparisons (Figure 4C). By contrast, this native versus non-native difference is robust both at the region and the single-voxel level in the superior temporal sulcus and gyrus (Figure 4D).

We propose to add the following paragraph to the discussion:

*“The representations specific to the native model were located in the superior temporal sulcus and gyrus (STS/G). Interestingly, these areas are known to represent phonetic features (Mesgarani & Chang, 2014). Together, these results thus confirm that, and now show where, phonetic representations are shaped by our experience (Liberman, Harris, Kinney, and Lane 1964; Werker and Tees 1984; Kondaurova and Francis 2008). It should be stressed, however, that the effect sizes of this analysis, while highly significant, are small (Figure 4). In fact, the random model and the non-speech model reach 67% and 87% of the neural predictivity score obtained by the “native” model, respectively, in STS/G. While surprising at first [...]”*

> Finally, I also do have to note most of the results here (hierarchy, very slightly better prediction of native vs. non-native fMRI responses, etc.) are not really too surprising given what we know from earlier literature and prior expectations.

We agree with Reviewer 1 that several of the elements presently identified directly fit with the overall organization of speech processing that has been described over the years, including:
1. the identification of a network specific to speech processing (e.g. Malik-Moraleda et al, Nature Neuroscience 2022),
2. its hierarchical organization (e.g. Friederici 2017), and
3. the location of phonetic processing (e.g. Mesgarani and Chang, Science, 2014).

To date, however, no unsupervised model or learning rule had been proposed and demonstrated to effectively account for such functional organization. Our study precisely addresses this issue: a model trained without supervision on a small amount of speech can automatically capture the hierarchical organization, and the features specific to speech processing. Our new noise ceiling analysis also delineates a clear path for improvement: our current estimate suggests that, while 74% of the explainable signal in the primary auditory cortex can be accounted for by Wav2Vec 2.0 trained with only 600 hours of speech, more than 80% remains unaccounted for by the present model on average across the cortex.

### Summary of the reviews
We thank our reviewers for their careful reviews and helpful remarks. Overall, the reviewers praised the present study for its “sound” and “extensive” results (R1), “interesting observations” (R3) and its experiments that were “competently and carefully” executed, “with a large number of participants” (R2). The presentation was judged as “good” (R1,R2,R3), and the paper “well written and easy to read” (R2).
Yet, the ratings were ambivalent (borderline accept, weak accept and reject) because of
1. an insufficiently clear contribution (R2, R3),
2. a recent study that compromises the novelty of our results (R2),
3. concerns about the low scores and their interpretation (R1, R2, R3).

Our reviewers suggested several actionable experiments and discussion points to address these concerns, all of which we have now carried out.

### Contribution
As pointed out by R3, our primary contribution is neuroscientific, and thus targets the Neuroscience ("neural coding") and "ML for sciences" pillars of the [Neurips call](https://nips.cc/Conferences/2022/CallForPapers). Specifically, we successfully show that a self-supervised model (Wav2Vec 2.0) trained with a plausible amount of unlabelled speech effectively accounts for the hierarchical organization and language specificity of speech processing in the brain, hence marking an important step towards understanding a unique trait of the human species.

### Novelty
As R2 pointed out, a concurrent study by Vaidya, Jain & Huth was released on arXiv one week *after* the present submission. As now detailed below, the slightly different methods and data employed by these two studies (Vaidya, Jain & Huth: n=6 English participants; ours: n=412 English, Chinese, and French participants) lead to consistent and complementary results, and thus strengthen our original conclusions.

### Robustness and interpretation
Following our reviewers' suggestions, we now include:
1. A noise ceiling experiment showing that Wav2Vec 2.0 accounts for more than 70% of the explainable signal in some brain regions;
2. The analyses of ten new models, trained with more data and/or more parameters, which obtain only marginally better performances than Wav2Vec 2.0 trained with only 600 hours of effective speech;
3. Five new interpretation analyses of the features encoded in each layer of Wav2Vec 2.0, which show that phonetic and word representations (as opposed to MEL) specific to each language are learnt by the middle and deep layers of Wav2Vec 2.0.

We are grateful to our reviewers for their useful suggestions, which greatly improved and clarified our study. We now address their specific comments below.", " This paper uses self-supervised (and supervised) models trained with speech or auditory scene data to predict fMRI activity in the auditory cortices of English-, French-, and Mandarin-speaking participants. It shows that this can be done at a level higher than some baseline models (e.g. random untrained models) and that the models recapitulate some aspects of the neural organization, such as hierarchical processing. Strengths:

The experiments and analyses were done competently and carefully with a large number of participants, and the results seem to be sound.

Weaknesses: please see the questions and limitations sections. The authors claim 600 hours of data is comparable to the amount of auditory experience humans receive during their development. But 600 hours is only 25 days, which seems short compared to the developmentally relevant time scales for humans, so could you please elaborate on this comparison? The Discussion section mentions some tasks that can seemingly be done by humans with a few hundred hours of speech, but those are very low-level tasks which do not represent all facets of human speech perception. Relatedly, what would the performance (e.g. neural predictivity etc.)
look like if you used pretrained self-supervised models trained on much more data, say, even more data than in a human lifetime (thus not necessarily restricting yourself to 600 hours)?

I do have to note the correlations shown in Fig. 2B-C are very low (R). This may reflect the intrinsic SNR limitations of the fMRI signal. This makes me wonder if Fig. 1C is cherry-picked. A more representative example would be much better here (with an R similar to the average R). Relatedly, what is the noise ceiling performance in these recordings? That is, if I tried to predict the fMRI of one participant from the fMRIs of other participants, what would the performance look like?

Re supervised vs. self-supervised: I do have to note phoneme recognition is a very low-level task; perhaps a higher-level classification task would give better results for the supervised model (which is already pretty close to the self-supervised one, to begin with).

ABX accuracy of the models is much higher than human accuracy, which suggests these models might not be behaviorally adequate models of human speech processing (and possibly of human auditory processing more generally).

I understand that the differences in Fig 4C are significant, but they are tiny differences. Shouldn't I interpret this as saying the non-native model, for example, is actually a pretty good model of the native auditory cortex? Whatever makes the auditory cortices of, say, Mandarin speakers different from the auditory cortices of French speakers doesn't seem to be captured by these models (or perhaps more likely by fMRI itself).

Finally, I also do have to note most of the results here (hierarchy, very slightly better prediction of native vs. non-native fMRI responses, etc.) are not really too surprising given what we know from earlier literature and prior expectations.", " In this work, the authors build speech encoding models using wav2vec 2.0 and experiment with 4 different regimens:
1. Untrained network
2. Network trained on non-speech sounds
3. Network trained on non-native speech
4. Network trained on native speech

Their main findings are:
1. Representations from wav2vec 2.0 predict brain activity well in several regions. Although significantly different, the untrained networks have comparable effect sizes to the trained models.
2. Different regions across cortex are best predicted by different layers of the model, such that low-level regions like the auditory cortex are best predicted by the lower layers.
3. Much like humans, the models do better at a phoneme discrimination task on the native language than non-native. The non-speech networks do worse, followed by the untrained networks. Edit after rebuttal: I commend the authors' efforts in running new analyses and their thorough responses to the review. Consequently, I have increased my score by 1. However, at this point I cannot recommend the paper for acceptance, as some things stand out to me (more detail in follow-up comments):
- There is no established link between the different types of models tested, the layer-wise selectivity, and the probing result. To this end, it is hard to interpret what the posited function of each region is, or
- what new things did we learn about the brain/deep speech networks that weren't known previously? The paper seems largely confirmatory and the methods employed are not novel.
- some of the prediction correlations are surprisingly small even in areas robustly engaged during speech perception. Furthermore, some of the claims on selectivity for native vs.
non-native vs. non-speech info is based on extremely small, possibly insignificant differences. This makes interpretation hard.

###################################################################################################

Strengths:
1. Clarity: The paper is well written and easy to read.

Weaknesses:
1. Significance/Originality: The central premise of this paper lies in exploring self-supervised architectures for building encoding models. The results presented include 1) encoding model performance maps, 2) layer-wise selectivity maps, 3) encoding performance viewed through a cascade of model selectivity (native > non-native > ... > chance). However, due to several reasons highlighted in detail below (like no discussion/analysis of what information is represented at different layers and how this explains patterns of variation in layer selectivity, for example) and due to the identical results+analyses in prior work on supervised models and concurrent work on self-supervised models, the novelty of this work is not clear.
- Kell et al., Neuron 2018 - exploring layer selectivity for different supervised task architectures + specificity for speech vs. non-speech
- Millet et al., arXiv 2021 - exploring many of the analyses presented here but with supervised models
- Vaidya et al., ICML 2022 - exploring 4 self-supervised models, layer selectivity and probing to infer information at each layer
2. Under-explored/Limited analyses: Although the paper presents encoding model performance in every major analysis, the trends observed are not discussed further. For example, what information do different layers capture that could drive selectivity for lower layers in low-level brain regions vs. selectivity for deeper layers in high-level regions? Does this information emerge as a consequence of training? Does it emerge only when models are trained on speech? How about specificity of information for the native language? Without digging deeper into these questions, I am not sure of the significance of the results or how they inform our understanding of both speech models and the brain.
3. Overstating claims:
- Claims of supervision or lack thereof that match human ability:
 - Firstly, while the effect sizes are significantly different, these differences are still considerably small between the random/non-speech/non-native/native models.
 - It is interesting to compare supervised vs. unsupervised models in terms of the information they capture and, consequently, their ability to predict brain data. However:
 - given the implausibility of the learning mechanisms employed in these networks (as noted in the paper too; Lines 301-312)
 - the fact that they do not follow the same patterns of errors/do not mimic human behavior
 - the fact that the paper currently does not explore the types of information captured (what makes one model perform better than the other? how is this related to the purported functions of different regions?)

 , why is this a realistic model of speech perception? I agree that the model uses far less training data than LMs (which, contrary to how it is stated in the paper, are self-supervised too) or supervised speech models. However, without a serious/exhaustive analysis of the types of speech tasks these models do well on, their ability to mimic human speech errors, etc., is it fair to state that we are moving towards brain-like models?
(more detailed arguments in Guest & Martin, 2021) For example, lines 259-261: "Here, we test whether self-supervised learning on a limited amount of speech suffices to **yield a model functionally equivalent to speech perception in the human brain.**"

- In lines 290-300, the authors discuss several studies on the functional specificity of different brain regions during speech processing, functional gradients in PFC, etc. However, how is this being inferred from the data? I may have missed the link, but iiuc this is not supported by the evidence presented.
- "the functional hierarchy of its transformer layers aligns with the cortical hierarchy of speech in the brain, and **reveals the whole-brain organisation of speech processing with an unprecedented clarity**" - Why is the layer-wise selectivity important? Not to belabour the emphasis on what is captured by these representations, but without a discussion/analysis to reason why these differences might arise across regions, I am not sure what to make of these results or how they clarify the organization of speech processing across cortex. Furthermore, it appears that the upper-middle layers have high correlations in several regions across cortex, albeit not the highest. What do the authors make of this?
- Please cite related work appropriately. The authors have missed several relevant citations from PIs like Jack Gallant, Leila Wehbe, Alex Huth, Tom Mitchell, Christopher Honey, Jonathan Brennan, Alona Fyshe and more. For example, papers that were the first to build language encoding models or present the style of analyses reported here are missing. 1. Selectivity for deeper layers in primary AC: In figure 3, why is primary auditory cortex (or what is possibly Heschl's gyrus) marked in red, i.e., selective for deep layers? It would be useful to visualize the correlations for different layers in a few voxels to see if the curve is relatively flat and if this is an artifact of using a circular mean.
2. What do the individual participant maps look like for figure 3? Is the group-averaging introducing smoothness in the gradients that is not visible in individual maps (due to the circular mean estimate being noisy)?
3. How were models trained for the ABX task? What are the specifications of the dataset and the task? Which layers were used to do this?
4. ABX task results: I imagine that the authors ran clustering for the English/French/non-speech data independently to get different quantization targets for each. Given that these targets roughly correspond to phonemes, is it surprising that the English model was better at distinguishing English phonemes than French? Similarly, is it surprising that English models outperform non-speech or untrained models that don't have English quantization targets or have not learned how to use them, respectively?
5. In Fig. 4D, it is surprising that only auditory cortex seems to care about language/speech-specific features, although we know that a large part of cortex processes natural speech. Can the authors comment on why this might be and perhaps visualize the individual correlations for all 5 conditions in a few example voxels per color?
6. What is the amount of data per participant in each language & stimulus set? Iiuc, the English datasets have far more data per participant than French. How does the paper circumvent differences in the quality of model fits due to different amounts of data being used?
(especially for cross-language comparisons) -NA-", " The paper provides a study to investigate how Wav2Vec 2.0, as a self-supervised method, learns brain-like representations. Different sets of experiments were conducted to examine whether the cortical hierarchy of speech processing aligns with the model's functional hierarchy. It also provides observations on the brain's reaction to non-speech, non-native, and native sounds compared to that of Wav2vec 2.0. Strengths:
Interesting observations and an extensive set of results
Weakness:
No machine learning innovation - I believe the results are interesting and worth sharing, but I do not see any machine learning innovation in this paper. Basically, the authors have used an already proposed self-supervised method and compared it with brain activity. I believe this could be a better fit for some speech-related journals.
- R is sometimes used as the neural predictivity score and sometimes as the Pearson correlation coefficient. Please make this clear throughout the paper.
- R is sometimes very small, e.g. Figure 4B. Is such a small R still something to rely on and draw conclusions from?
- There is some related work, e.g. https://www.science.org/doi/abs/10.1126/science.1245994, that could be mentioned in the paper.
- Although the results are promising, please keep the conclusions tied to the current setup and do not strongly generalize them to all cases. no limitation" ]
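As an aside for readers less familiar with the pipeline debated in these reviews: the "neural predictivity" R is typically the cross-validated Pearson correlation of a ridge encoding model fit from model activations to voxel responses. A minimal sketch follows; the file names, array shapes, and regularization grid are hypothetical, and the paper's exact setup may differ.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

# Hypothetical inputs: X = (n_samples, n_features) model activations aligned
# to the fMRI acquisition, Y = (n_samples, n_voxels) BOLD responses.
X, Y = np.load("activations.npy"), np.load("bold.npy")

scores = np.zeros(Y.shape[1])
for train, test in KFold(n_splits=5).split(X):
    model = RidgeCV(alphas=np.logspace(-2, 6, 9)).fit(X[train], Y[train])
    pred = model.predict(X[test])
    # Pearson r between predicted and measured responses, per voxel
    zp = (pred - pred.mean(0)) / pred.std(0)
    zy = (Y[test] - Y[test].mean(0)) / Y[test].std(0)
    scores += (zp * zy).mean(0) / 5

print("mean neural predictivity R:", scores.mean())
```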
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 2 ]
[ "Vy8rBhMEgVf", "i_Bce2-0BMy", "xZaau6dfjNH", "Ky7cHltP8dG", "pYX72_a132T", "pYX72_a132T", "Nf_1GjZCFmG", "pYX72_a132T", "ayqpTPN2Wy4", "xZaau6dfjNH", "xZaau6dfjNH", "xZaau6dfjNH", "IYqZ2Q_T__w", "HAI7Bi4lwH", "7f3bddenKKb", "j_CycJwcqXK", "Vi_oPIs20Si", "0bmDGO6cagc", "xZaau6dfjNH", "FqFnp5wxrTS", "vanVvzNspxW", "xZaau6dfjNH", "0R1K4tAz12", "dKdhPqll7a8", "xZaau6dfjNH", "IYqZ2Q_T__w", "3d2uEI_1P7b", "IYqZ2Q_T__w", "IYqZ2Q_T__w", "nips_2022_Y6A4-R_Hgsw", "nips_2022_Y6A4-R_Hgsw", "nips_2022_Y6A4-R_Hgsw", "nips_2022_Y6A4-R_Hgsw" ]
nips_2022_DTD9BRDWtkn
Multi-layer State Evolution Under Random Convolutional Design
Signal recovery under generative neural network priors has emerged as a promising direction in statistical inference and computational imaging. Theoretical analysis of reconstruction algorithms under generative priors is, however, challenging. For generative priors with fully connected layers and Gaussian i.i.d. weights, this was achieved by the multi-layer approximate message passing (ML-AMP) algorithm via a rigorous state evolution. However, practical generative priors are typically convolutional, allowing for computational benefits and inductive biases, and so the Gaussian i.i.d. weight assumption is very limiting. In this paper, we overcome this limitation and establish the state evolution of ML-AMP for random convolutional layers. We prove in particular that random convolutional layers belong to the same universality class as Gaussian matrices. Our proof technique is of independent interest as it establishes a mapping between convolutional matrices and spatially coupled sensing matrices used in coding theory.
Accept
This paper considers finding an input vector from multi-layer noisy measurements. This can alternatively be thought of as finding the latent code of generative models. The authors analyze the state evolution of a multi-layer approximate message passing algorithm. The main technical idea is relating random convolutional layers to Gaussian ones using permutation matrices and then utilizing a connection with spatially coupled matrices in coding. The authors also provide numerical evidence. Overall, all the reviewers were positive but did raise some concerns about the model being not realistic, since the convolutional layers are not trained. I agree with the assessment of the reviewers. I think the paper is interesting and the connections and theoretical results are nice. Therefore I recommend acceptance. However, I do have some concerns about the model studied. I also have some concerns about the theoretical analysis, as it is sometimes difficult to differentiate what is fully rigorous and what is based on statistical physics conjectures. In your final manuscript please clarify which parts are fully rigorous (perhaps all) and which parts rely on conjectures that have not been fully proved.
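As a toy illustration of the structured matrices at play (a single channel only, not the paper's exact multi-channel convolution ensemble of Definition 3.2), a random k-tap circular convolution can be applied either as a dense n × n banded circulant matrix or in O(kn) operations. This sparsity is the source of the computational speedup the author responses below discuss.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1024, 3
w = rng.normal(size=k) / np.sqrt(k)   # random filter, one channel of a conv layer
x = rng.normal(size=n)

# Dense route: materialize the banded circulant matrix and multiply, O(n^2).
C = np.zeros((n, n))
for i in range(n):
    C[i, (i + np.arange(k)) % n] = w
y_dense = C @ x

# Structured route: the same product as a k-tap circular convolution, O(k*n).
y_fast = sum(w[j] * np.roll(x, -j) for j in range(k))

assert np.allclose(y_dense, y_fast)
```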
train
[ "don2tz17IAI", "VCe62k-m-K", "L1f-gF-5U3L", "ojiiK5o5Hms", "_ztjKd1Fl1O", "AznkeTgold3", "j3lMC3IWERC", "2_rVZPIxhP-", "dl3eETxOugb", "DXkRxlQbhM", "aA3pvkp3dmL", "Uoa6C9u-XaI", "AA-0lmAlSY9", "2zPFigHuL_v" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have increased my score in terms of the fairness between the other papers I reviewed. ", " Concerning the insights our work can bring for trained networks: we will add our thoughts in the discussion of the paper. There are several trained settings for which the same state evolution could apply and that we are currently investigating. One possibility is to study the weight matrices of neural networks in the lazy training regime, in which the weights stay close to their initial values throughout training. AMP methods can also be modified to include partially deterministic structure in their sensing matrices and better represent learned models. For example, as we discuss in Appendix D and briefly in the body, our proof can easily be extended to study MCC matrices whose convolutional filters have non-isotropic variance, which is a simple model for convolutions with partially deterministic structure. As another example, [Gabrié et al. 2018](https://papers.nips.cc/paper/2018/hash/6d0f846348a856321729a2f36734d1a7-Abstract.html) propose an AMP-based method to compute mutual information between the activations at each layer of a deep random neural network. They observe in Section 3 that their AMP-based predictions hold for a linear model whose sensing matrices have random singular vectors and learned spectrum. As stated in the last sentence of our paper, trained settings for which the simple Gaussian state evolution does not apply anymore are even more interesting venue for future works.\n", " Thanks very much for the replies. The equivalence of the Gaussian and MCC state evolution due to the variance being the same in each block makes sense. Thanks again for the clarification.", " I would like to thank the authors for their answers, particularly the clarification regarding the channel noise. This resolves my concern. I appreciate if the authors can reflect on the assumption of random weights and whether (or how) the obtained state evolution could still bring insights for the trained networks. This is a valuable discussion for the main paper.\n\nFor now, I increased my score of the paper.", " Thank you for your responses. I do not intend to evaluate your paper negatively. I agree with the other reviewers who recommended 6: weak accpet or 7: accept.", " Based on feedback from the reviewers, we have updated the text in the body of our paper and reuploaded a new version. For the reviewers' convenience, we also provide a version with significant edits highlighted in blue: \n\n[https://anonymous.4open.science/r/conv-ml-amp-draft-6BDA](https://anonymous.4open.science/r/conv-ml-amp-draft-6BDA)\n\nPlease keep in mind that we are still planning to modify our experiments by adding empirical evaluations of the corresponding dense Gaussian model (see response to R1-PaFY, 'Lack of numerical simulations'). \n\nWe thank the reviewers for their consideration of our work. ", " Thank you for your comments on our work. In response to the questions and weaknesses identified for this paper: \n\n- __Lack of numerical simulations__: we will add empirical comparisons between the Gaussian ensemble and the MCC ensemble to both Figure 1, for the Gauss-Bernoulli prior, and Figure 5, where we study artificial data generated by multilayer linear and ReLU nets. Specifically, for each ‘x’ (MCC AMP empirical performance), we will add an ‘o’ indicating the performance of the corresponding model with dense Gaussian weights. 
Since these two models admit the same state evolution equations in the separable case, their performance is expected to be nearly identical (see eg. Manoel et al. [2017] Figure 2, agreement of dense Gaussian AMP and SE). \n\n- __Benefits of MCC over dense Gaussian__: The main advantage of the convolutional ensemble is its sparsity, which makes the resulting convolutional ML-AMP significantly faster. As discussed in Section 3.3.1, MCC matrix multiplications have a speedup by factor $O(k/q^2)$ over the Gaussian ensemble, which is a significant practical improvement (eg. in Figure 5, $q=10$, $k=3$, so roughly $30x$ speedup per matrix multiplication). We will highlight this more clearly in the body and introduction of the paper. \n\n Relative to previous work on ML-AMP, which require strong independence assumptions on the sensing matrices, our contribution is to prove SE for a model with sparse, structured sensing matrices. Because of this structure, the corresponding AMP algorithm is significantly faster while having the same state evolution, which is one practical advantage of the MCC ensemble over the Gaussian ensemble.\n\n- __Pseudo-Lipschitz regularity of the activations__: the framework of Gerbelot-Berthier 2021 shows that the activations can be pseudo-lipschitz of any order since the state evolution variables are Gaussian, which can be integrated against any polynomial. However, because the proof of Gerbelot-Berthier 2021 is written in a non-separable framework, the definition of pseudo-Lipschitz function is scaled differently than the original one from Bayati-Montanari, which mainly changes the definition of low-dimensional observables, see Definition A1 and the comment below. We indeed need to assume that the prior distribution $p_x$ is subGaussian in order for higher moments to be well defined. We thank the reviewer for spotting this inconsistency and will correct it in the revised version.\n\n\n- __Clarity & typos__: We will fix the typo in Lemma 4.3. By ‘line,’ we mean ‘row,’ and we will update the text to use only the latter.\n", " Thank you for reviewing our work. As discussed with **R1-paFy**, part of the practical interest of our theoretical result is that MCC matrices are computationally cheaper to use in ML-AMP than their Gaussian counterparts because of their sparse circulant structure. We plan to highlight this more clearly in our experiments and introduction. MCC matrices are also practically interesting because of their popularity in areas like deep learning, computer vision, and image recovery. It was not a-priori clear that these structured matrices are compatible with AMP, since existing state evolution proofs require strong randomness assumptions on the sensing matrices. \n", " Thank you for your comments on our work. We agree that the paper will be clearer with additional discussion of the key results and implications, we will add this to the Introduction and Main Results. To answer the questions:\n\n- __Equivalence of Gaussian and MCC state evolution__: we agree that our equivalence result is somewhat surprising in the context of existing literature on spatial coupling, where spatially coupled sensing matrices are used to achieve better recovery thresholds than their dense counterparts. The reason that MCC state evolution reduces to the dense case is because, according Lemma 4.3, the matrix $\\tilde{W}$ has dense Gaussian blocks which each have the _same_ variance. 
This is different from the literature on spatial coupling, where different Gaussian blocks have different variances and correspondingly a different SE. \n\n So, in order to break the equivalence, one would need to use $\\tilde{W}$ with Gaussian blocks of inhomogenous variances. This can actually be represented in the Graph AMP framework, and our proof would go through easily, but for simplicity we focus on the homogeneous case. Specifically, while we focus on separable denoisers (scalar functions applied coordinatewise to vectors), one can represent inhomogeneous variances using a non-separable denoising function that applies a different scaling to different coordinates of its input vector. We comment on this idea in Appendix D, where we also show how inhomogeneous variances can be used to represent a simple model for structured convolutions. \n \n We will add further discussion following Theorem 4.1 to highlight the above intuition and the contrast between our model and the spatial coupling literature. We will also add further discussion of the ideas in Appendix D to the body of our work.\n\n- __Figure 2 Caption__: for $W \\sim \\mathcal{M}(D, P, q, k)$, each row of $W$ has $kP$ nonzero entries, because each row of $W$ is a concatenation of $P$ different rows from convolutional matrices, each row having $k$ nonzero entries. This figure is intended to show diagrammatically which $kP$ coordinates of an input signal $x$ contribute to a single coordinate of an output signal $y = Wx$. We will clarify this point in the description.\n\n- __Definition 4.1 and Theorem 4.2__: yes, $(m, \\hat{m})$ and $(\\kappa, \\hat{\\kappa})$ refer to the same quantities. We have fixed this and the other highlighted typos in these two statements. \n\n- __Definition of the multilayer model__: yes, the $\\zeta$ are independent between different layers, we will clarify this in the definition of the multilayer model. \n\n- __Noise level__: no, $\\sigma^2 = 10^{-4}$ is not required for empirical agreement at reasonable sizes. We will note that. We keep this value fixed in our experiments for no particular reason. There are many examples in the ML-AMP literature of agreement between empirical statistics and SE predictions for a variety of channel noise levels, such as: \n - Manoel et al. [2017], Figure 2, studies sparse linear regression and perceptron models at noise $\\sigma^2= 10^{-8}$.\n - Rangan [2010], figure 4, studies sparse generalized linear regression at noise $\\sigma^2 = 10^{-2}$. (Available at [https://arxiv.org/abs/1010.5141](https://arxiv.org/abs/1010.5141))\n", " Thank you for carefully reviewing our work. In response to these questions: \n\n- __Question on noise in Definition 3.3__: thank you for the question, pointing out a misprint in the definition. The noise in Definition 3.3 should not be $\\sigma^2 = 1$, but a generic $\\sigma^2 \\geq 0$ which includes $\\sigma^2=0$. Indeed, our proof holds for generic noise including zero, and we show empirics for this case in Figure 5, where the signal prior is a noiseless ReLU generative prior. We completely agree that generative models usually consider activations with zero noise and this is included in our results. We state our results in the form with noise simply because it is more general and includes the traditional noiseless case. We will add a comment connecting back to the noiseless case that is more commonly considered. 
\n\n- __Mixing the notions of generative priors and multilayer measurement processes__: we agree that our model differs slightly from the model considered by Bora et al. 2017. In their model, the goal is to estimate the output $s$ of a generative model $s = G_\\theta(x)$, while in our model, the goal is to `invert the generative network' by estimating an input $x$ such that $y = AG_\\theta(x)$. However, in the case when each layer of $G_\\theta(x)$ has a deterministic channel function (for example, $\\sigma^2 = 0$ at each layer $1 \\leq l \\leq L-1$), estimating $x$ is sufficient because one can deterministically compute the corresponding $s = G_\\theta(x)$. We will comment on this subtle difference and highlight how our multilayer signal model generalizes the setting of Bora et al. 2017, so that our results are indeed applicable to generative CS. \n We plan to edit the discussion following equation (1) to clarify that our problem can be seen both as multilayer signal recovery and as generative CS (via inverting a generative model whose last layer implicitly contains the sensing matrix). \n- __Introductory discussion of Manoel et al. 2017__: we will update our discussion of Manoel et al. 2017 on Page 2 to refer to unstructured matrices instead of strictly dense Gaussian matrices. While Manoel et al. 2017 don’t require distributional assumptions on the sensing matrix coordinates, it is not known in general what conditions are necessary for rigorous state evolution to hold, so it could be misleading for us to refer to generic unstructured matrices in the context of proving state evolution. We emphasize dense Gaussian matrices in our work because they are a good representative comparison to Gaussian MCC matrices, with rigorous state evolution in both cases. \n\n- __Last paragraph of related work__: we agree that the discussion of stable training is unnecessary and potentially confusing. The main point of this paragraph is to highlight practical architectural features of the generative CNN models that our idealized setting is intended to capture. We will edit our discussion to focus only on architectural aspects of generative neural nets for signal recovery.\n\n- __Citations__: We will cite Rezende et al 2015, Dinh et al. 2015, before discussing GLOW/RealNVP and their application to generative signal recovery. We will change our discussion to indicate that Bora et al. 2017 do not only study a DC-GAN architecture, but also a dense MLP VAE.\n\n\n\n", " The paper establishes state evolution analysis of multi-layer approximate message passing algorithms used to find an input vector from multi-layer noisy measurements (or finding the latent code of generative models) with random convolutional layers or random Gaussian layers. The proof is based on writing random convolutional layers in terms of random Gaussian matrices using permutation matrices (Lemma 4.3). The proof points to a connection between convolutional matrices and spatially coupled matrices in coding theory. The results are numerically verified for linear and multi-layer cases. \n **Strengths**\n\n* Applying state evolution analysis in deep learning context is very interesting direction, which is explored here. Similarly, spatially coupled matrices and related theoretical works are an important and rich area of coding theory, and the connection with coding theory opens up an interesting perspective. 
\n* The paper is in general well written and covers related works in a clear and adequate way.\n* The proofs and derivations seem sound to me and tend to follow standard message passing derivations apart from Lemma 4.3. I have not checked the derivations line by line.\n\n**Weaknesses**\n\n* The problem considered in this paper and some of the assumptions are different from typical generative compressed sensing paper (putting aside the assumptions on randomness of parameters). Please see my comments below.\n* The core issue, related to the comment above, is to mix the notion of measurements and the notion of prior. Generative priors are not measurements in context of generative CS. In the paper, the layers of the generative model are measurements with their measurement noise (channel noise $\\zeta$ in Definition 3.3). See my comments below on the notion of channel noise.\n* Practical models do not have random convolutions but trained ones. The implication of the current approach for the trained models is not clear and not discussed in the paper. \n\nOverall, the paper has a valuable contribution and present that contribution properly, however, the connection with generative compressed sensing in style of Bora et al is not properly established. A particular deviation from standard networks is the notion of channel noise. I think that the framework can still be relevant for generative priors by considering it as the problem of inverting a network or finding the latent code. This would require some changes in the paper and its exposition.\n * Regarding Definition 3.3, Generative priors do not have noise in them. Note that this is not a bias term, otherwise, the conditional distributions like $P^{(l)}(h|z)$ are not needed. This can indicate that the paper is about multi-layer measurements and not generative priors. The authors should comment on what happens if the channel noise is assumed to be zero or non-random. If I am not mistaken, the analysis should remain valid (at least in $y=Ax$ case, where the Onsager correction disappears). \n\n\n* The problem description of the paper is bit confusing specially when the connection with generative prior is mentioned. Generative priors are used to parametrize a signal $s$ using $s=G_\\theta(x)$, and then solve an inverse problem $y=As$ using it, for example by gradient descent on $\\Vert y-AG_\\theta(x) \\Vert_2^2$.\n\nThe authors. However, mention: “one seeks to recover a data signal $x_0$ given access to measurements $y_0 = G_\\theta(x_0)$”. This seems to combine the generative prior and measurement model. The measurements are typically of form $y=Ax$; one can try to incorporate it as the last layer of the generative model, which is implicitly done by the authors. See Page 1, and the choice $\\phi(z)=z$ which means $y=Wh$. However, this is not typical generative compressed sensing setup. As I reflected in my summary, the paper can be seen as finding the latent code of a generative model, which can be useful in context of generative compressed sensing. In any case, there is a subtle difference between these cases, and the authors can comment to clarify it. \n\n* As far as I can see Manoel et al. 2017 consider a general random matrix with i.i.d. entries and not only Gaussian (in contrast with the claim in page 2). However, the matrices are still unstructured. 
It is good if the authors replace the term “dense Gaussian” with a more general term, say unstructured random matrices.\n\n* The last paragraph of “Related Work” section: the notion of stable training, mentioned in this paragraph, is a bit convoluted and ill defined. The stability issues of training GANs is very different from covariate shift related stability handled by batch norm. These are different notions of stability. Besides, I do not see the relevance of this discussion for the rest of the paper. \n\n* On flow based models, it is better to cite the original work of Rezende et al 2015 and Dinh et al. 2015 and then mention follow up works like GLOW, RealNVP. \n\n* Note that Bora et al do not only consider convolutional models. They also use MLP variants of generative priors (VAEs). The main limitation of the paper is the way it connects with existing works on generative priors. ", " This paper studies the problem of estimating a signal vector from an observation obtained via a multilayer convolutional model. The weights defining the convolutional model are assumed to be random (and known). The main contributions are an AMP algorithm for signal estimation and a theorem proving that state evolution accurately predicts the performance of AMP on this model. Numerical results, presented for synthetic settings, show good agreement of empirical performance and state evolution predictions. The technical contribution of extending the multilayer AMP and state evolution from iid Gaussian weight matrices to a specific class of random convolutional matrices is a nice one. The proof technique is clever: it uses permutatation matrices to shuffle the weight matrices so that the correlated entries are in separate blocks, and then invokes the graph-based AMP proposed to define a matrix-valued AMP for the permuted matrix. The problem setting is a bit artificial since assumes that the weights are random (rather than learned from data), but in my opinion the novelty of the proof technique is a nice contribution. \n\nA weakness of the paper is the presentation. The paper is dense, some of which is due to the notation required to defined the convolutional matrices, but the notation is inconsitent and sometimes confusing. Key results and their implications should also be discussed in more depth to aid understanding. More details in the \"Questions\" section below. \n\n -- It is mentioned in the abstract and the introduction that the state evolution equations for the MCC model are the same for as the iid Gaussian case, upto rescaling. I find this surprising based on an analogy with spatial coupling, where AMP with spatially coupled matrices has a different state evolution from the iid Gaussian case. The authors should discuss this point after Definition 4.1/Theorem 4.2, and if possible, give some high-intuition on why the two SEs are the same.\n\n-- The blue shaded parts in Fig. 2 are confusing. It's not clear why kP entries from the matrix on the left are connected to a single one on the right.\n\n-- the notation is Definition 4.1 (State evolution) is inconsistent with the theorem below. One uses m and \\hat{m}, while the other uses \\kappa and \\hat{\\kappa}. Do these refer to the same quantities?\n\n-- To have a state evolution result that can applied to compute asymptotic loss, like MSE, the test functions in Theorem 4.2 should also take as arguments h^l/x or z^l. Please also refer to the assumptions (on x and the noise) in Def. 
3.3 in the theorem statement.\n\n-- In the definition of the multilayer model (in the display above eq. 2), the same noise variable \\eta is used for each layer. If the \\eta's in each layer are indepdendent, please make this clear.\n\n-- The noise level \\eta in the simulations is chosen to be 10^{-4}, extremely small. Is this required to get a good match between empirical results and SE at reasonable dimensions?\n\n-- Below eq. (2), could you clarify whether h^L is taken to be x? \n\nTypos: \nl.33 : distributions --> distribution; l.206: should be \"iteration (4)\" -- add parentheses around 4. The random convolutional model is simplistic and omits many aspects of real convolutional networks, but the authors acknowledge this.", " This paper provides a theoretical study on the multi-layer approximate message passing (ML-AMP) algorithm with sensing matrices in some of the layers being multi-channel convolution, which extends previous results with regular 2D sensing matrices in all layers. The main result is the state evolution equations presented in Sec. 4, derived under asymptotic limits on the shape of convolution matrices. Experimental evidence shows that the theoretical results align well with the simulation results. **Originality**\n\nAs the paper explains, there is existing work on the analysis of ML-AMP for regular (2D) sensing matrices. The novelty of this work lies in extending such analysis for the case where the sensing matrices are convolutional.\n\n**Quality**\n\nThe paper seems to be providing a good quality theoretical result that aligns well with the simulation results, even though the analysis concerns an asymptotic case only.\n\n**Clarity**\n\nI find the paper to be very well written, with an informative introduction, sufficient background before presenting their main result and so on. The paper may benefit from a discussion on why their theoretical result is of practical interests. This is discussed at the end of the paper.", " The submitted paper addresses multi-layer aproximate message-passing (ML-AMP) for signal recovery under multi-layer generative neural network priors (1), in which the activation (or denoiser in compressed sensing) is separable and the sensing matrices are sampled from the multi-channel convolution ensemble in Definition 3.2. ML-AMP was originally proposed by Manoel et al. [2017] for dense sensing matrices with zero-mean i.i.d. (Gaussian) elements. The submitted paper generalizes the existing ML-AMP to the convolutional case in Definition 3.2. \n\nThe main contribution is rigorous state evolution of ML-AMP for the multi-channel convolution ensemble, i.e. Theorem 4.2. The proof is fully based on graph-based AMP and its rigorous state evolution by Gerbelot and Berthier [2021]. The main technical contribution is Lemma 4.3, allowing us to transform any multi-channel convolution matrix into block-convolutional matrix with dense, zero-mean, and i.i.d. Gaussian blocks via permutation. When separable activations, i.i.d. signals, and i.i.d. channel noise in (2) are assumed, all permutation matrices are absorbed into messages in graph-based AMP via the change of variables. Since graph-based AMP is general enough to model ML-AMP, ML-AMP for the multi-channel convolution ensemble reduces to a special case of graph-based AMP. \n\nThe state evolution result is also justified via numerical simulations for artificial data. 
No numerical simulations are presented to show the advantage of the multi-channel convolution ensemble against conventional dense Gaussian ensemble or practical usefulness of ML-AMP for real data. Strength\nThe strength is the proof of Theorem 4.2. From a technical point of view, the main part of the contributions to prove the theorem might be in Gerbelot and Berthier [2021]. Nonetheless, it should be significant contributions to understand graph-based AMP and apply its framework to signal recovery under multi-layer generative neural network priors. If the framework were old results, such as Bayati and Montanari [2011] or Rangan et al. [2019], I would not have positive impression since they are (well) known in the NeurIPS community. However, graph-based AMP is state-of-the-art message-passing and therefore not fully understood in the community. \n\nWeakness \nAs pointed out in Summary, the weakness of the submitted paper presents no numerical simulations to show the advantage of the multi-channel convolution ensemble against conventional dense Gaussian ensemble or practical usefulness of ML-AMP for real data. The paper only presents numerical simulations for artificial data with the so-called Bernoulli-Gaussian prior. At least the authors could add numerical comparisons between the multi-channel convolution ensemble and conventional dense Gaussian ensemble for artificial data. --(A1) assumes pseudo-Lipschitz activations of any finite order while conventional state evolution by Bayati and Montanari [2011] postulated the Lipschitz-continuity. When pseudo-Lopschitz activations are used, in my understanding, the boundedness of higher-order signal moments is required as the iteration t increases. Eventually, we need to assume the boundedness of all signal moments. Do not the authors need to assume any regularity conditions on the signal prior, such as sub-Gaussian? \n\n--The terminology \"line\" is a bit unclear. Clarify whether it means column or row. \n\n--\\tilde{W} in Lemma 4.3 is not displayed appropriately. It should be presented like (57) in Appendix A. The submitted paper is theoretical and therefore has no negative social impacts." ]
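To make the state evolution (SE) discussed throughout these reviews concrete, here is a textbook-style sketch of the scalar SE recursion for a single linear layer with a Gauss-Bernoulli prior (the setting of the paper's Figure 1). It follows one common convention (effective noise tau2 updated via the scalar-channel MMSE, as in Bayati-Montanari-style analyses); the parameters are hypothetical and this is not the paper's multi-layer system of equations.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, sigma2, delta = 0.2, 1e-4, 0.5   # sparsity, channel noise, rows/columns ratio

def posterior_mean(y, tau2):
    """Denoiser for the Gauss-Bernoulli prior (1-rho)*delta_0 + rho*N(0,1),
    observed through the effective scalar channel y = x + sqrt(tau2)*z."""
    p_on = rho * np.exp(-0.5 * y**2 / (1 + tau2)) / np.sqrt(1 + tau2)
    p_off = (1 - rho) * np.exp(-0.5 * y**2 / tau2) / np.sqrt(tau2)
    return p_on / (p_on + p_off) * y / (1 + tau2)

# Monte-Carlo state evolution: iterate tau2 <- sigma2 + mmse(tau2) / delta.
x = np.where(rng.random(500_000) < rho, rng.normal(size=500_000), 0.0)
tau2 = sigma2 + np.mean(x**2) / delta          # first-iteration effective noise
for t in range(20):
    y = x + np.sqrt(tau2) * rng.normal(size=x.size)
    mmse = np.mean((posterior_mean(y, tau2) - x) ** 2)
    tau2 = sigma2 + mmse / delta
print("fixed-point effective noise:", tau2)
```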
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 1, 4 ]
[ "_ztjKd1Fl1O", "ojiiK5o5Hms", "dl3eETxOugb", "DXkRxlQbhM", "j3lMC3IWERC", "nips_2022_DTD9BRDWtkn", "2zPFigHuL_v", "AA-0lmAlSY9", "Uoa6C9u-XaI", "aA3pvkp3dmL", "nips_2022_DTD9BRDWtkn", "nips_2022_DTD9BRDWtkn", "nips_2022_DTD9BRDWtkn", "nips_2022_DTD9BRDWtkn" ]
nips_2022_UwzrP-B38jK
Learning Robust Rule Representations for Abstract Reasoning via Internal Inferences
Abstract reasoning, as one of the hallmarks of human intelligence, involves collecting information, identifying abstract rules, and applying the rules to solve new problems. Although neural networks have achieved human-level performances in several tasks, the abstract reasoning techniques still far lag behind due to the complexity of learning and applying the logic rules, especially in an unsupervised manner. In this work, we propose a novel framework, ARII, that learns rule representations for Abstract Reasoning via Internal Inferences. The key idea is to repeatedly apply a rule to different instances in hope of having a comprehensive understanding (i.e., representations) of the rule. Specifically, ARII consists of a rule encoder, a reasoner, and an internal referrer. Based on the representations produced by the rule encoder, the reasoner draws the conclusion while the referrer performs internal inferences to regularize rule representations to be robust and generalizable. We evaluate ARII on two benchmark datasets, including PGM and I-RAVEN. We observe that ARII achieves new state-of-the-art records on the majority of the reasoning tasks, including most of the generalization tests in PGM. Our codes are available at https://github.com/Zhangwenbo0324/ARII.
Accept
I thank the authors for their submission and active participation in the discussions. The paper presents a method for rule representation learning that can be transferred across tasks. All reviewers unanimously agree that this paper's strengths outweigh its weaknesses. In particular, reviewers found the method to be well motivated [fEdf], general [fEdf], novel [ky2S], tackling an interesting task [bvvT], achieving strong empirical results [4Dou], and the paper to be well written [bvvT,ky2S,xDma]. Thus, I am recommending acceptance of the paper and encourage the authors to further improve their paper based on the reviewer feedback.
test
[ "sqNW9tB8nZP", "K6JRYNO3Ufa", "zJLHmA8Li7H", "0wRG-ss8IV_", "MXqeButhyJN", "SGr1fD18KUd", "EWsMJ9Z25AC", "BHHSEshkXlD", "Fjxk4XF5Yp3", "pDPJxw5633d", "NOBjU2ooHIe", "OqtB3WfHHAo", "UNs35GXuSuq", "MTBUAjzQmZJ", "RTYPNrJAed", "nlgju2MKWd", "m6PLFniDhEh", "5_7g21t617c", "2ZHUe_4Zczi" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks again for your insightful reviews. We are grateful to see that our response addresses your concerns, and appreciate your positive feedback. \n\nWe are glad to provide additional responses if you have any further questions.", " Thanks for your insightful and positive feedback.\n\nSince relations are much more abstract and general than the rule triples, it is more complicated to learn the relation itself. And the relation visualization results are not very significant (see Appendix G for specific analysis).\n\nTo further investigate whether our method has learned the underlying relations, we conduct another experiment to classify the relations. In particular, we use a single-layer classifier to classify the relations based on the rule representations associated with each problem. We first collect the rule representations from all the instances from the test set on the PGM dataset. All the instances are not seen before by the model and are split into training, validation and test sets following a ratio of 8:1:1. Then, since there are 5 relations (AND, OR, XOR, progression, and consistent union) in the dataset, we train a linear layer (400$\\times$5) followed by the softmax function to classify the rule representations (400 dimensions). The test results are reported as follows.\n\n| | Random Chance | ARII w/o training | SRAN | ARII |\n|:--------------:|:------:|:--------------------------------:|:--------------------:|:--------------------:|\n| Classification Accuracy | 20% | 21.8% | 59.4% | 72.1% |\n\nIt can be seen that the single-layer classifier can well classify the relations based on the rule representations extracted from ARII (72.1% classification accuracy) associated with each problem. The random chance and the result from ARII that was not trained (i.e., the weights of ARII were randomly initialized) are 20% and 21.8%, respectively. We also compare our model with a baseline model, SRAN [1], which yields 59.4% classification accuracy, largely lower than that of ARII. The comparison results demonstrate that ARII can capture the underlying relations in different visual domains, and therefore learn the invariant representations to some extent.\n\nWe found that this suggestion is very helpful in improving the quality of our submission. We will incorporate our response into our final version.\n\n[1] Sheng H., Yuqing M., Xianglong L., Yanlu W., and Shihao B. Stratified rule-aware network for abstract visual reasoning. In AAAI, pages 1567–1574, 2021.", " I have read the rebuttal and other reviews. I do not have additional questions and will raise my rating to 6.", " Once again, we are grateful for your efforts and support in the review process, in particular for further responding to our updated results in the rebuttal.\n\nIn addition, please feel free to raise any additional questions/concerns about our work. We would provide you with additional responses and promise to include most of these clarifications in the final revised version to make our paper clear.", " Thanks very much to the authors for this additional analysis. This is exactly the sort of thing that I had in mind, and the comparison with SRAN and also to the untrained version of the current model are very informative. I think this much more clearly establishes that the current model is, to a significant extent, learning explicit representations of the underlying rules.", " Thanks a lot for taking the time to run the suggested ablations, as well as expanding upon the related work section of your paper. 
I believe both of these increase the quality of the paper, as well as its overall readability. Hence, I have increased my score to a 7.

Regarding the invariance wrt rules, I understand that the relation is only a part of the rule triplet. Perhaps there was some miscommunication in what the authors meant and what I inferred: the authors meant that the representations learned by ARII are invariant to the training instance (i.e. exact ordinal values of the attributes), whereas I inferred it as claiming that they are invariant to the visual domain (i.e. both the value and identity of attributes and objects). I concede that the authors' claims appear correct based on their interpretation of instance-invariance, but also note that they are not instance-invariant in the much more general sense that I inferred (or other readers might infer).

While "The abstract rule in visual reasoning IS HIGHLY relevant to the visual domains" is true, the problem statement in Raven's Matrices focuses on learning representations that are invariant to visual domains. For a concrete example, let us consider the generalization splits the PGM dataset includes: novel attribute values (interpolation, extrapolation), held-out object-attribute combinations (line-type, shape-color), and held-out attribute pairs. These are all generalization splits where the model needs to learn rule representations that can transfer to the test set despite being presented with a new visual domain. The generalization requirement for the test splits in PGM focuses on generalizing to new visual domains (i.e. object and/or attribute combinations), not on generalizing to new relations. This is based on cognitive theories of humans (e.g. the structure mapping theory, where generalization to new analogies focuses on generalizing the relation across domains instead of generalizing the objects and attributes). Hence, I suggested the authors run an ablation study visualizing the code book for individual relations. ", " Thank you very much for your generous response and we appreciate your efforts in the review process. As suggested by the reviewer, we use a single-layer feedforward neural network to investigate whether the rule representations capture the underlying rules. In particular, we first collect the rule representations for all the instances in the test set of the PGM dataset. None of these instances were seen by the model during training, and they are split into training, validation, and test sets following a ratio of 8:1:1. Then, since there are 15 rules in the dataset, we train a linear layer (400 $\times$ 15) followed by the softmax function to classify the rule representations (400 dimensions). The test results are reported as follows.

| | Random Chance | SRAN | ARII w/o training | ARII |
|:---------------:|:------:|:------:|:------:|:------:|
| Classification Accuracy | 6.7% | 54.8% | 13.1% | 74.1% |

We observe that the learned rule representations achieve 74.1% classification accuracy, while random chance only yields 6.7%. We also perform the same analysis on both the ARII model that was not trained (i.e., the weights of ARII were randomly initialized) and one baseline model (i.e., SRAN [1]) for comparison. We notice that the rule representations of randomly initialized ARII obtain an accuracy of 13.1%, indicating that ARII has learned the general rules underlying the specific instances in an explicit way; a minimal sketch of this probe is given below.
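A minimal version of this linear probe, using multinomial logistic regression as the single linear layer + softmax (file names and split details are hypothetical; the response above uses an 8:1:1 split):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# reps: (N, 400) rule representations from the frozen model on unseen PGM
# instances; labels: (N,) integer rule index in [0, 15).
reps, labels = np.load("rule_reps.npy"), np.load("rule_labels.npy")

X_tr, X_te, y_tr, y_te = train_test_split(reps, labels, test_size=0.1,
                                          stratify=labels, random_state=0)
probe = LogisticRegression(max_iter=2000)  # one linear layer + softmax
print("probe accuracy:", probe.fit(X_tr, y_tr).score(X_te, y_te))
print("chance level:", 1.0 / np.unique(labels).size)
```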
Besides, the representations produced by SRAN merely yield 54.8%, substantially lower than those of ARII. The above systematic comparisons demonstrate that ARII has an impressive capacity for learning rules compared to previous methods.

We found this suggestion very helpful in improving the quality of our submission. We will incorporate our response into our final version.

[1] Sheng H., Yuqing M., Xianglong L., Yanlu W., and Shihao B. Stratified rule-aware network for abstract visual reasoning. In AAAI, pages 1567–1574, 2021.
", " Thanks to the authors for these responses. I am glad to hear there will be more implementation details in the revised version, and also discussion of the relation with self-supervised learning, and the limitation of the current object encoding method. For Q3 (the newly released dataset), I am indeed interested to see how the model performs, but I intentionally put this in the 'minor notes' section because I didn't want to emphasize it as something the authors absolutely need to do, given that the dataset was recently released. For Q2 (interpretability analyses), it is good to include additional results for shape color rules, but I think it would be better still to do something more systematic. It seems like the key question here is whether the 'rule encodings' really capture the underlying rules in an explicit way. A simple way to determine this would be to see whether it's possible to train a (single-layer) classifier to classify the triples associated with each problem, given the rule encodings. That would tell us, in a more systematic manner, to what extent the rules are explicitly encoded in these embeddings across all rule types.", " We thank the reviewer for the thoughtful comments and will respond to the comments point by point.

**Q1: It would be nice to see the results of existing models trained with the masked modeling loss.**

**A1:** We are also curious about whether the internal inference module (i.e., the masked reconstruction part) is plug-and-play. Thus we follow the reviewer's suggestion to apply the internal inference module to other models. However, since the internal inference takes the rule representation as input, most of the previous methods do not satisfy the requirement of the internal inference. That is, most previous methods do not explicitly have a rule representation during the reasoning process. We found only one suitable model, namely SRAN [1]. We adapt its released code and integrate the internal inference module into its framework. The reasoning performance is listed in the table below.

| Regime | SRAN | SRAN (with internal inference) |
|---------------|------|--------------------------------|
| Interpolation | 56.4 | 58.9 |
| H.O.A.P | 25.0 | 26.4 |

Experimental results show that SRAN with internal inference yields better performance than the original SRAN. We notice that the internal inference process does not produce a significant performance improvement. We conjecture the reason is that the rule encoder of SRAN can access full information of the context (our rule encoder is restricted to two rows and outputs symbolic features), so the regularization of the internal inference can only play a moderate role. However, the consistently better results on the two regimes of PGM indicate that the internal inference is plug-and-play for other models. Thanks for recommending this ablation study. 
This would significantly improve our paper.\n\n**Q2: How the reasoning capability emerges.**\n\n**A2:** One of the widely used definitions of reasoning is “reasoning is the ability that consciously applies logic from premises to conclusion” [2, 3]. As Garcez and Lamb said, “Reasoning can take place either symbolically or within the neural network via the statistical learning”[3]. In other words, if we ignore the interactions between humans and machines, the induction and logical reasoning of machines can be performed in either a distributed or symbolic fashion. Therefore, we think the reasoning ability learned from statistical learning is an important part of artificial intelligence. \n\nIn the meanwhile, the symbolic form of logic and interpretability (such as the studies [4, 5]) are always preferred since they are easy to understand and validate for humans. Although our work follows the traditional literature of visual reasoning to make conclusions by neural networks, our method makes important contributions to explicitly learning robust rule representations and providing interpretability of the rule. In the future, we could further extend this work to a neural-symbolic method to address the visual reasoning task. We have discussed this point in our revision (Section 5).\n\n**Q3: A few neuro-symbolic methods are missing.**\n\n**A3:** We have added and discussed them in our revision to provide a complete picture of the abstract reasoning field (Section 2).\n\n[1] Sheng H., Yuqing M., Xianglong L., Yanlu W., and Shihao B.. Stratified rule-aware network for abstract visual reasoning. In AAAI, pages 1567–1574, 2021.\\\n[2] Proudfoot, M., and Alan R. L. The Routledge dictionary of philosophy. Routledge, 2009.\\\n[3] Artur S. G., Luis C. L., and Dov M G. Neural-symbolic cognitive reasoning. Springer Science & Business Media, 2008\\\n[4] Chi Z., Baoxiong J., Song-Chun Z., Yixin Z. Abstract spatial-temporal reasoning via probabilistic abduction and execution. In CVPR, 2021.\\\n[5] Chi Z., Sirui X., Baoxiong J., Yingnian W., Song-Chun Z., and Yixin Z. Learning algebraic representation for systematic generalization in abstract reasoning. arXiv preprint arXiv:2111.12990, 2021.\n\n\n\n\n\n", " We thank the reviewer for the instructive comments and the nice summary of our contributions. Below, we will make a point-to-point response to each specific comment.\n\n**Q1: Implementation details.**\n\n**A1:** We have provided more implementation details to ensure the reproduction of our results, including the hyperparameters, the training settings, and the computing machinery. Due to the page limitation, we have put this part to Appendix. Besides, we will release both the training and the test code upon acceptance.\n\n**Q2: More interpretability analysis.**\n\n**A2:** In addition to the [・,line, color] rules, we also visualize the selection frequencies of the quantized code for the [s, shape, color] rules, where $s \\in S$. The distributions of the selection frequencies are shown in Appendix Fig 2. We notice that all the five rules share the same code E397 as their top-1 code, indicating that the code E397 is highly relevant to the [・,shape, color] rules. These results further validate the strong interpretability of the rule representations in ARII.\n\n**Q3: Test on a newly released dataset.**\n\n**A3:** We will test our model on this newly released dataset. 
Considering that this dataset was released after we submitted our paper, we need some time and effort to preprocess the data and train the model.\n\n**Q4: The connection between the 'internal inference' objective and self-supervised learning.**\n\n**A4:** The internal inference can be viewed as one type of self-supervised learning algorithm for rule representation learning. However, traditional self-supervised models are usually pretrained first and finetuned on downstream tasks afterward. Different from them, we explore an interesting strategy that performs the internal inference and the supervised learning simultaneously. We have added the above discussion in the revision (Appendix F).\n\n**Q5: Limitations on the object encoding method.**\n\n**A5:** We agree that the current object encoding is neither flexible nor general. We will explore object encoders that are more robust and flexible for representing the objects, such as an object detector pretrained on natural images. In the revised paper, we have discussed the limitation of the current object encoder and the potential avenue for future work (Section 5).", " We thank the reviewer for the nice summary of our contributions. Below, we make a point-to-point response to each specific comment.\n\n**Q1: Transposed PGM matrices for training.**\n\n**A1:** We conduct an ablation study to see the influence of the transposed PGM matrices on the reasoning performance. In particular, we select two competing baselines whose codes were released (i.e., SCL [1] and SRAN [2]). We train these baselines with the PGM dataset and its transposed PGM matrices. The results are reported in the table below.\n| | SRAN | SRAN | SCL | SCL |\n|:-------|:------|:------|:------|:-----|\n| Regime | w/o transposition | w transposition | w/o transposition | w transposition |\n| Interpolation | 44.9 | 47.8 | 67.1 | 63.6 |\n| H.O.A.P | 30.9 | 31.6 | 32.5 | 32.1 |\n\nWe observe that SRAN trained with the additional dataset shows only marginal improvement, and the performance of the SCL model trained with the additional transposed dataset even decreases. These results show that the benefit of the transposed matrices to the baselines is limited. This is because most previous visual reasoning methods (e.g., SRAN and SCL) can extract both row-wise and column-wise features, so the transposed PGM matrices can hardly provide more information than the original matrices for the baseline models. Therefore, our comparisons are fair. We have added these results in the revision (Section 4.4, Appendix E).\n\n**Q2: Are representations of rules instance-invariant? This appears counter-productive to the argument that the ARII model has learned instance-invariant representations, since if it were indeed instance-invariant then the visual domain should have little to no impact on the rule representation.**\n\n**A2:** The abstract rule in visual reasoning is **highly** relevant to the visual domains. As defined in the original paper on the PGM reasoning task, the abstract rule in a PGM matrix is a set of triples [r; o; a] (r: relation, o: object, a: attribute), which means that, within a row of images, the attribute a of the object o follows the relation r. Therefore, a rule in abstract reasoning is still tied to a specific visual domain; two example triples are sketched below for concreteness. 
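The following minimal sketch illustrates the [r; o; a] structure. The relation, object, and attribute vocabularies follow the PGM dataset definition; the two specific rules are made-up examples for illustration, not taken from the rebuttal:

```python
# A PGM rule is a set of [relation, object, attribute] triples.
# Vocabularies as defined for the PGM dataset:
RELATIONS = ["progression", "XOR", "OR", "AND", "consistent union"]
OBJECTS = ["shape", "line"]
ATTRIBUTES = ["size", "type", "colour", "position", "number"]

# Two illustrative rules: each is tied to a visual domain (object, attribute)
# but abstracts away positions, directions, and other attribute values.
rule_1 = [("progression", "shape", "size")]   # shape sizes progress along a row
rule_2 = [("AND", "line", "colour")]          # line colours combine via AND
```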
In the meanwhile, the above rule is still abstract, because the rule is invariant to the positions, directions, and other attributes of the object.\n\nIn addition, Figure 3 shows that the rule representations are clustered according to individual rule triples and ignore the identity of the specific instances, clearly demonstrating the rule representations are instance-invariant.\n\n**Q3: Showing that one particular rule (such as AND) will be similar irrespective of the visual domain.**\n\n**A3:** “AND” is not a particular rule in the abstract reasoning tasks. The abstract rule in the PGM matrix is a set of triples [r; o; a] (r: relation, o: object, a: attribute). “AND” is just one type of relation in the rule. Actually, the “AND” relation in different rules (denoted by [AND, .,.]) presents dramatically distinct properties. For example, the “AND” relation of the shape of lines is quite different from the “AND” relation of the color of lines. Therefore, analyzing the “AND” relation is infeasible.\n\n**Q4: Draw connections to the previous related works.**\n\n**A4:** In the revised submission, we have updated the related work and the methodology section to provide reasonable credit to the works that the ARII method is built upon.\n\n[1] Yuhuai W., Honghua D., Roger G., and Jimmy B.. The scattering compositional learner: Discovering objects, attributes, relationships in analogical reasoning. arXiv preprint arXiv:2007.04212, 2020\\\n[2] Sheng H., Yuqing M., Xianglong L., Yanlu W., and Shihao B.. Stratified rule-aware network for abstract visual reasoning. In AAAI, pages 1567–1574, 2021.\n\n\n\n\n", " **Q9: The combinatorial benefit of using r1&2 with r1&3 and r2&3.**\n\n**A9:** Following the reviewer’s suggestions, we conduct an additional study on the combinatorial benefit of using r1&2 with r1&3 and r2&3. In particular, we remove the information of r2&3 from the inputs of the reasoner and report the performance in the below table. We observe that the reasoning performance decreased slightly compared to the original model. This result indicates that the combination of using r1&2 with r1&3 and r2&3 is beneficial to the reasoner but it is not the main reason for the performance gain of ARII. Instead, Table 3 (the third column) in the main text presents that the internal inference could significantly improve the robustness of the reasoning. We have added these results in our revision (Appendix C).\n| Regime | ARII (r1&3 & r2&3) | ARII (w/o r2&3) |\n|---------------|------|--------------------------------|\n| Interpolation | 71.6 | 69.5 |\n| H.O.A.P | 61.6 | 57.1 |\n\n\n**Q10: Have you evaluated what happens in a situation where the reasoning required to obtain the correct answer is generally seen as harder than the normal type of reasoning?**\n\n**A10:** The PGM dataset, especially the generalization regime, is well-suited for harder reasoning. The rule in the PGM dataset is represented as a set of triples [r; o; a] (r: relation, o: object, a: attribute) and there are 29 possible unique triples. For example, in the held-out triple regime, seven of these triples are held out for the test set. The held-out triples never occurred in questions in the training set, and the problems in the test set contained at least one of them. In traditional reasoning tasks, the test set generally contains the same rules as the training set, just with the test image changed. 
Therefore, the generalization regime in PGM is a harder way to test reasoning ability, since the rule distributions of the training and test sets are different.\n\n\n[1] Xu, H., and Mannor, S. (2012). Robustness and generalization. Machine Learning, 86(3), 391-423.\\\n[2] Proudfoot, M., and Alan R. L. The Routledge Dictionary of Philosophy. Routledge, 2009.\\\n[3] Donadello, I., Serafini, L., and Garcez, A. D. A. Logic tensor networks for semantic image interpretation. IJCAI, 2017.\\\n[4] Artur S. G., Luis C. L., and Dov M. G. Neural-symbolic cognitive reasoning. Springer Science & Business Media, 2008.\\\n[5] Hitzler, P. Neuro-Symbolic Artificial Intelligence: The State of the Art. 2021.\\\n[6] Garcez, A. D. A., and Lamb, L. C. Neurosymbolic AI: the 3rd wave. arXiv preprint arXiv:2012.05876, 2020.\\\n[7] Diligenti, M., Gori, M., and Sacca, C. Semantic-based regularization for learning and inference. Artificial Intelligence, 244, 143-165, 2017.\\\n[8] Zheng, Z., Wang, W., Qi, S., and Zhu, S. C. Reasoning visual dialogs with structural and partial observations. In CVPR, 2019.\\\n[9] Mikołaj M. and Jacek M. Multi-label contrastive learning for abstract visual reasoning. arXiv preprint arXiv:2012.01944, 2020.\\\n[10] Duo W., Mateja J., and Pietro L. Abstract diagrammatic reasoning with multiplex graph networks. In ICLR, 2020.\n\n\n\n", " We thank the reviewer for the instructive comments and are grateful for the time you spent with our work. We are also glad for the acknowledgment that the problem we are working on is interesting.\n\n**Q1: Captions of the figures could have been more informative.**\n\n**A1:** We have added more descriptions to the captions of the figures to make them easier to understand in the revised manuscript.\n\n**Q2: Related work.**\n\n**A2:** We have extended the related work section to better situate our work in both the abstract reasoning and the neural-symbolic literature.\n\n**Q3: Acronyms are used in the paper without first stating what they stand for.**\n\n**A3:** We have fixed this in the revision.\n\n**Q4: Robustness should be defined.**\n\n**A4:** The robustness of a machine learning model is defined as the closeness of the testing error to the training error when the testing samples are \"similar\" to the training samples [1]. Figure 3 in the main text shows that the rule representations in ARII mainly depend on the rule identity and are irrespective of the specific samples (including noise). Therefore, the learned rule representations in ARII are robust. We have added the definition of robustness in the revised version (Section 3.5).\n\n**Q5: Lack of a definition of \"reasoning\".**\n\n**A5:** One widely used definition is that reasoning is \"the ability to consciously apply logic from premises to a conclusion\" [2,4]. Neural networks use different layers of distributed neurons to implicitly represent the premises, the logic, and the intermediate results, and they formulate the final conclusion-making process as a classification or generation task. Although it is difficult to analyze the reasoning process of neural networks, reasoning performance has been significantly improved by neural networks [3,5,6,7,8]. As the reviewer pointed out, combining symbolism and connectionism to build an interpretable, robust, and accurate reasoner is a promising direction for future work. 
We have added the above discussion in the revision (Section 5).\n\n**Q6: It is not easy to see the relationship between Fig 1 and the rule encoder, the reasoner, and the internal inferrer.**\n\n**A6:** Fig 1 illustrates the overall motivation of this paper, that is, leveraging internal inference to build a robust rule representation for visual reasoning. In order to highlight the key points, we have omitted the individual modules (such as the rule encoder and the reasoner). Readers can easily find the definitions and descriptions of these modules in Fig 2 and the main text.\n\n**Q7: How can you be sure that the \"internal inference\" process isn't a way of providing additional information?**\n\n**A7:** The standard for judging whether a model has accessed additional information is simple: whether its inputs and supervision signals are the same as those of the baselines. Some approaches utilize extra labels, such as the meta-target, which encodes the rule of the instance [9, 10]; they train the model to predict not only the answer but also the meta-target. That is, they use additional supervision signals to enhance the model performance. By contrast, our method only takes the incomplete matrix as input and the missing image as the supervision signal, without any additional information during training or testing.\n\n**Q8: The \"internal inference\" process seems to function in such a way as to turn propositional rules into rules with variables. Is this a fair description of the purpose of the proposed process?**\n\n**A8:** The internal inference process does not turn propositional rules into rules with variables. Reviewer 4Dou has a nice summary of the internal inference process, i.e., it is a \"kind of 'hypothesis testing' that characterizes the way human reasoners solve these problems (internally generating proposals for the abstract rules, and then checking to see if they explain the presented panels)\". In order to enhance the above process and obtain robust rule representations, we perform the internal inference multiple times by randomly masking different images in the given two rows (i.e., the rows that the internal inferrer uses). Also, the ablation study (Table 3) validates our design, that is, the internal inference could significantly improve the robustness of the reasoning.", " We thank you for the review and are grateful for the time you spent with our work.\n\n**Q1: Design choices of the internal inference.**\n\n**A1:** We have conducted an ablation study on the internal inference to investigate the role played by the generative process. In particular, we replace the generative process with classification. In the classification task, the input is the two rows of images with one image masked out (i.e., $I_m$ in Equation 14, the same as in the original ARII). The classification choices include the existing options in the instance and the particular image we mask out. The inferrer reuses the reasoner module to make the classification decisions. Apart from the above changes, the other settings are the same as in the original ARII.\n\nThe following table reports the results of the classification variant of ARII on the PGM dataset. We observe that the classification variant yields slightly lower performance than the generative variant on the interpolation regime. 
These results indicate that the classification process could also serve as a reasonable task in the internal inference module of ARII, but the generative process is better. This ablation study further demonstrates that internal inference plays a critical role in visual reasoning, no matter which inference task is performed. We have added the above results in the revision (Appendix B).\n\n| Regime | ARII (generation) | ARII (classification) |\n|---------------|------|--------------------------------|\n| Interpolation | 71.6 | 62.4 |\n| H.O.A.P | 61.6 | 59.4 |\n\n**Q2: The details of the vector quantization.**\n\n**A2:** We have added more details on vector quantization in the revision (Section 3.3) and provided a figure (Appendix Fig 1) to explain the computation process.\n\n**Q3: The size of the vector quantization.**\n\n**A3:** Since our method selects multiple quantized vectors from the codebook to compose the final rule representation, the combinatorial number of rules can be huge (i.e., $A_{K_e}^{K_r}$, with $K_e$=512, $K_r$=80). Therefore, ARII has sufficient expressive capacity in the vector quantization to represent the visual rules. As the reviewer points out, one limitation of ARII is reasoning about cases where the rule is brand-new and cannot be represented by previously learned patterns. However, this drawback is a general challenge for all current methods. ", " The authors propose ARII, a new framework to solve the RPM datasets. The main idea is to try to learn a reusable rule representation from an instance of the task, such that it can be applied to other tasks when applicable. They achieve this by mapping individually learned rule vectors to a discrete representation using vector quantization and then reusing those rules to infer some of the already-present panels in the RPM instance. Overall, they show that ARII is able to perform better, specifically on larger grids and more difficult settings. Strengths:\n* The paper is nicely motivated in general. The design of each module is well described and easy to follow.\n* The final results on both performance and interpretability are quite impressive.\n\nWeakness:\n* The motivation for the design choices is not well supported in the ablation. I'm specifically not convinced about the design choices of the internal inference (Section 3.5). Why does the process need to be generative instead of classification-based? For negative answer choices, you can fall back on the existing options in the instance. The correct answer is the particular image you mask out. The advantage of this is that you can potentially reuse the reasoner module for this part as well. This can at least be an additional idea to try in ablations, if not the main approach. Writing suggestions:\n* It would help to have some background on the details of the vector quantization steps involved in the discretization of the rule embeddings. Maybe a motivating figure to explain with an example what q_l, etc., mean could make that section more accessible. Please add a section to describe the limitations of the work. I believe the major limitation is deciding the size of the vector quantization you do to obtain unique rules. Potentially, there can be new rules for each instance of the dataset, where your basic hypothesis breaks. 
Try to add some discussion along these or other relevant points.", " The paper addresses the important issue of abstract reasoning, in particular the ability of neural networks at performing extrapolation.\n\nThe paper's main goal is summarised by the authors as to \"ameliorate the reasoning generalizability by learning a robust rule representation\". This needs to be reconciled with the goal of addressing extrapolation, since generalizability is something else. \n\nThe authors claim to learn rule representations and to perform reasoning. Many ML systems learn rules. It is important to note that the rule representations are embeddings and that the reasoning is by similarity only. Looking at the paper from that perspective makes the claim made around visualization and interpretation of the rules seem rather vague. \n\nHaving said that, the task at hand being considered in the paper is very interesting from a reasoning point of view. And the improved results obtained are valid and relevant and should not be ignored. \n\nA main claim of the paper is that other approaches may \"require additional prior information, which is not a general way to address the reasoning problem\". The paper is well-written and well-organized, although the captions of the figures could have been more informative, e.g. Figs 3 and 4 are difficult to grasp. \n\nThe task at hand being considered in the paper is very interesting from a reasoning point of view. And the improved results obtained are valid and relevant.\n\nThe related work section is generally limited to work on the specific datasets, e.g. RAVEN and variations thereof.\n\nA number of acronyms is used in the paper without stating first what they stand for, sometimes even without providing a reference, e.g. PGM. \n\nRobustness should also be defined (ideally). The most problematic part of the paper is the lack of a definition of \"reasoning\". Reasoning in neural networks is an important endeavour. It is nowadays common for neural net papers to refer to reasoning without providing any definition of what kind of reasoning or language that is being considered (and there are many). These are all formally well-defined in the area of knowledge representation in AI. The area of neurosymbolic AI has taken those definitions on board. If the field is to develop as needed in this direction then it is important to consider the work that goes on since the 1990's in knowledge representation and neurosymbolic AI. I'd refer the authors at least to:\n\nhttps://link.springer.com/book/10.1007/978-3-540-73246-4\nhttps://ebooks.iospress.nl/volume/neuro-symbolic-artificial-intelligence-the-state-of-the-art\n\nAnd specifically on adding knowledge to neural nets:\nhttps://www.ijcai.org/proceedings/2017/221\nhttps://www.researchgate.net/publication/283897473_Semantic-based_regularization_for_learning_and_inference\nhttps://arxiv.org/abs/1605.06523\n \nAnd for a recent survey:\nhttps://arxiv.org/abs/2012.05876 It is not easy to see the relationship between Fig 1 and the rule encoder, the reasoner, and the internal inferrer. Is this done on purpose?\n\nA main claim of the paper is that other approaches may \"require additional prior information, which is not a general way to address the reasoning problem\". Other approaches may provide a general mechanism for reasoning. 
When it comes to a fair comparison, how can you be sure that the \"internal inference\" process isn't a way of providing additional information?\n\nThe \"internal inference\" process seems to function in such a way as to turn propositional rules into rules with variables. Is this a fair description of the purpose of the proposed process? You state that \"we randomly mask one panel of the first two rows and ask the inferrer to infer the masked panel based on the rule representation. We perform the internal inference multiple times to make the rule representation invariant to specific instances.\" To answer my question above, this process needs to be specified well and analysed from the perspective of reasoning capabilities.\n\nHow do we know that the results derive from better reasoning rather than, e.g., the combinatorial benefit of using r1&2 with r1&3 and r2&3? Did the ablation study conclude anything about this? How do we check whether this internal inference is indeed producing the expected \"more abstract\" rules (with variables)?\n\nWhen one says that one evaluates \"reasoning ability\", one would wish to evaluate what kinds of reasoning take place, not simply evaluate improvements in test set accuracy. Have you evaluated what happens in a situation where the reasoning required to obtain the correct answer is generally seen as harder than the normal type of reasoning? n/a", " The authors propose a novel framework, Abstract Reasoning via Internal Inferences (ARII), for the task of solving abstract visual reasoning problems. In particular, the ARII model is designed to solve the task of selecting the correct candidate panel for a Raven's Progressive Matrices (RPM) reasoning problem. The proposed method consists of three components: \n\n1. A rule encoder network that takes the first two context rows of the RPM matrix and generates vector-quantized codes representing the abstract rule encoded in the images. The encoder works by generating CNN embeddings for each panel, which are then split into groups and concatenated based on their position. These image encodings are once again split into positions for each of the six image panels and then re-combined to generate a continuous rule representation. The continuous rule representation is then discretized using the Ke × D dimensional codebook, which is trained together with the encoder.\n2. A reasoner that treats each candidate panel as completing a potential rule with the context panels in the third row, computes the rule encoding of this candidate rule against rows 1 and 2, and then selects the candidate rule (panel) with the highest similarity to the rule encoded by rows 1 and 2. \n3. A generative internal inferrer that sequentially masks out each of the six context images in the first two rows and aims to reproduce the original rule encoding with one context image missing, by minimizing the distance between the original rule and the rules extracted with a panel missing. \n\nARII outperforms existing methods when it comes to generalizing to novel test domains on the i-RAVEN and PGM datasets. Ablation studies performed by the authors demonstrate the utility of both the discretized codebook and the internal inference mechanism. \n Strengths:\n\n1. The paper is extremely well written: the RPM problem is presented succinctly, the ARII model is explained clearly and each component is motivated well, and the training procedure, results, and ablation studies are easy to follow.\n\n2. 
The idea of utilizing a learnable vector-quantized codebook for representing abstract rules in PGM problems is quite novel, and it works well in practice, as seen from the generalization results of ARII. \n\n3. Under the scenario where auxiliary information such as symbolic rule embeddings is not used for training, ARII demonstrates state-of-the-art performance on four generalization splits of the i-RAVEN dataset and five generalization splits of the PGM dataset.\n\nWeakness:\n\n1. In Equation 1, the authors make an implicit assumption that the RPM rule is encoded row-wise in the image panels. While this holds true for the i-RAVEN dataset, it is not necessarily true for PGM. The authors work around this limitation by adding the transposed PGM matrices for training. Several other models compared in the paper do not explicitly rely on this assumption, and hence it would only be fair to compare these models with this extra training step included. \n\n2. In the ablation study, the authors show that the codes E-358 and E-181 occur in the top three codes for each possible relation in the visual domain of [., line, color] rules. This appears counter-productive to the argument that the ARII model has learned instance-invariant representations, since if it were indeed instance-invariant then the visual domain should have little to no impact on the rule representation. A better way to demonstrate this would have been to take the codes for [AND, ., .] across multiple visual domains and show that the codes are consistent across various object and attribute values. Could the authors further validate their claims that the codes learned by ARII are instance-invariant? In abstract reasoning terms, this would mean that the codes learned for one particular rule (such as AND) should be similar irrespective of the visual domain. The ablation studies done by the authors do not demonstrate this. Please see the limitations highlighted in the review for one possible suggestion to demonstrate this.\n\n**Post-Rebuttal Edit:** See response to the authors' rebuttal below. Score increased to a 7 post-rebuttal. The authors did a good job of explaining several assumptions in their modeling process, such as the space of rules not being large. Several of the ideas utilized in the ARII model build upon previous research (such as splitting and grouping panel embeddings like the Scattering Compositional Learner, and row-wise embeddings relying on feature locations like the Multiplex Graph Network). While many of these papers are referenced, the authors fail to draw connections and provide reasonable credit to the authors of these works in their Related Work section. It would be great if the authors could highlight these connections, as it will help future research in abstract visual reasoning and would only be fair to the authors of previous works.", " This paper proposes a novel method for visual abstract reasoning tasks, focused on the RPM-like benchmarks PGM and RAVEN. Some of the key features of the new method are the use of a learnable discrete representation to encode abstract rules, and a self-supervised objective that requires the model to use the encoded rule to fill in masked panels. The model achieves competitive results on both benchmarks, and shows particularly strong performance in the OOD regimes of the PGM dataset. There are also some interesting analyses of the rule representations learned by the model. 
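As a concrete illustration of the learnable discrete rule representation discussed above, the nearest-neighbour codebook lookup can be sketched in the style of VQ-VAE. This is a generic sketch with a straight-through gradient, not the ARII implementation, and all names are illustrative:

```python
import torch

def quantize(z, codebook):
    # z: (B, K_r, D) continuous rule features; codebook: (K_e, D) learnable codes.
    # For each feature, pick the nearest code by Euclidean distance.
    dists = torch.cdist(z.reshape(-1, z.size(-1)), codebook)  # (B*K_r, K_e)
    idx = dists.argmin(dim=-1)
    q = codebook[idx].reshape(z.shape)
    # Straight-through estimator: forward pass uses q, gradients flow to z.
    return z + (q - z).detach(), idx
```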
## Strengths\n- The model achieves excellent results, surpassing or competitive with the state-of-the-art on both benchmarks, and with particularly strong performance on the OOD regimes of PGM, which are arguably the most important part of these benchmarks.\n- The evaluation is very thorough, evaluating the model on all formats and generalization regimes for the two major benchmarks in this area.\n- The inductive biases introduced in this method are quite interesting. Using discrete encodings of the rules likely forces them to be more abstract, which presumably contributes to the OOD performance. The self-supervised objective ('internal inference') is an interesting way to capture the kind of 'hypothesis testing' that characterizes the way human reasoners solve these problems (internally generating proposals for the abstract rules, and then checking to see if they explain the presented panels).\n- The ablations nicely demonstrate the contributions of these two components.\n- The authors have included some code in the supplemental material, and plan to release all code upon acceptance.\n\n## Weaknesses\n- The primary weakness is that there are a number of important details missing that need to be included, perhaps in an appendix, e.g. training details (number of epochs, batch size, learning rate etc), model hyperparameters, and the computing machinery that was used to train the models. These should be included in the final version of the paper.\n\nI also have a few other minor notes:\n- I like the rule interpretability analysis, but it would be more informative if it were more comprehensive, rather than only focusing on rules applied to line color.\n- It would be informative to also test the proposed method on this recently released variation on RAVEN: https://arxiv.org/abs/2206.14187\n- It would be good to explicitly mention the connection between the 'internal inference' objective and self-supervised learning.\n- The object encoding method employed here is very inflexible, relying on the rigid spatial structure of these problems. I think it's ok in the context of this work, but it would be good to mention this as a limitation and a potential avenue for future work. I don't have any questions, other than regarding some of the missing details about hyperparameters and training. I don't envision any potential negative societal impact from this work.", " A model called ARII is proposed in this work to solve abstract reasoning problems. The ARII model is composed of a rule encoder, a reasoner, and an internal inferrer. The rule encoder adopts a vector quantization method and turns continuous rule representation from the first two roles into a discrete token in the codebook. The reasoner uses a similarity-based measure to select the answer: the candidate which, when filled in, generates rule features most similar to the first two rows will be treated as the answer. In addition, the internal inferrer reconstructs a masked panel like in BERT. In experiments, the authors conduct evaluation on I-RAVEN and PGM. It's found that the model achieves best performance on I-RAVEN and improves on the generalization regimes on PGM. The paper is well written with clear explanation and a streamlined flow. \n\nThere are a few things that are done right which I believe lead to the performance boost in this work. (1) image features are extracted from a group level: considering the grid-like layout in I-RAVEN and PGM, the position-based grouping could help better capture the statistics in the images. 
(2) Quantization in rules: as there are only a finite number of rules, quantization seems a reasonable choice, which is unfortunately missing in existing works. (3) BERT-like masked reconstruction. This might be the part that contributes to the generalization improvement, as hinted by existing language modeling works.\n\nI'm wondering, since the masked reconstruction part is plug-and-play, whether performance in generalization could be consistently improved across different models. It would be nice to see some results regarding this point. \n\nAlso, I wonder how reasoning is conducted. It occurs to me that the work still applies statistical learning but lacks an explanation of how a model performs induction and reasoning. Some statistics may be captured using the method, but as shown in the generalization results (which are indeed improved), it still falls far below a fair expectation. I would be interested in knowing how the authors view this perspective: does reasoning emerge from statistical learning, and if not, what is your opinion?\n\nFinally, I notice a few neuro-symbolic methods [1, 2] are missing, which, when added, should better complete the full picture of the area. \n\n[1] Zhang, Chi, et al. \"Abstract spatial-temporal reasoning via probabilistic abduction and execution.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021. \n[2] Zhang, Chi, et al. \"Learning algebraic representation for systematic generalization in abstract reasoning.\" arXiv preprint arXiv:2111.12990 (2021). 1. It would be nice to see results of existing models trained with the masked modeling loss. \n2. How the reasoning capability emerges. No. Limitations are not adequately addressed, though there are no ethical issues." ]
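The plug-and-play masked-reconstruction idea raised in the review above can be made concrete with a generic auxiliary-loss sketch. This is not the ARII implementation: the module names, tensor shapes, and masking scheme are hypothetical, and the loss is simply added to whatever supervised objective the host model already uses.

```python
import torch
import torch.nn.functional as F

def masked_reconstruction_loss(panels, encoder, rule_encoder, decoder):
    # panels: (B, 6, C, H, W) -- the six context panels of the first two rows.
    B = panels.size(0)
    m = torch.randint(0, 6, (B,))            # one random panel index per instance
    masked = panels.clone()
    masked[torch.arange(B), m] = 0.0         # blank out the chosen panel
    feats = encoder(masked.flatten(0, 1)).unflatten(0, (B, 6))
    rule = rule_encoder(feats)               # rule representation from the rest
    recon = decoder(rule)                    # predict the missing panel
    target = panels[torch.arange(B), m]
    return F.mse_loss(recon, target)         # auxiliary term for the total loss
```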
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 8, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 5, 5, 5 ]
[ "zJLHmA8Li7H", "SGr1fD18KUd", "Fjxk4XF5Yp3", "MXqeButhyJN", "EWsMJ9Z25AC", "NOBjU2ooHIe", "BHHSEshkXlD", "pDPJxw5633d", "2ZHUe_4Zczi", "5_7g21t617c", "m6PLFniDhEh", "UNs35GXuSuq", "nlgju2MKWd", "RTYPNrJAed", "nips_2022_UwzrP-B38jK", "nips_2022_UwzrP-B38jK", "nips_2022_UwzrP-B38jK", "nips_2022_UwzrP-B38jK", "nips_2022_UwzrP-B38jK" ]
nips_2022_8N1NDRGQSQ
CalFAT: Calibrated Federated Adversarial Training with Label Skewness
Recent studies have shown that, like traditional machine learning, federated learning (FL) is also vulnerable to adversarial attacks. To improve the adversarial robustness of FL, federated adversarial training (FAT) methods have been proposed to apply adversarial training locally before global aggregation. Although these methods demonstrate promising results on independent identically distributed (IID) data, they suffer from training instability on non-IID data with label skewness, resulting in degraded natural accuracy. This tends to hinder the application of FAT in real-world applications where the label distribution across the clients is often skewed. In this paper, we study the problem of FAT under label skewness, and reveal one root cause of the training instability and natural accuracy degradation issues: skewed labels lead to non-identical class probabilities and heterogeneous local models. We then propose a Calibrated FAT (CalFAT) approach to tackle the instability issue by calibrating the logits adaptively to balance the classes. We show both theoretically and empirically that the optimization of CalFAT leads to homogeneous local models across the clients and better convergence points.
Accept
This submission aims to ensure adversarial robustness in federated learning when label skewness exists among different local agents. The main idea of the proposed solution is to calibrate the logits to balance the predicted marginal label probabilities. Most of the reivewers found the topic studied in this work relevant, important and timely. The authors have also successfully addressed the concerns from the reviewers during the rebuttal. Hence I recommend acceptance.
test
[ "f6HVuX9kha", "plby1aZTqjb", "kiRImXYHlIs", "96Ze1xoFfo", "QjZkfP2JfnC", "wZw4gN6WuFg", "bkevMw1ygTT", "Rlziwi0m_EC", "fN26kx5K2W0", "gjQ1wHuufWG", "_W1PL_kr5j7", "AdNLAT3aiTT", "n71A4y5mrO", "uRaNWE01BPU", "28fbP8GRvF6", "c4OK30GjTr2", "n-z2bWUE19I" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for further clarification and additional experiments, which solved most of my concerns. I would like to improve my score.", " We would like to gently remind the reviewer of any follow-up clarifications or questions that we can do our best to address in the remaining limited time. We hope our previous response has clarified your comments on our technical contribution and limitations, and it has helped improved your opinion of our work. We truly appreciate all your comments and will incorporate them in our revised version. Please let us know if there are additional comments you have for us.", " We would like to gently remind the reviewer of any follow-up clarifications or questions that we can do our best to address in the remaining time. We hope our previous response has clarified your comments and it has helped improved your opinion of our work. We truly appreciate all your comments and will incorporate them in our revised version. Please let us know if there are additional comments you have for us.", " I want to thank the authors for your response, which addressed my major concerns. I have improved my rating.", " Comment:\nWe would like to thank the reviewer for the detailed comments, and particularly, for admitting our work with an interesting topic and good motivation.\n\nWe hope our response has adequately addressed your concerns regarding the significance of FAT and related references. The experiment results in the iid setting are shown in Table 6. Note that more information can be found in our rebuttal summary.\n\nKindly let us know if anything is unclear. We truly appreciate your valuable feedback and comments that help us further improve our work.\n", " We would like to thank the reviewer for taking the time to review our paper and for the valuable comments.\n\nKindly let us know whether we have adequately addressed your comments on the experiment design and writing. Note that more detailed information is provided in our rebuttal summary.\n\nWe truly appreciate this opportunity to improve our work and shall be most grateful for any feedback you could give to us.", " Dear Reviewer RW5e , Thanks again for the valuable comments.\n\nWe have now clarified the significance and the limitations of our CalFAT and also show the new empirical results on Table 2. Note that more detailed information is shown in our rebuttal summary.\n\nPlease kindly let us know if anything is unclear. We truly appreciate this opportunity to improve our work and shall be most grateful for any feedback you could give to us.", " We sincerely thank all reviewers for their valuable comments and suggestions. 
We have made the following updates (where * denotes content already included in our initial submission, and + denotes content newly added during the rebuttal):\n\n* *Abstract: changed \"much improved convergence rate\" to \"better convergence point\".\n\n+ +Section 1: added discussions about the applications of FAT; added references to relevant papers.\n\n* *Section 4.1: reorganized the main experimental results of our CalFAT against other FAT methods under the non-IID setting.\n\n+ +Section 4.1: added new experimental results of our CalFAT against state-of-the-art long-tail learning methods under the non-IID setting; added standard deviations to the empirical results.\n\n+ +Section 4.5: added new experimental results of our CalFAT against other FAT methods under the IID setting.\n\n+ +Section 5: added a discussion about other non-IID settings in FAT.\n\n+ +Appendix D.6: added new experimental results of our CalFAT against other FAT methods under the IID setting.\n\n* *Appendix D.7: the experimental results of our CalFAT against other FAT methods across different numbers of clients.\n\n+ +Appendix D.7: added standard deviations to the empirical results.\n\n* *Appendix D.8: the experimental results of our CalFAT against other FAT methods under different levels of label skewness.\n\nWe have revised our paper according to all the valuable comments; please let us know if anything is still unclear. We are happy to run more experiments if you have further suggestions.", " **Q3:** The considered setting of federated learning with label skewness may be of limited interest to the community.\n\n**A3:** Thanks for the thoughtful comment. We agree that there exist other types of non-IID settings, each posing unique challenges that require different techniques to tackle [a]. The label skewness considered in this paper is arguably the most representative non-IID setting in the literature. There is an extensive body of work that specifically focuses on label skewness in federated learning, e.g., [b, c, d], which manifests the importance of label skewness in federated learning.\n\nIt is arguably difficult to handle all non-IID settings with one single method. We consider this a limitation of our work as well as of other works. We have clarified and discussed this point in the revision (Section 5, Line 364-366).\n\nWe hope the above clarifies the significance of our method. We genuinely hope that, as the first work that reveals and properly tackles one major cause of the training instability and severely degraded natural accuracy in FAT, our work is not penalized for not being able to cover all the non-IID settings.\n\n---\n\n**References:**\n\n[a] Li, Xiaoxiao, et al. \"FedBN: Federated Learning on Non-IID Features via Local Batch Normalization.\" International Conference on Learning Representations. 2020.\n\n[b] Li, Tian, et al. \"Federated optimization in heterogeneous networks.\" Proceedings of Machine Learning and Systems 2 (2020): 429-450.\n\n[c] Shoham, Neta, et al. \"Overcoming forgetting in federated learning on non-IID data.\" arXiv preprint arXiv:1910.07796 (2019).\n\n[d] T. Dinh, Canh, Nguyen Tran, and Josh Nguyen. \"Personalized federated learning with Moreau envelopes.\" Advances in Neural Information Processing Systems 33 (2020): 21394-21405.", " Thanks for your valuable comments. We hope our response below can adequately address your concerns.\n\n---\n\n**Q1:** The technical contribution is quite limited. 
CalFAT is just a simple adaptation of [22] to federated adversarial training.\n\n**A1:** \nThanks for the insightful question. While the logit calibration idea of our method follows [22] (termed LogitAdj), we would like to argue that we have made substantial extensions and improvements over LogitAdj:\n\n(1) We extend the logit calibration idea of [22] to the setting of FAT, and theoretically prove its usefulness in FAT via Propositions 1 and 2. \n\n(2) Technically, we use different loss functions from LogitAdj. Specifically, we use a calibrated KL loss for the local inner maximization and a calibrated CE loss for the local outer minimization, while LogitAdj only has a calibrated CE loss. As shown in Table 2 below, LogitAdj obtains much lower natural and robust accuracy than our CalFAT.\n\n(3) Please also note that the imbalance issue in centralized machine learning (majority class vs. minority class) is a different concept from the non-IID label skewness (different classes at different clients) in FAT. This difference poses a notable difficulty in the theoretical analysis. \n\nWe believe our exploration of logit calibration in FAT is of significant interest to the FL community. We hope the above clarifies the significance of our method.\n\n(Table 2: Combining FAT methods with different losses.)\n|Metric | Natural | PGD-20 |\n|:------------------------------:|:-----------:|:-----------:|\n|MixFAT | 53.35 $\pm$ 0.11 | 26.27 $\pm$ 0.11 |\n|MixFAT + LogitAdj | 57.53 $\pm$ 0.21 | 27.65 $\pm$ 0.16 |\n|MixFAT + RoBal | 58.25 $\pm$ 0.13 | 27.86 $\pm$ 0.10 |\n|MixFAT + Calibration (ours) | **60.23** $\pm$ 0.19 | **28.67** $\pm$ 0.14 |\n||\n|FedPGD | 46.96 $\pm$ 0.16 | 26.74 $\pm$ 0.18 |\n|FedPGD + LogitAdj | 59.79 $\pm$ 0.15 | 28.84 $\pm$ 0.12 |\n|FedPGD + RoBal | 61.48 $\pm$ 0.07 | 29.51 $\pm$ 0.07 |\n|FedPGD + Calibration (ours) | **63.91** $\pm$ 0.13 | **30.72** $\pm$ 0.16 |\n||\n|FedTRADES | 46.06 $\pm$ 0.12 | 26.31 $\pm$ 0.12 |\n|FedTRADES + LogitAdj | 58.26 $\pm$ 0.20 | 27.92 $\pm$ 0.19 |\n|FedTRADES + RoBal | 59.25 $\pm$ 0.23 | 28.63 $\pm$ 0.08 |\n|FedTRADES + Calibration (ours) | **63.12** $\pm$ 0.10 | **30.27** $\pm$ 0.23 |\n||\n|FedMART | 25.67 $\pm$ 0.21 | 18.10 $\pm$ 0.22 |\n|FedMART + LogitAdj | 42.01 $\pm$ 0.10 | 24.92 $\pm$ 0.02 |\n|FedMART + RoBal | 44.26 $\pm$ 0.22 | 25.57 $\pm$ 0.17 |\n|FedMART + Calibration (ours) | **48.85** $\pm$ 0.08 | **27.19** $\pm$ 0.11 |\n||\n|CalFAT (ours) | **64.69** $\pm$ 0.08 | **31.12** $\pm$ 0.11 |\n\n---\n\n**Q2:** The compared baselines (e.g., MixFAT and Fed**) are off-the-shelf and do not adapt to the special property of label skewness. Stronger baselines utilizing label skewness should be involved.\n\n**A2:** \nThanks for the valuable suggestion. We choose MixFAT and FedRBN as baselines because they are the latest FAT methods. Nevertheless, we have followed your suggestion and explored two additional losses that better utilize the label skewness property: we adopted two long-tail learning losses (LogitAdj and RoBal) and combined them with the FAT methods; a minimal sketch of the calibration idea is given below. 
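The sketch below illustrates the general logit-calibration recipe (following the logit-adjustment idea of adding the log of the client's local class prior to the logits before computing the loss). It is a minimal sketch under stated assumptions, not the exact CalFAT implementation: `class_counts` and `tau` are illustrative names, and the precise weighting used by CalFAT may differ.

```python
import torch
import torch.nn.functional as F

def calibrated_ce(logits, targets, class_counts, tau=1.0):
    # class_counts: per-class sample counts on the local client (label skew).
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)  # calibrate the logits
    return F.cross_entropy(adjusted, targets)           # local outer minimization

def calibrated_kl(adv_logits, clean_logits, class_counts, tau=1.0):
    # Calibrated KL sketch for the inner loop: compare calibrated predictive
    # distributions of adversarial and clean inputs; maximized w.r.t. the
    # adversarial example when generating attacks.
    prior = class_counts.float() / class_counts.sum()
    shift = tau * torch.log(prior + 1e-12)
    log_p_adv = F.log_softmax(adv_logits + shift, dim=-1)
    p_clean = F.softmax(clean_logits + shift, dim=-1)
    return F.kl_div(log_p_adv, p_clean, reduction="batchmean")
```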
The results are presented in Table 2 below, which shows that: 1) our calibration loss delivers better performance than both LogitAdj and RoBal; 2) CalFAT performs the best amongst all FAT baselines.\n\n(Table 2: Combining FAT methods with different losses.)\n|Metric | Natural | PGD-20 |\n|:------------------------------:|:-----------:|:-----------:|\n|MixFAT | 53.35 $\pm$ 0.11 | 26.27 $\pm$ 0.11 |\n|MixFAT + LogitAdj | 57.53 $\pm$ 0.21 | 27.65 $\pm$ 0.16 |\n|MixFAT + RoBal | 58.25 $\pm$ 0.13 | 27.86 $\pm$ 0.10 |\n|MixFAT + Calibration (ours) | **60.23** $\pm$ 0.19 | **28.67** $\pm$ 0.14 |\n||\n|FedPGD | 46.96 $\pm$ 0.16 | 26.74 $\pm$ 0.18 |\n|FedPGD + LogitAdj | 59.79 $\pm$ 0.15 | 28.84 $\pm$ 0.12 |\n|FedPGD + RoBal | 61.48 $\pm$ 0.07 | 29.51 $\pm$ 0.07 |\n|FedPGD + Calibration (ours) | **63.91** $\pm$ 0.13 | **30.72** $\pm$ 0.16 |\n||\n|FedTRADES | 46.06 $\pm$ 0.12 | 26.31 $\pm$ 0.12 |\n|FedTRADES + LogitAdj | 58.26 $\pm$ 0.20 | 27.92 $\pm$ 0.19 |\n|FedTRADES + RoBal | 59.25 $\pm$ 0.23 | 28.63 $\pm$ 0.08 |\n|FedTRADES + Calibration (ours) | **63.12** $\pm$ 0.10 | **30.27** $\pm$ 0.23 |\n||\n|FedMART | 25.67 $\pm$ 0.21 | 18.10 $\pm$ 0.22 |\n|FedMART + LogitAdj | 42.01 $\pm$ 0.10 | 24.92 $\pm$ 0.02 |\n|FedMART + RoBal | 44.26 $\pm$ 0.22 | 25.57 $\pm$ 0.17 |\n|FedMART + Calibration (ours) | **48.85** $\pm$ 0.08 | **27.19** $\pm$ 0.11 |\n||\n|CalFAT (ours) | **64.69** $\pm$ 0.08 | **31.12** $\pm$ 0.11 |", " **Q6:** It is hard to tell the significance of some experimental results. For example, most numbers in the AA column of Table 4 and Table 5 are very close. If the authors have conducted different trials, standard deviations should be reported.\n\n**A6:** \nThanks for the suggestion. We have run all the experiments five times and added the mean and standard deviation to Table 3 (original Table 4) in Section 4 and Table 7 (original Table 5) in Appendix D.7 in the revision.\nSince AA is a strong adversary, most methods can hardly improve the robustness under AA [5]. Nevertheless, our CalFAT can still improve the robustness of baseline FAT methods against AA, as shown in Table 3 and Table 7 below. 
We hope these results can help address your concerns.\n\n(Table 3: Natural and robust accuracy (\\%) across different FL frameworks on CIFAR10 dataset.)\n|FL framework | FedProx |-|-| Scaffold |-|-|\n|:------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|\n|Metric | Natural | PGD-20 | AA | Natural | PGD-20 | AA |\n||\n|MixFAT | 53.75\t$\\pm$ 0.16 | 29.61\t$\\pm$ 0.19 | 21.59\t$\\pm$ 0.27 | 55.27\t$\\pm$ 0.20 | 28.78\t$\\pm$ 0.15 | 21.26\t$\\pm$ 0.11 |\n|FedPGD | 49.57\t$\\pm$ 0.18 | 28.48\t$\\pm$ 0.17 | 21.31\t$\\pm$ 0.18 | 49.52\t$\\pm$ 0.14 | 27.46\t$\\pm$ 0.21 | 20.27\t$\\pm$ 0.15 |\n|FedTRADES | 48.14\t$\\pm$ 0.20 | 27.75\t$\\pm$ 0.17 | 21.13\t$\\pm$ 0.21 | 47.78\t$\\pm$ 0.23 | 27.31\t$\\pm$ 0.16 | 20.04\t$\\pm$ 0.16 |\n|FedMART | 28.32\t$\\pm$ 0.22 | 19.32\t$\\pm$ 0.23 | 15.91\t$\\pm$ 0.25 | 27.80\t$\\pm$ 0.17 | 20.03\t$\\pm$ 0.26 | 16.85\t$\\pm$ 0.15 |\n|FedGAIRAT | 49.61\t$\\pm$ 0.20 | 29.34\t$\\pm$ 0.11 | 21.33\t$\\pm$ 0.18 | 49.54\t$\\pm$ 0.21 | 27.23\t$\\pm$ 0.25 | 20.16\t$\\pm$ 0.09 |\n|FedRBN | 47.26\t$\\pm$ 0.13 | 26.63\t$\\pm$ 0.15 | 20.46\t$\\pm$ 0.06 | 49.77\t$\\pm$ 0.09 | 28.37\t$\\pm$ 0.12 | 20.32\t$\\pm$ 0.06 |\n|CalFAT | **66.32**\t$\\pm$ 0.08 | **32.79**\t$\\pm$ 0.13 | **22.83**\t$\\pm$ 0.11 | **67.16**\t$\\pm$ 0.06 | **32.94**\t$\\pm$ 0.06 | **21.94**\t$\\pm$ 0.05 |\n\n(Table 7: Natural and robust accuracy (\\%) across different numbers of clients $m=\\{20,50,100\\}$ on CIFAR10 dataset.)\n|$m$ | 20 | - | - | 50 | - | - | 100 | - | - |\n|:---------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|\n|Metric | Natural | PGD-20 | AA | Natural | PGD-20 | AA | Natural | PGD-20 | AA |\n||\n|MixFAT | 26.59\t$\\pm$ 0.16 | 18.24\t$\\pm$ 0.07 | 13.12\t$\\pm$ 0.14 | 23.28\t$\\pm$ 0.16 | 15.55\t$\\pm$ 0.13 | 10.92\t$\\pm$ 0.14 | 20.85\t$\\pm$ 0.16 | 14.41\t$\\pm$ 0.11 | 10.66\t$\\pm$ 0.12 |\n|FedPGD | 29.38\t$\\pm$ 0.20 | 18.19\t$\\pm$ 0.18 | 14.22\t$\\pm$ 0.11 | 27.73\t$\\pm$ 0.15 | 16.98\t$\\pm$ 0.23 | 11.94\t$\\pm$ 0.14 | 23.86\t$\\pm$ 0.18 | 15.37\t$\\pm$ 0.18 | 10.78\t$\\pm$ 0.09 |\n|FedTRADES | 29.39\t$\\pm$ 0.14 | 18.47\t$\\pm$ 0.13 | 14.66\t$\\pm$ 0.19 | 21.44\t$\\pm$ 0.06 | 15.20\t$\\pm$ 0.16 | 11.85\t$\\pm$ 0.09 | 21.06\t$\\pm$ 0.11 | 14.76\t$\\pm$ 0.16 | 11.68\t$\\pm$ 0.07 |\n|FedMART | 22.95\t$\\pm$ 0.15 | 17.08\t$\\pm$ 0.07 | 13.34\t$\\pm$ 0.09 | 22.43\t$\\pm$ 0.15 | 15.01\t$\\pm$ 0.08 | 11.59\t$\\pm$ 0.06 | 21.58\t$\\pm$ 0.12 | 14.48\t$\\pm$ 0.17 | 11.01\t$\\pm$ 0.09 |\n|FedGAIRAT | 22.74\t$\\pm$ 0.13 | 17.00\t$\\pm$ 0.12 | 13.77\t$\\pm$ 0.17 | 20.84\t$\\pm$ 0.26 | 14.68\t$\\pm$ 0.21 | 11.80\t$\\pm$ 0.17 | 19.26\t$\\pm$ 0.15 | 14.17\t$\\pm$ 0.11 | 11.33\t$\\pm$ 0.14 |\n|FedRBN | 21.90\t$\\pm$ 0.13 | 17.46\t$\\pm$ 0.14 | 12.91\t$\\pm$ 0.11 | 20.22\t$\\pm$ 0.16 | 14.74\t$\\pm$ 0.16 | 12.13\t$\\pm$ 0.11 | 18.99\t$\\pm$ 0.11 | 13.48\t$\\pm$ 0.19 | 12.05\t$\\pm$ 0.08 |\n|CalFAT | **60.26**\t$\\pm$ 0.09 | **24.32**\t$\\pm$ 0.13 | **15.41**\t$\\pm$ 0.12 | **49.86**\t$\\pm$ 0.07 | **18.79**\t$\\pm$ 0.10 | **13.22**\t$\\pm$ 0.13 | **40.69**\t$\\pm$ 0.08 | **16.19**\t$\\pm$ 0.15 | **12.51**\t$\\pm$ 0.09 |", " **Q4:** Line 14-16, I read that the authors claim they theoretically show a much improved convergence rate, expecting a theoretical proof on convergence rate. Prop 1 and Prop 2 in a combined manner show a better final convergence point, as opposed to a faster convergence rate. 
I think these are two different things.\n\n**A4:** Thanks for pointing this out. We apologize for the confusion; by \"much improved convergence rate\" we mean \"better convergence point\" in this work. We have adjusted the wording at Lines 14-16 in the revision to make this clear.\n\n---\n\n**Q5:** Line 259-260 says the CKL loss can better generate adversarial examples. I suppose LogitAdj and RoBal are not associated with adversarial examples. Then my questions are: 2.1) Do previous FAT baseline methods use the same adversarial training based on the same set (e.g., same class ratio, sample size, etc.) of adversarial examples? 2.2) What is the setup for applying LogitAdj/RoBal compared with CalFAT? 2.3) Is it possible to combine LogitAdj/RoBal with previous FAT baselines to compete with CalFAT?\n\n**A5:** \nThanks for the insightful question.\n\n2.1) Different FAT methods share the same hyperparameters (e.g., same class ratio, sample size, etc.) but use different methods to generate the adversarial examples. That is, the only difference is the adversarial examples used to train their models.\n\n2.2) \& 2.3)\nThanks for your suggestions. \nWe have combined different losses (including LogitAdj, RoBal, and our calibration loss) with the previous FAT methods and report their performance in Table 2 below (as well as in the revised paper). \nNote that all methods here share the same settings except the loss function.\nTable 2 shows that: (1) our calibration loss-based methods achieve better performance than the LogitAdj-based and RoBal-based methods; (2) CalFAT is the best among all FAT baselines.\nPlease kindly refer to Table 2 in Section 4.1 of the revision for more details.\n\nWe hope these results can help ease your concerns about the advantage of our CalFAT method.\n\n(Table 2: Combining FAT methods with different losses.)\n|Metric | Natural | PGD-20 |\n|:------------------------------:|:-----------:|:-----------:|\n|MixFAT | 53.35 $\pm$ 0.11 | 26.27 $\pm$ 0.11 |\n|MixFAT + LogitAdj | 57.53 $\pm$ 0.21 | 27.65 $\pm$ 0.16 |\n|MixFAT + RoBal | 58.25 $\pm$ 0.13 | 27.86 $\pm$ 0.10 |\n|MixFAT + Calibration (ours) | **60.23** $\pm$ 0.19 | **28.67** $\pm$ 0.14 |\n||\n|FedPGD | 46.96 $\pm$ 0.16 | 26.74 $\pm$ 0.18 |\n|FedPGD + LogitAdj | 59.79 $\pm$ 0.15 | 28.84 $\pm$ 0.12 |\n|FedPGD + RoBal | 61.48 $\pm$ 0.07 | 29.51 $\pm$ 0.07 |\n|FedPGD + Calibration (ours) | **63.91** $\pm$ 0.13 | **30.72** $\pm$ 0.16 |\n||\n|FedTRADES | 46.06 $\pm$ 0.12 | 26.31 $\pm$ 0.12 |\n|FedTRADES + LogitAdj | 58.26 $\pm$ 0.20 | 27.92 $\pm$ 0.19 |\n|FedTRADES + RoBal | 59.25 $\pm$ 0.23 | 28.63 $\pm$ 0.08 |\n|FedTRADES + Calibration (ours) | **63.12** $\pm$ 0.10 | **30.27** $\pm$ 0.23 |\n||\n|FedMART | 25.67 $\pm$ 0.21 | 18.10 $\pm$ 0.22 |\n|FedMART + LogitAdj | 42.01 $\pm$ 0.10 | 24.92 $\pm$ 0.02 |\n|FedMART + RoBal | 44.26 $\pm$ 0.22 | 25.57 $\pm$ 0.17 |\n|FedMART + Calibration (ours) | **48.85** $\pm$ 0.08 | **27.19** $\pm$ 0.11 |\n||\n|CalFAT (ours) | **64.69** $\pm$ 0.08 | **31.12** $\pm$ 0.11 |", " Thanks for your time reviewing our paper and the thoughtful comments. Following your suggestions, we have run additional experiments and added the new results to the revised paper. We hope the new version can adequately address your concerns. We are very happy to run more experiments if you have further concerns.\n\n---\n\n**Q1:** More experiments: a) FAT baselines vs. FAT baselines + Eq (11);\nb) FAT baselines + other label distribution calibration vs. FAT baselines + Eq (11). 
\n\n**A1:** \nThanks for your suggestions. Following your suggestion, we have combined FAT baselines with different losses (including LogitAdj, RoBal, and our calibration loss) for all the FAT baselines. The results indicate that our calibration loss-based methods achieve better performance than LogitAdj-based methods and RoBal-based methods; and our CalFAT performs the best amongst all the baselines. Please refer to the updated Table 2 in the revision for more details.\n\nWe hope these new results have addressed your concerns about the comparison with other FAT baselines with different losses.\n\n(Table 2: Combining FAT methods with different losses.)\nMetric | Natural | PGD-20 |\n|:------------------------------:|:-----------:|:-----------:|\n|MixFAT | 53.35\t$\\pm$ 0.11 | 26.27\t$\\pm$ 0.11 |\n|MixFAT + LogitAdj | 57.53\t$\\pm$ 0.21 | 27.65\t$\\pm$ 0.16 |\n|MixFAT + RoBal | 58.25\t$\\pm$ 0.13 | 27.86\t$\\pm$ 0.10 |\n|MixFAT + Calibration (ours) | **60.23**\t$\\pm$ 0.19 | **28.67**\t$\\pm$ 0.14 |\n||\n|FedPGD | 46.96\t$\\pm$ 0.16 | 26.74\t$\\pm$ 0.18 |\n|FedPGD + LogitAdj | 59.79\t$\\pm$ 0.15 | 28.84\t$\\pm$ 0.12 |\n|FedPGD + RoBal | 61.48\t$\\pm$ 0.07 | 29.51\t$\\pm$ 0.07 |\n|FedPGD + Calibration (ours) | **63.91**\t$\\pm$ 0.13 | **30.72**\t$\\pm$ 0.16 |\n||\n|FedTRADES | 46.06\t$\\pm$ 0.12 | 26.31\t$\\pm$ 0.12 |\n|FedTRADES + LogitAdj | 58.26\t$\\pm$ 0.20 | 27.92\t$\\pm$ 0.19 |\n|FedTRADES + RoBal | 59.25\t$\\pm$ 0.23 | 28.63\t$\\pm$ 0.08 |\n|FedTRADES + Calibration (ours) | **63.12**\t$\\pm$ 0.10 | **30.27**\t$\\pm$ 0.23 |\n||\n|FedMART | 25.67\t$\\pm$ 0.21 | 18.10\t$\\pm$ 0.22 |\n|FedMART + LogitAdj | 42.01\t$\\pm$ 0.10 | 24.92\t$\\pm$ 0.02 |\n|FedMART + RoBal | 44.26\t$\\pm$ 0.22 | 25.57\t$\\pm$ 0.17 |\n|FedMART + Calibration (ours) | **48.85**\t$\\pm$ 0.08 | **27.19**\t$\\pm$ 0.11 |\n||\n|CalFAT (ours) | **64.69**\t$\\pm$ 0.08 | **31.12**\t$\\pm$ 0.11 |\n\n---\n\n**Q2:** The structure of the paper can be further polished. For example, the table 1 and table 2 are actually the same experiments, and should be combined into one.\n\n**A2:** Thanks for the thoughtful suggestion. We have now combined them into one table, please find the new table (Table 1) in the revision.\n\n---\n\n**Q3:** Line 241-242 is not necessary. I don’t see it helping validate any arguments made by authors.\n\n**A3:** Thanks for the suggestion. We have removed Line 241-242.\n", " Thank you very much for the positive feedback and valuable comments. We hope the following new results and clarifications can address your concerns. \n\n---\n\n**Q1:** It is not clear why the natural performance degradation is considered as the main motivation (figure 1 on Page 2) instead of robust performance since the goal of FAT is gaining robustness. Adding more elaboration would be helpful.\n\n**A1:** Thanks for the insightful question. While FAT improves adversarial robustness, it also significantly reduces the natural accuracy [26], making the final model less usable in real-world scenarios. Therefore, we aim to address this low natural accuracy issue of FAT without affecting the robust performance. \nIn other words, robustness is also one key part of our goal but was tackled through the improvement of natural accuracy and stable training. As indicated by the results in Figure 1, the training instability issue and low natural accuracy appear to be the major cause of degraded robustness. 
\n\nWe hope the above clarifies your concern.\n\n---\n\n**Q2:** More discussions on the potential applications of FAT are expected to better highlight its significance.\n\n**A2:** Thanks for the suggestion. In fact, FAT can be applied in many real-world FL systems where robustness is one of the top concerns, e.g., disease diagnosis in medical applications or anti-money laundering in financial applications. In these scenarios, data is often non-IID, where our CalFAT can help train more robust models. We have added more discussions in the revision.\nPlease refer to Section 1, Line 28-29.\n\n---\n\n**Q3:** Even though sufficient references are given, a few relevant papers are recommended: “Privacy and Robustness in Federated Learning: Attacks and Defenses”, which also touched on federated adversarial robustness; and “Decision Boundary-aware Data Augmentation for Adversarial Training”, which studied how to improve adversarial robustness by playing with the decision boundary. These papers did not study exactly the same topic as this paper, but would certainly enrich the literature review further.\n\n**A3:** Thanks for the suggestion. We have cited and discussed these papers in the revision.\nPlease refer to Section 1, Line 21-25 and 32-35.\n\n---\n\n**Q4:** I am wondering whether CalFAT also works in the IID setting; can you elaborate more on this?\n\n**A4:**\nThanks for your valuable comments. We have conducted the experiments under the IID setting. \nThe results are reported in Table 6 below. It shows that our CalFAT achieves the best robustness (under the PGD-20 attack).\nPlease refer to Section 4.5 and Appendix D.6 in our revised version for details.\nWe hope these results can help clarify your concern. \n\n(Table 6: Natural and robust accuracy (\\%) on CIFAR10 dataset under the IID setting.)\n| Metric | Natural | PGD-20 |\n|:-----------:|:---------:|:--------:|\n| MixFAT | 79.62 | 37.57 |\n| FedPGD | 75.89 | 42.16 |\n| FedTRADES | 74.29 | 44.35 |\n| CalFAT | 74.23 | 44.68 |", " This paper aims to answer the question of how to guarantee adversarial robustness in federated learning. In particular, this paper targets a challenging non-IID setting - the skewed label distribution, which gives rise to imbalanced class probabilities and heterogeneous local models, and proposes CalFAT to solve this problem by adaptively adjusting the logits. Theoretically, the authors prove that CalFAT can help learn homogeneous local models. Empirically, CalFAT outperforms other traditional centralized adversarial training methods (like AT, TRADES, MART, GAIRAT) and also the recent robustness propagation method (i.e., FedRBN) on several benchmarked datasets (like CIFAR-10, SVHN, CIFAR100). Strengths:\n+ The literature review is solid; summarizing the advantages and disadvantages of the literature is very helpful.\n+ The paper points out why the previous methods hurt convergence of FAT and proposes novel techniques to tackle this issue, leading to better convergence and higher performance. \n+ The paper provides a solid theoretical analysis. \n+ Experimental evaluations are solid and well designed. The comparisons include the most recent FAT and centralized AT methods. \n+ The results are convincing and show that CalFAT outperforms the state of the art.\n\nWeaknesses:\n- It is not clear why the natural performance degradation is considered as the main motivation (Figure 1 on Page 2) instead of robust performance, since the goal of FAT is gaining robustness. 
Adding more elaboration would be helpful.\n- More discussions on the potential applications of FAT are expected to better highlight its significance.\n- Even though sufficient references are given, a few relevant papers are recommended: “Privacy and Robustness in Federated Learning: Attacks and Defenses”, which also touched on federated adversarial robustness; and “Decision Boundary-aware Data Augmentation for Adversarial Training”, which studied how to improve adversarial robustness by playing with the decision boundary. These papers did not study exactly the same topic as this paper, but would certainly enrich the literature review further.\n - Why is the natural performance degradation considered as the main motivation (Figure 1 on Page 2) instead of robust performance, since the goal of FAT is gaining robustness?\n- I am wondering whether CalFAT also works in the IID setting; can you elaborate more on this?\n\n The authors did not discuss the limitations of the work.", " The paper argues that the cause of instability of federated adversarial training (FAT) is the skewed label distribution. An adapted calibrated loss is applied to FAT to reduce the heterogeneity of local models and further improve the final model’s natural and robust accuracy. Experiments on four computer vision datasets are conducted to support the authors’ arguments. Strengths\n\n1 The authors provide extensive experimental results. \n2 The paper is well-written and easy to follow.\n\nWeaknesses\n\nMy biggest concern is that the experiment design does not align well with the arguments.\n\nThe authors argue that the training instability comes from the skewed label distribution that leads to heterogeneity of the local model. I think it’d be true for both standard federated learning and federated adversarial training (FAT). Eq. (11) is then proposed to calibrate the heterogeneity of local models for FAT. Therefore I expect at least two sets of experiments, which I don’t see much of in the main paper.\n\na) FAT baselines vs FAT baselines + Eq (11)\nFor example, in Table 1, showing the results of {MixFAT, FedPGD, FedTRADES…} + Eq (11) would be more helpful to convince the reviewer of the effectiveness of the proposed loss. \n\nb) FAT baselines + other label distribution calibration vs FAT baselines + Eq (11) \nTable 3 is a good example. However, the authors should explore it more. See Question 2 below.\n\nMinor: \nThe structure of the paper can be further polished. For example, Table 1 and Table 2 are actually the same experiments and should be combined into one. \n\nLines 241-242 are not necessary. I don’t see them helping validate any arguments made by the authors.\n 1 Line 14-16, I read that the authors claim they theoretically show a much improved convergence rate, expecting a theoretical proof on the convergence rate. Prop 1 and Prop 2 in a combined manner show a better final convergence point, as opposed to a faster convergence rate. I think these are two different things. \n\n2 Line 259-260 says the CKL loss can better generate adversarial examples. I suppose LogitAdj and RoBal do not associate with adversarial examples. Then my questions are:\n 2.1) Do previous FAT baseline methods use the same adversarial training based on the same set (e.g., same class ratio, sample size, etc.) of adversarial examples?\n 2.2) What’s the setup to apply LogitAdj/RoBal comparing with CalFAT? \n 2.3) Is it possible to combine LogitAdj/RoBal with previous FAT baselines to compete with CalFAT?
\n\n\n3 For some experimental results, it is hard to tell the significance. For example, most numbers in the AA columns of Table 4 and Table 5 are very close. If the authors have conducted different trials, standard deviations should be reported.\n Yes", " This paper proposes a calibrated federated adversarial training (CalFAT) method, which is used in the case of label skewness. Empirical evaluation is done on several datasets including SVHN, CIFAR-10/100 and a subset of ImageNet. Strengths:\n- The empirical evaluation is done on several different datasets, and under strong attacks like AA.\n- The improvement in clean accuracy seems significant.\n- The writing is clear, and it is easy to follow the main idea.\n\nWeaknesses:\n- The technical contribution is quite limited. CalFAT is just a simple adaptation of [22] to federated adversarial training.\n- The compared baselines (e.g., MixFAT and Fed**) are off-the-shelf, which do not adapt to the special property of label skewness. Stronger baselines utilizing label skewness should be involved.\n- The considered setting of federated learning with label skewness may be of limited interest to the community.\n Limited technical contribution and weak baselines. No" ]
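The calibration loss traded against LogitAdj and RoBal in the responses above is never written out in this thread. Below is a minimal sketch of the generic logit-adjustment idea it builds on, in a PyTorch setting; the function name, the temperature `tau`, and the use of local label counts as the prior are our assumptions, not the paper's exact CalFAT objective.

```python
import torch
import torch.nn.functional as F

def calibrated_ce(logits, targets, class_counts, tau=1.0):
    # Shift each logit by the log of the (local) class prior so that, under
    # a skewed label distribution, head classes do not dominate the loss.
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, targets)
```

Swapping such a loss into a FAT baseline changes only the training objective; the adversarial example generation and the federated aggregation steps stay untouched, which matches how the `+ Calibration` rows of Table 2 above are described.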
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "n-z2bWUE19I", "n-z2bWUE19I", "c4OK30GjTr2", "QjZkfP2JfnC", "28fbP8GRvF6", "c4OK30GjTr2", "n-z2bWUE19I", "nips_2022_8N1NDRGQSQ", "n-z2bWUE19I", "n-z2bWUE19I", "c4OK30GjTr2", "c4OK30GjTr2", "c4OK30GjTr2", "28fbP8GRvF6", "nips_2022_8N1NDRGQSQ", "nips_2022_8N1NDRGQSQ", "nips_2022_8N1NDRGQSQ" ]
nips_2022_02YXg0OZdG
Eliciting Thinking Hierarchy without a Prior
When we use the wisdom of the crowds, we usually rank the answers according to their popularity, especially when we cannot verify the answers. However, this can be very dangerous when the majority make systematic mistakes. A fundamental question arises: can we build a hierarchy among the answers without any prior where the higher-ranking answers, which may not be supported by the majority, are from more sophisticated people? To address the question, we propose 1) a novel model to describe people's thinking hierarchy; 2) two algorithms to learn the thinking hierarchy without any prior; 3) a novel open-response based crowdsourcing approach based on the above theoretic framework. In addition to theoretic justifications, we conduct four empirical crowdsourcing studies and show that a) the accuracy of the top-ranking answers learned by our approach is much higher than that of plurality voting (In one question, the plurality answer is supported by 74 respondents but the correct answer is only supported by 3 respondents. Our approach ranks the correct answer the highest without any prior); b) our model has a high goodness-of-fit, especially for the questions where our top-ranking answer is correct. To the best of our knowledge, we are the first to propose a thinking hierarchy model with empirical validations in the general problem-solving scenarios; and the first to propose a practical open-response-based crowdsourcing approach that beats plurality voting without any prior.
Accept
This work proposes a framework to elicit people's "thinking hierarchy" that helps improve the wisdom of the crowd even if the majority is wrong. The reviewers overall appreciate the main idea of the work and believe it makes a nice contribution to the literature. There have been some questions/concerns raised about the efficiency of the algorithm and the model limitations, to which the authors have provided reasonable responses. We encourage the authors to incorporate those responses (and other reviewer comments) into the final version of the paper.
train
[ "-5lhHqjCO6", "StCCchwU2JB", "a2V2g6WJoK4", "EYwFGcYPErc", "h5jqDva0-2O", "wOhodG-OBa", "XPWZX3ai2J", "9I30YdnJ7_O", "0y7DF4ymDUt", "YEGu13yzJBx", "mQjlDje-Ip1", "KcUdT6ehdSj", "HPh0leCp0oD", "gID0Vxif1FL" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I suggest the authors put the running time analysis in the main body of the paper, probably even the dynamic programming-based algorithm.\n\nI think the exponential running time in the number of possible answers is not very practical and limits the application of the model proposed in the paper with the algorithms known so far. However, the authors clarify, and I agree, that their main contribution is the mathematical model of the thinking hierarchy. Hence, raising my score to 5.", " Thanks for your reply! \n\nThe input size is the size of the answer space |A|. The brute force requires $O(|A|! * |A|^2)$. The dynamic programming algorithm improves the complexity to $O(2^{|A|}*|A|^2)$ (see lines 460-467 in the full version attached in the supplementary material).\n\nWe want to additionally emphasize that \n\n1) the input size |A|<=33 when we set threshold t = 3% and our algorithm is very efficient empirically as we mentioned in our previous comment. \n\n\n2) The main focus of this paper is not proposing a more efficient algorithm to improve an existing algorithm that is notorious for slow running time previously. The main focus of this paper is a novel model and approach to address the systematic bias in crowdsourcing. For practical use, the algorithm is sufficiently efficient and outputs an accurate answer without any prior even if the majority is wrong. \n\n\nHope our reply can address your comments! Thank you very much!\n", " Could the authors expand further on the theoretical running time bound for the dynamic programming algorithm?\n\nThanks for addressing all other comments.", " Does our reply address your comments? We wondered if you had any additional questions for us. Thanks!", " Thanks for your reply! We will add these explanations about the model in the revised paper.", " The authors addressed my main concerns and questions about the model. I raise my score to 6. It would be nice to include these explanations about the model in the revised paper. ", " Thank you very much for your review!\n\nQ: Modeling Issue: are thinking oracles public or private? how to generate prediction? \n\nA: All of these oracles are private information. People can run some of them privately during the thinking process. Thus, it does not mean type t respondent can only run the oracle of her own type, t, because a type t agent may have run type < t oracle during her thinking process. \n\nOur model is inspired by two prior works:\n\n1) System 1/system 2 theory in (Tversky and Kahneman. Judgment under uncertainty: Heuristics and biases. Science, 1974.). \n\nThis theory assumes that people have two systems, a fast and intuitive system, and a slow and logical system. In our language, running the system is running the oracles. \n\nFor example, when an expert starts to solve the circle problem, when she read the question, she will first run her intuitive system 1 and obtain answer 3. However, when she starts to think carefully, she runs her more careful system 2 to obtain answer 4. Using our language, she first runs a lower-type oracle and then runs a higher-type oracle. \n\nOne of our contributions is to show that people who have run system 2 have also run system 1, but not vice versa. \n\nWe extend the system 1/system 2 theory to the existence of multiple oracles because, during the thinking process, the respondent may approach the question through different thinking paths. \n\n2) Cognitive hierarchy theory (CHT) (Camerer, Ho, and Chong. A cognitive hierarchy model of games. 
The Quarterly Journal of Economics, 2004.)\n\nThis theory assumes that when agents play games, they employ strategies of different levels. The level k strategy is the best response to the strategies of levels < k. The agent starts from the lowest level and then derives higher levels. For example, in the famous Beauty Contest game (Camerer, Ho, and Chong 2004), people are asked to guess 2/3 of the average of everyone's answers. The theory assumes that a person starts from a random number as the level 0 answer, then obtains 2/3 of the average of the level 0 answers, which is the level 1 answer, then obtains 2/3 of the average of the mix of level 0 and level 1 answers, which is the level 2 answer, and so on. \n\nIn the above theory, the system and strategy are functions, which correspond to the oracles in our model. In the cognitive hierarchy theory, before people adopt a high-level strategy, they usually have run a low-level strategy. Our model has a similar conceptual idea. However, unlike CHT, our approach does not need to understand the problem to obtain the thinking hierarchy. \n\nQ: Why does an expert generate predictions by running the thinking oracles of others?\n\nA: When an agent generates predictions for other people's answers, she will output the \"mistake\" she has made before. Therefore, she will output an answer produced by the oracles she has run during her own thinking process. \n\nOther Questions:\n\nQ: Why not just assume there is an underlying ranking of answers and people with higher-type answers can predict the lower-type answers?\n\nA: Our model is more general. We provide a more general model mainly for the following two reasons:\n\na) Our model does not require the higher-type answer to be able to predict ALL lower types. In the above example, it is also possible that some higher-type respondents obtain the correct answer initially by running system 2 directly without running system 1. Thus, we make the assumption of upper-triangularity *without* assuming all entries in the upper-triangular part are non-zero, *nor* assuming all entries are one. \n\nb) Our model allows one type to output more than one answer. For example, suppose there are two types: the higher type outputs 4, and the lower type outputs 3 or 6. In this case, our model allows people who answer 3 and people who answer 6 to predict each other's answers, so 3 and 6 will form a community, and 4 is ranked higher than both. \n\nWe observe both a) and b) in the results collected in our empirical studies. \n\nQ: How do your algorithms compute the best permutation matrix/the matrix W*? Are they polynomial-time algorithms?\n\nA: We use dynamic programming to compute the best permutation matrix; more details and pseudocode are in Section B. It is not polynomial in the input size. \n\nHowever, the input size of the algorithm is the size of the answer space. In practice, we can set a threshold t and only collect answers that are supported by more than t% of people. For example, in our empirical studies, we set the threshold as 3% (see line 274, Section 3 Studies). In this case, the number of distinct answers is at most 1/3% ~= 33, which is a constant. Therefore, the time complexity is a *constant* when the threshold t is a constant. \n\nIn our empirical studies, the size of the answer space is at most 7 or 8. Our default algorithm takes only **91 milliseconds** to finish the computation of all 152 questions. \n\nTherefore, our algorithm's efficiency is sufficient for broad applications. 
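The responses above quote a brute-force cost of $O(|A|! * |A|^2)$ and a dynamic programming improvement to $O(2^{|A|}*|A|^2)$ without spelling out the recursion. A minimal sketch of how such a subset DP could look, assuming the objective stated elsewhere in this thread — maximize the sum of squared entries strictly above the diagonal of the reordered answer-prediction matrix. All names are ours; this is not the authors' pseudocode from Section B.

```python
import numpy as np

def best_ordering(M):
    # M[a, b]: count of respondents who answer a and predict b. Under the
    # assumed convention, placing an answer earlier means ranking it higher.
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    sq = M ** 2
    dp = np.full(1 << n, -np.inf)  # dp[S]: best score with subset S as the placed prefix
    dp[0] = 0.0
    choice = np.zeros(1 << n, dtype=int)
    for S in range(1 << n):
        if dp[S] == -np.inf:
            continue
        for a in range(n):
            if S >> a & 1:
                continue
            # Placing a next: its row contributes squared mass sent to every
            # answer that will be placed after it (i.e., not yet placed).
            gain = sum(sq[a, b] for b in range(n) if b != a and not (S >> b & 1))
            T = S | (1 << a)
            if dp[S] + gain > dp[T]:
                dp[T] = dp[S] + gain
                choice[T] = a
    order, S = [], (1 << n) - 1
    while S:  # walk the recorded choices backwards to recover the ordering
        order.append(choice[S])
        S ^= 1 << choice[S]
    return order[::-1], dp[(1 << n) - 1]
```

The reason $2^{|A|}$ states suffice is that the marginal gain of placing an answer next depends only on which answers remain unplaced, not on their internal order. With the empirical answer spaces of size 7 or 8 quoted above, this is only a few hundred states, consistent with the 91 ms figure.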
", " Thank you very much for your review!\n\nQ: Efficiency of Algorithm.\n\nA: The input size of the algorithm is the size of the answer space. In practice, we can set a threshold t and only collect answers that are supported by more than t% of people to avoid noise. For example, in our empirical studies, we set t=3% (See line 274, Section 3). Here the number of distinct answers is at most 1/3% ~= 33, which is a constant. Therefore, the time complexity is a constant when the threshold t is a constant. \n\nWe also use dynamic programming to improve efficiency (see line 463, Section B). In our empirical studies, the size of the answer space is at most 7 or 8. Our default algorithm takes only **91 milliseconds** to finish the computation of all 152 questions. \n\nTherefore, our algorithm's efficiency is sufficient for broad applications. \n\nQ: Restricted settings:\n\nA: Without the thinking hierarchy assumption, we cannot deal with the systematic bias of the majority like in the scenario of the circle problem. This assumption is conceptually very similar to the cognitive hierarchy assumption (Camerer, Ho, and Chong 2004. ) (see more comments for the model in reply to Reviewer 239z ). \n\nIn the empirical studies, the lack of fit score and results illustrated that the answer-prediction matrix has a hierarchical structure as our theory predicted (See Figure 3(b), Section 3) in many situations. This validates thinking hierarchy assumption in various situations. \n\nQ: When is the method better compared to the plurality vote?\n\nA: Theoretically, with thinking hierarchy assumptions, the top ranking answer of our method has the highest level, thus >= the level of plurality answer. Empirically, the studies illustrated the superiority of our method. In detail, possible scenarios include:\n\n1) Systematic bias exists: a) most people have a bias b) a few have the bias\n2) Mistakes are diverse: a) people do not know what others will answer, and the answer-prediction matrix is diagonal; b) people can easily predict each other's answer and the answer-prediction matrix does not have a hierarchical structure\n\nOur method is strictly better than the plurality in 1) a); the same as the plurality in 1) b) and 2) a). \n\nIn 2) b), plurality may be better because there does not exist a hierarchical score and our approach may output a random answer. However, in this case, the answer-prediction matrix has a high lack-of-fit score (see Definition 2.6 and figure3(b)), which we can judge at the beginning without any prior and choose to use a plurality vote. \n\nQ: What if the questions & answers are complex?\n\nA: We are not sure about the definition of complexity and provide comments based on our understanding. \n\n1) The size of the answer space: with a threshold of 3%, the number of distinct answers is at most = 33 (see comments for efficiency of the algorithm). Moreover, in many crowdsourcing applications including species recognition (Silvertown, et al 2015), the number of distinct answers can be even less like our empirical studies. \n\n2) The format of the answers: for example, the answers are sentences. In this case, it's also not clear how to use the plurality-voting without classifying the answers. If the answers can be classified, then our method can be applicable again. \n\n3) Difficulty: one contribution of our paper is a definition for the difficult questions: the question that most people make systematic bias and the answer-prediction matrix has a significant hierarchical structure. 
In this case, the plurality answer has a lower level and our approach is better than the plurality vote. \n\nThus, our method can be implemented directly in many applications. \n\nOther questions:\n\nQ: Lines 155 to 161...\nA: People do not need to predict types. It means \"run the type t' oracle and report its output as a prediction\". We will clarify it. \n\nQ: Please write ...\nA: We will follow your suggestion and clarify it. \n\nQ: Noisy side information about other people’s predictions? \nA: Here are our methods for robustness:\n1) Setting a threshold t% (see line 274, Section 3):\n\nThe answer space A is the set of answers that are supported by more than t% of the respondents. The answer-prediction matrix only counts the number of people who answer in A and predict an answer in A. For example, if a person predicts \"Bulbasaur\" but \"Bulbasaur\" is not supported by t% of the population, this prediction will not be counted. \n\n2) Finding the closest solution:\n\nWe find the closest solution and use the Frobenius norm to measure the distance (see the definition of NCT). This is similar to low-rank approximation and will be robust to data noise when agents' behavior does not fully follow our model. For example, in the circle problem, a few people who give the low-level answer \"3\" predict the high-level answer \"4\" in the collected data. However, because our algorithm finds the ranking that maximizes the sum of squares in the upper-triangular area, the result still ranks \"4\" as the highest level. \n", " Thank you very much for your review!\n\nQuestion: Negative impacts on society are not particularly discussed, but one could think of situations such as political mercenaries or governments collecting information on the thinking patterns of a voting population in pursuit of an agenda.\n\nAnswer: Thanks for mentioning this. We appreciate your recognition of the potential of our approach. Yes, our algorithm can be used by governments or political actors to elicit the hierarchy of opinions from the crowds. With the thinking pattern information, it may be easier to implement a social media manipulation of public opinion. We will add more discussions on the negative impacts of our approach in the revised version. ", " Thank you very much for your review!\n\nQuestion: How are the questions for the user study selected?\n\nAnswer: Except for some questions in the math study, which are classic interview problems like the Monty Hall problem, the questions are selected randomly from a pool. For example, the Life-and-death problems are selected from the quiz site (https://www.101weiqi.com/). The math problems are selected from a question bank that consists of math Olympiad contest problems for elementary school. \n\nWe talked about it in Section A of our full version paper (the full version is in the supplementary materials) and will add more discussions. \n", " This paper argues that in the wisdom-of-the-crowd paradigm, plurality voting may not necessarily yield the correct answer when the majority makes systematic errors. The paper presents a theoretical framework to elicit the thinking hierarchy and demonstrates that their method outperforms plurality voting and also exhibits certain desirable properties. Apart from presenting the theoretical framework, the paper also conducts crowdsourced user studies to demonstrate the practical effectiveness of their framework. Strengths:\n1. The paper's motivation is clear and important. If a vast majority displays certain biases, plurality voting would not be very useful. 
\n2. The paper's writing and organization are good. The examples provided help understanding. \n\nWeaknesses: \n1. The paper can be strengthened by making the user study more elaborate, with a clear description of how the questions are selected.\n 1. How are the questions for the user study selected? No negative societal impact is discussed. I do not see any obvious red flags in terms of negative societal impact. ", " The crux of this paper is that it provides an empirically validated way to use the wisdom of the crowd rather than the default “popular answer”/”plurality voting” paradigm. In the process, the authors show how to obtain the thinking hierarchy of the crowd for the given set of questions. They claim that knowing this (rich) hierarchy helps in areas like policy making. Using mathematical tools and certain assumptions, the paper demonstrates the superiority of their method, especially when obtaining a prior distribution from the crowd is prohibitively expensive.\n Originality: The authors distinguish themselves from previous work by discarding the use of a prior distribution. Further, they claim that their thinking hierarchy learning model captures a richer spectrum of answers. Their method reduces bias like previous work does, with the difference that they do not collect a prediction distribution but rather a single prediction answer. They also differentiate themselves from the game theoretic setting. The concept of the thinking hierarchy is sufficiently novel wrt past work.\n\nQuality: The paper provides mathematical justification, although under certain idealistic assumptions, for every statement it makes. The experiment results have been provided at a URL and are easy to grasp. The authors are upfront about the assumptions that are required and also provide alternatives - e.g., the i.i.d. assumption under which respondents make their predictions is contrasted with picking the first prediction that a respondent makes, due to the fact that i.i.d. predictions don’t match with reality.\n\nClarity: The paper is clear about what it aims to achieve and under what conditions it can achieve it. The goal is to learn the thinking hierarchy among respondents and to do so without collecting prior information. The authors also provide future uses of their work from the ML and scale points of view. The experiments are easy enough for a novice reader to understand, and collect sufficient information.\n\nSignificance: The method described in the paper has significance in that it is applicable to cases where collecting prior information from a crowd is difficult and plurality voting is not sufficient. One can foresee future applications of systematically obtaining a thought hierarchy, as more information on a certain topic or policy can only help with decision making.\n See above. The authors discuss technical limitations such as the i.i.d. assumption on user predictions. Negative impacts on society are not particularly discussed, but one could think of situations such as political mercenaries or governments collecting information on the thinking patterns of a voting population in pursuit of an agenda.\n", " The paper proposes a mathematical model for the thinking hierarchy of users' predictions of their own and other users’ answers. Given the joint distribution of the answers and predictions of other answers, the authors show that the parameters of the model can be derived using a novel matrix factorization. 
The authors solve an equivalent Frobenius norm minimization problem for the special case when the $W$ matrix (one of the parameters of the model) is semi-orthogonal. And then the authors propose a brute-force search based algorithm to find the ranking of the answers from the thinking hierarchy. Since the joint distribution of answers and predictions may not be available they use an empirical estimate. Naturally, this works if the samples are i.i.d. Strengths:\n\n1. The problem of eliciting thinking hierarchy to come up with the correct ranking over various answers is well-motivated, and is an important problem to study.\n2. The paper seems to be placed well in the recent literature about using additional information about people’s prediction about other answers to get better accuracy for ranking answers.\n3. The theoretical results about the problem are interesting, however somewhat limited.\n\nWeaknesses:\n\n1. The main drawback is that the algorithms proposed in the paper are just brute-force search based algorithms and hence are not efficient for complicated questions.\n2. The theoretical guarantees about eliciting thinking hierarchies hold in fairly restricted settings, which may not hold majority of the times in real-world. Especially, the questions and answers very quickly become complicated, in which case, plurality-voting could be a better method? \n3. Basically, for a complete picture, the complexity of the questions and answers needs to be captured in this framework. In what cases does plurality-voting give better ranking than the algorithms in the proposed framework. I don’t understand when is the side information better?\n4. The writing can be improved (see questions). 1. Lines 155 to 161 are confusingly written. Are the people predicting other peoples’ answers or their types?\n2. Please write Example 2.3 properly. Since the meaning of $p_t$ and $p_{t \\rightarrow t’}$ has already been explained it would be much easier to understand if the authors just use this notation while showing the construction of $\\Lambda$.\n3. Could the authors comment about what happens if the side information about other people’s predictions is noisy? I don’t see any mention of the negative impacts of when the proposed algorithms are bad compared to standard algorithms. This discussion is needed.", " The paper proposes a method to aggregate crowdsourcing answers based on a key observation: experts have different expertise levels and experts at a higher level are able to “simulate” the experts at lower levels. The paper aims to find this underlying “thinking hierarchy” —which can be used to find the best answer—by asking people to predict other people’s answers. The paper first proposes a model for this thinking hierarchy. The model assumes that experts have their types, and for each type, there is a thinking oracle that specifies how this type of experts generate their answers. An expert can run the oracles with lower types but never higher types. The paper then develops two algorithms to learn the underlying thinking hierarchy. The algorithms basically generate an answer-prediction matrix based on experts’ answers and predictions and then find a ranking of answers that will reorder the matrix to match the hierarchical structure in the best way. The paper provides theoretical justification for their algorithms. Finally, the paper shows by real-world experiments that their methods outperform plurality voting.\n The strengths of the paper are the algorithms and the experiments. 
The paper proposes novel methods to utilize the underlying thinking hierarchy when aggregating experts’ answers and tests the method through real-world experiments. The algorithms are intuitively reasonable and the experiment results are good. \n\nThe weakness of the paper is the modeling of the thinking hierarchy, which also makes the theoretical justification of the algorithms implausible. The use of thinking oracle seems unreasonable to me. Are these thinking oracles private information or public information? If they are private information, how can the experts of higher types use the lower-type oracles? If they are public information, why can’t the lower-type experts use the higher-type oracles? It is also ungrounded why an expert generates predictions by running thinking oracles of others. The assumption seems crucial for the theoretical analysis, which in my point of view should be more carefully justified. \n 1. Why not just assume there is an underlying ranking of answers and people with higher-type answers can predict the lower-type answers?\n\n2. How do your algorithms compute the best permutation matrix/the matrix W*? Are they polynomial-time algorithms?\n Yes." ]
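Several answers in this thread lean on the t% support threshold and the empirical answer-prediction matrix without showing the construction. A small sketch of that preprocessing step follows, under our naming assumptions; the 3% default is the value quoted in the responses above, and ordering the resulting matrix is the job of the subset DP sketched earlier.

```python
import numpy as np
from collections import Counter

def answer_prediction_matrix(answers, predictions, t=0.03):
    # Keep only answers supported by more than a fraction t of respondents,
    # then count (answer, prediction) pairs that both fall in the kept set.
    n = len(answers)
    support = Counter(answers)
    kept = sorted(a for a, c in support.items() if c / n > t)
    idx = {a: i for i, a in enumerate(kept)}
    M = np.zeros((len(kept), len(kept)))
    for a, p in zip(answers, predictions):
        if a in idx and p in idx:
            M[idx[a], idx[p]] += 1
    return M, kept
```

Discarding out-of-vocabulary predictions (the "Bulbasaur" case discussed above) is what keeps the matrix small and the later ordering step tractable.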
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 1, 3, 4 ]
[ "StCCchwU2JB", "a2V2g6WJoK4", "EYwFGcYPErc", "9I30YdnJ7_O", "wOhodG-OBa", "XPWZX3ai2J", "gID0Vxif1FL", "HPh0leCp0oD", "KcUdT6ehdSj", "mQjlDje-Ip1", "nips_2022_02YXg0OZdG", "nips_2022_02YXg0OZdG", "nips_2022_02YXg0OZdG", "nips_2022_02YXg0OZdG" ]
nips_2022_vphSm8QmLFm
GBA: A Tuning-free Approach to Switch between Synchronous and Asynchronous Training for Recommendation Models
High-concurrency asynchronous training upon parameter server (PS) architecture and high-performance synchronous training upon all-reduce (AR) architecture are the most commonly deployed distributed training modes for recommendation models. Although synchronous AR training is designed to have higher training efficiency, asynchronous PS training would be a better choice for training speed when there are stragglers (slow workers) in the shared cluster, especially under limited computing resources. An ideal way to take full advantage of these two training modes is to switch between them according to the cluster status. However, switching training modes often requires tuning hyper-parameters, which is extremely time- and resource-consuming. We find two obstacles to a tuning-free approach: the different distribution of the gradient values and the stale gradients from the stragglers. This paper proposes Global Batch gradients Aggregation (GBA) over PS, which aggregates and applies gradients with the same global batch size as the synchronous training. A token-control process is implemented to assemble the gradients and decay the gradients with severe staleness. We provide a convergence analysis to reveal that GBA has comparable convergence properties with synchronous training, and demonstrate the robustness of GBA for recommendation models against gradient staleness. Experiments on three industrial-scale recommendation tasks show that GBA is an effective tuning-free approach for switching. Compared to the state-of-the-art derived asynchronous training, GBA achieves up to 0.2% improvement on the AUC metric, which is significant for recommendation models. Meanwhile, under strained hardware resources, GBA speeds up training by at least 2.4x compared to synchronous training.
Accept
The paper identifies and illustrates a practically relevant challenge for training of deep learning-based recommender systems on distributed architectures: switching between synchronous and asynchronous training modes. The proposed mechanism, called global batch gradient aggregation (GBA), is simple but mitigates the need to do hyper-parameter tuning when switching, which was identified as the critical performance bottleneck. Experiments were conducted on industry-scale recommendation tasks and show that the proposed method is effective and improves in accuracy over fully asynchronous training methods and in speed over synchronous training schemes. The prevailing opinion among the reviewers was that the paper is well written, well executed, and addresses a practically relevant, previously unstudied problem. The identified challenge is clearly outlined and the proposed solution is well motivated, analyzed with ablation studies, and shown to be effective. I advocate acceptance because it is a well-executed practical piece of work. For a potential camera-ready version the authors should anticipate the questions of the reviewers and clarify them in the manuscript. They should also be more explicit about the manual steps and the parameter choices required when implementing the method so the limitations are clearer. Further, it would be appreciated if the authors would make an effort beyond acceptance of the paper to push the (currently proprietary) optimizer into an open-source framework to make it available to the broader community, as promised in the checklist.
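As the abstract and meta-review describe it, the core GBA mechanism is to aggregate one global batch worth of asynchronously produced gradients and decay the ones with severe staleness. The toy sketch below is only our reading of that description: the exponential decay rule, the version counter, and every name are assumptions, not the paper's actual token-control implementation.

```python
from collections import deque
import numpy as np

def gba_step(w, version, grad_queue, num_workers, lr, decay=0.9):
    # Pop one gradient per worker (one global batch) from the asynchronously
    # filled queue, down-weight stale ones, and apply them as a single update.
    agg = np.zeros_like(w)
    for _ in range(num_workers):
        g, g_version = grad_queue.popleft()  # gradient and the parameter version it used
        agg += (decay ** max(0, version - g_version)) * g
    return w - lr * agg / num_workers, version + 1

# Hypothetical usage: three workers pushed gradients computed at version 0.
q = deque([(np.ones(4), 0), (np.ones(4), 0), (np.ones(4), 0)])
w, v = gba_step(np.zeros(4), 1, q, num_workers=3, lr=0.1)
```

Because each update always consumes exactly one global batch of gradients, the effective batch size matches synchronous training, which is the property the paper credits for tuning-free switching.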
train
[ "7nxfwetkVWE", "dysZe_zxyMT", "cFQbpBkWe4n", "AP738OwxNqE", "NZ-mj_M92v", "Zd4JDuyzfWO", "j4kLHL7su2p", "PW-EByxcA13", "6xJHTFesvss" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We have done many experiments and studies **to understand this phenomenon** in terms of different hyper-parameters. As we said in our last response, it is a common phenomenon encountered by our researchers / developers, which remains an open question in the community. And a comprehensive theoretical analysis of this phenomenon is **out of our scope**. Following your suggestion, we have added a **theoretical analysis** of this phenomenon in terms of batch size in Appendix. However, our main contribution lies in the design of GBA to switch between training modes **without tuning hyper-parameters**. GBA **connects sync and async modes**, and it solves a big problem of our daily training under limited resources.", " Thanks for the author's detailed feedback! \n\nI appreciate the clarification of the experimental results and agree with the statements the author made. (I raised my score for this.)\n\nOn the other hand, I am still a little confused about the claim of contribution in this paper. I would appreciate it if the author could correct me about my understanding of the contributions in this paper: \n\n- Discover a phenomenon that direct switch between synchronous and asynchronous training of recommendation model leads to diverge;\n\n- Some insight about why this phenomenon happens;\n\n- Based on these insights, design some algorithm to overcome this issues. \n\nLogically, how could one propose a solid solution without understanding the phenomenon? ", " Thank you for the valuable comments. We really appreciate your positive comments about our work. We will address the issues as follows.\n\n**Weakness-1**: The paper should provide details on when to initiate switching based on cluster status and ablation studies that show how the model performs for different cluster statuses.\n\n**Answer to Weakness-1**: In Section 3.1, we use CPU utilization to measure the cluster status and QPS to show the training performance of aysnc and sync modes for different cluster statuses. Currently, our users can only switch manually based on their experience and requirements for training performance. GBA makes it possible to switch between all-reduce sync mode and PS async mode. The guidelines for automatic switching would be derived from more analyses upon the training trace logs. And it could be formulated as an optimization problem under many control factors, including but not limited to the overall QPS, training cost, and task scheduling with priority. Since it needs deeper study, we consider it as one of our future work.\n\nFurthermore, we collected the performance of GBA under three representative cluster statuses (different time of a day), and have depicted the detailed metrics in Table 3 in Section 5.3. The QPS of the asynchronous training indicates the huge differences among the three time periods of training. From Table 3 we see that GBA appears to have similar training efficiency as the asyn mode, and provides comparable AUC scores to the sync mode. The result also implies that the model accuracy is not sensitive to the switching, which means that it is safe to switch the training mode by users (either manually or automatically in the future).\n\n**Question-1**: How does the performance vary with the number of workers? Since it relies on global batch aggregation, there may be some dependence on the number of workers?\n\n**Answer to Question-1**: In our experiments of Section 5.3, we show that GBA has good scalability. 
Figure 7 shows that we could decrease the local batch size and increase the worker number, while the AUC scores remain nearly the same. Although the optimization process of training relies on global batch aggregation, the worker number is limited by the physical implementation. It means that we cannot use an extremely large local batch size (incurring out-of-memory failures) nor too many workers (incurring a severe communication bottleneck on PS nodes due to too many connections). Therefore, the experiments in Figure 7 were conducted over four practical numbers of workers, which ensures resource efficiency.\n", " Thank you for your efforts and comments. We have submitted an updated version of the paper, and will respond to your comments in detail below. What we would like to highlight is that both weaknesses you mentioned are outside the scope of our claimed contributions. We would greatly appreciate it if you could reconsider our contributions and revisit the review in light of our clarifications.\n\n**Weakness-1**: The explanation of the phenomenon lacks technical depth. The insights presented in Section 3.2 are intuitive rather than analytical. It would make the paper stronger if there were some formal theoretical analysis on the drop in performance.\n\n**Answer to Weakness-1**: \nAccording to your comments, we have added more related work and discussion in Section 3.2 to make it clearer and more convincing (Line 144 [2,17]). In addition, we provided an analysis on the sudden drop in performance after switching. It is challenging to perform a comprehensive theoretical analysis of this phenomenon, which remains an open question in the community. The drop in performance after switching is related to many factors of the implementation, such as batch size and optimizer settings. We analyzed the phenomenon from the perspective of batch size based on our convergence analysis in Section 4.2. The detailed theoretical analysis can be found in Appendix D of the updated version, and we briefly summarize it as follows:\n\nIn synchronous training, if we do not switch the training mode, the expectation of the loss at step $k+1$ is\n$$\nE(F(w_{k+1}))\\leq E(F(w_{k}))-\\eta\\left(1-\\frac{L\\eta\\Theta}{2NB}-\\frac{L\\eta}{2}\\right)E(\\|\\nabla F(w_{k})\\|_{2}^{2})+\\frac{L\\eta^{2}\\sigma^{2}}{2NB}.\n$$\n\nHowever, if we switch to the asynchronous mode, the expectation of the loss at step $k+1$ is\n$$\nE(F(w_{k+1}))\\leq E(F(w_{k}))-\\eta\\left(1-\\frac{L\\eta\\Theta}{2B}-\\frac{L\\eta}{2}\\right)E(\\|\\nabla F(w_{k})\\|_{2}^{2})+\\frac{L\\eta^{2}\\sigma^{2}}{2B}.\n$$\nSince $\\frac{L\\eta^{2}\\sigma^{2}}{2B}>\\frac{L\\eta^{2}\\sigma^{2}}{2NB}$ and $\\frac{L\\eta\\Theta}{2B}>\\frac{L\\eta\\Theta}{2NB}$, switching from the synchronous mode to the asynchronous mode may cause a drop in performance. A similar conclusion can be drawn regarding the drop in performance when switching from asynchronous mode to synchronous mode, and we present the derivation in Appendix D.\n\nHowever, we would like to highlight that the analysis of this phenomenon is **not the main point/contribution** of this paper. The observation of this phenomenon just motivated us to propose GBA. GBA allows switching the training modes without tuning hyper-parameters, so as to avoid those complex influences caused by the hyper-parameters and avoid the sudden drop in performance. \n\n**Weakness-2**: The performance boost is marginal; for example, in Figure 6, it is very difficult to understand the performance gain w.r.t. the proposed method. 
To be more concrete, it seems that, for the Criteo dataset, synchronous training achieves the optimal AUC much earlier than GBA.\n\n**Answer to Weakness-2**:\nWe present the detailed statistics of Figure 6 in Appendix C (also in the general response at the top of this page). We intended to emphasize the tendency of the AUC scores after switching in Figure 6. GBA provides the most similar tendency to sync training, i.e., a promising and stable accuracy from the first day to the last day of the dataset. From the perspective of industrial search or recommendation systems, a 0.2% AUC gain is not trivial, which can typically yield a 2% online CTR improvement (as in [4,33]).\n\nWe declared in the introduction that GBA has comparable convergence properties with the synchronous mode (Line 58 of the old version). So we did not mean to claim that GBA has better model performance than sync mode. The main point/contribution of GBA is to design a novel method for switching between sync and async modes without sacrificing accuracy while not tuning hyper-parameters. The model performance of sync mode can be seen as the upper bound of GBA. Under strained hardware resources, GBA can speed up training 2.4x over synchronous training while retaining the AUC scores. Meanwhile, GBA can provide a promising and stable accuracy compared to the other async baselines when switching between different training modes.\n\n**Question-1**:\nFigure 6 is hard to track; would it be possible to list the final AUC of each model for each task in a table?\n\n**Answer to Question-1**:\nWe listed them in Appendix C and added more explanations in the updated version. Details can be found in the general response.", " We appreciate all the reviewers for the valuable comments and suggestions. We have submitted an **updated version** of the paper based on these comments. We hope that the latest draft addresses your questions. The revisions are summarized as follows:\n\n* We provide a **theoretical analysis** in Appendix D to discuss the sudden drop of model accuracy after switching, which is based on our convergence analysis in Section 4.2. These analyses connect Observation 2 and Insight 1 in Section 3.2.\n* We clarify the statements about Figure 6 in Section 5.2, and present the detailed statistics in Appendix C. Tables R1-R3 depict the AUCs after inheriting the checkpoints trained by synchronous training. We collect the mean AUC for the first, last, and all days of the three datasets, as shown in Table R4. We can infer from Table R4 that GBA provides the closest AUC score to synchronous training. GBA shows the **lowest immediate AUC drop** after switching. Throughout the three datasets, GBA outperforms the other baselines by at least **0.2% (Hop-BW in Table R4)** when switching from synchronous training. 
More explanations can be found in the updated version.\n* We polish the paper and fix some typos in the experiment section.\n\n**Table R1: Figure 6(a) - Criteo (from Sync.)**\n\n|Date|Sync.|GBA|Hop-BW|Hop-BS|BSP|Aysnc.|\n|:-----|:-----|:-----|:-----|:-----|:-----|:-----|\n|13|0.7999|0.7964|0.7954|0.7924|0.7930|0.5000|\n|14|0.7957|0.7932|0.7959|0.7869|0.7886|0.5000|\n|15|0.7967|0.7957|0.7895|0.7891|0.7889|0.5000|\n|16|0.7963|0.7956|0.7932|0.5040|0.7891|0.5000|\n|17|0.7962|0.7955|0.7930|0.5040|0.7883|0.5000|\n|18|0.7957|0.7950|0.7939|0.5030|0.7862|0.5000|\n|19|0.7972|0.7966|0.7968|0.5030|0.7863|0.5000|\n|20|0.7974|0.7973|0.7985|0.5030|0.7868|0.5000|\n|21|0.7965|0.7959|0.7948|0.5050|0.7863|0.5000|\n|22|0.7957|0.7955|0.7939|0.5040|0.7865|0.5000|\n|23|0.7987|0.7986|0.7933|0.5060|0.7871|0.5000|\n|Avg.|0.7969|0.7959|0.7944|0.5819|0.7879|0.5000|\n\n**Table R2: Figure 6(b) - Alimama (from Sync.)**\n\n|Date|Sync.|GBA|Hop-BW|Hop-BS|BSP|Aysnc.|\n|:-----|:-----|:-----|:-----|:-----|:-----|:-----|\n|6|0.6490|0.6489|0.6472|0.6488|0.6472|0.5000|\n|7|0.6503|0.6502|0.6478|0.6503|0.6500|0.5000|\n|8|0.6523|0.6523|0.6483|0.6523|0.6512|0.5000|\n|Avg.|0.6505|0.6505|0.6478|0.6505|0.6495|0.5000|\n\n**Table R3: Figure 6(c) - Private (from Sync.)**\n\n|Date|Sync.|GBA|Hop-BW|Hop-BS|BSP|Aysnc.|\n|:-----|:-----|:-----|:-----|:-----|:-----|:-----|\n|15|0.7877|0.7880|0.7870|0.7875|0.7880|0.7795|\n|16|0.7874|0.7877|0.7860|0.7878|0.7870|0.7785|\n|17|0.7856|0.7860|0.7840|0.7858|0.7850|0.7774|\n|18|0.7884|0.7888|0.7850|0.7882|0.7877|0.7873|\n|19|0.7894|0.7905|0.7855|0.7890|0.7886|0.7878|\n|20|0.7785|0.7788|0.7750|0.7781|0.7774|0.7598|\n|21|0.7865|0.7868|0.7823|0.7863|0.7850|0.7769|\n|22|0.7862|0.7870|0.7825|0.7858|0.7862|0.7754|\n|Avg.|0.7862|0.7867|0.7834|0.7861|0.7856|0.7778|\n\n**Table R4: Average AUC decrement on three datasets between GBA and the other baselines (from Sync.).**\n\n| |Sync.|Hop-BW|Hop-BS|BSP|Aysnc.|\n|:-----|:-----|:-----|:-----|:-----|:-----|\n|1st day|_+0.0011_|-0.0012|-0.0015|-0.0017|-0.1513|\n|Last-day|_-0.0002_|-0.0046|-0.0979|-0.0045|-0.1542|\n|Average|_+0.0002_|**-0.0025**|-0.0716|-0.0034|-0.1518|\n", " Thank you for the time and constructive feedback. We really appreciate your positive comments about our work. We have submitted an updated version of the paper, and will address your questions as follows.\n\n**Question-1**: About Figure 6.\n\n**Answer to Question-1**: Please refer to our general response attached to this submission at the top of this page, where we provide the detailed statistics and some supplementary discussion to Figure 6 of the submission. In Figure 6, the major concern should be the tendency of the AUC scores throughout training. In particular, we focus on the immediate AUC after switching (i.e., AUC on the first day), the AUC after several day's training (i.e., AUC on the last day), and the average AUC throughout the training data. Table R4 indicates that GBA outperforms the best baseline (i.e., Hop-BW) by about 0.2% after switching from synchronous training. GBA also provides almost the same AUC score on average as well as on the last day compared to synchronous training. Besides, we can infer from Table R4 that the performance of Hop-BW and Hop-BS is not stable after switching, i.e., Hop-BW performs much better on switching from synchronous training than switching to synchronous training, while Hop-BS works only well when switching to synchronous training (Table C.8 in Appendix). 
We have polished the statement of Figure 6 and fixed the typos in our revised submission.", " This paper proposes a mechanism called global batch gradient aggregation (GBA) on parameter server architectures that enables effective switching between synchronous and asynchronous training of deep learning-based recommendation systems. Strengths:\n\n- The discovery of the phenomenon that directly switching between synchronous and asynchronous training of deep learning-based recommendation models leads to a significant drop in performance is interesting. \n\nWeaknesses:\n\n- The explanation of the phenomenon lacks technical depth. The insights presented in Section 3.2 are intuitive rather than analytical. It would make the paper stronger if there were some formal theoretical analysis on the drop in performance. \n\n- The performance boost is marginal; for example, in Figure 6, it is very difficult to understand the performance gain w.r.t. the proposed method. To be more concrete, it seems that, for the Criteo dataset, synchronous training achieves the optimal AUC much earlier than GBA. \n Figure 6 is hard to track; would it be possible to list the final AUC of each model for each task in a table? Not applicable. ", " The paper proposes a novel algorithm to switch between two common distributed training modes (abbreviated as PS and AR in the paper) without sacrificing accuracy and performance while also not tuning hyperparameters. The paper identifies three key observations that govern performance and accuracy and, based on these, proposes a global batch aggregation scheme that mimics the global gradients in synchronous training to switch between PS and AR without any loss in accuracy. The paper shows a convergence analysis of the algorithm and evaluates its performance on three different tasks, showing it has better performance and accuracy overall.\n Strengths:\n- The paper proposes a novel algorithm to effectively switch between two training modes without tuning hyper-parameters, which is costly.\n- The paper theoretically proves convergence.\n- The proposed algorithm performs better than state-of-the-art recommender training schemes on three different tasks.\n\nWeakness:\n- The paper should provide details on when to initiate switching based on cluster status and ablation studies that show how the model performs for different cluster statuses. \n - How does the performance vary with the number of workers? Since it relies on global batch aggregation, there may be some dependence on the number of workers? The authors discussed the limitations of the work in the conclusion section.", " The authors of the paper \"GBA: A Tuning-free Approach to Switch between Synchronous and Asynchronous Training for Recommendation Models\" propose a new training method for recommender systems that enables switching between two different modes, synchronous and asynchronous training, benefiting from both approaches and allowing for continuous learning while maintaining the same accuracy and efficiency without hyperparameter tuning. Users of such a system can select a suitable training mode according to their judgment of cluster workload. 
Post Rebuttal comments: After the changes provided by the authors, I've updated my score to accept\n\n------------------------------------------\nOriginality\n\nThe paper solves a problem that, though specific to recommender systems, can be very beneficial in a practical setting and valuable for all the industries attempting to train large recommendation systems under imperfect computing constraints. I would judge that the originality of the paper is high as, according to the authors, nobody has attempted to provide a hyperparameter tuning-free approach to the switching between synchronous and asynchronous training modes. \n\nQuality\n\nThe quality of this paper is also high - it provides a clear overview, and outlines several important insights and observations on the performance of the training modes that help to determine and address the gaps in the current work. This provides clarity on the choices and explains the conducted experiments. \n\nClarity\n\nThe paper is well-structured and well-written. I enjoyed reading it. The only problem that undermines the clarity is Section 5.2 and, more specifically, Figure 6. I will outline my confusion in the Questions section below. In summary, it seems that the methods in the figure and their performance do not correspond to the description in the text. \n\nSignificance\n\nI find that this work might have a medium impact in addressing important challenges specific to the recommender systems' properties in the industrial setting. I believe that this impact might translate to better resource utilization and, therefore, cost savings. I have only a couple of questions:\n* In Section 5.2 you write that \"We can see that the low gradient staleness helps Hop-BS receive a better accuracy than BSP\" (lines 260-261). When I look at Figure 6 (a-c), I observe that HOP-BS performs the worst in the (a) setting, dropping right after the beginning (Test day 14), while in other settings (b,c) it is not possible to distinguish any difference in AUC performance. BSP (dark blue) is performing clearly better. Next, you write that HOP-BW performs the worst, while I observe a relatively normal performance. I assume HOP-BS and HOP-BW got mixed up here. However, it doesn't explain why you say that GBA improves the AUC on (a) and (c) by at least 0.2%, but I can't see this improvement in (a) and I don't think on (c) it gets to a 0.2% difference. I would suggest double-checking this part as it confuses readers at this point. \n* In addition, I think the Figure 6 description states (d-e) while it is supposed to be (d-f)\n* Another suggestion would be to highlight in each of the tables the best-performing metrics, as you use quite a few of them and it becomes more difficult to check. This is a minor suggestion but can potentially improve readability. \n\nThe limitations were not explicitly discussed, with the only suggestion for future work being to include automatic switching between the training modes. I think the authors might want to consider unpacking the limitations of this work a bit more. " ]
[ -1, -1, -1, -1, -1, -1, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "dysZe_zxyMT", "AP738OwxNqE", "PW-EByxcA13", "j4kLHL7su2p", "nips_2022_vphSm8QmLFm", "6xJHTFesvss", "nips_2022_vphSm8QmLFm", "nips_2022_vphSm8QmLFm", "nips_2022_vphSm8QmLFm" ]
nips_2022_L0OKHqYe_FU
Online Neural Sequence Detection with Hierarchical Dirichlet Point Process
Neural sequence detection plays a vital role in neuroscience research. Recent impressive works utilize convolutive nonnegative matrix factorization and the Neyman-Scott process to solve this problem. However, they still face two limitations. Firstly, they load the entire dataset into memory and perform iterative updates over multiple passes, which can be inefficient when the dataset is large or grows frequently. Secondly, they rely on prior knowledge of the number of sequence types, which can be impractical when future data are unknown. To tackle these limitations, we propose a hierarchical Dirichlet point process model for efficient neural sequence detection. Instead of processing the entire dataset, our model can sequentially detect sequences in an online, unsupervised manner with particle filters. Besides, the Dirichlet prior enables our model to automatically introduce new sequence types on the fly as needed, thus avoiding specifying the number of types in advance. We demonstrate these advantages on synthetic data and neural recordings from the songbird higher vocal center and rodent hippocampus.
Accept
This paper describes a hierarchical Bayesian latent model to identify neural sequences from spike data. Especially in neuroscience, detection of patterns in neural sequences is an important computational problem, as the inferred patterns are useful for characterizing brain activity. The key problem is reminiscent of clustering, where individual spikes are associated with sequences. The proposed model -- the Hierarchical Dirichlet Point Process (HDPP) -- consists of a Dirichlet nonhomogeneous Poisson process (DPP) prior for observed spikes and a Dirichlet Hawkes process (DHP) prior for the neural sequences generating those observed spikes. Inference is done with sequential Monte Carlo, including a proposal mechanism for merging and pruning neural sequence categories/types that may have been incorrectly generated early during inference. A comparison of the method to two other top-down unsupervised methods (ConvNMF and PP-Seq) on synthetic data is provided. While the description of the hierarchical model seems to be complete, the reviewers asked for clarifications about the motivations. During the rebuttal, the authors were able to address various issues about the experimental section and the inference procedure, and they included results of further experiments. As a result, reviewers decided to raise their grades for the paper. In light of the importance of the problem and the soundness of the methodology, I am inclined to suggest acceptance for this work.
val
[ "bcmb-I61QEI", "qyHRt_TiFaG", "6a5QAsIWdG3", "iaSRF5mrOa", "8Lz7h8tzIrD", "h2LJx6pJ9OI", "XuYRv3yyAT", "DkYrlQYnxz", "lrBaq9V7FIP", "4W95EG0R52", "fEu130Htzv_", "SV2HmJ1T74", "fm-gIlBZ8TQ", "NfAQtdf4O8" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for considering our responses and for revising the score! We will be happy to address any further questions or concerns about the work.", " I thank the authors for conducting additional experiments and explanation. I think the paper has overall improved as a result of authors efforts to address raised questions and concerns, I have therefore decided to raise my score.\n", " Many thanks for considering our responses and for revising the score! We will be happy to address any further questions or concerns about the work.", " Thank you for the responses and additional experimentations. I have upgraded my rating from a 4 to 6 because the additional experiments have addressed my questions and concerns.", " Many thanks for helping improve the paper and for revising the score! Indeed, PP-Seq does not require the number of sequence types explicitly but infers this number through a Dirichlet Prior with $R$ categories. Thanks for pointing it out. We have revised our paper to clarify this point.", " Thanks for the additional experiments and explanation. I think these have improved the paper. I have a related clarifying question: in the new Fig. 2c, PP-Seq appears to be given the number of sequence types explicitly (e.g., 6, 11). However, the original paper seems to infer this number through a prior, and not rely on the exact value of this parameter. Could you please explain?", " Thank you for the encouraging feedback and practical suggestions. We have made several improvements according to your questions and comments. Hopefully these will resolve most of your concerns, and that they can be taken into account when deciding the final review score.\n\nThe main modifications are as follows:\n* Extra experiments were carried out:\n * We analyzed the performance of our method when there are larger types of neural sequences in Figure 2(c).\n * We explored the performance of our method when there is a greater spatial overlap between sequences in supplement material Figure 2.\n * We analyzed the convergence behaviour via the ability to recover ground-truth parameters in supplement material Figure 4.\n * We added extra experiments to compare our method with alternative approaches on synthetic data and neural recordings. Please see Figure 2(c), Figure 4(e), and supplement material Figure 1. \n\nWith these modifications, the soundness of the revised paper has improved.\n\n> Experiments on synthetic data involve very low number of sequence types (up to 3), so it is hard to see how the (inference) method would perform for a larger number of sequence types. \n\n* We thank the reviewer for this useful suggestion, and we have added an extra synthetic data experiment in the revised paper. Please refer to Figure 2 and Sec 6.1. This synthetic dataset has 11 different types of sequences with overlapping neurons (6 smaller sequences and 5 larger sequences). We compared our method with PP-Seq. According to the results, our method can identify all types of sequences without prior information on the number of types, while PP-Seq identifies the larger sequences when we set the number of types to 11.\n\n> Comparison with the alternative approach was only done on one data set. \n\n* As suggested, we have added extra experiments to compare our method with alternative approaches on synthetic data and neural recordings. Please see Figure 2(c), Figure 4(e), and supplement material Figure 1. \n\n> Why have you not compared PP-seq across the board? 
How does PP-seq's likelihood compare with the likelihood of your approach?\n\n* We have modified Figure 3 to show the comparison of log-likelihood between our method and PP-Seq. From the result in Figure 3(e), our method has a similar log-likelihood on the training/testing datasets compared with PP-Seq.\n\n> Can you elaborate on what you compute ROC curves on in Section 6.1? Why do you use just one type of sequence for the experiment?\n\n* We follow the steps in Section F.1 of PP-Seq's supplementary material to compute ROC curves. Specifically, we discretized the sequence times into several time bins. For every time bin, we computed the empirical probability that it contained a sequence from the model, and this resulted in a discretized, nonnegative temporal factor $h$. We then encoded the ground-truth sequence times in a binary vector and computed ROC curves by thresholding $h$ over a fine grid of values over the range [0, max($h$)]. Besides, we use this experimental setup (one type of sequence) since it is in line with previous literature such as PP-Seq and ConvNMF.
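To make this procedure concrete, here is a minimal Python sketch of the thresholding-based ROC computation (all names are hypothetical and the binning details are simplified; this is an illustration, not our evaluation code):

```python
import numpy as np

def roc_from_detected_sequences(detected_times, true_times, t_max, bin_width):
    """Sketch of the ROC computation described above.
    detected_times: list of arrays, one array of detected sequence times
    per posterior sample; all names here are hypothetical."""
    bins = np.arange(0.0, t_max + bin_width, bin_width)
    # h[i]: empirical probability that bin i contains a detected sequence
    h = np.mean([np.histogram(s, bins=bins)[0] > 0 for s in detected_times], axis=0)
    y = np.histogram(true_times, bins=bins)[0] > 0  # ground-truth indicator
    fpr, tpr = [], []
    for thr in np.linspace(0.0, h.max(), 200):  # fine grid over [0, max(h)]
        pred = h > thr
        tpr.append((pred & y).sum() / max(y.sum(), 1))
        fpr.append((pred & ~y).sum() / max((~y).sum(), 1))
    return np.array(fpr), np.array(tpr)
```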
\n\n> Have you considered or run experiments where you analysed the convergence behaviour of your methodology by assessing how often you manage to recover ground-truth parameters if you optimise on data generated by the process described in section 4, lines 145-152?\n\n* We thank the reviewer for this practical suggestion, and we have added an extra convergence experiment in the revised paper. Please refer to supplementary material Figure 4. This figure shows the comparison of ground-truth parameters and learned parameters with 95% credible intervals. The results empirically verify that our method can stably recover ground-truth parameters over different runs.\n\nWe again thank the reviewer for their comments, constructive feedback, and interesting suggestions, which greatly help improve the quality of our paper. Many thanks.", " Thanks for the feedback and constructive suggestions. We have made several improvements according to your questions and comments. We hope these will resolve most of your concerns and can be taken into account when deciding the final review score.\n\nThe main modifications are as follows:\n* Extra experiments were carried out:\n * We explored the performance of our method when there is a larger number of neural sequence types in Figure 2(c).\n * We tested the time cost and memory cost of our method vs. PP-Seq on the hippocampal data in Figure 4(e) and supplementary material Figure 3.\n * We investigated the time cost as a function of the number of detected sequences in Figure 4(f).\n* We have added the limitations of our approach in the Discussion.\n* We have clarified the conclusions that we draw from the results.\n\nWith these modifications, the soundness and presentation of the revised paper have improved.\n\n> For example, the authors do not demonstrate their method on a large dataset or provide evaluation of time and memory costs of their method vs. another method such as PP-Seq.\n\n* We thank the reviewer for this suggestion, and we have performed the time/memory cost comparison of our method vs. parallel PP-Seq on the hippocampal data in Figure 4(e) in the main text and Figure 3 in the supplementary material. To have a fair comparison, we reimplemented our method in Julia, which is the programming language used by PP-Seq. Compared with parallel PP-Seq, our method has a much lower time cost, but higher memory allocation. We believe the cause of this high memory allocation is particle resampling, which involves many copy operations to transfer sufficient statistics among particles.
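As a toy illustration of where these allocations come from (a hypothetical data layout, not our actual implementation):

```python
import copy
import numpy as np

rng = np.random.default_rng(0)

def resample(particles, weights):
    """Multinomial resampling: duplicated ancestors are deep-copied so that
    each particle owns independent sufficient statistics; these copies are
    the dominant source of memory allocation."""
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return [copy.deepcopy(particles[i]) for i in idx]

# each particle carries per-type sufficient statistics (toy layout)
particles = [{"counts": np.zeros(20), "sums": np.zeros(20)} for _ in range(100)]
weights = np.full(100, 1.0 / 100)
particles = resample(particles, weights)  # ~100 dict/array copies per resampling step
```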
\n\n> Figure 5 shows the time cost of the HDPP method, which the authors imply should remain constant as a function of the number of detected sequences. What was the number of detected sequences, as a function of the x-axis?\n\n* We thank the reviewer for this useful suggestion, and we have added a new figure about time cost per 3000 spikes as a function of the number of detected sequences in Figure 4(f). The results empirically verify that the time cost per spike tends to be constant in our method.\n\n> In Figure 5, why is there an early higher time cost (~50-100s) before settling in?\n\n* The time cost in Figure 5 is tested via our Python implementation, which is an interpreted programming language and may cause an early higher time cost due to memory allocations. To have more stable memory management, we retested the time cost via our Julia implementation. The result is presented in Figure 4(e) of the revised paper. \n\n> I would also like to see their method perform on a dataset with a larger number of neural sequences, where there may be greater uncertainty in sequence types.\n\n* As suggested, we have added an extra synthetic data experiment in the revised paper. Please refer to Figure 2 and Sec 6.1. This synthetic dataset has 11 different types of sequences with overlapping neurons (6 smaller sequences and 5 larger sequences). We compared our method with PP-Seq. According to the results, our method can identify all types of sequences without prior information on the number of types, while PP-Seq identifies the larger sequences when we set the number of types to 11.\n\n> How does the time complexity of this method compare to PP-Seq on a fixed-sized dataset (such as the songbird or rat datasets used)?\n\n* For both the songbird and rat datasets, the time complexity of our method is $O(SP(K+M))$, where $S$ is the number of spikes in these datasets, $P$ is the number of particles, $K$ is the number of inferred sequences, and $M$ is the number of inferred types. As for PP-Seq, its time complexity is $O(NSKM)$, where $N$ is the number of iterations for MCMC sampling, $S$ is the number of spikes, $K$ is the number of inferred sequences, and $M$ is the number of inferred types. Overall, our method's time complexity is much smaller than PP-Seq's since $P \ll N$.
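As a rough back-of-the-envelope illustration of this gap (every number below is hypothetical; the point is only that the particle count $P$ replaces the much larger MCMC iteration count $N$):

```python
# Hypothetical scales for illustration only; none of these numbers are
# reported above. They are chosen merely to show the effect of P << N.
S, P, K, M, N = 200_000, 100, 300, 10, 1_000

ours = S * P * (K + M)   # O(SP(K+M)) for the particle filter
ppseq = N * S * K * M    # O(NSKM) for MCMC-based PP-Seq
print(f"{ours:.1e} vs {ppseq:.1e} -> roughly {ppseq / ours:.0f}x fewer operations")
```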
\n\n> What is the dataset size at which using HDPP is faster than PP-Seq?\n\n* According to Figure 4(e), our method is faster than parallel PP-Seq on all five hippocampal recordings (3-minute, 6-minute, 9-minute, 12-minute, and 15-minute).\n\n> The authors do not mention any limitations of their method nor do they mention potential negative societal impact of their work.\n\n* Thank you for this comment. We have updated the paper to address the technical limitations of our method in the Discussion.\n\n> The authors should improve the clarity of the conclusions that they draw from their results.\n\n* We thank the reviewer for this suggestion, and we have revised the paper to clarify the conclusions we draw from the results.\n\nWe again thank the reviewer for their comments, constructive feedback, and interesting suggestions, which greatly help improve the quality of our paper. Many thanks.\n", " Many thanks for the feedback and constructive comments. We have given a point-to-point response to your comments below. We hope these will address most of your concerns and can be taken into consideration when deciding the final review score of the paper.\n\n> Why do you want to partition the neural spikes into sequences? What is the meaning of doing so?\n\n* Detection of repetitive neural sequences is an important problem in neuroscience research because they help neuroscientists understand brain activities such as learning [1], motor production [2], and working memory [3]. Our work will serve to advance this growing understanding by providing new analytical tools for theorists. We have revised the introduction of the paper to clearly state the definition and motivation of this topic. \n\n> Aren't the observed events already a sequence by time? \n\n* Indeed, spikes are already a sequence in time, but the neurons are not well ordered, so we cannot directly observe neural sequences from data; thus we reorder the neurons to intuitively reflect the patterns, as in [4-5]. Specifically, since each spike event is represented by a tuple (neuron, time), we can sort the neurons to order the events. Firstly, the neurons that have low firing rates (few spike events) are excluded. Secondly, the neurons that belong to the same type of sequence are grouped. Finally, the neurons within each group are sorted by the inferred offset parameter $\mu_{mn}$ of this sequence type. We have added extra descriptions to make this clear in the revised paper.\n\n> According to line 118, on the spike-train data, you've already had k_s to indicate the neural sequence, why do you want to infer them again with the model (see Eq2)?\n\n* Since we cannot directly observe sequences from spikes, the sequence indicator $k_s$ of each spike remains unknown until we infer the sequence from the data. We have clarified this in the revised paper.\n\n> The relationship with the classical HDP is not mentioned. The paper should highlight the difference with the classical HDP, to avoid unnecessary misunderstanding.\n\n* We are grateful that the reviewer pointed out that "hierarchical" leads to unnecessary misunderstanding. Like the classical HDP, our method also has a two-level structure: the DP at the lower level identifies the sequences from spikes, and the DP at the higher level partitions the sequences into types. The only difference is that we use only one DP at the lower level since we focus on a single-trial analysis in this paper. We have clarified the difference with the classical HDP in the revised paper.\n\n> The contributions seem weak. \n\n* The main contribution lies in that we propose an online spike-train pattern detection approach, which, to the best of our knowledge, is the first method that enables sequential identification of sequences from observed spikes and copes with new sequence types on the fly. Our method performs comparably to the state-of-the-art with a significantly lower time cost. The two-level DP approach to neural sequence detection is specifically proposed for this objective. The lower-level Dirichlet Poisson process with a Gaussian-form intensity function correctly models the within-sequence structure, and the higher-level Dirichlet Hawkes process greatly helps disambiguate spatially overlapping sequences.\n\n> The quantitative metric is only based on test log-likelihood. I look forward to seeing some evaluation that can truly reflect the performance, such as future event time prediction.\n\n* We used the log-likelihood metric following the existing neural sequence detection approach [5]. 
We agree the future event time prediction ability is also an important metric, but it is difficult to carry out the experiments given the time limitation. We are working on it and will add the suggested metric in future work. \n\nWe again thank the reviewer for the helpful comments and constructive feedback, which greatly help improve the quality of our paper. Many thanks.\n\n**References**\n\n[1] Eichenbaum, Howard. "Time cells in the hippocampus: a new dimension for mapping memories." Nature Reviews Neuroscience 15.11 (2014): 732-744.\n\n[2] Hahnloser, Richard HR, Alexay A. Kozhevnikov, and Michale S. Fee. "An ultra-sparse code underlies the generation of neural sequences in a songbird." Nature 419.6902 (2002): 65-70.\n\n[3] Goldman, Mark S. "Memory without feedback in a neural network." Neuron 61.4 (2009): 621-634.\n\n[4] Mackevicius, Emily L., et al. "Unsupervised discovery of temporal sequences in high-dimensional datasets, with applications to neuroscience." Elife 8 (2019): e38471.\n\n[5] Williams, Alex, et al. "Point process models for sequence detection in high-dimensional neural spike trains." Advances in neural information processing systems 33 (2020): 14350-14361.", " Thank you for the appreciation and constructive comments. We have made several improvements according to your questions and comments. We hope these sufficiently clarify your concerns and can be taken into account when deciding the final review score.\n\nThe main modifications are as follows:\n\n* Extra experiments were carried out:\n * We applied temporal windows to PP-Seq, and compared its time cost with our method on the hippocampal data in Figure 4(e).\n * We explored the performance of our method, PP-Seq, and ConvNMF when the firing rates are lower in supplementary material Figure 1.\n * We explored the performance of our method and PP-Seq when the number of sequence types is large in Figure 2(c).\n* We have added the limitations of our approach in the Discussion.\n\nWith these modifications, the soundness and presentation of the revised paper have improved.\n\n> The argument in lines 266-267 on using temporal windows to reduce computational load appears applicable to all methods. I'd be curious to see the results of other methods after temporal windowing.\n\n* We thank the reviewer for this suggestion, and extra experiments were carried out to apply a temporal window that limits the maximum sequence length for the baseline method PP-Seq (Figure 4(e)). For a fair comparison, we reimplemented our method in the Julia programming language, in which PP-Seq was implemented. From the results, our method is still faster than parallel PP-Seq even though the latter benefits from the temporal window. Here, we report part of the computation times (in seconds) of our method and PP-Seq:\n\n| Hippocampal recording length | 3 minutes | 6 minutes | 9 minutes | 12 minutes | 15 minutes |\n| :--- | :----: | :----: | :----: | :----: | :----: |\n| HDPP | 18.2 | 37.3 | 64.5 | 85.5 | 106.6 |\n| Parallel PP-Seq | 33.1 | 127.4 | 233.4 | 333.1 | 556.3 |\n| Parallel PP-Seq with max seq length 20 | 21.9 | 60.2 | 101.4 | 123.7 | 192.3 |\n\n> Could Fig. 4 be turned into a comparative study as well? How much time would previous methods require to process that dataset?\n\n* As suggested, extra experiments were carried out to compare the time costs of our method vs. parallel PP-Seq on the hippocampal data in Figure 4(e) of the revised paper. According to the results, our method has a lower time cost compared with parallel PP-Seq.\n\n> Could you please sort the neurons in Fig. 3 identically across the two compared methods to facilitate comparison?\n\n* We thank the reviewer for this comment, and we have modified Figure 3 in the revised paper accordingly.\n\n> Could you please discuss how the different methods would perform when the firing rates are lower?\n\n* Extra experiments were carried out to evaluate the performance at lower neuron firing rates. Please see Figure 1 in the supplementary material. According to the results, as the firing rate becomes lower, the performance of all three approaches decreases. Overall, our method performed similarly to PP-Seq, while being superior to ConvNMF.\n\n> Could you please discuss how the different methods would perform when the number of sequence types is larger?\n\n* Extra experiments with new synthetic data were carried out in the revised paper. Please refer to Figure 2 and Sec 6.1. This synthetic dataset has 11 different types of sequences with overlapping neurons: 6 types of smaller sequences contain 10 neurons, while the other 5 types of larger sequences contain 20 neurons. We compared our method with PP-Seq. According to the results, our method can identify all types of sequences without prior information on the number of types, while PP-Seq identifies the larger sequences when we set the number of sequence types to 11.\n\n> One potential advantage of the proposed method could be real-time detection of sequences in a closed-loop experiment if inference can be made faster. What kind of desirable experiments could be performed by such a closed-loop setup?\n\n* We reimplemented our method in the Julia programming language, making it much faster than before. Our approach enables the online detection of neural sequences, which can be useful in diverse situations. For example, it can guide the electrode implantation process to check whether the target types of neurons exist around the implanted microelectrode array by identifying the sequences in a streaming way.\n\n> The authors should discuss technical limitations of the work.\n\n* We thank the reviewer for this suggestion, and we have updated the paper to address the technical limitations of our method in the Discussion.\n\nWe again thank the reviewer for the helpful comments and constructive feedback, which greatly help improve the quality of our paper. Many thanks.", " This paper proposes an unsupervised learning approach to detect and cluster neural spike sequences in neural spike data. To model the data, the authors propose a hierarchical Dirichlet point process model, which employs Hawkes processes to model the temporal dynamics of neural activity within sequences, whereas spike rates within a sequence are modelled via a non-homogeneous Poisson process. The authors derive a particle filter based online inference method to detect sequences and infer their types. They use conjugate priors to derive closed-form updates for obtaining posterior distributions of model parameters. The authors evaluate the performance of their method on both synthetic and real datasets. 
Strengths:\n- The authors propose a well-grounded probabilistic generative model for modelling neural activity sequences in spike train data\n- The model is able to infer the number and types of sequences from the data \n- The authors have devised a tailored particle filter to infer unobserved latent variables for identifying neural activity sequences and inferring their types, which in turn allows for closed-form updates of posterior distributions over model parameters \n- The results on artificial data indicate that the proposed methodology can detect and correctly identify the types of underlying neural sequence activity in the presence of background noise. \n- The results on real data are qualitatively interesting and comparable to the results of another approach. \n- The proposed methodology is in principle scalable to larger datasets with up to 210k spikes.\n\nWeaknesses:\n- No analysis is done to show how often the inference procedure leads to suboptimal solutions, which requires reruns of measures like merging and pruning, as highlighted by the authors. \n- Experiments on synthetic data involve a very low number of sequence types (up to 3), so it is hard to see how the (inference) method would perform for a larger number of sequence types \n- Comparison with the alternative approach was only done on one data set. \n\n- Have you considered or run experiments where you analysed the convergence behaviour of your methodology by assessing how often you manage to recover ground-truth parameters if you optimise on data generated by the process described in section 4, lines 145-152?\n\n- Why have you not compared PP-seq across the board? How does PP-seq's likelihood compare with the likelihood of your approach?\n\n- Can you elaborate on what you compute ROC curves on in Section 6.1? Why do you use just one type of sequence for the experiment?\n\n- How does your method perform if there is a greater (spatial and/or temporal) overlap between sequences than what we can see in synthetic datasets? I have highlighted technical limitations and weaknesses above. I have nothing further to add here.", " The authors propose a hierarchical, non-parametric Bayesian model to identify neural sequences from neural spike data and then to categorize the neural sequences, all in an online manner. Specifically, their model consists of a Dirichlet nonhomogeneous Poisson process (DPP) prior for observed spikes and a Dirichlet Hawkes process (DHP) prior for the neural sequences generating those observed spikes. They refer to their model as the Hierarchical Dirichlet Point Process (HDPP) model. They present a particle filtering method to perform inference under their model as well as a scheme for merging and pruning neural sequence categories/types that may have been incorrectly generated early during inference. They compare their method to two other top-down unsupervised methods, ConvNMF and PP-Seq, on synthetic data. They demonstrate that their method is comparable in performance to PP-Seq under increasing background noise rate, while the performance of ConvNMF steadily drops. They then demonstrate their method on two experimentally-collected neural spike datasets, and present results indicating that their method infers the correct number and types of sequences, converges to similar parameter ranges in independent runs, and has a constant expected time cost with increasing number of observations. 
**Strengths**\n- Neural recordings for neuroscience are increasingly growing in size and length and under fewer experimental constraints. This direction of neuroscience research begets the need for methods that identify neural sequences in a streaming and non-parametric way. The authors' method would provide an important set of contributions to the field.\n- The authors' exposition of their model and inference method is clear.\n\n**Weaknesses**\n- The authors should improve the clarity of the conclusions that they draw from their results.\n- The authors validate their method against two existing methods, in particular using PP-Seq to validate the number and type of sequences detected. I recommend that the authors expand the depth and rigor of their evaluations and their analysis of their method. For example, the authors do not demonstrate their method on a large dataset or provide evaluation of time and memory costs of their method vs. another method such as PP-Seq. I would also like to see their method perform on a dataset with a larger number of neural sequences, where there may be greater uncertainty in sequence types. - How does the time complexity of this method compare to PP-Seq on a fixed-sized dataset (such as the songbird or rat datasets used)? In other words, what is the baseline time complexity cost imposed by the additions in our method (e.g., the non-parametric formulation)?\n- What is the dataset size at which using HDPP is faster than PP-Seq?\n- Figure 5 shows the time cost of the HDPP method, which the authors imply should remain constant as a function of the number of detected sequences. What was the number of detected sequences, as a function of the x-axis?\n- In Figure 5, why is there an early higher time cost (~50-100s) before settling in? The authors do not mention any limitations of their method nor do they mention potential negative societal impact of their work. Please discuss in greater depth, for example, the tradeoffs that your method's non-parametric formulation and resulting inference algorithm may incur in terms of complexity, uncertainty, or power. When, for example, would using PP-Seq be more desirable?", " This paper proposed a neural sequence detection method, which is based on a so-called ''hierarchical Dirichlet point process''. The model can jointly infer an unbounded number of sequences and the event types from data. The paper develops an online algorithm to process the neural spike data, with the particle filtering framework. The proposed method was evaluated on two real-world datasets. Strengths:\n1. The application is very interesting and important\n2. The experimental data is interesting. From the test log-likelihood, the proposed algorithm seems to work. \n\nWeakness: In summary, there is a big issue in the presentation and the contributions seem weak. \n1. The problem formulation is not clear. A good paper should give a clear definition and motivation of the problem you want to solve, but I did not see it in this paper, making me very confused. Why do you want to partition the neural spikes into sequences? What is the meaning of doing so? Aren't the observed events already a sequence by time? For a new sequence (inferred by the model), on what basis are the events ordered? The notations confuse me further --- according to line 118, on the spike-train data, you already have k_s to indicate the neural sequence, why do you want to infer them again with the model (see Eq2)? \n\n2. The relationship with the classical HDP is not mentioned. 
The name is confusing in that I thought you were building a model similar to the HDP of (Teh et al., 2005). However, it is not. The classical HDP uses the first-level DP to sample an infinite set of bases, and each DP in the second level will share these bases, but generate the data with different mixture weights. It took me a while to find out that this paper is not following the HDP framework. Rather, it is more like a hybrid of two DPs, one to partition the data into sequences, and the other to partition the events into types. I do not see a clear ``hierarchical'' structure here. The paper should highlight the difference with the classical HDP, to avoid unnecessary misunderstanding. \n\n3. The contributions seem weak. The proposed model is an extension of the Dirichlet Hawkes process of (Du et al., 2015). One more DP is added, and it is quite incremental. The inference, based on particle filtering, is almost the same as the SMC used by (Du et al., 2015), plus some heuristics to merge and prune. If there are some more significant contributions, the authors should highlight them. Otherwise, I view this as a very incremental extension of (Du et al., 2015), applied to spike-train data analysis. \n\n4. The quantitative metric is only based on test log-likelihood. I have a lot of experience with point process modeling. Although many works use test log-likelihood for evaluation, it is not a reliable metric, and is quite misleading. I look forward to seeing some evaluation that can truly reflect the performance, such as future event time prediction. see above N/A", " One hypothesis about neural spike data is that it can be considered as a sequence of 'neural syllables' consisting of the spiking activity of a subset of neurons over a short period of time. Understanding these sequences would then enable relating activity recordings to behaviorally or experimentally relevant observables. The present manuscript proposes a method for unsupervised identification of such sequences.\n\nRecent ML literature, as cited in the manuscript, studied similar problems for point processes in different contexts. In that sense, this manuscript primarily represents an application of existing methodology on hierarchical inference in temporally structured data. Briefly, the manuscript proposes a particle filter based (sampling-based) solution to inferring sequence labels in a hierarchical model where the observables are samples from point processes. The problem is both important and relevant to this community.\n\nThe manuscript uses a Dirichlet process prior, which enables the method to decide on the number of sequence labels without relying on prior information. Indeed, such information is not easy to know in practice. Briefly, the sequence types (labels) are modeled as a Dirichlet process so that the same type can appear multiple times in a given recording. Neuronal spiking activity is then modeled as a Hawkes process conditioned on the sequence type. Overall, the model is complicated, but it is fitting to the underlying complicated problem.\n\nUpdate on Aug. 8: In response to the author rebuttal, I decided to increase my overall score from 5 to 6. Strengths:\n- Unsupervised, adaptive (streaming) inference of the number of sequence types\n- Computational scalability\n\nWeaknesses:\n- Poor presentation: The manuscript is full of typos and grammatical mistakes. While this obscures the underlying meaning very rarely, reading the manuscript becomes time-consuming and frustrating. 
Similarly, reasoning should be better and more precisely explained. For instance, the argument in lines 266-267 on using temporal windows to reduce computational load appears applicable to all methods. Please better explain why it would be more beneficial for the proposed method compared to existing methods.\n\n- Other than unsupervised inference of the number of types, one potential advantage of the proposed method could be real-time detection of sequences in a closed-loop experiment if inference can be made faster. (According to Fig. 5, it doesn't appear fast enough with the hardware used in that experiment.) However, this point was never discussed. Is this a desirable capability for neuroscience? What kind of desirable experiments could be performed by such a closed-loop setup? In my opinion, this point determines whether the proposed contribution is timely and impactful or a technical upgrade without practical consequences. I listed a few questions under Weaknesses above. I'd like to add a few more questions here:\n\n- Could Fig. 4 be turned into a comparative study as well? How much time would previous methods require to process that dataset? I'd be curious to see the results of other methods after temporal windowing as I mentioned under Weaknesses.\n\n- Could you please sort the neurons in Fig. 3 identically across the two compared methods to facilitate comparison?\n\n- Could you please discuss how the different methods would perform when the firing rates are lower?\n\n- Overall, the demonstrations use only a few sequence types. This may be in line with previous literature. However, one might expect to see a much larger set of sequence types in larger, longer recordings. Could you please discuss how the different methods would perform when the number of sequence types is larger? The authors should discuss technical limitations of the work. That is, under which conditions can we expect it to fail? What are the roles of different noise sources, firing rates, etc?" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 4 ]
[ "qyHRt_TiFaG", "XuYRv3yyAT", "iaSRF5mrOa", "DkYrlQYnxz", "h2LJx6pJ9OI", "4W95EG0R52", "fEu130Htzv_", "SV2HmJ1T74", "fm-gIlBZ8TQ", "NfAQtdf4O8", "nips_2022_L0OKHqYe_FU", "nips_2022_L0OKHqYe_FU", "nips_2022_L0OKHqYe_FU", "nips_2022_L0OKHqYe_FU" ]
nips_2022_i0FnLiIRj6U
Iterative Scene Graph Generation
The task of scene graph generation entails identifying object entities and their corresponding interaction predicates in a given image (or video). Due to the combinatorially large solution space, existing approaches to scene graph generation assume a certain factorization of the joint distribution to make the estimation feasible (e.g., assuming that objects are conditionally independent of predicate predictions). However, this fixed factorization is not ideal under all scenarios (e.g., for images where an object entailed in an interaction is small and not discernible on its own). In this work, we propose a novel framework for scene graph generation that addresses this limitation, as well as introduces dynamic conditioning on the image, using message passing in a Markov Random Field. This is implemented as an iterative refinement procedure wherein each modification is conditioned on the graph generated in the previous iteration. This conditioning across refinement steps allows joint reasoning over entities and relations. This framework is realized via a novel and end-to-end trainable transformer-based architecture. In addition, the proposed framework can improve the performance of existing approaches. Through extensive experiments on the Visual Genome and Action Genome benchmark datasets, we show improved performance on the scene graph generation task.
Accept
The authors propose a new approach for end-to-end training of models that predict scene graphs from images (different from the traditional two-stage approach). The key observation that the fixed factorization approach can be suboptimal due to error compounding is reasonable and is supported by the experimental results. The proposed solution with iterative refinement is reasonable, and the design choices of the method are sound. Evaluation is comprehensive overall and the results are convincing. Most of the key concerns raised by reviewers Vh2W and 5s37, who gave low scores (3 & 4), seem to be well addressed by the authors, but those reviewers were not responsive or engaged in the follow-up discussion, so there was no score update either. However, the other reviewer, h8U6, expressed that the authors addressed the concerns well, and after reading the paper I agree with this based on my personal assessment. Therefore, even though two reviewers gave rather low scores of 3 and 4, I cannot weigh those review scores much and instead rely more on the reviewer giving 6 and my own assessment. So, I recommend accepting this paper even if the average score is rather lower than usual.
train
[ "6PM6tMEeVQ3", "3WOzgdhgl_", "7NniQZoTMHz", "Z3u49Ju2iD", "Gzl4gPop8rS", "Zhh3HvpfK-S", "5-7_XAscQ3n", "4t-bMd3Hlan", "GX2JqQy0los", "cixcvBtLei", "ITGhrDfGp27", "MhCB75CxxnW" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for reading our rebuttal.\n\nOverall it is difficult to objectively argue about novelty. However, we would like to highlight that, to the best of our knowledge, the high-level idea of iterative scene graph generation is novel and has not appeared elsewhere. The unique benefit of such formulation is its ability to forego fixed factorization common to all prior methods. This proposed formulation is both general and effective. At a technical level, formulating an instance of such a model using a transformer-based architecture, required us to introduce a novel loss, a synchronized three-stream decoder (for subjects, objects and predicates) and an innovative conditioning scheme across these decoders. In addition, we also study and address the long-tail nature of the scene graph predicate classes by introducing tunable loss weighting that can be adjusted based on demands of the final down-the-stream task.\n\nWe would additionally like to highlight that our results and ablations are indeed backed by strong performance. Specifically, our improvement over the latest state-of-the-art (concurrent work) of SGTR [30] are **very significant** (comparing #16 and #22 in Table 1; our approach is **3.7** higher on mR@50 and **10.2** higher on R@50). In terms of ablations, reported in Table 2, our improvement over the vanilla model (containing the same number of parameters) that does not contain our loss formulation or other components of the proposed model is **9.5** on mR@20 and **17.4** on R@20.", " @Review5s37: Could you be a bit more specific about this? Specifically, why did the rebuttal fail to address your initial concerns? ", " I have read the author response and the design choice or technical novelty may not be good enough.I would advise the author to make the motivation and writing clearer and easier to understand. The result and ablation should be backed by strong performance. And not including RelTr or Relationformer doesn't impact the rating, just good to have.", " We thank the reviewer for going through our rebuttal and are glad that we were able to address their concerns and questions. As suggested, we will clarify the importance of the joint loss in the revision and also add more examples in the supplementary. \n\nRegarding the \"no refinement\" model we present in the rebuttal, it has a similar number of parameters but only a **single** layer ($t=1$), that is we increase the number of parameters within each layer. Experiments pertaining to models having the same number of layers and parameters are already presented in the ablation study in **Table 2 (#2 and #3)**. #2 corresponds to the model with 6 decoder layers trained without any refinement, and #3 is the same 6 decoder layer model (equal number of parameters) but with our proposed joint loss (which enables refinement as explained previously). These two experiments highlight that increasing model capacity either via making each layer larger or via adding more layers (increasing model depth) does not emulate refinement. Therefore, our proposed refinement framework is necessary to obtain better performance. ", " Thanks for the detailed response to my concerns. I am overall very satisfied with the response and will consider revising my score.\n\n- My apologies for missing Figure A1 completely during the review - this figure is what I was looking for and definitely boosted my confidence in the paper by a lot.\n\n- Thanks for the clarification on CAS & CWS. The explanation makes sense to me. 
It would be helpful to clarify the importance of the joint matching loss a bit more for the revision.\n\n- For the \"no refinement\" model: does it have a similar number of layers as the Refinement ($t=6$) one?\n\n- Thanks for clarifying the selection process of A3-A7. That is very good to know. Still, adding a few more examples in the revision would be helpful (perhaps as a bunch of jpegs in the supplementary).\n", " We appreciate the feedback from the reviewer, and clarify the issues and misunderstandings below. We hope that the reviewer will carefully consider them.\n\n$\textbf{1. Qualitative examples and performance on smaller objects.}$\n\nThe qualitative examples shown in Figure 3 and supplementary Figures A3 - A7 were not chosen with any particular care or criteria. The ability of our approach to better detect smaller objects (line 7) is apparent in Figure 3, where the umbrella on the right is incorrectly localized to a much larger bounding box without any refinement ($t = 1$). As the refinement progresses, the bounding box gets much tighter and more accurate. We will add additional qualitative examples to the supplemental upon revision.\n\nAlthough you correctly point out that the recall and mean recall follow a similar trend in supplementary Table A1, we believe this does not necessarily imply that all categories are improved by the same amount. This is evident from the per-class predicate numbers in supplementary Figure A1, where the impact of certain predicates like “$\texttt{flying in}$” is larger than that of predicates like “$\texttt{says}$” on the recall and mean recall metrics. To analyze this further, we similarly look at the per-class object detection improvements from $t = 1$ (no refinement) to $t=6$. We find that categories like $\texttt{plate}$, $\texttt{fork}$, $\texttt{kite}$, and $\texttt{orange}$, which generally tend to have smaller bounding boxes, have the highest improvements in detection performance ($9.2$, $8.8$, $7.2$, $6.5$ better on AP, respectively). This further supports our claim in line 7, and we will add numbers for the remaining classes in the supplementary upon revision. \n\n$\textbf{2. Impact of Joint Loss, CAS and CWS.}$\n\nAs you correctly point out in your additional comments, the joint matching loss is an important component in the refinement procedure. More specifically, the joint loss induces a strong implicit conditioning by using the same assignment at each refinement step (see Section 4.3). Therefore, even without CAS or CWS, the joint loss by itself enables refinement, which is evident from Table 2 (#3). CAS further builds on the joint loss, and allows for a more structured pathway to leverage any information that the implicit conditioning is unable to capture. This leads to an improvement of around 0.5 on mR@20 and 0.8 on R@20, which is significant in the scene graph literature. For example, in [43] the maximum observed improvements are around 0.6 on mR@20 (see the Scene Graph Detection column in Table 1 in [43]). CWS is complementary to the refinement process, and allows for more consistent graph generation within a step.\n\n$\textbf{3. Performance gains are not due to additional parameters.}$\n\nWe argue that the improvements obtained by our refinement procedure are not a direct consequence of having an increased number of parameters. To demonstrate this further, we ran an additional experiment with a no-refinement transformer model that has a **similar number of parameters** as our proposed $t=6$ model in supplementary Table A1. 
The results are shown below. It is evident that our proposed refinement procedure allows for better learning, and the performance gains observed are not a result of having more parameters. We will add this ablation to the supplementary upon revision.\n\n| Model | mR@20/50 | R@20/50 | hR@20/50 | AP | AP$_{50}$ | AP$_{75}$ | Model Size |\n|:----------:|:-------------|:----------:|:-------------:|:-------------:|:-------------:|:-------------:|:-------------:|\n| No Refinement | 9.9 / 13.1 | 17.5 / 21.8 | 12.6 / 16.4 | 10.1 | 21.6 | 8.0 | 1.1x |\n| Refinement ($t=6$) | **11.8 / 15.8** | **21.0 / 26.1** | **15.1 / 19.7** | **14.6** | **27.6** | **13.2** | 1x |\n\n$\textbf{4. Originality and use of a decoder.}$\n\nAlthough we use a decoder as a part of our proposed transformer architecture, the primary contribution of our work is the iterative refinement formulation for scene graph generation, which can be realized using different architectures. We show this by additionally augmenting our proposed formulation to MOTIF [53], which is a completely different architecture compared to the transformer model in [2]. Additionally, the conditioning we use across our decoders is much richer than in prior work, leading to better scene graphs. \n\n$\textbf{5. Finer details in Table 1.}$\n\nWe appreciate the argument. Omitting finer details from Table 1, one can see that our improvements are **very significant** (#16 and #22 in Table 1; 3.7 higher on mR@50, 10.2 higher on R@50). The intent of the fine-grained analysis is not to show model effectiveness, but rather to illustrate the behavior of our approach in different settings. Ideally, we would have put this in a separate table, but decided against it due to space constraints. Additionally, we apologize for the formatting in Table 1, and will make sure to highlight top-performing entries for better readability. ", " We appreciate the feedback from the reviewer, and clarify the issues and misunderstandings below. We hope that the reviewer will carefully consider them. \n\n$\textbf{Weaknesses}$\n\n$\textbf{1. Design choices - three decoders and conditional queries.}$\n\nOur work proposes a model-agnostic iterative refinement framework for scene graph generation (Equation 2). We implement this using a transformer-based architecture, wherein each component of the factorized distribution in Equation 2 is realized using a decoder. The three separate decoders are merely a tool to accomplish the aforementioned factorization, and predict different components of a relationship triplet. As described in Section 4.2 (lines 196 - 205), the factorization in Equation 2 dictates that each decoder needs to be conditioned in two ways. The conditional queries are used to implement one of those conditionings (over the previous graph estimate), and are a property of the transformer architecture, not the proposed framework itself. For example, when augmenting our approach to MOTIF [53], we do not have any conditional queries. \n\nIn other words, our introduction focuses on the core idea of the need to condition in certain ways, which we believe we motivate well. The "conditional query" and "three decoders" are a specific, but natural, instantiation of this core idea, but by no means the only one. Since this instantiation is not critical to our overarching premise, we spend less time motivating these aspects of design directly. 
\n\n$\textbf{2. Initial queries.}$\n\nSimilar to our approach, DETR [2] also uses queries and positional encodings (also referred to as object queries in [2]). Identical to DETR, our approach initializes the queries at the first step for each decoder to zero (i.e., $\mathbf{q^1_s} = \mathbf{q^1_o} = \mathbf{q^1_p} = \mathbf{0}$). After the generation of a scene graph at the first step, these queries are updated in accordance with Equation 4 and used as input to the second step. This process is repeated $t$ times.
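For intuition, the overall loop can be sketched as follows (a structural sketch only: the decoder stand-ins, the additive conditioning, and all shapes are simplifications on our part, and the exact update rule of Equation 4 is not reproduced here):

```python
import torch
import torch.nn as nn

n_queries, d_model, n_steps = 100, 256, 6
memory = torch.randn(500, 1, d_model)  # encoder features for one image (toy)

make_layer = lambda: nn.TransformerDecoderLayer(d_model, nhead=8)
dec_s, dec_o, dec_p = make_layer(), make_layer(), make_layer()

# all three query sets are initialized to zero at the first step, as in DETR
q_s = q_o = q_p = torch.zeros(n_queries, 1, d_model)
for step in range(n_steps):
    h_s = dec_s(q_s, memory)              # subject estimates
    h_o = dec_o(q_o + h_s, memory)        # objects, conditioned on subjects
    h_p = dec_p(q_p + h_s + h_o, memory)  # predicates, conditioned on both
    q_s, q_o, q_p = h_s, h_o, h_p         # conditional queries for the next step
```

The essential property is that the queries for step $t+1$ are derived from the graph estimate at step $t$, which is what enables refinement.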
", " $\\textbf{3. Choosing the appropriate scaling parameter will depend on the underlying application. }$\n\nThe choice of the scaling parameters is heavily influenced by the underlying application of the scene graph model. For example, in situations where long-tail identification is important, setting higher alpha and beta values is desirable. We do not believe there is inherently a single choice that would be appropriate at all times and for all applications. One of our contributions is highlighting this tradeoff by showing the performance of our model under different parameter values in Tables 1, 4 and supplementary table A2. Existing approaches lack the ability to modulate performance in such a manner, making their usability limited. Our approach, on the other hand, is able to handle a wider range of applications and effectively allows users to set a desired tradeoff between Recall and Mean Recall. We believe this to be a benefit and a desirable property as opposed to a limitation of our method. \n\n$\\textbf{4. Proposed transformer model does not use any prior information. }$\n\nOur work presents a general framework for iterative refinement (Equation 2). We implement this framework using two different architectures: a transformer-based end-to-end model, and an existing method, MOTIF [53]. Our transformer-based end-to-end network does not require the use of prior information (or bias) to generate scene graphs. All our results using this model are therefore devoid of this assumption.\n\n$\\textbf{Limitations. }$\n\nAutonomous driving was mentioned only to highlight an important use case of scene graphs. Although there are plenty of other applications, our work focuses on the underlying algorithm of generating scene graphs, and is independent of the downstream task. Better scene graph prediction, which is the focus of this work, will improve all downstream applications of the predicted scene graphs. ", " We appreciate the feedback from the reviewer, but would like to point out that most of the critiques stem from misunderstandings. We clarify these issues below and hope that the reviewer will carefully consider them. \n\n$\\textbf{Two-stage detection will not provide better prediction. }$\n\nWe respectfully disagree. As we discuss in lines 40 - 48 of the main paper, two-stage methods, although widely used in the scene graph generation literature [39, 45, 48, 50, 53], learn detectors that are oblivious to the graph generation task. Such object detectors are also slower (in terms of inference time) owing to their two-stage nature. Additionally, as the object detector in these approaches is often pretrained, the identified objects have to be paired up before predicate assignment. This leads to the need for the algorithm to consider a quadratic number of such pairs, causing further inefficiencies in training and inference. \n\nTo alleviate these issues, recent works in scene graph generation have adopted one-stage architectures such as transformers [13, 30] or convolutional networks [34]. Such one-stage methods can be trained end-to-end and lead to faster inference. Although our proposed formulation in Equation 2 makes no assumption on the model architecture, we adopt a transformer-based model owing to the aforementioned limitations of two-stage architectures, and the overall trend in the literature. \n\nWe compare against two-stage methods in Table 1, and highlight that our proposed transformer-based design outperforms existing approaches, including two-stage ones, by a large margin. Furthermore, we show the generality of our core approach by applying it to a two-stage architecture, MOTIF [53], in Table 3, demonstrating that any existing method can leverage our proposed formulation for better scene graph generation. \n\n$\\textbf{Questions}$\n\n$\\textbf{1. Direction of edges is important for relationship prediction.}$\n\nWe do believe that the direction of edges is critically important in predicate prediction, and our approach $\\textbf{does not}$ eliminate this property. This is evident from Figure 3, where all generated graphs produced by our method have directed edges. Line 157 talks about the flow of information within a particular refinement step based on the factorization of the joint distribution (Equation 2). This flow of information arises from the use of the chain rule, and does not imply that edge directions are ignored. 
The direction of the edge is implicitly encoded in the corresponding subject and object elements. In other words, the direction of the edge is implicitly assumed to be going from $\\texttt{subject}_i$ to $\\texttt{object}_i$. \n\nAs a simple example, consider an image consisting of two object instances: a car and a pedestrian. The subject decoder would ideally identify both these objects in the image as potential “$\\texttt{subjects}$”, e.g., “car” as $\\texttt{subject}_1$ and “pedestrian” as $\\texttt{subject}_2$. Conditioned on the “car” ($\\texttt{subject}_1$) prediction, the object decoder will identify the “pedestrian” as its corresponding object ($\\texttt{object}_1$), implying a directed edge that goes from car $\\rightarrow$ pedestrian. The predicate decoder will then assign a class label to this edge, e.g., “$\\texttt{next to}$” as $\\texttt{predicate}_1$. Similarly, conditioned on the “pedestrian” prediction ($\\texttt{subject}_2$), we get the directed edge pedestrian $\\rightarrow$ car (with “car” now as $\\texttt{object}_2$).\n\nAn important aspect of our model is that the three decoders are not independent and the decoded instances of the three decoders are in correspondence with one another, forming a $\\texttt{<subject, object, predicate>}$ triplet. The total number of such triplets is $n$ and corresponds to the number of decoded queries (which is the same for all three decoders).\n\n$\\textbf{2. Proposed framework improves performance on both objects and predicates.}$\n\nFigure 3 (and more visualizations shown in supplementary figures A3-A7) highlights the iterative improvements of $\\textbf{both}$ the object detection and predicate prediction tasks. The task of object detection involves identifying objects and simultaneously localizing them in an image. Figure 3 highlights better bounding box localization for the object corresponding to the class umbrella with each refinement iteration, which visually demonstrates improvement on the detection task. The figure is small, so this may be hard to see, but we would appreciate it if the reviewer could zoom in and examine the predicted object bounding boxes in addition to the graph itself. \n", " This paper proposes an end-to-end scene graph generation framework that designs an iterative refinement procedure to gradually optimize the predictions. Besides, it also introduces a reweighting loss to tackle the long-tail problem in this task. The authors verify this method on two popular datasets and achieve superior performance. However, this manuscript is not well written and some expressions need to be improved. #Strengths:\nThis paper reformulates the task of scene graph generation as an iterative optimization process. Based on a transformer architecture, the authors extend message passing within a Markov Random Field and design an iterative refinement procedure. The performance on the VG and AG datasets reaches the state-of-the-art level.\n#Weaknesses:\nAlthough an end-to-end framework for generating scene graphs is a good research direction, scene graph generation also involves object prediction. I think the pretrained object detector in two-stage methods would provide better predictions, which the authors did not discuss. The qualitative refinement analysis is not enough to support the effectiveness of the proposed method. 1. I have a question about the assumption in line 157. 
[1] had pointed out that the direction of the edge is an important property in relationship prediction, but you assume a fixed information flow from subject to object, finally reaching the predicate. Please explain why eliminating this graph property in your assumption is justified.\n2. The proposed iterative refinement manner for SGG should involve the variation of both objects and relationships, but Figure 3 only presents the estimates of different refinement steps for relationships.\n3. The scaling parameters for the reweighting loss are hard to set, and your experiments in Table 1 also attempt various combinations, which show a relatively large impact. So, is there a more effective way to set these two parameters? \n4. The authors adopt MOTIF [2] as a baseline. As far as I know, this method uses prior information from the dataset to bias the final predicted distribution. I would like to know whether, in your end-to-end framework, this kind of bias is still necessary for your prediction. \n[1] GPS-Net: Graph Property Sensing Network for Scene Graph Generation. In CVPR 2020.\n[2] Neural Motifs: Scene Graph Parsing with Global Context. In CVPR 2018. Scene graphs were first proposed for image retrieval, which is also a more direct application compared with autonomous driving. So the authors should focus on some simple but effective applications of scene graphs.", " This paper proposes an end-to-end paradigm to predict scene graphs from image inputs. The key observation of this paper is that assuming a fixed subject-object-predicate factorization for predicting relationships can be detrimental, as errors can accumulate in this one-directional flow of information. To alleviate this problem, the authors propose an iterative refinement process, where, although the factorization is still the same within a refinement step, information from previous steps can be utilized by later steps, thus allowing information to flow in all directions. A joint matching loss is proposed, using the same matching across all steps, to stabilize this refinement procedure. Comprehensive evaluations show that the proposed method achieves good performance and outperforms many prior works. ### Strengths\n+ The idea of using an iterative refinement procedure to alleviate the issue with a fixed factorization is novel, makes sense intuitively, and seems to work well in practice.\n+ Careful design choices for the aforementioned refinement procedure, including appropriate modification to the inputs of the architecture used (transformers), and a matching loss utilizing the same matching across all steps to ensure the stability of the iterative refinement process.\n+ Comprehensive evaluation and good empirical results, with good discussion of some of the nuances over prior works.\n\n### Weaknesses\n- Overall, I think there is not enough evidence supporting the central claim of the paper: that this iterative refinement process is important and avoids the problem with a fixed factorization:\n * It is unclear how the qualitative examples are chosen, and overall too few qualitative examples are shown to support claims that this process fixes the problem with a fixed factorization. More randomly drawn examples showing the same qualitative behavior would greatly increase my confidence in the usefulness of this process. I would also expect some results to demonstrate how the proposed method solves issues discussed in the paper, e.g., 
\"where an object entailed in interaction is small and not discernible on its own\" (L7).\n * Quantitative analysis over the output of each of the refinement step would help to show how the refinement process is actually improving the results. Ablations over the number of steps would also help here. I get that this is somewhat done in Supplementary Table A1, but that can also be due to the models having more parameters overall. I am looking more more fine grained analysis on what specific do the refinement process help the most in improving e.g. if the issue quoted above (L7) is addressed by the proposed method, that can probably be revealed by analysis showing what categories are improved the most over the steps. This currently does not appear to be the case to me since both recall and mean recall follows the same trend in table A1. \n * The fact that removing both CAS and CWS results in minimal performance drop seems to indicate that there isn't much actual ``refinement\" going on.\n - The claim that the proposed method can be used to improve other methods is not that well supported: although the performance is marginally better, it is unclear whether it's simply due to the modified model having more parameters or not.\n- Originality of the paper is pretty limited apartment from this (still questionable to me) procedure of refinement. The core of the method is still a decoder in the style if [2] and this has been attempted already in earlier literatures e.g. [30].\n- In general, I am not a fan of going into very fine details to show that a method outperforms prior/concurrent works e.g. 14-16, 21-22 in Table 1. Showing that the relative advantages of methods have to come down to such details, to me, is rather unnecessary. A good idea should be able to show its value with more specific evaluation methods / examples in addition to a holistic number about overall performance (see points above).\n\n###Post-rebuttal Comment\n\nI thank the authors for the clarifications regarding my concerns. It seems that I initially missed some evaluations, which did provide decent amount of evidence that there is actually a \"refinement\" process, and gives some level of intuition behind how this process works. Subsequently, I am raising my score from a 4 to a 6. I would encourage the authors to add more discussions around these points in the main paper, as well as providing more qualitative examples in the supplementary. As mentioned earlier, my main issue is with the (relatively) lack of analysis of the concrete benefits of the core idea of the paper. More evidence, in addition to \"better numbers overall\", that it is solving the issues with fixed factorizations will sway me towards accepting this paper.\n\nSome additional comments:\n- Might help to highlight the best/top performing entries in the larger tables. Very hard to parse the table as for now.\n- Would be interesting to visualize the qualitative behavior of the model without the joint matching loss. I suppose the model is not really doing refinement without that?\n- Following previous point: ablations of model with CAS and CWS, but without JL, would be nice to have. Limitations are discussed but those are not limitations specific to the central idea of this paper. 
More discussion on how to improve the iterative refinement procedure would be desirable here.\n\nThe authors are right in pointing out that the proposed work takes the right step towards alleviating some of the negative social impacts associated with the problem studied here.\n", " The paper proposes dynamic conditioning using a Markov Random Field for scene graph generation. It implements a transformer-based iterative refinement procedure. Conditional queries are used during the refinement to reduce the exponential search space. Three separate multi-layer decoders are used for the relation triplet and perform joint reasoning in a single-stage architecture. It achieves SOTA performance on the VG and AG datasets. Strength:\n1. The paper proposes a single-stage scene graph generation approach, with a novel transformer-based formulation which includes separate decoders for the relational triplet, and conditional queries.\n2. The paper shows that its results improve the VG and AG SOTA.\n3. The approach is tested on both videos and images.\n\nWeakness: \nDespite these strengths, the paper lacks a few fundamental points:\n1. IMHO this paper lacks a strong motivation; the authors should clearly mention in the introduction why the conditional query is needed, how it can be incorporated into the architecture, and why three decoders are needed in one stage. The story should be connected.\n2. What is the initial query? Is it the same as in DETR, or how is it processed, or are the initial queries only subject queries? (Sec 4.2)\n3. It has three separate decoder layers, and we know that the scene graph has exponential complexity; the authors should analyze that.\n\nAcknowledgement:\nIt would be better to acknowledge other single-stage papers, like Relationformer (Shit et al.) and RelTr (Cong et al.), and other earlier transformer-based papers like Seq2Seq (Lu et al.) and Relation Transformer Network (Koner et al.).\n\nSuggestion:\nThe paper has technical novelty; please write it in a way that makes it easy to follow and understand your contribution.\n 1. How have you incorporated the method for video, using some tracking? Please mention this.\n2. Please provide a clear motivation for the three decoders and why they are necessary despite their complexity. \n\n yes.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4 ]
[ "7NniQZoTMHz", "7NniQZoTMHz", "5-7_XAscQ3n", "Gzl4gPop8rS", "Zhh3HvpfK-S", "ITGhrDfGp27", "MhCB75CxxnW", "GX2JqQy0los", "cixcvBtLei", "nips_2022_i0FnLiIRj6U", "nips_2022_i0FnLiIRj6U", "nips_2022_i0FnLiIRj6U" ]
nips_2022_4btNeXKFAQ
Low-rank Optimal Transport: Approximation, Statistics and Debiasing
The matching principles behind optimal transport (OT) play an increasingly important role in machine learning, a trend which can be observed when OT is used to disambiguate datasets in applications (e.g. single-cell genomics) or used to improve more complex methods (e.g. balanced attention in transformers or self-supervised learning). To scale to more challenging problems, there is a growing consensus that OT requires solvers that can operate on millions, not thousands, of points. The low-rank optimal transport (LOT) approach advocated in \cite{scetbon2021lowrank} holds several promises in that regard, and was shown to complement more established entropic regularization approaches, being able to insert itself in more complex pipelines, such as quadratic OT. LOT restricts the search for low-cost couplings to those that have a low-nonnegative rank, yielding linear time algorithms in cases of interest. However, these promises can only be fulfilled if the LOT approach is seen as a legitimate contender to entropic regularization when compared on properties of interest, where the scorecard typically includes theoretical properties (statistical complexity and relation to other methods) or practical aspects (debiasing, hyperparameter tuning, initialization). We target each of these areas in this paper in order to cement the impact of low-rank approaches in computational OT.
Accept
Overall: The paper focuses on advancing our knowledge, understanding, and practical ability to leverage low-rank factorizations in optimal transport. Reviews: The paper received four reviews: 4 accepts (all confident). It seems that several reviewers will champion the paper for publication. The reviewers found the paper clear, with a clean presentation, and the findings interesting. The authors have provided extensive answers to the reviewers' comments, addressing most of them successfully. After rebuttal: A subset of the reviewers reached a consensus that the paper should be accepted. Confidence of reviews: Overall, the reviewers are confident. We put more weight on the reviews of those who engaged in the rebuttal discussion period.
train
[ "D63rmfOOMT", "L6raeKsMQSw", "ZzjlhR8TM4g", "CoRsCEOvwXY", "sNTMWKWaeqI", "_madUWpFsik", "1lX4wL7BZVf", "Q4QfSYcgBwU", "bylzDOqdX8P", "-dmg1DeKZYv", "CSGcaIa5UAQ", "5NeA860eDt", "Jxho_pci88", "NNhoScgngon", "iy1NtmBpI-" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer:\n\nMany thanks for your appreciation and supporting comments. Thank you very much for your suggestions. What follows is a simple and fairly open-ended response on the items you have raised:\n\nOn item (1): this is indeed an important subject. One might hope to start from the simplest possible cases in which LOT is shown to provably converge to the global optimum given a fairly simple initializer.\n\nOn item (2): if we understand your question correctly, we feel this is a fairly general question on embeddability of general metrics in Euclidean spaces that goes beyond, in our opinion, LOT's study. This does, however, of course play a very important practical role when using LOT beyond the $p=2$ - Wasserstein regime, and is tightly related to item 3:\n\nOn item (3): those are extremely important questions indeed for LOT applications beyond the $p=2$ - Wasserstein regime. We have collaborated with people that are interested in using LOT to achieve larger scale computations, and who are tempted to use not only the case $p\\ne 2$ but, as you mention, more advanced, composite distance functions. We do not have a clear picture yet of whether these settings are harder to optimize, and, if any difficulty arises, whether it comes from the LOT optimization itself, or from the fact that such distances are harder to approximate with low-rank structures. Such geodesic distances are known to be more robust to outliers (notably when using a NN graph) and so it would be desirable to achieve (and understand) a better performance in such settings.", " We thank the reviewer for having taken the time to read our rebuttal, and for your reply.\n\nWe do agree with your original assessment, in your first review, and the more recent one above. This is why, as mentioned above, we have replaced the original sentence in the paper:\n\n*“This result shows that the estimation of $LOT_{r,c}$ is independent of the dimension and can be performed on general compact metric spaces.“*\n\nby\n\n*\"This result is, to the best of our knowledge, the first attempt at providing a statistical control of low-rank optimal transport. We provide an upper-bound of the plug-in estimator which converges towards $LOT_{r,c}$ at a parametric rate and which is independent of the dimension on general compact metric spaces. While we fall short of providing a lower bound that could match that upper bound, and therefore provide a complete statistical complexity result, we believe this result might provide a first explanation on why, in practice, $LOT_{r,c}$ displays experimentally better statistical properties than unregularized OT and its curse of dimensionality [Dudley, 1969] etc..\"*\n\nWe do feel this is a significantly more nuanced formulation, which, while underlining that this is a first result towards a better statistical understanding, mentions that only half the job is done. Is there anything in that formulation that you feel is ambiguous?\n\nIf no, you mentioned **The way it (statistical complexity) is phrased in the paper is very unclear, actually not at all present.**. \n\nWhile we have corrected the part mentioned above, it is very likely, given the short time span of the rebuttal, that some other parts of the paper use the former, more forceful formulation (notably abstract or statement of contributions that we have not corrected yet). 
If you are referring to such parts to make the call above, rest assured that we will correct *all* references to statistical complexity, and introduce the nuance above, which reflects our current thinking.\n", " Dear authors,\n\nThank you for the very detailed response. This has addressed most of my concerns and I'm raising my score.\n\nIt seems to me there are two important and interesting theoretical questions left unanswered that are suitable for future work: 1) Under what conditions do spurious local minima not exist in the LOT formulation? 2) When does the distance matrix of the data have low-rank structure? For 2), if data are indeed supported on a low-dimensional manifold, then should we use the ambient distance or the geodesic distance on the manifold (which is hard to obtain)? If using the ambient distance, would it still be low-rank?", " I thank the authors for their explanations. I do think the upper-bound on the statistical complexity is a nice contribution, as well as the rest of the paper. Still, I think a clarification is needed. \n\nIt is sometimes possible to design estimators that over-estimate or under-estimate quantities of interest. It can be the case that the best lower-bound that can be found is cursed with the dimension, like $1/n^{1/d}$ (although I tend to think that a parametric rate is more likely to occur). As a consequence, there is no concrete result on the statistical complexity of LOT in this paper due to this lack of lower bound.\n\nThe way it is phrased in the paper is very unclear, actually not at all present. What do you think?", " > The example in Figure 2 is too simple. Swiss roll or two moons in figure 3.2 would be more convincing.\n\n&#8594; We have followed the suggestion made by the reviewer and added an additional experiment in the Supp. Mat. (please refer to sec. C.2, Fig. 6) comparing the GF of LOT and DLOT between two moons. We will add this new experiment to the main text of our final version.\n\n> Figure 2: at which stage of the gradient flow are the middle two plots of Figure 2? They seem to be in a very late stage of convergence, but a weird phenomenon is that the gradient flows of both DLOT and LOT first overshoot the target distribution and then come back. Especially when looking at those green arrows, they first point outside the moon, then inside the moon. I think if you solve the gradient flow correctly, it will not have this \"overshoot first and then pull back\" behavior.\n\n&#8594; We run a GD scheme for 200 iterations and plot in the middle the states after 50 and 100 iterations. We will make this precise in the final version. We also thank the reviewer for pointing out this observation on the GF: we have corrected our GF by considering a smaller step-size in the GD scheme and we have replaced the figure in our main text (please refer to Fig. 2 of the new submission).\n\n> Typo: \nrow 142 delete \"in\"\nequation (6) is referenced as (16) throughout\n\n&#8594; We thank the reviewer for pointing out these typos, which have been corrected for the final version.\n \n", " > Figure 4: what is the x-axis \"operations\"? \n\n&#8594; The x-axis corresponds to the total number of algebraic operations. This number is computed at each iteration of the outer loop of the algorithm proposed in Scetbon et al. [2021] and is obtained by counting the complexity of all the operations the algorithm performs to reach that iteration. We consider this notion of time instead of CPU/GPU time as we do not want to be architecture/machine dependent. 
We have made this precise in the final version.\n\n> Figure 4: why do some curves not start at 0 on the x-axis?\n\n&#8594; Some curves do not start at 0 because we start plotting the curves after obtaining the initial point, which in some cases requires more algebraic operations (e.g. kmeans methods). We have added an explanation for the final version.\n\n> Figure 4: what is the takeaway message from the right figure?\n\n&#8594; The right figure of Fig. 4 shows two main observations: (i) the initial point obtained using a “rank 2” or random initialization can be close to spurious and non-attractive local minima, which may trigger the stopping criterion too early and prevent the algorithm from continuing to run in order to converge towards an attractive and well-behaved local minimum; (ii) when initializing the algorithm using kmeans methods, we show that our stopping criterion is a decreasing function of time, meaning that the algorithm converges directly towards the desired solution. We have clarified these two points in our new submission.\n\n> Line 99: \"obtain next a control the approximation\", missing \"of\"?\nLine 187, Line 192: (15) does not exist. Do you mean (6)?\nLine 252: (15) should be (6)?\nLine 238: \"we do not have access to the this\", no \"the\" here\n\n&#8594; We thank the reviewer for pointing out these typos, which have been corrected for the final version. (Indeed, Eq. (15) in the main text refers to Eq. (6).)\n", " > The experiments section is short but verifies part of the theory. There are some missing experiments justifying the adaptive choice of $\\gamma_k$ versus no adaptation. \n\n&#8594; We agree with the reviewer and we have provided an additional experiment in the Supp. Mat. (please refer to sec. C.1, Fig. 5) showing the benefit of the adaptive step-size in practice. We will add this experiment to the main text of the final version.\n\n> Parts of the figures and captions could be improved --- see detailed comments below.\n\n&#8594; We have followed the suggestions of the reviewer and changed the figures accordingly.\n\n> A central question I have regarding the practicality of LOT: Is the computational benefit of LOT worth the introduction of nonconvexity and spurious local minima?\n\n&#8594; This is indeed the point we have tried to make in this paper. In practice, our experiments suggest (as is often the case for factorized approaches) that only global minima (or at least local minima with a transportation cost very close to the optimal one) are attractive, and therefore the non-convexity of the objective does not seem to be an obstacle here. Indeed, in Fig. 4, we show that regardless of the initialization considered, the algorithm converges toward the same value. Therefore, if we were able to initialize the algorithm close to the global minimum, we would also converge towards this value, meaning that the value obtained is at least very close to the optimal one. Moreover, the experiments in Figs. 1-3 illustrate the above statement as well. In Fig. 1, we observe that our statistic (computed using the algorithm proposed in Scetbon et al. [2021]) converges towards 0 according to the theoretical rates obtained. In Fig. 2, we recover the target distribution, meaning that we correctly minimize DLOT (which requires having access to a meaningful gradient of DLOT computed by solving the LOT problems involved in DLOT). Finally, we observe in Fig. 
3 (top row) that we recover the same partition as the one obtained by kmeans on various clustering problems.\n\n> I would hope to see more experiments (at least empirically) demonstrating the benefits gained by low-rank approximation and advice on which r to choose. It seems to me that LOT is only efficient when the ground cost matrix admits a low-rank factorization. In what applications is such a condition met?\n\n&#8594; Our goal here is to bring a clearer explanation of the effect of this new regularization on the OT problem, and our contributions are mostly theoretical ones. We also want to recall that the goal of such regularization is not to approximate the true OT cost from samples, which is an unsolvable problem in high dimension, but rather, as with the entropic approach, to obtain a meaningful quantity able to compare distributions in the finite-sample regime, even in high dimensions. Indeed, recall that when $r=1$, DLOT is exactly the Maximum Mean Discrepancy (which is already a widely used metric in ML) and increasing $r$ allows one to capture sharper information about the geometry of the problem instead of considering the “flat” geometry induced by the MMD. The higher the rank is, the more information about the geometry of the problem one gets, yet, at the same time, the more degraded the estimation becomes as a result. Therefore, the rank $r$ introduces (much like $\\varepsilon$ in entropic OT) a tradeoff, and given a certain precision $\\delta$ and a number of samples $n$, the rank $r$ should be chosen as large as possible such that $\\sqrt{r/n}\\leq \\delta$.
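\n\nIn code, this selection rule is a one-liner (illustrative sketch only):\n\n```python\ndef largest_admissible_rank(n_samples: int, delta: float) -> int:\n    # Largest r with sqrt(r / n) <= delta, i.e. r <= delta**2 * n.\n    return max(1, int(delta**2 * n_samples))\n```\n\nFor instance, $n=10^4$ samples with a target precision $\\delta=0.1$ allow ranks up to $r=100$.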
\n\nNote that when the data admits a low-rank structure (meaning that the ground cost matrix is low-rank), it seems empirically that one does not need to choose a rank higher than this intrinsic dimension of the data. This observation deserves more work and we think that it is out of the scope of this paper. In addition, low-rank cost matrices may appear in various settings, especially when data are supported on a low-dimensional manifold with $d \\ll n$, where $d$ is the dimension of the manifold and $n$ is the number of samples. A classical illustration of this situation is when the cost considered is the squared Euclidean distance on $\\mathbb{R}^d$, for which we have an exact low-rank factorization assuming that $d\\ll n$.\n\n> Line 126: sample complexity shows promise since it does not depend on dimension, but wouldn't $||c||_\\infty$ in Proposition 4 depend in the sense that in many applications the diameter of $\\mathcal{X}$ could increase exponentially in $d$? Also as discussed in the paragraph below Proposition 4, $K_r$ could go to infinity.\n\n&#8594; We agree that the diameter may become larger as we increase the dimension $d$ in some cases. However, our upper bound does not show any dependence on the dimension through either the regularization parameter $r$ or, most importantly, the number of samples $n$. \n\n> Figure 1: which $r$ is used for the upper bound curve?\n\n&#8594; We plot the upper bound using $r=1$. We have made this precise in our new submission.", " > Several references of equations are mislabeled. E.g., in lines 187, 192, and 252, the referenced equations in the main text are (15) while in the supplements they are (16). Also, there is no eq. (15) in the main text. Please fix and make them consistent.\n\n&#8594; We thank the reviewer for pointing out these annotation errors. Lines 187, 192, and 252 refer to Eq. (6) and we have corrected them in our new submission.\n\n> [1] Line 141-142: \"...one obtained in Proposition in 4...\", please remove a redundant \"in\".\n[2] Line 244: A typo: \"...we show the iterates obtained by a gradient descent...\", should iterates be \"iterations\"? [3] Eq. (6): \"$Diag$\" should be \"diag\" instead.\n\n&#8594; We thank the reviewer for pointing out these typos, which we have corrected in our new submission.\n", " > The naming of the variables and the equation reference labels are confusing; as a result, it is sometimes hard to follow. The clarity of this paper could be improved and the notations could be used in a more consistent way so the readers can understand the meanings of the plots and equations with less effort. See questions below for more details.\n\n&#8594; We have followed the suggestions made by the reviewer and clarified the notations as well as the referencing of the equations used in the paper.\n\n> The adaptive step size improves the convergence. The authors also suggest clipping the step size in the range of [1, 10] for most use cases. Could an explanation or evidence be provided to support this choice?\n\n&#8594; The main issue when choosing a constant step size is that the range of admissible $\\gamma$ such that the algorithm converges depends on the problem considered. Indeed, for some problems we have encountered in practice, the algorithm might fail to converge for large $\\gamma$. This is because the algorithm of Scetbon et al. [2021, Alg. 3] requires solving Eq. (7) at each iteration, which involves some kernels (defined in l.198, 199, and 200 of the old version). These kernels depend on both $\\gamma$ and the current couplings. If their product (e.g. $\\gamma CR_k diag(1/g_k)$) has large values, then taking $\\exp(-\\gamma CR_k diag(1/g_k))$ will result in a kernel with some zero entries (as is often the case for the Sinkhorn algorithm with low regularization). This is a real issue when solving (7) using Dykstra’s algorithm as proposed in Scetbon et al. [2021, Alg. 2], which divides (as the Sinkhorn algorithm does) quantities by these kernels: if one of these kernels has $\\sim 0$ entries, this may result in a “division by 0” overflow error. Therefore $\\gamma$ must be chosen such that at each iteration $\\gamma CR_k diag(1/g_k)$ has reasonable entries.\n\nIn order to alleviate this issue and obtain a generic range of admissible values for $\\gamma$ independently of the problem considered, we propose to use an adaptive step-size. By doing so, we are able to guarantee a lower bound on the exponential terms involved in the expression of the kernels at each iteration. Indeed, recall that our adaptive step-size is defined at each iteration as follows (Eq. (8) in the paper):\n$$\\gamma_k = \\gamma / \\Vert (CR_k diag(1/g_k), C^TQ_k diag(1/g_k), -\\omega_k / g_k^2) \\Vert_{\\infty}$$\n\nwhere $\\gamma$ is constant along the iterations. 
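\n\nFor concreteness, this rule can be sketched in a few lines of NumPy. This is only an illustration of Eq. (8) with the notation above (here $\\omega_k$ is taken to be the diagonal of $Q_k^T C R_k$, our reading of the gradient in $g$; see the paper for the precise definition), not the actual implementation:\n\n```python\nimport numpy as np\n\ndef adaptive_step(C, Q, R, g, gamma=5.0):\n    # Gradient blocks entering the kernels of Eq. (7).\n    grad_Q = (C @ R) / g              # C R_k diag(1/g_k), shape n x r\n    grad_R = (C.T @ Q) / g            # C^T Q_k diag(1/g_k), shape m x r\n    omega = np.diag(Q.T @ C @ R)      # omega_k, length r (assumed reading)\n    grad_g = -omega / g**2\n    # Rescale so every entry of gamma_k * gradient is at most gamma in magnitude,\n    # hence every kernel entry exp(-gamma_k * grad) is at least exp(-gamma).\n    sup = max(np.abs(grad_Q).max(), np.abs(grad_R).max(), np.abs(grad_g).max())\n    return gamma / sup\n```\n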
Then at each iteration we can guarantee that: \n$$ 0 \\leq \\Vert \\gamma_k CR_k diag(1/g_k)\\Vert_{\\infty}, \\Vert \\gamma_k C^TQ_k diag(1/g_k)\\Vert_{\\infty},\\Vert \\gamma_k \\omega_k / g_k^2 \\Vert_{\\infty} \\leq \\gamma$$\n\nmeaning that coordinatewise we obtain\n\n$$ \\exp(-\\gamma_k CR_k diag(1/g_k)), \\exp(- \\gamma_k C^TQ_k diag(1/g_k)), \\exp(\\gamma_k \\omega_k / g_k^2) \\geq \\exp(-\\gamma)\\; .$$ \n\nBy fixing the range of $\\gamma$ to be $[1,10]$, we can now guarantee that whatever the problem considered, the exponential terms involved in the kernels do not admit values smaller than $\\exp(-10)$.\nWe observe empirically that this is sufficient to perform all the operations of Dykstra’s algorithm solving Eq. (7) and to obtain convergence. We have clarified this point in our new submission.\n\n\nWe also have added a new experiment in the Supp. Mat. (please refer to sec. C.1, Fig. 5) demonstrating the benefit of using such an adaptive step-size and showing that the range of admissible $\\gamma$ may vary from one problem to another if we consider a fixed $\\gamma$ schedule. We will add it to the main text for the final version.\nNote also that the convergence of the MD scheme using such an adaptive step-size is theoretically justified (see the results of D’Orazio et al. [2021] and Bayandina et al. [2018]).\n\n> In Fig. 1, the notations are confusing. Could the authors choose the variable names more carefully so they are consistent with previous sections? E.g., $n$ is the number of samples here and in a few places, but somewhere else $n$ represents the dimension. Also, is the dimension $d$ the same as stated in line 203 or is it the dimension of the marginal? \n\n&#8594; We have followed the suggestions made by the reviewer. More precisely, we always use $n$ for the number of samples and $d$ for the dimension of the space on which the measures are supported. We also have changed the notation for the rank associated with the cost matrix (l.205 of the new submission) and denoted it $q$. Note that for the squared Euclidean distance, $q=d+2$. \n\n> In Fig. 1, it seems that the DLOT values for larger $r$ are higher among all $d$ cases. Is there an intuition or explanation?\n\n&#8594; Indeed, this observation was expected according to the rates obtained. We show that the rates should scale as $\\sqrt{r/n}$; therefore, the higher the rank, the slower it should converge. We have made this precise in our new submission.", " > Propositions 4 and 5 are about a one-sided inequality, i.e., there is no absolute value on the left-hand side of the equation in Prop. 4, nor in the equation in Prop. 5. Note that a complete result on statistical estimation is really about both lower and upper bounds. However, the authors only give an upper bound, as written on line 123. Although I think it is true, I do not see how to get a lower bound. It is likely that I have missed a result in the paper showing that it is a simple consequence, and it was maybe obvious for the authors, and it would be meaningful to include it. Can the authors clarify their result? \n\n&#8594; You are absolutely right: we did not manage to lower bound the plug-in estimator using the true LOT. This result requires additional work and, for now, we do not see how to achieve it. Our proof relies on a specific construction of an admissible and random coupling between the empirical distributions (introduced l.459 of the old version) whose cost converges at a rate $\\mathcal{O}(\\sqrt{r/n})$ to the true LOT. 
Indeed, we obtain a control of the difference in absolute value between the true LOT cost and the cost associated with this random coupling. The main issue is that we did not manage to control the error between the cost associated with this coupling and the LOT cost between the empirical measures. Therefore we exploit optimality in order to control at least one side of the difference. Using this technique, we are able to provide an upper bound on the plug-in estimator of LOT which converges at a parametric rate towards the true LOT, independently of the dimension. To the best of our knowledge, it is the first time that such a control has been obtained. The closest current result presented in the literature concerns the statistical control of the transportation cost between a fixed and arbitrary measure with a support of size $r$ and the empirical measure associated with a target probability measure, for the specific case of the quadratic cost [Forrow et al., 2019, Theorem 4].\n\n> As a side remark, it is rather borderline practice that the authors pretend to have a complete result on statistical complexity. Indeed, they write after Proposition 4: \"This result shows that the estimation of LOTr,c is independent of the dimension and can be performed on general compact metric spaces.\" However, the result in its current form is only partial and thus one cannot claim anything on statistical estimation. \n\n&#8594; Although we are upfront about the fact (e.g. l.123) that we obtain an upper bound, we understand the concern of the reviewer, and we have clarified this further in our new submission. More precisely, we have replaced the sentence:\n“This result shows that the estimation of $LOT_{r,c}$ is independent of the dimension and can be performed on general compact metric spaces.“\n\nby\n\n\"This result is, to the best of our knowledge, the first attempt at providing a statistical control of low-rank optimal transport. We provide an upper-bound of the plug-in estimator which converges towards $LOT_{r,c}$ at a parametric rate and which is independent of the dimension on general compact metric spaces. While we fall short of providing a lower bound that could match that upper bound, and therefore provide a complete statistical complexity result, we believe this result might provide a first explanation on why, in practice, $LOT_{r,c}$ displays experimentally better statistical properties than unregularized OT and its curse of dimensionality [Dudley, 1969] etc..\"\n\nIn addition, in order to avoid any confusion, we have clearly stated in the new submission our results presented in Propositions 4 and 5 as follows:\n$LOT_{r,c}(\\hat{\\mu},\\hat{\\nu}) \\leq LOT_{r,c}(\\mu,\\nu) + \\mathrm{rate}$.\n\n> Is it possible to add the following result: if $\\mu_n\\rightarrow \\mu$ for the weak-* topology then $LOT(\\mu_n)\\rightarrow LOT(\\mu)$? \n\n&#8594; In fact, we show the mentioned result in a Proposition presented in the Supp. Mat. (l.550 of the old version). We have moved it into the main text of our new submission. \n\n> proof of proposition 1: the decomposition of $\\pi$ at line 423 in the supplementary material (btw, there is a typo there) should be explained a bit more. Is it a standard SVD?\n\n&#8594; Concerning the proof of Prop. 1, we have added more explanation in order to make it clearer. In fact, it is not the SVD, as we require that $(q_i, r_i)_{i=1}^n$ are nonnegative and sum to 1. We obtain such a factorization by simply using the fact that the nonnegative rank of a nonnegative matrix of size $n\\times m$ cannot exceed $\\min(n,m)$. 
\n\n> typos in the supplementary material can be corrected, line 480, line 494.\n\n&#8594; We thank the reviewer for pointing out these typos which we have corrected for the final version.\n", " We thank the reviewers for their thorough reading of our work. We have used their remarks to improve our draft. Please refer to the new submission where the modifications have been marked in blue. We also respond to each of the reviewers below. ", " This papers is concerned with a model that approximates optimal transport (OT) using the low-rank coupling/matrices. This model in itself has been proposed a couple of years ago and this paper aims at answering important theoretical questions such as approximation error with respect to standard OT and statistical rates of estimation. They introduce a \"debiased\" version of their estimation, similar to the one proposed for entropic OT and make the link with clustering methods. Their theoretical work is also complemented with additional tricks for improving the numerical efficiency of the method.\n Although one can argue about the usefulness of the model studied by the authors, it is a good paper by its many theoretical contributions exploring the foundations of low-rank approximation of OT. Results are ranging from obvious, easy to non-trivial and the paper will be a reference for other research developments around this model.\nStrengths:\n- Paper is well written.\n- Several meaningful theoretical results.\n- So far I checked, the proofs are correct (I did not check proof of proposition 5).\n\nWeakness: \n- See my question below: Authors must address my question below, if not I'll revise my rating accordingly. My main concern is about the results for statistical estimation. \n\nPropositions 4 and 5 is about a one-sided inequality, i.e. there is no absolute value in the left-hand side of Equation in Prop. 4, neither in Equation in Prop. 5. Note that a complete result on statistical estimation is really about both lower and upper bounds. However, the authors only give an upper bound, as written line 123.\nAlthough I think it is true, I do not see how to get a lower bound. It is likely I may have missed a result in the paper showing that it is a simple consequence, and it was maybe obvious for the authors, and it would be meaningful to include it.\n\nCan the authors clarify their result? \n\nAs a side remark, it is rather borderline practice that the authors pretend to have a complete result on statistical complexity. \nIndeed they write after proposition 4: \"This result shows that the estimation of LOTr,c is independent of the dimension and can be performed on general compact metric spaces.\"\nHowever the result in its current form is only partial and thus one cannot claim anything on statistical estimation. Indeed, the fluctuations of the opposite quantity may be much larger and dependent on the dimension. This can happen in practice. \n\nSo what am I misunderstanding here?\n\nOthers: \n- Is it possible to add the following result. If $\\mu_n \\to \\mu$ for the weak-* topology then $LOT(\\mu_n) \\to LOT(\\mu)$? \n- proof of proposition 1: the decomposition of pi line 423 in supplementary (btw, there is a typo there) should be explained a bit more. Is it a standard SVD?\n- typos in the supplementary material can be corrected, line 480, line 494.\n no particular comments.", " The optimal transport (OT) is becoming more and more prominent in machine learning field, however, traditional algorithm such as the linear program has a slow computational speed. 
In the last decade, entropy-regularized OT (EOT) was proposed and the speed was improved considerably. This work studies low-rank OT (LOT), an algorithm proposed by Scetbon $\\textit{et al.}$ [2021] that has a promising linear time complexity obtained by searching for low-cost couplings with low nonnegative ranks. The rate of convergence and a dimension-independent upper bound on the sample complexity are provided. Furthermore, a debiased version of LOT (DLOT) is proposed, and the debiasing terms connect LOT to clustering methods. To improve the computational performance, an adaptive step size and better initializations are introduced, and their effectiveness is empirically verified by experiments. Strengths:\nThis paper extends the LOT work by Scetbon $\\textit{et al.}$ [2021] and studies the theoretical and practical properties of LOT deeply in several aspects. Such a complete investigation of an algorithm is essential for bringing a new member into the computational OT family. This work also proposes interesting ideas such as linking the low-rank transport bias to the clustering method, which may inspire other applications and benefit the machine learning community.\n\nWeaknesses:\nThe naming of the variables and the equation reference labels are confusing; as a result, it is sometimes hard to follow. The clarity of this paper could be improved and the notations could be used in a more consistent way so the readers can understand the meanings of the plots and equations with less effort. See questions below for more details. 1. The adaptive step size improves the convergence. The authors also suggest clipping the step size in the range of [1, 10] for most use cases. Could an explanation or evidence be provided to support this choice?\n\n2. In Fig. 1, the notations are confusing. Could the authors choose the variable names more carefully so they are consistent with previous sections? E.g., $n$ is the number of samples here and in a few places, but somewhere else $n$ represents the dimension. Also, is the dimension $d$ the same as stated in line 203 or is it the dimension of the marginal? \n\n3. In Fig. 1, it seems that the DLOT values for larger $r$ are higher among all $d$ cases. Is there an intuition or explanation?\n\n4. Several references of equations are mislabeled. E.g., in lines 187, 192, and 252, the referenced equations in the main text are (15) while in the supplements they are (16). Also, there is no eq. (15) in the main text. Please fix and make them consistent.\n\nMinor errors: \n[1] Line 141-142: \"...one obtained in Proposition in 4...\", please remove a redundant \"in\". \n[2] Line 244: A typo: \"...we show the iterates obtained by a gradient descent...\", should iterates be \"iterations\"?\n[3] Eq. (6): \"$Diag$\" should be \"diag\" instead. There is no negative societal impact. The authors address the limitations.", " This work advances the theory of low-rank factorizations for OT by studying the approximation error as a function of rank and the sample complexity of LOT. It additionally proposes the debiased formulation, DLOT, which is shown to interpolate between MMD and OT and to metrize weak convergence. An additional connection to clustering is drawn, and better practices such as adaptive step sizes and better initializations are suggested. Experiments are done to support the claims on 2D synthetic examples as well as on the Newsgroup20 dataset.\n This paper is well-written and easy to follow. 
Although the contributions are a bit all over the place regarding LOT, they are clearly stated and adequately justified.\n\nOn the theory side, the paper provides comprehensive bounds for the approximation error and sample complexity while improving the bounds from previous results (e.g. Liu et al. 2021). Then the debiased formulation of LOT is shown to exhibit desirable properties similar to the Sinkhorn divergence. The connection to clustering is interesting since it is specific to the low-rank approximation, something that full-rank versions cannot do. While I do not find any of the results surprising or groundbreaking, they are solid and much needed for future research on LOT.\n\nThe experiments section is short but verifies part of the theory. There are some missing experiments justifying the adaptive choice of $\\gamma_k$ versus no adaptation. Parts of the figures and captions could be improved --- see detailed comments below.\n\nA central question I have regarding the practicality of LOT: Is the computational benefit of LOT worth the introduction of nonconvexity and spurious local minima? I would hope to see more experiments (at least empirically) demonstrating the benefits gained by low-rank approximation and advice on which $r$ to choose. It seems to me that LOT is only efficient when the ground cost matrix admits a low-rank factorization. In what applications is such a condition met?\n\nComments:\n- Line 99: \"obtain next a control the approximation\", missing \"of\"?\n- Line 126: sample complexity shows promise since it does not depend on dimension, but wouldn't $||c||_\\infty$ in Proposition 4 depend in the sense that in many applications the diameter of $\\mathcal{X}$ could increase exponentially in $d$? Also as discussed in the paragraph below Proposition 4, $K_r$ could go to infinity.\n- Line 187, Line 192: (15) does not exist. Do you mean (6)?\n- Line 238: \"we do not have access to the this\", no \"the\" here\n- Line 252: (15) should be (6)?\n- Figure 1: which $r$ is used for the upper bound curve?\n- Figure 4: what is the x-axis \"operations\"? Why do some curves not start at 0 on the x-axis?\n- Figure 4: what is the takeaway message from the right figure?\n\n Please refer to my questions in the \"Strengths And Weaknesses\" section. The authors have addressed the limitations and future directions to take to advance LOT. Societal impact is not discussed but I don't think it's needed.", " This paper provides an enormous amount of theoretical analysis of low-rank OT: the convergence rate of low-rank OT to the true OT with respect to the rank parameter and the sample complexity for estimating LOT; it introduces a debiased version of LOT which metrizes weak convergence and bridges LOT with clustering methods. Practically, they propose a novel initialization to avoid bad local minima. Strengths:\n\nThis paper provides a rigorous theoretical analysis of low-rank OT: \n\n- the convergence rate of low-rank OT to the true OT with respect to the rank parameter; \n\n- the sample complexity for estimating LOT is dimension-independent; \n\n- it introduces a debiased version of LOT which metrizes weak convergence;\n\n- LOT$(\\mu,\\mu)$ can be seen as a generalization of the k-means method;\n\n- practically, they also propose a novel initialization to avoid bad local minima.\n\nWeakness:\n\n- The example in Figure 2 is too simple. Swiss roll or two moons in figure 3.2 would be more convincing. Question:\n\nFigure 2: at which stage of the gradient flow are the middle two plots of Figure 2? 
They seem to be in a very late stage of convergence, but a weird phenomenon is that the gradient flows of both DLOT and LOT first overshoot the target distribution and then come back. Especially when looking at those green arrows, they first point outside the moon, then inside the moon. I think if you solve the gradient flow correctly, it will not have this \"overshoot first and then pull back\" behavior.\n\nTypo: \n\nrow 142 delete \"in\"\n\nequation (6) is referenced as (16) throughout.
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 3 ]
[ "ZzjlhR8TM4g", "CoRsCEOvwXY", "_madUWpFsik", "-dmg1DeKZYv", "iy1NtmBpI-", "1lX4wL7BZVf", "NNhoScgngon", "bylzDOqdX8P", "Jxho_pci88", "5NeA860eDt", "nips_2022_4btNeXKFAQ", "nips_2022_4btNeXKFAQ", "nips_2022_4btNeXKFAQ", "nips_2022_4btNeXKFAQ", "nips_2022_4btNeXKFAQ" ]
nips_2022_ZCGDqdK0zG
Fast Distance Oracles for Any Symmetric Norm
In the \\emph{Distance Oracle} problem, the goal is to preprocess $n$ vectors $x_1, x_2, \\cdots, x_n$ in a $d$-dimensional normed space $(\\mathbb{X}^d, \\| \\cdot \\|_l)$ into a cheap data structure, so that given a query vector $q \\in \\mathbb{X}^d$, all distances $\\| q - x_i \\|_l$ to the data points $\\{x_i\\}_{i\\in [n]}$ can be quickly approximated (faster than the trivial $\\sim nd$ query time). This primitive is a basic subroutine in machine learning, data mining and similarity search applications. In the case of $\\ell_p$ norms, the problem is well understood, and optimal data structures are known for most values of $p$. Our main contribution is a fast $(1\\pm \\varepsilon)$ distance oracle for \\emph{any symmetric} norm $\\|\\cdot\\|_l$. This class includes $\\ell_p$ norms and Orlicz norms as special cases, as well as other norms used in practice, e.g. top-$k$ norms, max-mixture and sum-mixture of $\\ell_p$ norms, small-support norms and the box-norm. We propose a novel data structure with $\\tilde{O}(n (d + \\mathrm{mmc}(l)^2 ) )$ preprocessing time and space, and $t_q = \\tilde{O}(d + n \\cdot \\mathrm{mmc}(l)^2)$ query time, where $\\mathrm{mmc}(l)$ is a complexity measure (modulus) of the symmetric norm under consideration. When $l = \\ell_{p}$, this runtime matches the aforementioned state-of-the-art oracles.
Accept
Reviewers found the problem, the results and the techniques (very) interesting. The main concerns were about the practicality of the results (esp. lack of experiments) and presentation (notably various typos). The presentation issues appear to be easily fixable with a careful pass over the paper. Ultimately, the positives significantly outweighed the negatives.
train
[ "GtBozmUobaK", "lOX45ej4g4Z", "8eCxxWO9jtL", "s2MZT6UvM6N", "JXbQAzPsz9i", "JDjim616G8e" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors try to explain why their contributions are meaningful and I can appreciate their results better now. I won't change my score due to the presentation issues I mentioned. I noticed that the other reviewers gave it a score of 2 regarding presentation, so overall this makes for a hard paper to read and the motivation could be improved. This is a top-tier conference, and papers published here should be high-quality in terms of the work and the presentation/motivation as well. Moreover, while I acknowledge the author's response stating that theirs is a theoretical contribution that develops \"novel algorithmic tools with provable guarantees, which *may* inspire practical heuristics for the problem in the future\" (emphasis is mine.) However, in my opinion, the paper still needs to show its value in practice. ", " We thank the reviewers for their time and their helpful inputs, and we will do our best to address all comments in the final version of our paper. \n\n* **(Reviewer HT4N)** The reviewer wonders *“whether people use arbitrary symmetric norms for which the DO problem wasn't understood?”.* \n There appears to be a misconception here – Our data structure provides faster distance estimation for a large class of well-studied and heavily-used symmetric norms beyond $L_p$ norms, most notably Orlicz norms (modeling sub-gaussian data), Top-$k$ norms and min/sum-mixtures of $L_p$ norms. All these symmetric norms are widely used in practice (see paper’s refs) and we believe that unifying them under a single (optimal) algorithm is valuable, both in theory and practice.\n\n* **(Reviewer HT4N)** Regarding the practicality of our data structure: We wish to emphasize that our paper targets theoretical contributions concerning the distance oracle problem, and the development of novel algorithmic tools with *provable* guarantees, which may inspire practical heuristics for the problem in the future. Our new techniques overcome several barriers in sketching distances, as elaborated in Section 2.\n\n* **(Reviewer oXwu)** The reviewer asserts that *“the problem statement is strange”*, as distance oracles are traditionally defined as data structures that quickly return the distance between any *pair* of points in $X$, whereas the problem being solved here asks to estimate the distance of an *arbitrary* point (possibly outside $X$) to a *subset* of points in $X$. \n This is true, however: \n - In the appendix, we design a data structure for the classical version (see subroutine EstPair()).\n - In $\\mathbb{R}^d$, the problem of estimating the normed distance between an arbitrary query point $y$ and a single point $x$ in the dataset $X$, is not interesting, since $\\sim d$ time is sufficient and necessary to merely read the query point $y$. By contrast, for a *subset* $S$ of points, this naïve algorithm yields $O(d\\cdot |S|)$ query time, whereas our query time is $\\sim O(d+|S|)$. We also stress that many learning applications require estimating distances between a single query point and *all* the dataset points, as explained in the introduction.\n\n* **(Reviewer oXwu)** *“Another issue is the repeating assumption that linearity is somehow central to the functioning of distance oracles. Perhaps it's essential to the current construction, but many oracles do not require linearity. I'm reminded of Indyk's hashing for lp using p-stables, where he takes the median over many computed distances. Oracles for non-vector spaces don't use linearity at all.”* \n\n Indeed, linearity is crucial for \n 1. 
reducing distance-sketching to *norm*-sketching, which is more amenable to the “layer-approximation” and heavy-hitter techniques we develop (see 1st paragraph of Sec 2). \n 2. linearity is key to handling distances between an arbitrary (new) query point and a point in $X$. \n\n Distance estimation in vector-spaces is the most popular setting both in theory and practice; however, we agree that the case of (nonlinear) metric spaces (e.g., graphs) is a very interesting open question. Our tools may be relevant for this case as well, via metric embeddings [Bourgain, Matousek].\n\n* **(Reviewer FHL4)** *“The proposed data structure takes time that is at least $\epsilon^{-9}$. Even for small error accuracies, e.g., $\epsilon= 0.001$, the running time tends to be astronomical… Can you explain why this is not a problem or suggest a way to reduce this running time to make it much more practical, considering that the similarity problem is highly important in machine learning and other fields as well?”*\n\n We agree that the polynomial dependence on the accuracy parameter is a drawback for the practicality of our data structure, though we did not attempt to optimize the dependence on $\epsilon$. Moreover, quadratic ($1/\epsilon^2$) dependence is generally inevitable for distance-preserving dimensionality-reduction (e.g., [1]). We wish to emphasize that our paper targets theoretical contributions concerning the distance oracle problem, by introducing new techniques which may inspire practical heuristics for the problem. Our new techniques overcome several barriers in sketching distances, as elaborated in Section 2.\n\n[1] Larsen, Kasper Green, and Jelani Nelson. \"The Johnson-Lindenstrauss Lemma Is Optimal for Linear Dimensionality Reduction.\" *43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016).* Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik, 2016.\n\n", " This paper addresses the problem of distance oracles for symmetric norms. In this context, the goal is to preprocess a dataset of $n$ input points $x_1,\ldots,x_n$ in some metric space, into a small-space data structure so that given a query vector $q$ and a subset $S \subseteq [n]$, one can quickly estimate all the distances $d(q, x_i)$. To this end, the author(s) design a new data structure to generate and manage sketches. With this data structure, the author(s) obtain a sublinear-time distance estimation algorithm to solve the DO problem. Strengths\nI think the data structure they design for linear sketching is interesting in its own right and deserves further study. In their open questions, they point out that the efficiency of their data structure depends on the concentration property of the symmetric norm and whether or not said dependency is necessary. \n\nWeaknesses\nIt is not clear to me how significant it is from a practical point of view to solve the DO problem for a generic symmetric norm. I realize the authors do mention that $\ell_p$ norms are symmetric norms, and that the DO problem is understood in this case. However, in practice, do people use arbitrary symmetric norms for which the DO problem wasn't understood? The theoretical motivation is clearer to me, but not the practical motivation. \n\nThe paper was also a bit hard for me to read. $\mbox{dst}_i$ did not seem to be defined anywhere, and even if it is obvious it should be defined for the sake of completeness. 
There are a couple of distracting typos (see the Questions section) and overall the exposition is a bit too cumbersome for me to fully follow the relevance of the results of the paper.\n\nThere are also no experiments in the paper. Some of the theoretical guarantees in Section 4 could be moved to the appendices to make space for empirical evidence of the theoretical claims. Q1: $||{\cdot}||_\mbox{sym}$ denotes a fixed but arbitrary symmetric norm, correct? I think this should be said somewhere before referring to it as \"the\" symmetric norm. \n\nQ2: On Line 83: What is $\mbox{dst}_i$? It did not seem to have been defined. Same on Line 121. Did I miss the definition?\n\nQ3: On Line 152, it says \"$\ell_2$-heavy-hitters.\" What does this mean?\n\nQ4: On lines 194 and 195, you mention $\epsilon$ and $\delta$ but they are not used in the following sentences. So would it be reasonable to not mention them in those lines to help the exposition?\n\nQ5: On Line 297, what is the order of the $\setminus$ and the $\cup$? Parentheses would be helpful. \n\n\nMinor typos\n\nLine 1: Please use $\ldots$ when listing elements and not $\cdots$\n\nLine 8: In the abstract, there should be a \"-\" in \"(1 + $\varepsilon$) distance\"\n\nLine 52: norm is repeated twice, \"norm norm\"\n\nLine 43: Add an extra space after the comma that follows the reference [BYJKS04]\n\nLine 63: Add : at the end of the line to make the rest a complete sentence\n\nLine 72: Do you mean \"a symmetric norm\" instead of \"the symmetric norm\" or \"a generic, but fixed, symmetric norm\"?\n\nLine 92: use $\ldots$ instead of $\cdots$ in the definition of $[n]$.\n\nLine 100: Do you mean the $\ell_2$-unit sphere?\n\nDefinition 1.2: Consider defining the median of a symmetric norm outside of Def. 1.2 (ideally before). This way, Def. 1.2 is only about mc\n\nLine 106: Add \"Maximum modulus of concentration\" to the definition to make it consistent with Def. 1.2\n\nLine 116: data is repeated twice\n\nLine 147: Use $\ldots$ instead of $\cdots$ in the definition of $\mathcal{L}(v)$\n\nLine 160: all is repeated twice \"all all\"\n The authors do address the limitations of their work in specific sections of the paper. I think they do a very good job in this case. \n\nThey don't address potential negative societal impact, but as they point out in the check-list, it is theoretical work and does not seem to have any explicit negative societal impact. ", " The paper considers the distance oracle problem, where one has a database X of points, and given a query point, one must compute the approximate distances from the query to all points of some subset of X. The authors consider this problem in the very general setting of symmetric norms, norms where the ordering of the coordinates is unimportant, and taking the absolute value of the coordinates does not alter the norm.\n\nThere are several interesting techniques presented in the paper. The paper builds on the well-known \"layered approximation\" technique, but optimizes it for the current problem by reducing the number of layers, ensuring linearity, and reducing the decoding time. The overview of these techniques is informative. Strengths: The problem is interesting, as are the techniques employed and the achieved result.\n\nWeaknesses: Needs a few more rounds of proofreading, and the presentation needs to be improved. 
\n\nThere's too little detail in the body of the paper, and one has to look at the appendix to understand anything about the construction.\n\nThe problem statement is strange. Generally, distance oracles are defined on the input set X, to quickly return the distances between any two points in X (i.e. in less than d time). The problem being solved here is a different version of that, where there is a query point not in X, but the authors should not present this as the only (or in fact even the main) meaning of the term distance oracle. \n\nAnother issue is the repeating assumption that linearity is somehow central to the functioning of distance oracles. Perhaps it's essential to the current construction, but many oracles do not require linearity. I'm reminded of Indyk's hashing for lp using p-stables, where he takes the median over many computed distances. Oracles for non-vector spaces don't use linearity at all.\n\nMinor comments:\nThere are some informalities (\"cheap\", \"a bunch\") that are better avoided.\np2l52 norm norm\np2l65 n|S| should be d|S|\np2l72 \"need to design\" does not belong in a definition of a structure\np2l83 dst_i isn't defined yet\np3l105 How many 0's are there?\np3 table for the k-support norm, shouldn't there be a dependence on k?\np4l119 it's a 1+epsilon approximation, not epsilon\np4l133 The problem under consideration (recovering the norm) should be mentioned in the beginning of the paragraph. \np5l11 What's t?\np6 Both algorithms have the same name None N/A", " The authors present a data structure, for a given set of $n$ vectors in a $d$-dimensional metric space and a query vector, that can quickly approximate the distances between the query vector and any vector in the data structure.\n\t\nThe data structure can manage any symmetric norm, hence it is not limited to $\ell_p$ norms. For $\ell_p$ norms the data structure is as efficient as standard oracles, but it can be applied to more norms. The paper is well written and structured. Giving an intuitive version of the main results and used techniques is also really helpful. The problem the authors tackle is interesting, although the result is incremental (i.e. the cases in which their improvements apply are minor).\n\nThe result is incremental, as mentioned, though it remains interesting. Instead of giving proof sketches, the authors refer the reader to the appendix. This is even done for the main results, which are only formulated in an informal way. Sometimes the reader is left on their own without any text guiding them through sections, which is irritating. The authors should add more formal theorems, explanatory or guiding text passages, and sketch some of the proofs in a little more detail.\n\nminor errors:\n- $\ell \ell$ 46 onward: the sentence is confusing;\n- $\ell$ 128: ''Section 4 analyze ...'' should be something like ''\textbf{In} Section 4 \textbf{we} analyze ...''. Same goes for the following sentences in the roadmap.\n- $\ell$ 128: \st{Then ,}\textbf{I}t\n- $\ell$ 220: \st{When it comes a data update}\textbf{When updating the data with Algorithm 5} ...\n- $\ell$ 229: take\st{s} No potentially negative societal impact in sight.", " The paper provides a data structure based on matrix sketches and embeddings for the problem of estimating similarities, and computing distances. Specifically, the proposed idea is to present a distance oracle that is represented by an efficient data structure. 
* Strengths:\n 1) The paper proposes a data structure based on a distance oracle for a family of norms, namely, symmetric norms.\n 2) The proposed data structure in the paper uses matrix sketches and embeddings to obtain a theoretically faster running time than existing techniques.\n\n\n* Weaknesses:\n 1) The proposed data structure takes time that is at least $\Omega\left( \varepsilon^{-9} \right)$. Even for small error accuracies, e.g., $\varepsilon = 0.001$, the running time tends to be astronomical. \n 2) The writing of the paper is slightly hard to follow. Please add some paragraphs to ease the transitions, specifically around the algorithms stated in the main paper.\n As mentioned before, my concern is regarding the running time stated in the paper. Can you explain why this is not a problem, or suggest a way to reduce this running time to make it much more practical, considering that the similarity problem is highly important in machine learning and other fields as well?\n\nIn addition, can coresets fit the role of solving the objective of this paper, specifically dynamic coresets? The proposed data structure takes time that might be astronomical for small error parameters $\varepsilon$." ]
[ -1, -1, 4, 7, 6, 6 ]
[ -1, -1, 3, 4, 3, 4 ]
[ "lOX45ej4g4Z", "nips_2022_ZCGDqdK0zG", "nips_2022_ZCGDqdK0zG", "nips_2022_ZCGDqdK0zG", "nips_2022_ZCGDqdK0zG", "nips_2022_ZCGDqdK0zG" ]
nips_2022_5Fg3XoHjQ4r
Towards Hard-pose Virtual Try-on via 3D-aware Global Correspondence Learning
In this paper, we target image-based person-to-person virtual try-on in the presence of diverse poses and large viewpoint variations. Existing methods are restricted in this setting as they estimate garment warping flows mainly based on 2D poses and appearance, which omits the geometric prior of the 3D human body shape. Moreover, current garment warping methods are confined to localized regions, which makes them ineffective in capturing long-range dependencies and results in inferior flows with artifacts. To tackle these issues, we present 3D-aware global correspondences, which are reliable flows that jointly encode global semantic correlations, local deformations, and geometric priors of 3D human bodies. Particularly, given an image pair depicting the source and target person, (a) we first obtain their pose-aware and high-level representations via two encoders, and introduce a coarse-to-fine decoder with multiple refinement modules to predict the pixel-wise global correspondence. (b) 3D parametric human models inferred from images are incorporated as priors to regularize the correspondence refinement process so that our flows can be 3D-aware and better handle variations of pose and viewpoint. (c) Finally, an adversarial generator takes the garment warped by the 3D-aware flow, and the image of the target person as inputs, to synthesize the photo-realistic try-on result. Extensive experiments on public benchmarks and our selected HardPose test set demonstrate the superiority of our method against state-of-the-art try-on approaches.
Accept
This paper received 4 positive reviews: 2xBA + WA + A. All reviewers acknowledged that the proposed approach is simple and effective, that it is well presented, and that the claims are supported by strong empirical performance and extensive evaluation on several datasets. The remaining questions and concerns were addressed in the authors' responses, which seemed convincing to the reviewers. The final recommendation is therefore to accept.
train
[ "1-9KJCdlcpH", "QeblDBVF9p2", "pBRiFtjMmXh", "w5NtzzSwH2e", "ToWEVmBAYK8", "0Z40M1rNi2c", "KFIpJn4fwyJ", "KuCUwYRRjN", "6P_6lNUw5tYb", "lx3z3Ba-hox", "yJWzMNQ8rFb", "nh6sd9Bn5qH", "YtBLrnUN1rV", "QXhiYjxyfm_", "81jZfgvdPYV" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Authors addressed some of my concerns, especially w.r.t. the issues in the experimental setup. I am increasing my rating to borderline accept.", " Thanks authors for the clarifications. I will update my rating after cross-checking prior art on datasets and metrics.", " Dear Reviewer bUCs,\nWe have tried to address your concerns in our earlier responses (Part 1 and Part 2), and revised our paper based on your insightful suggestions. If you have any additional questions or suggestions, we would be very grateful to discuss them with you.\n", " Thanks for the authors' rebuttal. They addressed some of my concerns. I decide to increase my rating to borderline accept.", " Thank you for your careful revision and for acknowledging the good experimental results of our proposed methodology. In the following, we respond to your questions:\n\n* **Innovation of proposed method**: The main technical novelty of this work compared to prior art is the use of SMPL estimates for correspondence loss functions. As such the technical novelty is somewhat incremental.\n* **Unfair experiment setup**: Line 224 says that \"image pairs without properly detected IUV maps or SMPL models are filtered out\". This seems unfair filtering of test set as the proposed method relies on IUV map and SMPL model estimation. As a result, the reported numbers could be biased towards the proposed technique. Also it is not clear what it means by 'properly' here.\n* **Detail of HardPose**: The details on how the 'HardPose' dataset is constructed is not clear. What constitutes a hard pose? Having some visual examples that distinguishes normal and HardPose test sets would be good. And, I do not find any extreme poses in any of the result images. So, it is not clear to me whether any of the images can be considered a hard pose.\n* **Mismatch of the reported results**: The results reported in Tables 1 and 2 do not match to those reported in earlier works. I have checked some of the compared papers and their reported numbers seem different. Does this work use different test split compared to existing works?\n\n**Innovation of proposed method**\nWe would like to emphasize that the technical contributions of this paper are two-fold: (i) The first one is indeed the SMPL supervised correspondence learning. However, in contrary to existing methods [1,2,3] that use SMPL flows as inputs, we only consider SMPL flows for supervising our correspondence learning. Such a strategy allows us to leverage geometric priors of 3D human bodies without fitting SMPL models in inference. (ii) More importantly, we have devised an efficient and effective network architecture for **global correspondence estimation** (Sec. 3.2 \\& 3.3).\nUnlike other correspondence estimation methods, our proposed architecture can generate global correspondences (full resolution) without the expensive feature correlations/PatchMatch operations in existing methods (illustrated in Fig. 3 and Sec. 3.2, L157-L164).\nThe superiority of the proposed architecture is further validated in our experiments as well.\n\n**Unfair experiment setup**\nThanks for pointing out that this statement may cause confusion with regards to the experiment setup. We have elaborate this statement in the revised version and would like to ensure that the setup does not lead to a unfair experimental setup. \nFirstly, we only filter out the cases where no human is detected at all by DensePose and SMPL as these can be considered as outliers when regarding the global distribution of the data. 
We would like to stress that this process only filters out **345 of the original 101,967 pairs in the training set** and **6 of the original 8,570 pairs in the testing set** for Deepfashion, while no data is filtered out for the MPV dataset. Such a ratio of test pairs (6/8,570) has a negligible effect on the overall performance. To validate this quantitatively, we report the result of our method and the closest competitor [1] (Pose with Style) on the full testing set of Deepfashion. The experimental results are reported in Table 2 in the supplemental material and as follows:\n\n| Method | mIoU (the higher the better) | LPIPS (the lower the better) | FID (the lower the better) |\n| :-: | :-: | :-: | :-: |\n| Pose with Style (filtered) | 62.74 | 0.1997 | 16.35 |\n| Pose with Style (full) | 62.49 | 0.1998 | 16.35 |\n| 3D-GCL (Ours, filtered) | 75.76 | 0.1725 | 10.58 |\n| 3D-GCL (Ours, full) | 75.91 | 0.1725 | 10.58 |\n\nIt is apparent from the above results that the effects of the 6 filtered-out test pairs can be ignored, and the proposed method is superior to the baseline method in both cases.\n\nSecondly, the closest competing baseline method [1] and also [4] heavily rely on the DensePose results and will thus benefit from this process as well, leading to a fair comparison. \n
Then we compute the FID score of the generated images, obtaining a FID score of 9.53 for our method compare to [1]'s result of 13.57 (lower is better).\n\n---\n\n[1] Pose with Style: Detail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN. ACM Transactions on Graphics 2021\n\n[2] ZFlow: Gated Appearance Flow-based Virtual Try-on with 3D Priors. ICCV 2021\n\n[3] Liquid Warping GAN: A Unified Framework for Human Motion Imitation, Appearance Transfer and Novel View Synthesis. ICCV 2019\n\n[4] Parser-Free Virtual Try-on via Distilling Appearance Flows. CVPR 2021\n\n[5] Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content. CVPR 2020 \n\n[6] On Buggy Resizing Libraries and Surprising Subtleties in FID Calculation. arXiv 2021\n\n", " Thank you for your constructive suggestions and valuable feedback. We are pleased to hear that you appreciate both the qualitative and quantitative results produced by our method. In the following, we will address your concerns one by one.\n\n* **Missing comparisons**: Missing comparisons. Some recent works also focus on incorporating 3D priors [1] or global correspondence [2] into image-based virtual try-on. There is no deep discussion on the difference between this work and [1] [2]. Quantitative comparisons with [1][2] are also missing. Comparisons with the above works are needed to elucidate the main contributions of this work.\n* **Unfair experimental setup**: Unfair experimental setup. Line 224. \"pairs without properly detected IUV maps or SMPL models are filtered out.\" I am concerned whether this is a fair comparison with other methods, as filtering out failure detections is beneficial for testing the proposed method. This makes the empirical results less convincing.\n* **Detail of HardPose**: What are the selection criteria for the HardPose test set? Is there a quantitative analysis of how hard the HardPose test set is compared to the public benchmarks?\n* **Structure detail of Ablation study**: Abalation study Table 3. What is the framework like for 3D-GCL w/o global correspondences?\n\n**Missing comparisons**\nThank you for the suggestion. We will add the following discussion on the differences between our 3D-GCL and the mentioned approaches [1] and [2] in the paper: \n\n>While [1] also incorporates 3D priors during training, we argue that there exist intrinsic differences between [1] and our 3D-GCL, in terms of the intention and the derivation of the 3D prior. In [1], the 3D prior is introduced in the Segmentation-Assisted Dense Fusion (i.e., the try-on synthesis module) by taking the DensePose as input and reconstructing it in the output. This will facilitate the synthesis network to preserve\nstructural and geometric integrity of the try-on results as mentioned in the original paper, but also means that the 3D prior in [1] is directly derived from the input DensePose. However, in our 3D-GCL, we innovatively employ the 3D prior to provide precise guidance to when learning the correspondence, which allows the warping module to preserve the garment texture even for challenging poses. Besides, the 3D prior of our 3D-GCL is derived from the 3D vertex correspondence between the SMPL model of the same person under various poses.\n>\n>On the other hand, although [2] proposes a global flow estimation module for garment deformation, it does not explicitly model the global correspondence between the source garment feature and the target pose feature. 
Specifically, [2] utilizes the style vector to modulate the weights of the StyleGAN-based network, where the style vector is obtained by concatenating the 1-D garment vector and the 1-D person vector.\nHowever, such a 1-D global style vector just provides the flow estimation network with the global information of the garment and person, rather than the global correspondence between the source and target feature.\nInstead, our 3D-GCL explicitly models the global correspondence between the garment and the person features by calculating the correspondence matrix in the low-resolution block and uses it as initial state for the high-resolution flow estimating blocks.\n\nWe further have conduct a quantitative comparison with [2] during the rebuttal. For this we load their pretrained model and conduct experiments under the person-to-person setting on the testing set of Deepfashion. Note, we were unfortunately unable to re-implement and retrain the model ourselves due to the limited rebuttal time.\n\n| Method | mIoU (the higher the better) | LPIPS (the lower the better) | FID (the lower the better) |\n| :-: | :-: | :-: | :-: |\n| FS-VTON | 44.15 | 0.2420 | 31.64\n| Ours | **75.76** | **0.1725** | **10.58**\n\n[1] and [2] are both representatives for the garment-to-person line of research. In this work, however, we have instead chosen PF-AFN (PBAFN)[3] to represent this of research, which has been demonstrated to be a stronger baseline than ZFlow [1] (Table 1 in [2]), and otherwise focused more on the more relevant person-to-person comparisons.", " **Unfair experimental setup**\nThanks for pointing out that this statement may cause confusion with regards to the experiment setup. We have elaborate this statement in the revised version and would like to ensure that the setup does not lead to a unfair experimental setup. \nFirstly, we only filter out the cases where no human is detected at all by DensePose and SMPL as these can be considered as outliers when regarding the global distribution of the data. We would like to stress that this process only filters out **345 of the original 101,967 pairs in the training set** and **6 of the original 8,570 pairs in the testing set** for Deepfashion, while no data is filtered out for the MVP dataset. Such a ratio of test pairs (6/8,570) has a negligible effect on the overall performance. To validate this quantitatively, we report the result of our method and the closest competitor [4] (Pose with Style) on the full testing set of Deepfashion. The experimental results are reported in Table 2 in the supplemental material and as follows:\n\n| Method | mIoU (the higher the better) | LPIPS (the lower the better) | FID (the lower the better) |\n| :-: | :-: | :-: | :-: |\n| Pose with Style (filtered) | 62.74 | 0.1997 | 16.35\n| Pose with Style (full) | 62.49 | 0.1998 | 16.35\n| 3D-GCL (Ours, filtered) | 75.76 | 0.1725 | 10.58 |\n| 3D-GCL (Ours, full) | 75.91 | 0.1725 | 10.58 |\n\nIt is apparent from the above results that the effects of the 6 filtered out test pairs can be ignored, and the proposed method is superior to the baseline method in both cases.\n\nSecondly, the closest competing baseline method [4] and also [3] heavily rely on the DensePose results and will thus benefit from this process as well, leading to a fair comparison. \n\n\n**Detail of HardPose**\nWe first employ the calculating mechanism for pose complexity in [5] to filter out easy samples, then manually pick out the pairs that contain visually \"Hard\" posture from the testing set. 
Pairs with large viewpoint and hand position contrast are added into the testing set. Here we define \"HardPose\" as opposite to standard posture, which is described as face forward and hands down. We compute the complexity score (lower is more complex) of our final filtered subset, getting 74.39 for Deepfashion and 63.45 for MPV on the Hardset, compared to 77.87 for Deepfashion and 70.09 for MPV on the Fullset. This quantitative result is consistent with the picking rules and supports that our HardPose testing set is reliable. We also include some visual examples of our HardPose testing set as well as an easy to hard example in the supplementary.\n\n\n**Structure detail of Ablation study**\nTo evaluate the performance of 3D-GCL w/o global correspondence (3D-GCL $\\ast$), we replace the correlation module of Stage I in Figure 2 with the block that is demonstrated in Figure 3(a). In addition, the loss for the global correspondence $L_o$ is replaced with $L_f$ at resolution of 64 correspondingly.\n\n---\n\n[1] ZFlow: Gated Appearance Flow-based Virtual Try-on with 3D Priors. ICCV 2021\n\n[2] Style-Based Global Appearance Flow for Virtual Try-On. CVPR 2022\n\n[3] Parser-Free Virtual Try-on via Distilling Appearance Flows. CVPR 2021\n\n[4] Pose with Style: Detail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN. ACM Transactions on Graphics 2021\n\n[5] Towards Photo-Realistic Virtual Try-On by Adaptively Generating↔Preserving Image Content. CVPR 2020\n\n", " We thank the reviewer for the constructive feedback and for acknowledging the clear motivation, the efficient method, our extensive experiments and the clarity of the presentation. We are delighted to address the concerns and questions raised by the reviewer in the following:\n\n* **Influence of multiscale/coarse-to-fine structure**: How does the multiscale/coarse-to-fine structure influence the final result?\n* **More ablation study experiment results:** Can the off-the-shelf 3D mesh regressor be applied to (a)(b)(c) in Figure 3? Will the SMPL flow supervision benefit these strategies as well? How does the module structure affect the warping compared with other module structures?\n\n**Influence of multiscale/coarse-to-fine structure**\n\n| Method | mIoU (the higher the better) | LPIPS (the lower the better) | FID (the lower the better) |\n| :-: | :-: | :-: | :-: |\n| 3D-GCL w/o multiscale | 64.93 | 0.1968 | 12.70 |\n| 3D-GCL | **75.76** | **0.1725** | **10.58** |\n\nAs discussed in the introduction and Sec.3.2, flow-based warping optimization tends to fall into local minima due to the limited receptive field, while the multi-scale structure helps facilitate the learning of the intermediate feature representations and thus promotes the overall performance of the model. \n\n**More ablation study experiment results** \n\n| Method | mIoU (the higher the better) | FID (the lower the better) |\n| :-: | :-: | :-: |\n| Structure-a: Feature convolutions | 68.57 | 11.65 |\n| Structure-b: Feature correlations | 72.04 | 11.74 |\n| 3D-GCL | **74.22** | **10.58** |\n\nYes, the 3D mesh regressor can also be applied to (a) and (b), but not to (c) (because structure (c) does not estimate flows explicitly). We implemented two more variants of our method by replacing our GACRMs with the structure of (a) and (b). We evaluated them on the Deepfashion dataset, getting 11.65 FID score and 68.57 mIoU for structure (a), compared to 11.74 FID score and 72.04 mIoU for structure (b). 
For comparison, performance improves when leveraging GACRM to an FID of 10.58 and a mIoU of 74.22.\n\n\n", " Thank you for your careful and comprehensive comments. We are glad to hear that you appreciate the idea of our 3D-GCL and the experiments conducted in the paper. In the following, we will address your concerns point by point:\n\n* **Dependency on SMPL model**: \"The model depends heavily on the SMPL model estimation.\"\n* **Necessity of stage I**: \"If someone has a one-to-one correspondance from the DensePose, why would there be a need for the Stage I network. This is not to trivialize Stage I model, but rather to make sure that the simple waping filed obtained from DensePose are not enough and hence justifies its role. A baseline with only the one-to-one correspondance from the DensePose is much appreciated I think.\"\n* **Influence of wrong segmentation:** How the method deals with wrong semantic segmentation obtained from the human parser?\n* **Influence of wrong pose prediction:** As mentioned above, SMPL (DensePose) models are known to fail on hard posture especially on dance ones. How would the method deal on such cases? Relying solely on SMPL might give very wrong prediction, it would be good if the authors would develop on this.\n\n**Dependency on SMPL model**\nWe believe that our model depends less heavily on the SMPL model compared to existing SMPL-based methods since we introduce the information through a learning-based approach. This is in contrast to recent methods [1,2], which directly add precomputed flows into the network pipeline, which causes heavy dependence on the SMPL model estimation. Our method instead learns the flow distribution which makes it possible for the network to correct the error introduced by outliers of the estimated human mesh distribution.\n\n**Necessity of stage I**\nIt is correct that a simple warping flow obtained from DensePose is insufficient due to the limited scalability of the predefined UV space as we discuss in L106-L109. \nWhile we did not add such a baseline explicitly, one of the original baseline approaches [1] inpaints the warping flow obtained from DensePose and thus yields a similar comparison. Note, [1] also conducts additional ablation studies which show that directly using the Densepose flow will produce inferior results.\n\n\n**Influence of wrong segmentation**\nMost of the parser-based methods will fail without guidance of correct parsing results and our method is not an exception. We alleviate the influence of parsing errors by selecting the most popular human parsing prediction network [3].\nIn this work, we focus on Virtual Try-on strategies for scenarios of diverse pose and viewpoint variations and empirically ignore the potential negative impact of wrong segmentation results. \nWhile such errors are ignored in consideration of our main focus, there do exist some parser-free solutions [4,5,6] particularly designed to handle parsing errors. One possible solution is to train a parser-free student model with our original pipeline by incorporating knowledge distillation strategies. We have elaborated this in our limitation section and further discussed the possible influence of parsing errors on our model.\n\n**Influence of wrong pose prediction**\nProblems caused by wrong pose estimation share similarity with parsing errors. Our method is unable to handle wrong pose input as it is not a trivial task to infer the target posture without guidance provided by a correct pose representation, e.g. keypoints, mesh, UV map. 
Since our main goal is to achieve Virtual Try-on, we assume that input pose representations sent into the network are reliable. Based on this assumption, we believe that our 3D-GCL is able to compute the global correspondence of the given poses extracted from source and target person and therefore facilitates garment warping under diverse scenarios. \nTo tackle the difficulty of hard pose estimation, possible solutions are to fuse temporal information from a video or use knowledge distillation methods to exclude non-image inputs. We have added this discussion in the revised version of our paper.\n\n---\n\n[1] Pose with Style: Detail-Preserving Pose-Guided Image Synthesis with Conditional StyleGAN. ACM Transactions on Graphics 2021\n\n[2] Dressing in the Wild by Watching Dance Videos. CVPR 2022\n\n[3] Graphonomy: Universal Human Parsing via Graph Transfer Learning. CVPR 2019\n\n[4] Do Not Mask What You Do Not Need to Mask: a Parser-Free Virtual Try-On. ECCV 2020\n\n[5] Parser-Free Virtual Try-on via Distilling Appearance Flows. CVPR 2021\n\n[6] Style-Based Global Appearance Flow for Virtual Try-On. CVPR 2022\n\n\n", " We would like to thank all reviewers for their positive affirmations on the novelty and potential impact of this paper (e.g., Reviewer AD5s \\& nqeh). With the proposed 3D global correspondence learning framework, our method outperforms existing virtual try on methods on two public datasets, especially on cases with hard postures (**agreed by all reviewers**). Besides, the writing of this paper is clear and easy to follow (Reviewer AD5s \\& nqeh). \n\nFollowing the constructive suggestions and comments of the reviewers, we have revised our paper and provided more experimental results to demonstrate the advantages of the proposed method against existing virtual try-on methods. Particularly, we have \n\n1. explained our experimental setups (training/test splits) in detail (Reviewer bUCs) in Sec. 4.1 of the supplementary material; \n\n2. included our quantitative criterion for identifying HardPose samples (Reviewer 7wqS \\& bUCs) in Sec. 2 of the supplementary material; \n\n3. verified that the number and effects of our filter-out pairs (i.e., without detected SMPL models/IUV maps) can be ignored (Reviewer 7wqS \\& bUCs) in Sec. 6.1 of the supplementary material; \n\n4. included extra state-of-the-art methods for comparison (Reviewer 7wqS) and more ablation study results (Reviewer nqeh) in the revised paper and supplementary material; \n\n\nThe main revisions in our paper and the supplemental material are marked in **RED**. We hope that our efforts address the concerns of all reviewers sufficiently. \n", " The paper present a 3D-aware Global Correspondence Learning to tackle the virtual try-on problem. The method injects 3D priors onto the feature learning step via SMPL-based approach. The method shows benefits on using such an approach on two widely used dataset by the community **Strengths**\n- The paper is well written and the ideas are easy to follow\n- Conducted experiments on two datasets, DeepFashion and MPV\n- Ablation study on the proposed 3D-GCL framework\n- Good qualitative results on the try-on clothes\n\n**Weaknesses**\n- The model depends heavily on the SMPL model estimation. - If someone has a one-to-one correspondance from the DensePose, why would there be a need for the Stage I network . This is not to trivialize Stage I model, but rather to make sure that the simple waping filed obtained from DensePose are not enough and hence justifies its role. 
A baseline with only the one-to-one correspondance from the DensePose is much appreciated I think\n- How the method deals with wrong semantic segmentation obtained from the human parser? - As mentioned above, SMPL (DensePose) models are known to fail on hard posture especially on dance ones. How would the method deal on such cases? Relying solely on SMPL might give very wrong prediction, it would be good if the authors would develop on this", " This work presents a 3D-aware global correspondence learning (3D-GCL) framework to tackle the image-based person-to-person virtual try-on problem. The core idea is to incorporate the geometric prior of 3D human body to guide the correspondence learning between the source and the target person, aiming at preserving detailed garment textures. To circumvent the difficulty of learning long-range correspondence, the authors introduce a coarse-to-fine framework. The garment warping flow is initialized via the global correlation between high-level image features and then progressively refined. In addition, the SMPL flow estimated from the source and the target person is introduced as the 3D-aware supervision signal to guide the refinement process. Empirical studies on the DeepFashion and MPV dataset demonstrate superior performance over previous methods, especially on hard pose samples. ## Strength\n\n- The proposed method is simple and effective. It demonstrates solid improvement over previous methods in terms of FID and human evaluation scores.\n\n## Weakness\n\n- Missing comparisons. Some recent works also focus on incorporating 3D priors [1] or global correspondence [2] into image-based virtual try-on. There is no deep discussion on the difference between this work and [1] [2]. Quantitative comparisons with [1][2] are also missing. Comparisons with the above works are needed to elucidate the main contributions of this work.\n\n- Unfair experimental setup. Line 224. \"pairs without properly detected IUV maps or SMPL models are filtered out.\" I am concerned whether this is a fair comparison with other methods, as filtering out failure detections is beneficial for testing the proposed method. This makes the empirical results less convincing.\n\n[1] ZFlow: Gated Appearance Flow-based Virtual Try-on with 3D Priors. ICCV 2021\n\n[2] Style-Based Global Appearance Flow for Virtual Try-On. CVPR 2022 \n- Typos: Line 76 summarize --> summarized\n- What are the selection criteria for the HardPose test set? Is there a quantitative analysis of how hard the HardPose test set is compared to the public benchmarks?\n- Abalation study Table 3. What is the framework like for 3D-GCL w/o global correspondences? Yes. The authors addressed the limitations and potential negative societal impact of their work. ", " The paper proposes to reconstruct 3D human mesh from multiview images by fusing features with a transformer and assisting the training by pose alignment between different views. Strengths\n- The writing of the paper is clear. \n- The motivation of the paper and the effectiveness of the proposed 3D-aware global correspondence is well supported by extensive experiments. \n\nWeaknesses\n\nMore ablation studies could be added to analyze the effectiveness of the components of the method.\n- How does the multiscale/coarse-to-fine structure influence the final result?\n- Can the off-the-shelf 3D mesh regressor be applied to (a)(b)(c) in Figure 3? Will the SMPL flow supervision benefit these strategies as well? 
How does the module structure affect the warping compared with other module structures?\n See weakness yes", " This work proposes a virtual try-on technique to transfer garment from a given source image to target image/pose. A main difference to existing works is the use of dense correspondences between source and target poses and also using SMPL based regularization loss functions. Experiments on DeepFashion and MVP datasets with the proposed test splits demonstrate better results than existing techniques. Strengths:\n- SMPL based regularization losses for dense correspondence estimation.\n- Better results on two different datasets compared to existing works on the proposed test splits.\n\nWeaknesses:\n- The main technical novelty of this work compared to prior art is the use of SMPL estimates for correspondence loss functions. As such the technical novelty is somewhat incremental. \n- Line 224 says that \"image pairs without properly detected IUV maps or SMPL models are filtered out\". This seems unfair filtering of test set as the proposed method relies on IUV map and SMPL model estimation. As a result, the reported numbers could be biased towards the proposed technique. Also it is not clear what it means by 'properly' here.\n- The details on how the 'HardPose' dataset is constructed is not clear. What constitutes a hard pose? Having some visual examples that distinguishes normal and HardPose test sets would be good. And, I do not find any extreme poses in any of the result images. So, it is not clear to me whether any of the images can be considered a hard pose.\n\n\nPost rebuttal:\nAuthors addressed some of the concerns, especially w.r.t. experimental setup, in their response. - The results reported in Tables 1 and 2 do not match to those reported in earlier works. I have checked some of the compared papers and their reported numbers seem different. Does this work use different test split compared to existing works? There is no clear discussion of limitations of the proposed technique. For instance, the proposed technique fails when the dense pose estimation on either source or target images fail." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 4, 3 ]
[ "QeblDBVF9p2", "pBRiFtjMmXh", "81jZfgvdPYV", "KuCUwYRRjN", "81jZfgvdPYV", "81jZfgvdPYV", "YtBLrnUN1rV", "YtBLrnUN1rV", "QXhiYjxyfm_", "nh6sd9Bn5qH", "nips_2022_5Fg3XoHjQ4r", "nips_2022_5Fg3XoHjQ4r", "nips_2022_5Fg3XoHjQ4r", "nips_2022_5Fg3XoHjQ4r", "nips_2022_5Fg3XoHjQ4r" ]
nips_2022_A6AFK_JwrIW
Learning Causally Invariant Representations for Out-of-Distribution Generalization on Graphs
Despite recent success in using the invariance principle for out-of-distribution (OOD) generalization on Euclidean data (e.g., images), studies on graph data are still limited. Different from images, the complex nature of graphs poses unique challenges to adopting the invariance principle. In particular, distribution shifts on graphs can appear in a variety of forms such as attributes and structures, making it difficult to identify the invariance. Moreover, domain or environment partitions, which are often required by OOD methods on Euclidean data, could be highly expensive to obtain for graphs. To bridge this gap, we propose a new framework, called Causality Inspired Invariant Graph LeArning (CIGA), to capture the invariance of graphs for guaranteed OOD generalization under various distribution shifts. Specifically, we characterize potential distribution shifts on graphs with causal models, concluding that OOD generalization on graphs is achievable when models focus only on subgraphs containing the most information about the causes of labels. Accordingly, we propose an information-theoretic objective to extract the desired subgraphs that maximally preserve the invariant intra-class information. Learning with these subgraphs is immune to distribution shifts. Extensive experiments on 16 synthetic or real-world datasets, including a challenging setting -- DrugOOD, from AI-aided drug discovery, validate the superior OOD performance of CIGA.
Accept
Graph NNs have proven to work considerably well in the in-distribution setting. However, they fail when test data come from a different distribution than the training data, as shown by previous work. This paper aligns with recent works, and aims to study how to obtain invariance to shifts described by the assumed causal model. The assumed causal model is reasonable, and the solution is novel. There is consensus among the referees, as evidenced by the score of 6 from each of them, that these results could be of interest to NeurIPS.
train
[ "r2RIXtKNbla", "S1bqdlssm5", "sljHPndSz10", "jEpHzrG_t6Q", "4QxPTlCckHq", "m5cqnTem94U", "TKdSok44XPx", "vcBTqL_hQCE", "d0RtVHMK78F", "FXxheM6AMS", "dFr_5epjaPU", "2FKaOONi37x", "DmiNtKjTt5y", "LYbuh2ZkEeW", "jn1spJuh4d", "F8ZSKXbHpev", "-L3EP8eMBwDr", "KiTJrAO0O7y", "naQjdPVFgi", "WGhKB9x2oxJ", "0zty4XzOM8e", "3VpgsTh_GfJ", "Z2qV6MFcwex", "C-ZywI5Xtks", "GeZpsENaiNN", "2A0iG8sKLK0", "C0Z0Hh7AGCQ", "_3N7KSxXut", "HpFhRSN-bHX", "nLDNj69gWII" ]
[ "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you again for your time and efforts in reviewing our paper, and for your valuable comments that helped us to strengthen the paper!", " I have raised my rating from 5 to 6.", " Dear Reviewer QGqs,\n\nAs the window for discussion is closing, we’d be grateful if you can confirm whether our follow-up response below has addressed your concerns. We look forward to more discussions with you if you have any outstanding concerns or questions.\n\nThanks, The Anonymous Authors.", " Dear Reviewer zrMs,\n\nWe'd like to thank you again for your time and efforts in reviewing our paper. Your insightful comments have helped to further strengthen the paper a lot!\n\nThe Anonymous Authors.\n\n", " I would like to appreciate the author's rebuttal to help me better understand the merits of the paper. Hence, I would like to raise my score from 5 to 6.", " Dear Reviewer zrMs,\n\nThanks again for your time and efforts in reviewing our paper. As the window for discussion is closing, we’d be grateful if you can confirm whether our response has addressed your concerns. We look forward to more discussions with you if you have any further concerns or questions.\n\nThanks, The Anonymous Authors. ", " Thanks for your follow-up reply. We’d like to address your concerns in the following response (We will use the references from the previous [reply](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=Z2qV6MFcwex)):\n\nRegarding the efforts to resolve the graph’s complexity:\n\n> From the rebuttal, it seems that the effort to resolve this challenge is to apply a GNN encoder. The proposed method does not treat the spurious correlations in node features and graph structures separately. Is that correct?\n\nOur resolution is not simply applying a GNN encoder. In fact, we are aware of the separation of nodes and edges, and we put a lot of efforts in both theoretical analysis and implementation to resolve the graph’s complexity. We summarize our efforts as follows:\n\nIn theory,\n\ni). When deriving the SCMs, we discuss that spurious correlation could exist at both nodes and edges. That motivates us to take only a subset of both the node features and edges (resulting in a subgraph) for making predictions.\n\nii). As the spurious correlations in node features and the edges can have different correlation modes with the label $Y$, we are motivated to propose a solution that works for the mixed spurious correlations. Combining both (i) and (ii), we arrive at the invariant graph learning objective Eq. 1. \n\niii). When pursuing realizable alternatives of Eq. 1, we adopt several mutual information measures that are estimated based on the extracted subsets of node features and edges in the subgraph.\n\nIn the implementation, the learning objectives derived from the theory require us to estimate the mutual information of the subgraphs that include both nodes and edges. Separately estimating the mutual information based solely on nodes or solely on edges could suffer from larger biases. Therefore, we instead estimate the mutual information based on the learned graph representations that encode the information of the subgraphs (including both nodes and edges) to obtain a more accurate estimation.\n\nIf the reviewer would like to see *more explicit and specialized connections between our methods and graph properties*, we have provided an [example](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=2A0iG8sKLK0) involved with [graphon](https://en.wikipedia.org/wiki/Graphon) which is a typical concept from graph theory. 
We show that, by assuming $C$ as a graphon, the whole framework of CIGA can be generalized to the previous SOTA solutions [24, 27] that resolve only graph size shifts.\n\n> The following steps of finding an invariant subgraph are not specific to graphs, but can be applied to vision and language as well. \n\nThanks for pointing out an important aspect concerning the relationship between regular data (vision, language) and irregular data (graphs).\n\n- In general, many solutions for graphs can be generalized to vision and language, as both images and natural languages can be considered specific variants of graphs. For example, an image can be considered a single-node graph, and a sentence can be considered a line graph.\n- However, the complexity of graphs prohibits the adoption of the methods developed for regular data. As we discussed in Section 2.3, previous OOD methods developed for Euclidean data all fail to resolve OOD generalization on graphs.\n- In addition, the theoretical analysis and the mutual information estimation would be much easier if we did not consider the complexity of graphs.\n
The following steps of finding an invariant subgraph is not specific to graphs, but can be applied to vision and language as well. The proposed method does not treat the spurious correlations in node features and graph structures separately. Is that correct?\n\n2. **The complexity of the proposed method**: I appreciate the authors' efforts in presenting experiments for running time and the sensitivity to hyperparameters. However, implementing the proposed strategy still requires more work than others. It also requires more tests to understand the effectiveness of each module in the methodology.\n\nDue to the issues above, I reserve the previous rating.", " Dear Reviewer tMnp,\n\nThank you again for your time and efforts in reviewing our paper.\n\nThe Anonymous Authors.", " I thank the author for their reply. My concern regarding the presence of $E_G$ has been addressed, and the SCM now seems correct. I will keep my score unchanged.", " Dear Reviewer tMnp,\n\nWe have updated our draft following your suggestions. The changes were listed in this [reply](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=KiTJrAO0O7y). As the window for discussion and draft updating is closing, please let us know ASAP if you feel other changes are needed to address your concerns.\n\nThanks, The Anonymous Authors.\n", " Dear Reviewer QGqs,\n\nWe have updated our draft following your suggestions. The changes were listed in this [reply](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=KiTJrAO0O7y). As the window for discussion and draft updating is closing, please let us know ASAP if you feel other changes are needed to address your concerns.\n\nThanks, The Anonymous Authors.\n", " Dear Reviewer zrMs,\n\nWe have updated our draft following your suggestions. The changes were listed in this [reply](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=KiTJrAO0O7y). As the window for discussion and draft updating is closing, please let us know ASAP if you feel other changes are needed to address your concerns.\n\nThanks, The Anonymous Authors.\n", " Dear reviewer tMnp,\n\nThank you again for your efforts in reviewing our paper and your valuable comments. We’d be grateful if you can confirm whether our response has addressed your concerns. We’d be glad to answer any outstanding questions and look forward to any further discussions.\n", " Dear reviewer QGqs,\n\nThanks again for your time and efforts in reviewing our paper. Here is a summary of our detailed response below. We humbly expect you could check it and confirm whether our response has addressed your concerns:\n1. We established comprehensive discussions about the practical value of CIGA:\n- From the performance perspective, CIGA can serve as a general solution to tackle a variety of graph distribution shifts that could appear in different practical scenarios, due to its solid theoretical foundations.\n- From the architecture perspective, the architecture used in CIGA is shown to bring little computational overhead, while bringing more benefits such as interpretability.\n- From the objective perspective, the learning objectives of CIGA is shown to be robust to the coefficients and require little extra tunning efforts.\n\n2. We improved the readability and clarity of Section 3.2 following your suggestions.\n\n3. We added a discussion of the equivalence between the minmax objective in Definition 2.5 and finding the causal factor C to make predictions motivated by your suggestions.\n\n4. 
We provided a comprehensive discussion about how CIGA resolves the graph’s complexity:\n- Our causal analysis motivates the invariant learning objective of identifying the invariant subgraph (a subset of edges and node attributes) for making stable predictions.\n- To identify the invariant subgraph under the unavailability of environment labels, we adopt the information-theoretic tools to derive the realizable graph learning objectives. Learning with these objectives can identify the informative and invariant subgraphs as required by the invariance principle.\n- To better illustrate the relatedness, we provided two concrete examples of OOD generalization with the graphon model, and in molecular property prediction.\n\nOur response might be a bit long. We’d appreciate your patience and welcome any further discussions or questions! \n", " Dear reviewer zrMs,\n\nThank you again for your time in reviewing our paper and your valuable comments on our work. We’d be grateful if you can confirm whether our response has addressed your concerns. Here is a short summary:\n\n1. Motivated by your comment, we conducted experiments to show the OOD and IID performance gaps in the experiment tables, and found CIGA can close the performance gaps caused by the distribution shifts in some datasets.\n\n2. We summarized how our theory and solution relate to the graph properties:\n- The causal analysis of graphs motivates the invariant learning objective of identifying the invariant subgraph for making stable predictions.\n- The information-theoretic tools help the induction of realizable invariant graph learning objectives under the unavailability of environment labels, by using and estimating mutual information among different subgraphs and labels.\n- Our analysis and results can generalize to a broad class of graph families such as graphon graphs, Erdos-Renyi graphs, and Stochastic Block Model graphs, when incorporating further assumptions. \n\nWe’d be glad to answer any outstanding questions and look forward to any further discussions.\n", " Dear reviewers,\n\nWe have revised our paper following the suggestions/comments from all the reviewers. The revision is in blue color in the paper. \n\nSpecifically, we have revised our paper to improve its clarity and readability:\n- We revised the SCMs and removed $E_G$ in Sec. 2.2 and in Appendix C to improve the clarity (**tMnp**), while it doesn’t change the other results as the invariant relationships between $C$, $Y$ and $G_c$ remain unchanged.\n- We added a discussion about Def. 2.5 (invariant GNN) in Appendix E.1 to show how the minmax objective corresponds to identifying the causal factor $C$ (**QGqs**). \n- In Sec. 3.2, we reorganized the notations, added subtitles for each paragraph, explanations to the risk term and mutual information in Eq. 1, and explanations to $C$, $C’$, $c$, $c’$ in Eq. 2 (**QGqs**).\n- In experiment tables 1-3, to show the performance gaps caused by the distribution shifts (**zrMs**), we added one line at the bottom to show the \"Oracle\" performance of each dataset, which is obtained by running ERM based on the randomly shuffled (empirically made IID) data of each dataset.\n\nConcerning the common question raised by Reviewer **zrMs** and Reviewer **QGqs** about how CIGA resolves the complex graph properties, we’d like to highlight that:\n- The causal analysis in Sec. 2 not only provides a lens to understand the key challenges in OOD generalization on graphs, but also motivates the ultimate learning objective of CIGA (i.e., Eq. 1). 
By identifying the invariant subgraph $G_c$ for making predictions about $Y$, the GNN model is expected to be able to generalize to OOD graphs under various distribution shifts.\n- When approaching Eq. 1 under the unavailability of environment label $E$, we leverage the information theory as a proxy to reason and derive the information-theoretic learning objectives (Eq. 3 and Eq. 4), prove their soundness (Theorem 3.1), and implement them using various tools such as variational characterization and mutual information estimator, based on the learned graph representations. This procedure follows the common practice of tackling graph related problems with information theory in the literature.\n- Throughout the paper, we aim for a general characterization and solution under minimal prior knowledge about the OOD generalization on graphs. Nevertheless, our solution is compatible and open for incorporating more graph related inductive biases. In the [reply](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=2A0iG8sKLK0) to Reviewer **zrMs** and the [reply](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=3VpgsTh_GfJ) toReviewer **QGqs**, we show concrete examples of how our theories and solutions generalize to previous state-of-the-art when incorporating the graphon assumption on the graph family.\n\nConcerning the practical value of CIGA, we’d like to highlight that:\n- As in practice, graph distribution shifts can appear in a variety of forms. Owing to the generality of our theory, CIGA is shown to be able to generalize under various distribution shifts and graphs, and achieve new state-of-the-art OOD generalization performances. \n- Notably, CIGA is the only method that consistently outperforms ERM in the industry-provided realistic OOD benchmark, i.e., DrugOOD, demonstrating its high potential to push forward the developments of AI-Assisted Drug Discovery, and enrich the AI tools for facilitating the fundamental practice of science.\n- Moreover, the interpretable GNN architecture used in the current version of CIGA also brings an extra benefit, i.e., interpretability of the results. Hence we also provide interpretability visualization examples in Appendix G.5. From the results we find CIGA can discover interesting patterns, which may provide new insights to human experts in drug discovery.\n- Furthermore, in the replies ([part i](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=naQjdPVFgi), [part ii](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=WGhKB9x2oxJ)) to Reviewer **QGqs**, we provide empirical evidences showing that the architecture and the objectives of CIGA require little additional computational overhead and tuning efforts. In the [reply](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=C0Z0Hh7AGCQ) to Reviewer **tMnp**, we find CIGA remains the state-of-the-art OOD method and brings non-trivial improvements under the single training environment setting. These empirical results could serve as the strong evidence for the high potential of CIGA that can be applied to various application scenarios.\n\nBesides, we also provide a link of our codes for reproducing the results in our paper: https://anonymous.4open.science/r/CIGA-6B5F/ .\n\nWe again thank all reviewers for their efforts and many helpful comments/suggestions.\n", " Thank you for your detailed comments and suggestions! Please see our responses to your questions and suggestions below. (We reorganize the weakness and questions a bit to better clarify the concerns). \n\n**1. 
Regarding the practical value of CIGA, two GNNs and three loss functions. (weakness 1).**\n\nIn the below we provide a discussion about the practical values of CIGA (coupled with corresponding revisions in the paper):\n\nTo begin with, we’d like to highlight that, **CIGA can provably handle a variety of potential graph distribution shifts that may appear in multiple application scenarios.** This is owe to its solid theoretical foundations built upon the causal analysis. Thus in experiments, CIGA is able to significantly outperforms the previous state-of-the-art methods under different graph distribution shifts. Notably, **CIGA is the only method that consistently improves the OOD generalization performance than ERM on the industry-provided realistic graph OOD benchmark, i.e., DrugOOD [1].** The superiority of CIGA demonstrates its high potential to push forward the developments of AI-assisted Drug Discovery, and enrich AI tools for facilitating the fundamental practice of science.\n\nIn the following, we also carefully discuss the complexity in the design of CIGA and provide more evidences to address your concerns.\n\n**a). CIGA adopts the 2-GNN architecture, but brings little additional overhead, comparing with existing methods in terms of the performance improvements and running time:**\n- In this work, we adopt interpretable GNN architectures primarily for the purpose of prototype verification, motivated by the algorithmic reasoning results that a neural network can learn a reasoning process better if its computation structure aligns with the process better [2,3]. Moreover, decomposing the model into and stacking up two neural networks is common and widely adopted in the literature of both OOD generalization, invariant learning and graph neural networks [4,5,6,7,8] and largely used in practice [9,10,11]. \n- Nevertheless, to examine how much computational overhead is induced by the architecture and the additional objectives in CIGA, we analyze and compare the averaged training time (seconds per epoch) of different methods on the realistic benchmark DrugOOD-Scaffold, where we omit vrex and IB-IRM as their performances and computational costs are similar to those of IRM. \n \n | | ERM | ASAP | GIB | DIR | IRM | EIIL | CNC | CIGAv1 | CIGAv2 |\n |-----------------|:-----:|:------:|:-------:|:-------:|:-----:|:------:|:-----:|:------:|:------:|\n | Running time | 8.055 | 15.578 | 300.304 | 106.919 | 8.73 | 69.664 | 9.795 | 40.065 | 46.181 |\n | OOD performance | 68.85 | 66.19 | 62.01 | 63.91 | 68.69 | 68.45 | 67.24 | 69.04 | 69.7 |\n | AVG Rank | 2 | 5.5 | 9 | 8 | 3 | 6 | 4.5 | 3.5 | 3.5 |\n\n The results show that CIGA enjoys less computational overhead than other interpretable GNNs (i.e., GIB, DIR). Meanwhile, CIGA is the only OOD method that outperforms ERM by a non-trivial margin with a relatively low additional computational overhead, demonstrating its practical potential.\n- Besides, the adopted interpretable GNN architecture also offers an additional benefit, i.e., interpretability, which can further facilitate human understanding of CIGA’s predictions in practice. We provide the interpretability visualization examples of both SPMotif and DrugOOD datasets in Appendix G.5 of the revised paper. 
Notably, CIGA is able to find interesting substructures in the molecules from DrugOOD, which may provide new insights to human experts during the design of novel drugs in practice.\n\nNevertheless, concerning the time cost and the estimation of mutual information, we believe it’s a promising future direction to improve the architecture used in CIGA to better suit different needs that could appear in real world such as Edge AI.\n", " **b). CIGA introduces 3 losses, but requires little extra tuning efforts, compared with existing methods:**\n- First, we’d like to note that adopting multiple losses or even additional optimization subprocedure is common in the literature of both invariant learning and graph neural networks in practice [4,5,6,7,8,12,13].\n- In the experiments, we didn't tune hyperparameters exhaustively, but CIGA still maintains high performances. To examine the sensitivity of CIGA to the coefficients of the two additional objectives in practice, i.e., $\\alpha$ for $I(\\widehat{G_c};\\widetilde{G}_c\\|Y)$ implemented as the contrastive loss, and $\\beta$ for $I(\\widehat{G}_s; Y)$ implemented as the hinge loss, we conduct the following ablation study on the most difficult datasets, SPMotif-Mixed with a bias of 0.9, DrugOOD-Scaffold, and NCI109, where we vary the values of $\\alpha$, and the values of $\\beta$ under a fixed $\\alpha$ that yields relatively good performance in CIGAv1. Here we provide the results on DrugOOD-Scaffold, where we vary $\\beta$ under a fixed $\\alpha=1$, the other results are added to Appendix G.4 in the revised paper:\n\n| Coefficients | 0.1 | 0.5 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 |\n|-------------:|------------:|------------:|------------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|:-----------:|\n| $\\alpha$ for $I(\\widehat{G_c};\\widetilde{G}_c\\|Y)$ | 68.13(0.81) | 68.15(1.02) | 69.13(1.38) | 69.19(0.80) | 68.95(1.11) | 69.09(0.47) | 69.25(0.95) | 68.75(0.95) | 68.86(1.22) | 68.35(0.98) |\n| $\\beta$ for $I(\\widehat{G}_s;Y)$ | 68.98(0.47) | 68.49(0.50) | 69.43(1.04) | 69.48(0.21) | 69.30(0.81) | 69.51(0.57) | 69.32(0.87) | 69.15(0.48) | 69.02(0.37) | 69.42(0.26) |\n\nFrom the results, we can see that both CIGAv1 and CIGAv2 maintain non-trivial OOD performance improvements across various choices of the hyperparameters, demonstrating their robustness and usefulness in practice.\n\nIn philosophy, we all aim for a simple and principal solution. However, it is usual to require more objectives/constraints when the inductive bias about the graph generation process is lacking. Nevertheless, we believe it is promising to reduce the number of losses by combining more external knowledge and more advanced architecture [14,15], or automatically tuning solvers [16,17].\n\n**2. Regarding the presentation of Section 3.2 (weakness 2).**\n\nWe have reorganized the contents and notations to improve the clarity. Specifically:\n- When deriving the solutions, we added a beginning sentence in italic for each paragraph to demonstrate the purpose of the corresponding paragraphs.\n- We made the formulas in Sec. 3.2 more consistent, switched the expression of empirical risk in Eq. 1 to mutual information, and added an explanation after Eq. 1 to show how it relates to the empirical risk.\n- We added an explanation for the causal factors $c$ and $c’$ after Eq. 
2.\n- We improved the readability of the notations (e.g., \hat, \tilde), so that readers can recognize the variables used in the analysis more clearly.\n\nPlease let us know if you have any further comments about the updated versions.\n", " **3. How is the minmax objective in Def. 2.5 relevant to finding the causal factor $C$ in Figure 2? (Q1)**\n\nWe agree that it is crucial to understand the significance of Def. 2.5; hence, in the revised version, we added more discussion about Def. 2.5 in Appendix E.1. Here is a brief discussion of the relationship between Def. 2.5 and finding the causal factor $C$ to make predictions.\n\nDefinition 2.5 basically follows the invariant learning literature [18,5]. In particular, the minmax formulation is motivated by the causal analysis of the SCMs we built to characterize the potential distribution shifts on graphs. Given a GNN $f$, Def. 2.5 requires $f$ to satisfy the following two requirements:\n\ni). $f$ cannot rely on any parts of $G_s$ or $S$ to make predictions. Otherwise, since $G_s$ and $S$ can change arbitrarily with $E$, $f$ can fail catastrophically on graphs from a specific environment. In other words, the dependence of $f$ on any parts of $S$ would enlarge the maximal possible error in any environment.\n\nii). $f$ needs to fully use the available $G_c$ or $C$ to make predictions. Otherwise, if there are any parts of $G_c$ or $C$ not leveraged by $f$, including those parts in $f$'s predictions can always improve the predictive power of $f$ about $Y$. In other words, including more information from $C$ would reduce the maximal possible error in any environment.\n\nCombining i) and ii), the minmax objective in Def. 2.5 requires $f$ to identify and fully use the available information about $C$ present in the input graph $G$, which lays the foundation for the induction of Eq. 1 as well as the follow-up CIGA solutions.\n\n**4. Graph's complex properties are identified as key challenges, but it is unclear how the proposed method resolves the graph's complexity (Q2).**\n\nThanks for your insightful comments. As one of the major barriers to OOD generalization on graphs (the other being the lack of environment labels), the graph's complex properties of structure and features raise two specific challenges:\n\ni). Graphs can contain different levels of distribution shifts, e.g., structure-level shifts or node feature-level shifts. Information theory provides useful tools to handle the complexity of graph structure, and it is widely adopted in the literature [20, 7]. The overall rationale is that we can use information-theoretic tools to derive the desired learning objectives, and then use proper tools to implement those objectives [20,7]. More concretely in CIGA, based on Eq. 1, we derive two variants of learning objectives that mainly enforce the informativeness and invariance of the estimated invariant subgraph $\widehat{G}_c$. To implement these objectives, i.e., to estimate the corresponding information-theoretic objectives, we apply the GNN encoders to translate the complex structure-level and node attribute-level information into learned graph representations. Then, we use the learned graph representations to approach the maximization of $I(\widehat{G}_c; Y)$ via a variational characterization [7, 21], and the maximization of $I(\widehat{G}_c;\widetilde{G}_c\|Y)$ via a non-parametric resubstitution entropy estimator [21,22,23]. 
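In practice, this second estimator is realized as a supervised contrastive loss over the learned embeddings. For intuition, here is a minimal sketch (in R; `Z` and `y` are hypothetical graph embeddings and class labels, and this is a simplified stand-in rather than the exact CIGA implementation):

```
# Supervised contrastive loss over embeddings Z (one row per graph) with
# class labels y: a practical proxy for maximizing I(G_c_hat; G_c_tilde | Y).
sup_con_loss <- function(Z, y, tau = 0.5) {
  Z <- Z / sqrt(rowSums(Z^2))            # project embeddings onto the unit sphere
  S <- exp((Z %*% t(Z)) / tau)           # pairwise similarity kernel
  diag(S) <- 0                           # exclude self-pairs
  per_sample <- sapply(seq_along(y), function(i) {
    pos <- setdiff(which(y == y[i]), i)  # same-class positives
    if (length(pos) == 0) return(NA)
    -mean(log(S[i, pos] / sum(S[i, ])))  # pull same-class embeddings together
  })
  mean(per_sample, na.rm = TRUE)
}

# Toy usage with random embeddings (hypothetical):
set.seed(0)
Z <- matrix(rnorm(20 * 4), nrow = 20)
y <- rep(c(1, 2), each = 10)
sup_con_loss(Z, y)
```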
The two approximations lead us to the empirical risks of using the estimated $\\widehat{G}_c$ to predict $Y$, and the contrastive loss. \n\nii). The other challenge is that different levels of shifts on graphs can spuriously correlate with the labels in different modes, i.e., FIIF or PIIF. Our causal analysis allows us to leverage the invariant relationship between $G_c$ and $Y$ to make stable predictions, under both FIIF and PIIF spurious correlation modes, which induces Eq. 1 and hence the followup CIGAv1 and CIGAv2 objectives.\n\n*(To avoid tedious replications in the reply, we kindly refer you to the similar [reply](https://openreview.net/forum?id=A6AFK_JwrIW&noteId=GeZpsENaiNN) to Reviewer zrMs for a more detailed discussion of the two aforementioned points.)*\n", " Besides, as the first work that aims to handle the comprehensive graph distribution shifts with theoretical guarantees, **we aim at a general solution guided by general causal models under minimal prior knowledge. However, our CIGA is compatible and open to combining more knowledge about the graph properties.** Here are two examples:\n\ni). If we assume that the graphs come from the graphon family [25]. Our SCMs ($\\mathcal{G}$-Gen SCM and FIIF SCM) generalize to the graph generative SCM studied in [24], as well as the Erdos-Renyi graphs and Stochastic Block Model graphs [26], which are two classical and widely studied random graph families. In this example, the edge connection patterns controlled by the graphon $C$, e.g., motif appearance frequency or subgraph densities, act as an informative and invariant indicator for the label $Y$. In contrast, the node attributes and the number of nodes (or graph sizes) are controlled by the spurious factor $S$. Consequently, the invariant subgraph $G_c$ in this example can be regarded as the informative edges indicating the underlying patterns (e.g., edges across two communities). CIGA is expected to leverage these informative edges to predict $Y$, hence can generalize across different environments $E$, which converges to the rationales of the previous state-of-the-art solutions [24,27]. \n\nii). In a more realistic example, i.e., DrugOOD [1], it’s usually the case that a small functional group in a molecule will control the interested biochemical property prediction, or the binding affinity of a molecule to a protein. The relationships between these functional groups and the activate level of the interested biochemical property (e.g., molecule solubility), or binding affinity to the interested protein (e.g., the COVID-19 viral receptor ACE2 protein), are invariant to the changes of examination environments (assays) or the scaffolds of the molecules. Therefore, these functional groups can act as the invariant subgraph $G_c$ which is expected to be identified by CIGA, to make stable predictions about the activate level. \n\n\n", " We’d be grateful if you could take the above responses into consideration when making the final evaluation of our work. Please let us know if there are any outstanding questions.\n\n---\n\n[1] Ji et al., DrugOOD: Out-of-Distribution (OOD) Dataset Curator and Benchmark for AI-aided Drug Discovery -- A Focus on Affinity Prediction Problems with Noise Annotations, arXiv 2022.\n\n[2] Xu et al., What can neural networks reason about? 
ICLR 2020.\n\n[3] Xu et al., How neural networks extrapolate: From feedforward to graph neural networks, ICLR 2021.\n\n[4] Ganin et al., Domain-Adversarial Training of Neural Networks, Journal of Machine Learning Research, 2016.\n\n[5] Arjovsky et al., Invariant Risk Minimization, arXiv 2020.\n\n[6] Chang et al., Invariant Rationalization, ICML 2020.\n\n[7] Yu et al., Graph Information Bottleneck for Subgraph Recognition, ICLR 2021.\n\n[8] Wu et al., Discovering Invariant Rationales for Graph Neural Networks, ICLR 2022. \n\n[9] Goodfellow et al., Generative Adversarial Networks, NIPS 2014.\n\n[10] Monti et al., Geometric Matrix Completion with Recurrent Multi-Graph Neural Networks, NIPS 2017.\n\n[11] Dai et al., Learning Transferable Graph Exploration, NeurIPS 2019.\n\n[12] Creager et al., Environment Inference for Invariant Learning, ICML 2021.\n\n[13] Ahuja et al., Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization, NeurIPS 2021.\n\n[14] Bevilacqua et al., Size-Invariant Graph Representations for Graph Classification Extrapolations, ICML 2021.\n\n[15] Miao et al., Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism, ICML 2022.\n\n[16] Sener and Koltun, Multi-Task Learning as Multi-Objective Optimization, NeurIPS 2017.\n\n[17] Chen et al., Pareto Invariant Risk minimization, arXiv 2022.\n\n[18] Peters et al., Causal inference using invariant prediction: identification and confidence intervals, Journal of the Royal Statistical Society, 2016.\n\n[19] Wu et al., Graph Information Bottleneck, NeurIPS 2020.\n\n[20] Alemi et al., Deep Variational Information Bottleneck, ICLR 2017.\n\n[21] Ahmad and Lin. A nonparametric estimation of the entropy for absolutely continuous distributions (corresp.), IEEE Transactions on Information Theory, 1976.\n\n[22] Kandasamy et al., Nonparametric von mises estimators for entropies, divergences and mutual informations, NIPS 2015.\n\n[23] Wang and Isola, Understanding contrastive representation learning through alignment and uniformity on the hypersphere, ICML 2020.\n\n[24] Bevilacqua et al., Size-Invariant Graph Representations for Graph Classification Extrapolations, ICML 2021.\n\n[25] Lovasz and Szegedy, Limits of dense graph sequences, Journal of Combinatorial Theory, 2016.\n\n[26] Snijders and Nowicki, Estimation and prediction for stochastic blockmodels for graphs with latent block structure, Journal of classification, 1997.\n\n[27] Yehudai et al., From local structures to size generalization in graph neural networks, ICML 2021.\n", " Thank you for taking time to carefully review our paper and for your positive feedbacks about our work. Please see our detailed responses to your comments and suggestions below. We reorganize the weakness and questions a bit to better clarify your concerns. \n\n**1. Regarding the experiment tables: some baselines are not in bold fonts (weakness 1); It is hard to recognize the performance gap caused by distribution shifts from the table (questions 1).**\n\nThanks for pointing out the bold typo in our paper, and for your constructive suggestion about the presented tables. We have checked and corrected the bold highlights in all tables in the revised version, where CIGA remains the best OOD method on graphs.\n\nTo show the performance gaps caused by the distribution shifts, we run an \"Oracle\" experiment for each dataset, where we run ERM using the same GNN backbone on the randomly shuffled data. 
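As a minimal sketch of this construction (`train_df` and `test_df` are hypothetical stand-ins for the original splits; the actual experiments operate on graph datasets):

```
# "Oracle" split: pool the original train/test data, shuffle, and re-split,
# so both parts are empirically drawn from the same distribution.
set.seed(1)
pooled <- rbind(train_df, test_df)        # hypothetical data frames
pooled <- pooled[sample(nrow(pooled)), ]  # random shuffle
n_train <- nrow(train_df)
oracle_train <- pooled[seq_len(n_train), ]
oracle_test  <- pooled[-seq_len(n_train), ]
```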
In this way, the training and test data will be empirically made as \"IID\". The \"Oracle\" performances have been updated in the revised version. Here we report some of the results in the table below, where we can observe large performance gaps caused by various graph distribution shifts. In some datasets such as Twitter and Proteins, surprisingly, CIGA can perform on par with the \"Oracle\". Nevertheless, for some more difficult datasets such as the DrugOOD datasets, a large gap remains between CIGA and the \"Oracle\", leaving room for future improvement.\n\n
| | SPMotif-Struc 0.9 | SPMotif-Mixed 0.9 | DrugOOD-Sca | Twitter | Proteins |
|--------------|:-----------------:|:-----------------:|:------------:|:------------:|:-----------:|
| ERM | 49.64 (4.63) | 41.36 (3.29) | 68.85 (0.62) | 60.81 (2.05) | 0.22 (0.09) |
| Prev. SOTA | 57.29 (14.5) | 50.45 (4.90) | 68.92 (0.98) | 63.50 (1.23) | 0.29 (0.00) |
| CIGAv1 | 51.78 (7.29) | 49.01 (9.92) | 69.04 (0.86) | 63.66 (0.84) | 0.40 (0.06) |
| CIGAv2 | 63.41 (7.38) | 54.25 (5.38) | 69.70 (0.27) | 64.45 (1.99) | 0.31 (0.12) |
| Oracle (IID) | 88.70 (0.17) | 88.73 (0.25) | 84.71 (1.60) | 64.21 (1.77) | 0.39 (0.09) |
\n\n**2. Relatedness of graph property and the proposed method/proof (weakness 2 and question 2).**\n\nIndeed, the complexity of graphs brings more challenges than benefits to OOD generalization on graphs. Graph properties are fundamentally involved in the causal modeling of the challenges raised by the graph's complexity, as well as in the induction of the CIGA solutions. We provide more detailed discussions as follows.\n", " **The causal analysis lays the foundation for adopting the invariance principle [1,2] and the induction of CIGA solutions (i.e., Eq. 1, which aims to identify the underlying invariant subgraph for making predictions).** Specifically, we model the complex graph generation process and characterize the potential distribution shifts with three SCMs:\n- The causal relationship among the environment $E$ and the spurious subgraphs $G_s$ (the set of edges and node attributes that are spuriously correlated with $E$) enables us to identify and understand the key challenges in OOD generalization on graphs, e.g., the convoluted distribution shifts at different levels and from different causes.\n- Meanwhile, the causal relationship between the latent causal factor $C$, the label $Y$, and the invariant subgraph $G_c$ (the set of edges and node attributes that are invariant to the changes of $E$) makes it possible to approach OOD generalization on graphs with the invariance principle from causality. \n- More formally, the invariant relationship between the invariant subgraph $G_c$ and the label $Y$ fundamentally motivates us to derive the learning objective Eq. 1. Once we identify the underlying invariant subgraph $G_c$ and let the GNN focus only on $G_c$ to predict $Y$, it is expected to be an invariant GNN (Def. 2.5) that can generalize to OOD graphs under different graph distribution shifts.\n\nWhen solving Eq. 1 without environment labels, **we adopt tools from information theory as a proxy to mitigate the complexity of graphs and to enforce the informativeness and invariance of the extracted subgraphs,** which yields two implementations, CIGAv1 (Eq. 3) and CIGAv2 (Eq. 4). In fact, information theory is also widely adopted in the literature to handle the complexity of graphs during objective induction [3,4]. 
Following the common practice in the literature, once we derive the proper information-theoretic learning objective, we can use various tools to implement it. Similarly and more concretely in CIGA:\n- Through the maximization of the mutual information between the extracted subgraph and the label, i.e., $I(\\widehat{G}_c; Y)$, we can extract the most informative structural and node attributes from the original graph. The extraction is guided via a variational characterization of the mutual information between the subgraph $G_c$ and $Y$ [4,5], which yields a lower bound of $I(\\widehat{G}_c; Y)$ with respect to the negative empirical risk of using $\\widehat{G}_c$ to predict $Y$, i.e., $I(\\widehat{G}_c; Y)\\geq -R(G_c)$. Hence we can maximize the mutual information between $\\widehat{G}_c$ and $Y$ by minimizing the empirical risk of using the estimated $\\widehat{G}_c$ to predict $Y$, where the graph encoder in the GNN model will encode the information of the structures and the node attributes in the learned graph representations that are used to predict $Y$.\n- Besides, to avoid extracting some edges and nodes from the spurious subgraph $G_s$ into the $\\widehat{G}_c$, we simultaneously maximize the mutual information of the extracted subgraphs from the same class, i.e., $I(\\widehat{G}_c;\\widetilde{G}_c \\|Y)$. Intuitively, it enforces the estimated invariant subgraphs from the same class to share high mutual information. However, estimating the mutual information between graphs is difficult [3,4]. We thus adopt the contrastive loss to approximate $I(\\widehat{G}_c;\\widetilde{G}_c \\|Y)$, which can be regarded as a non-parameteric resubstitution entropy estimator via the von Mises-Fisher kernel density [6,7,8]. In the implementation, we also adopt the GNN encoder to encode the information of the structures and the node attributes in the learned graph representations for calculating the losses.\n- Theorem 3.1 (i) proves the equivalence of learning with the above two objectives to identifying the underlying invariant subgraph, while under the assumption that the sizes of $G_c$ is known and fixed. To further mititgate the size constraints, we introduce another objective based on the spurious subgraph $G_s$ and $Y$. If we can simutaneously maximize the mutual information between the estimated spurious subgraph $\\widehat{G}_s=G-\\widehat{G}_c$ and $Y$, i.e., $I(\\widehat{G}_s; Y)$, intuitively it can \"absorb\" the edges and node attributes that are also (spuriously) correlated with $Y$ into the counterpart $\\widehat{G}_s$ of $\\widehat{G}_c$. Theorem 3.2 (ii) proves the usefulness of the incorporation of $I(\\widehat{G}_s; Y)$, and its implementation is also similar to the estimation of $I(\\widehat{G}_c; Y)$ except the additional hinge loss on $R(\\widehat{G}_s)\\geq R(\\widehat{G}_c)$.\n", " \nBesides, as the first work that aims to handle the comprehensive graph distribution shifts with theoretical guarantees, **we start with general causal models as well as its induced solutions** (that is also partially why we use the information-theoretic tools since we aim for a general solution under minimal prior knowledge). **However, CIGA is compatible and open to incorporating more additional knowledge about the graph properties.** For example, if we assume that the graphs come from the graphon family [9]. Our SCMs ($\\mathcal{G}$-Gen SCM and FIIF SCM) are generalized to the graph generative SCM studied in [10]. 
Specifically:\n- We can additionally assume that the causal factor $C$ is the corresponding graphon in the SCM of [10], which controls the generation of the edges between different nodes [9], and determines the label $Y$ [10]. Thus, the edge connection patterns, e.g., motif appearance frequency and subgraph densities, act as an informative and invariant indicator for the label $Y$. In this case, the invariant subgraph $G_c$ can be regarded as the informative edges indicating the underlying patterns (e.g., edges across two communities), which essentially converges to the rationales of the solutions by [10,11]. \n- On the other hand, the environment $E$ and the graphon $C$ further control the generation of the node attributes (including the number of nodes and the attribute values). Therefore, a GNN model is prone to the changes of the environments if it overfits to some spurious patterns about the graph sizes or the attributes, which is consistent with the observations in the literature [10,11,12] as well as our experimental results.\n\nGiven the two aforementioned additional assumptions, our SCMs are generalized to that of [10]. Moreover, following the discussion of [10], our SCMs can also be generalized to the Erdos-Renyi graphs and Stochastic Block Model graphs [13], which are two classical and widely studied random graph families. It also partially explains why CIGA can perform well under various graphs and distribution shifts, and even close the performance gaps caused by the distribution shifts in some circumstances. We provide a more detailed discussion in Appendix C.1 of the revised paper.\n\nPlease let us know if you have any further questions. We’d be grateful if you could take the above responses into consideration when making the final evaluation of our work. \n\n---\n\n[1] Peters et al., Causal inference using invariant prediction: identification and confidence intervals, Journal of the Royal Statistical Society, 2016.\n\n[2] Arjovsky et al., Invariant Risk Minimization, arXiv 2020.\n\n[3] Wu et al., Graph Information Bottleneck, NeurIPS 2020.\n\n[4] Yu et al., Graph Information Bottleneck for Subgraph Recognition, ICLR 2021.\n\n[5] Alemi et al., Deep Variational Information Bottleneck, ICLR 2017.\n\n[6] Ahmad and Lin. A nonparametric estimation of the entropy for absolutely continuous distributions (corresp.), IEEE Transactions on Information Theory, 1976.\n\n[7] Kandasamy et al., Nonparametric von mises estimators for entropies, divergences and mutual informations, NIPS 2015.\n\n[8] Wang and Isola, Understanding contrastive representation learning through alignment and uniformity on the hypersphere, ICML 2020.\n\n[9] Lovasz and Szegedy, Limits of dense graph sequences, Journal of Combinatorial Theory, 2016.\n\n[10] Bevilacqua et al., Size-Invariant Graph Representations for Graph Classification Extrapolations, ICML 2021.\n\n[11] Yehudai et al., From local structures to size generalization in graph neural networks, ICML 2021.\n\n[12] Knyazev et al., Understanding Attention and Generalization in Graph Neural Networks, NeurIPS 2019.\n\n[13] Snijders and Nowicki, Estimation and prediction for stochastic blockmodels for graphs with latent block structure, Journal of classification, 1997.\n", " Thank you for taking the time to review our paper. Please see our detailed responses to your questions and suggestions below.\n\n**1. 
The learning objective assumes the presence of multiple training environments, and thus the proposed method would not work in the single environment setting (weakness 1).**\n\nWhen developing CIGA, we follow the common assumptions of the invariant learning literature, which assumes the existence of multiple training environments [1,2]. Therefore, it's a common issue for all existing invariant learning solutions, including CIGA, that they can't work in the single environment setting.\n\nWe have also conducted preliminary experiments to look into the capabilities of CIGA in single environment OOD generalization, with the hope of providing more guidance for future developments under this circumstance. Specifically, in the experiments, we selected samples that are from the largest assay group in the training data (i.e., the biochemical functionalities of these molecules are tested and reported under the same laboratory setup [3]) as the new training set, while keeping the validation and test data the same. The results are shown in the table below:
| Methods | OOD Performances | Rank |
|:-------:|:----------------:|:-----:|
| ERM | 63.28(2.67) | 5 |
| ASAP | 63.41(0.70) | 4 |
| GIB | 62.72(0.59) | 8 |
| DIR | 62.56(0.79) | 9 |
| IRM | 63.25(1.45) | 6 |
| V-REX | 62.18(1.71) | 10 |
| EIIL | 62.95(1.37) | 7 |
| IB-IRM | 61.95(1.72) | 11 |
| CNC | 63.61(0.96) | 3 |
| CIGAv1 | **63.86(0.57)** | **2** |
| CIGAv2 | **64.31(0.92)** | **1** |

Interestingly, while invariant learning methods (e.g., IRM, V-REX, etc.) fail as expected, CIGA retains its good performance. We hypothesize that enforcing high mutual information between the estimated invariant subgraphs $\widehat{G}_c$ of same-class samples also helps to retain the invariances even under the single training environment setting. That may partially explain why CNC can bring some improvements. \n\nWe believe it is interesting and promising to develop an in-depth understanding of the observed phenomenon, and to explore better solutions for single domain/environment OOD generalization in future work, with the guidance of the invariance principle and causal theory [1,2,3,4].\n\n**2. The relationship between $E_G$ and $E$ is unclear (weakness 2 and Q1). $E_G$ in $\mathcal{G}$-Gen SCM would change the causal diagram of FIIF SCM and PIIF SCM (Q2).**\n\nThanks for pointing out this potentially confusing point. We intended to note the influence of the environment $E$ on the graphs. Nevertheless, we agree that it may confuse readers, so we removed $E_G$ in the revised version to improve clarity; this does not affect the other results. \n\nMore concretely, the removal/existence of $E_G$ won't affect the application of the invariance principle and hence the derivation of our solutions. The working rationale of CIGA relies mainly on the invariant relationship $P(Y \|C)$, which is independent of the changes of the environments $E$ and $E_G$, according to the Independent Causal Mechanism (ICM) assumption [1,3,4]. In other words, $P(Y\|C)$ remains invariant regardless of the removal or existence of $E_G$. Therefore, $P(Y\|G_c)$ remains invariant, and it follows that Eq. 1 still holds. Then, as the follow-up derivations of the CIGA solutions mainly aim to realize Eq. 1, these results won't be affected by the removal of $E_G$.\n\nWe hope our responses could clarify your concerns. 
Please let us know if you have further questions.\n\n---\n[1] Peters et al., Causal inference using invariant prediction: identification and confidence intervals, Journal of the Royal Statistical Society, 2016.\n\n[2] Arjovsky et al., Invariant Risk Minimization, arXiv 2020.\n\n[3] Ji et al., DrugOOD: Out-of-Distribution (OOD) Dataset Curator and Benchmark for AI-aided Drug Discovery -- A Focus on Affinity Prediction Problems with Noise Annotations, arXiv 2022.\n\n[4] Judea Pearl, Causality, Cambridge University Press, 2 edition, 2009.\n\n[5] Peters et al., Elements of Causal Inference: Foundations and Learning Algorithms, The MIT Press, 2017.\n\n", " This paper discusses graph-level distribution shifts on graph neural networks. It propose a causal-based invariant learning objective based on contrastive learning and prove its equivalence/approximation of the designed invariance principles. Various experiments on synthetic and real-world shift demonstrate the effectiveness of CIGA. **Strengths** \n1. This paper poses a great challenge towards existing OOD generalization algorithms on graphs and provide some sort explanation. \n3. The connection between semi-supervised contrastive learning and invariant in Eq.5 is sound.\n\n**Weaknesses** \n1. The results in the experiments are sometimes misleading. For example, in Table 3, when baselines are better, the bond font is still on proposed two methods. \n2. I read through main theorem 3.1 and its proof, most of the induction is around mutual information or information bottleneck, while the property of graph is rarely used. The author claims challenge of graph OOD is not environment label and convoluted causes of shift, while the proposed method seem to not that relevant / benefit from the nature of the graph. 1. What's the accuracy or F1 measure on real-world graphs? It is hard to recognize the performance gap caused by distribution shifts. In other words, the negative effect of distribution shifts are not revealed in current table. \n2. What's the relatedness of graph property and proposed method / proof, maybe I am missing something during reading ( please refer to weakness 2 ) N.A.\n", " This paper studies the OOD generalization problem for graphs. It takes a causal view at the problem and proposes four assumptions to describe the graph generation process. Specifically, three factors C (invariant), S (varying), and E (environment) control the graph generation process. The assumptions include two types of distribution shifts in which 1) graph label Y is fully dependent on C; and 2) Y is spuriously correlated with environment E.\n\nBased on the assumptions, the authors decompose the graph learning problem into finding the an invariant subgraph $\\hat{G}_c$ and minimizing the risks on $\\hat{G}_c$, i.e., Equation (1). The remaining efforts are dedicated to resolve Equation (1)'s intractability. The main idea is to use supervised contrastive learning to maximize the agreement between the invariant part of graphs with the same label .\n **Strengths:**\n\n- OOD generalization for graphs is of high interest for the machine learning community. This paper present a thorough study from the causal view, which should enlighten future researchs in this field.\n- The idea is novel and motivated by theory.\n\n\n\n**Weaknesses:**\n\n- The method seems complex that includes two GNNs and three loss functions. It is unclear about the practical value of this method.\n\n- The presentation can be improved. \n - Section3.2 is hard to parse. 
The reasoning process between the multiple objective functions is poorly organized and hard to evaluate.\n - Some equations can be made clearer. For example, what do the C, C', c, c' in Equation (2) mean? Why is the risk term in Equation (1) replaced by mutual information in Equation (3)? Q1: Since the theoretical results mainly address the connection between the proposed method and Definition 2.5, it is crucial to understand the significance of Definition 2.5. How is the minimax objective in Definition 2.5 relevant to finding the causal factor C in Figure 2?\n\nQ2: In Lines 189-190, the graph's complex properties of structures and features are identified as a key challenge for OOD generalization. It is unclear how the proposed method resolves this challenge.\n none", " The authors consider the problem of out-of-distribution generalization on graphs, where shifts in the distribution can change both the attributes and the structure of the graphs. They propose two causal DAGs describing the shift, and assume the existence of a subgraph which is invariant to the shift and contains most of the information about the label. Then, they present two learning objectives to identify this critical subgraph, which is then used for the label prediction. #### Strengths\n1. The assumption of the presence of an invariant subgraph is interesting and reasonable, and it is in line with previous work.\n2. The empirical evaluation shows the effectiveness of the proposed approach.\n\n#### Weaknesses\n1. The learning objective assumes the presence of multiple training environments, and thus the proposed method would not work in the single environment setting.\n2. The relationship between $E_G$ and $E$ is unclear, since they are not causally related. Also, since $E_G$ is an input of $f_{gen}^G$, and $E_G \subseteq E$, there should be an arrow from $E$ to $G$.\n 1. What is the relationship between $E_G$ and $E$? I can see that $E_G \subseteq E$, but if we replace G in the FIIF SCM with its $\mathcal{G}$-Gen SCM, then: (1) there are no causal relationships between $E_G$ and $E$, and (2) $E_G \perp E$, which is contradictory.\nCan't you simply remove the random variable $E_G$? \n2. From Assumption 2.1 we have $G = f_{gen}^G(G_C, G_S, E_G)$, and since $E_G \subseteq E$, the environment also causes G. This would obviously change the causal diagram, since there should then be an arrow from the environment to G, changing all the causal models. Please clarify.\n\n N/A" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4 ]
[ "S1bqdlssm5", "DmiNtKjTt5y", "d0RtVHMK78F", "4QxPTlCckHq", "2A0iG8sKLK0", "_3N7KSxXut", "d0RtVHMK78F", "d0RtVHMK78F", "Z2qV6MFcwex", "dFr_5epjaPU", "2FKaOONi37x", "nLDNj69gWII", "HpFhRSN-bHX", "_3N7KSxXut", "nLDNj69gWII", "HpFhRSN-bHX", "_3N7KSxXut", "nips_2022_A6AFK_JwrIW", "HpFhRSN-bHX", "HpFhRSN-bHX", "HpFhRSN-bHX", "HpFhRSN-bHX", "HpFhRSN-bHX", "_3N7KSxXut", "_3N7KSxXut", "_3N7KSxXut", "nLDNj69gWII", "nips_2022_A6AFK_JwrIW", "nips_2022_A6AFK_JwrIW", "nips_2022_A6AFK_JwrIW" ]
nips_2022_8ow4YReXH9j
Ultra-marginal Feature Importance
Scientists frequently prioritize learning from data rather than training the best possible model; however, research in machine learning often prioritizes the latter. Marginal contribution feature importance (MCI) was developed to break this trend by providing a useful framework for quantifying the relationships in data in an interpretable fashion. In this work, we aim to improve upon the theoretical properties, performance, and runtime of MCI by introducing ultra-marginal feature importance (UMFI), which uses preprocessing methods from the AI fairness literature to remove dependencies in the feature set prior to measuring predictive power. We show on real and simulated data that UMFI performs better than MCI, especially in the presence of correlated interactions and unrelated features, while partially learning the structure of the causal graph and reducing the exponential runtime of MCI to super-linear.
Reject
This work makes a significant contribution to establishing the theoretical foundations for feature importance. The authors suggest a set of axioms that a feature importance score should have and introduce an algorithm that computes a feature importance score with these required properties. In addition to the theoretical work, a compelling empirical evaluation is conducted showing significant improvement over previous results. After a good discussion between the reviewers and the authors, and improvements to the paper introduced due to this discussion, the result is a good paper that is of great interest to the NeurIPS community. However, the added content also raised some concern about the accuracy of some statements, especially with respect to the blood relation. The main concern is that it is not clear that the algorithm provided has the blood relation property. Moreover, it is not clear that it is possible to fulfil this relation. Here are two scenarios that may be problematic: 1. In the fully observed setting, if X is a confounder of Y and Z while Z is identical to X, then X blocks the backdoor from Y to Z, and therefore, according to the blood relation axiom, the importance of Z should be zero while the importance of X should be positive, since it has a direct causal relation with Y. However, the roles of X and Z are indistinguishable, and therefore it might as well be that Z is the confounder and should have a non-zero importance. 2. In the partially observed setting, if S is an unobserved uniformly distributed integer in the range 1..8, Y is the sign of S, and X is an indicator of S being greater than 4, then according to the blood relation axiom, since there is a backdoor between Y and X when S is unobserved, the importance of X should be non-zero. However, this setting is indistinguishable from the setting in which X and Y are uncorrelated Bernoulli variables, in which case the importance of X should be zero. Hence, it looks as if the blood relation requirement might be too strong. A reviewer suggested that this problem might be eliminated by saying that the importance of a feature is 0 if there exists a graphical model in which the feature is not in the same connected component as the target (note that this is a graphical model and not a causal model). However, this corollary should be theoretically analyzed. Some additional comments that emerged in the revised paper: 1. The axioms do not define a unique solution. Indeed, if a function Imp has all three required properties (axioms), then multiplying it by any positive constant would generate another valid feature importance score. It would be nice to add another requirement that forces a unique solution, as with the Shapley value or MCI. 2. The proof in the appendix shows that UMFI has the three required properties only when certain assumptions hold on the distribution. However, in the body of the paper, these limitations are not mentioned. 3. In line 211 it is stated that the proofs are presented in Section 3; however, the proofs are presented only in the appendix.
test
[ "3ot3m0hNLrn", "ewha3HILv8", "ntW74zE5Qe", "XK_W3Rlwho5", "VtIYzDrjPs", "WaZ9SL-ykTD", "e38xCaIre9F", "d7R91_ghF9j", "9RQz7cHKntc", "_GnTpbI5Aw8", "NwuyOR4Mu_y", "9wN6OclgVuZ", "GWYg-6tCCK6", "DVgoiRwG8kW", "2V13JwMbB7", "rNHhmDsyDQ", "U4Rf1jgFoRA", "_qWyE-gOGk", "BGO8ffog3jG", "OoWGp8aDyUP", "qQqno9Q_V28", "ffXFHK96AyV" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Sounds good, I think this resolves all my concerns. And yes I think discussing these points and making the changes from the previous posts will improve the paper. Best of luck.", " **Reply #6 and #13.** \n\nAh, we see what you are saying now. Thank you for catching this. We will ensure that this wording is changed for the camera-ready version if we get accepted.\n\n**Reply #10**\n\nThe code that we shared does not show in-sample $R^2$. See page 22 of the documentation of the ranger package. The “r.squared” output is based on out-of-bag data. For a detailed explanation on what out-of-bag $R^2$ is measuring, see Section 3.1 of Breiman, 2001. As Breiman notes, the out-of-bag error is just as accurate as a set aside test set. Indeed he states “Therefore, using the out-of-bag error estimate removes the need for a set aside test set.”. We note that if one looks at the procedure for out-of-bag error estimation, the out-of-bag $R^2$ (which we use throughout our paper) is similar to cross validation. Therefore, it is certainly an error estimate on out-of-sample data. Further we can look to the following code example to show that the “r.squared” parameter is not the same as the in-sample $R^2$ (it is a bit stochastic, so run it a few times):\n\n```\nlibrary(ranger)\nnobs=1000\n#with extra noise feature\ndat<-data.frame(x=rnorm(nobs),x2=rnorm(nobs))\ndat$y=dat$x+rnorm(nobs)\nmod<-ranger(y~.,data=dat)\nmod$r.squared # about 0.44\ninsamp_preds<-predict(mod,dat)$predictions\ninsamp_R2<-1-sum((insamp_preds-dat$y)^2)/sum((dat$y-mean(dat$y))^2)\ninsamp_R2 # about 0.87\n\n#with only one feature\nmod<-ranger(y~x,data=dat)\nmod$r.squared #about 0.35\ninsamp_preds<-predict(mod,dat)$predictions\ninsamp_R2<-1-sum((insamp_preds-dat$y)^2)/sum((dat$y-mean(dat$y))^2)\ninsamp_R2 # about 0.86\n```\nWe must clarify one point. The collider example and the property where random noise can increase out-of-sample accuracy are separate issues, but they lead to the same overarching issue (MCI overestimates the importance of features unrelated to the response). The out-of-sample accuracy issue only happens in practice for many machine learning algorithms when the size of the feature set is very small (see code above), but as you pointed out, if we look to theory (i.e., use mutual information to calculate $\\nu$), the problem disappears. You correctly attributed this discrepancy to the training procedure, but this is still a major issue, as we have seen this issue across all popular statistical learning algorithms (random forests, extremely randomized trees, and xgboost), even when we use arbitrary hyperparameters and out-of-sample data to estimate predictive power. On the other hand, the collider issue exists in both theory (see Supplement B and Harel et al., 2022) and in practice (see our paper, Section 4.1.4).\n\n**Modified UMFI.**\n\nThank you for raising your score, and more importantly, we thank you for having such an in-depth conversation with us. Your suggestions have certainly made our paper better.\n\n**References**\n\nhttps://cran.r-project.org/web/packages/ranger/ranger.pdf\n\nBreiman, L. (2001). Random forests. Machine learning, 45(1), 5-32.\n", " **Reply #6 and #13.** I'm not claiming that SAGE/SPVIM will necessarily achieve better performance than MCI/UMFI, but it's inaccurate and unfair to the authors to claim that there are no other methods designed to explain importance in the true data distribution. 
With SPVIM for example, that's precisely what it was designed to do.\n\n**Reply #10.** Thanks for providing a speedy example, but the code you've shared is based on in-sample $R^2$ so I'm not sure it addresses what we've been talking about. On the other hand, I see what you're saying about the collider scenario - that seems like a valid example of an unimportant feature that receives non-zero importance under MCI. It's somewhat up for debate whether $E$ is truly uninformative though, because it's not independent from $Y$ under arbitrary conditioning, but I can see why you would want to assign it zero importance. Still, your argument for why an independent random noise feature should improve out-of-sample performance doesn't sound right, and the example provided doesn't support it. $I(Y; S, x) = I(Y; S)$ if $x$ is independent random noise, so any difference you observe in practice may be an artifact of the training procedure.\n\n**Modified UMFI.** Sounds good, I'm going to raise my score then. I still think the issue above (Reply #10) is a concerning aspect of the experiments. It's slightly mitigated by the fact that MCI observed the same issue in the original paper, but I would ask the authors to think about this more carefully to avoid writing something misleading in the paper.", " **Reply #6 and #13** \n\nSAGE, which seems fairly similar to SPVIM, is already compared to MCI in the original MCI paper and they show that MCI outperforms SAGE in several experiments. SAGE and SPVIM do not follow the elimination axiom, so adding a feature to the feature set could decrease the importance of other features, which does not make sense in many contexts (Figure 1 of the MCI paper).\n\n**Reply #10**\n\nIf you have access to R you can run the following code:\n\n```\nlibrary(ranger)\nnobs=1000\n#with extra noise feature\ndat<-data.frame(x=rnorm(nobs),x2=rnorm(nobs))\ndat$y=dat$x+rnorm(nobs)\nmod<-ranger(y~.,data=dat)\nmod$r.squared #about 0.44\n\n#with only one feature\nmod<-ranger(y~x,data=dat)\nmod$r.squared # about 0.35\n```\n\n\nWe are not doing any special hyperparameter tricks here. We are just using default hyperparameters as picking hyperparameters for each of the 1000s of models is fairly impractical. We showed in our Supplemental experiments that this can also happen with extremely randomized trees. We agree that we need to do a better job of disentangling the downsides of real-world estimates of MCI vs theoretical idealized estimates of MCI. In both cases (real-world and theoretically) MCI can give importance to features that are completely unrelated to the response. We can see this in the real-world from the example code provided above and in several of our experiments (Subsection 4.1.3 and 4.2). We see this theoretically in the downsides of the marginal contribution axiom when faced with data generated from the causal graph $Y \\gets S \\to G \\gets E$ as pointed out by Harel et al., 2022 and discussed in Supplement B.\n\n**Reply #16** \n\nYou are correct. \n\n**About the modified UMFI definition and theory.**\n\nWe are happy to reverse the order of definitions 1 and 2 in the revised text, which will be posted shortly. You are correct in saying that none of the procedures for UMFI have changed. We have only altered the mathematical characterization of UMFI by removing the maximization connection with MCI, and have updated the theoretical justifications surrounding UMFI. 
Because none of the procedures for UMFI have changed, none of the experimental results change, though we do add an additional experiment to emphasize the use of the blood relation axiom (Subsection 4.1.4).\n", " **Reply #6 and #13.** Apologies for making such a fuss about this, it's ultimately not a crucial part of the paper and just helps explain UMFI in the context of prior work. The resolution you've described sounds good to me, I suspect it will make the categorization of methods a bit easier to follow. (For example, I don't know how to make sense of permutation tests being exactly in the middle of \"marginal\" and \"conditional\" methods under the previous categorization.) \n\nOne more nit I have about related work is that on lines 65-67, I don't see what makes SAGE or SPVIM less well-suited to explaining data relationships than MFI. Both papers talk explicitly about explaining real relationships in the data distribution. Perhaps SAGE suffers from its use of the marginal distribution in its approximation algorithm (although one could use a better conditional distribution estimate), but SPVIM does not have this issue (as it actually retrains the model with each feature subset). \n\n**Reply #10.** Thanks for explaining this, but this still doesn't sound right to me. A model trained on one feature does not seem capable of overfitting, this is much more likely to occur with high-dimensional data. (Note that this is why tree-based models use column subsampling as a means of regularization.) So I don't think I follow the argument for why a random noise feature $x$ should improve a model's out-of-sample performance, and I suspect that if you're observing any improvement in practice it could be due to your choice of hyperparameters for the original model. Furthermore, I think it's important to disentangle the real-world approximation procedure the theoretical version of the method based on mutual information: if we can calculate the true mutual information, there is surely no benefit to including a random noise feature in the model.\n\n**Reply #16.** Like you said, in the limit $var(\\epsilon) \\to \\infty$, it seems like we would have $MCI(1) = max_S v(S \\cup 1) - v(S) = v(1) - v(\\emptyset) = v(2, 3) - v(3) = max_S v(S \\cup 2) - v(S) = MCI(2)$. This means that at least MCI yields an undesirable outcome in this case. I guess I thought UMFI shared the flaw because I was thinking of the previous version of UMFI where we maximized over any preprocessing of the features. In this case, I believe the preprocessing that would have yielded the largest impact is $S_2 = x_3$ with $v(2, 3) - v(3)$. So it seems like the modified UMFI definition is a key factor here.\n\n**About the modified UMFI definition and theory.** The revisions made since the original reviews are substantial, and I haven't been able to revisit the revised paper with the same attention as the original version. The modifications to UMFI in section 3 do seem reasonable, but my conviction isn't 100%. (And based on an initial reading, it would be helpful to present definitions 1 and 2 in reverse order.) It seems like the main procedure has not changed, just the mathematical/theoretical characterization of the algorithm. Can the authors confirm whether this is correct?", " Thank you for the suggestion. We will add justifications for the other limitations in the introduction and upload the revised paper shortly. 
To be abundantly clear, we will explain the shortcomings of MCI in this comment as well.\n\n**Limitation of MCI #2: Second, although it can handle complex feature interactions and data with correlated features, MCI underestimates the importance of correlated features that form interaction effects.**\n\nWe are the first to point out this shortcoming of MCI, so we will explain this issue with a simple example. MCI underestimates correlated features that form interaction effects because if a feature $x_2$ is highly correlated with the feature of interest $x_1$, then it is unlikely that the correlated feature will appear in the maximizing subset for MCI when calculating the importance of $x_1$, even if $Y$ contains an interaction effect with both $x_1$ and $x_2$. This is because the additional predictive power offered by $x_1$ on top of a subset $S$ would be diminished by the presence of $x_2 \in S$. This limitation is further supported by the experiment in Subsection 4.1.2.\n\n**Limitation of MCI #3: Third, MCI can give non-zero importance to features that are completely unrelated to the response variable.**\n\n\nFirst, we note that evidence for MCI giving non-zero importance to features that are completely unrelated to the response is provided on page 9 of the Supplement of the original MCI paper (Catav et al., 2021). In Figure 3, MCI is shown to give non-zero importance to all 50 genes even though 40 of the genes are randomly selected from the genome, so it is extremely unlikely that all 50 of these genes should be important. In practice, this happens because many ML models poorly estimate dependence when given a small feature set due to overfitting. This can result in $\nu(S)<\nu(S,x)$ when $|S|=1$ even if $x$ is a randomized feature. This issue is further justified in theory by issues arising from the marginal contribution axiom, as discussed in Supplement B. Indeed, MCI may give importance to completely unrelated features, as pointed out by the collider example $Y \gets S \to G \gets E$ in Harel et al. (2022). $E$ has no relationship with $Y$; however, if $F=\{G,E\}$, then the ablation score $A_\nu(E)=\nu(G,E)-\nu(G)>0$, since $E$ can be used to denoise $G$ and gain information about $S$ and $Y$. Then, by the marginal contribution axiom, which is one of the three axioms that MCI is based on, $E$ is given non-zero importance by MCI. This limitation is further supported by the experiments in Subsections 4.1.3, 4.1.4, and 4.2.\n\n**References**\n\nHarel, N., Gilad-Bachrach, R., & Obolski, U. (2022). Inherent Inconsistencies of Feature Importance. arXiv preprint arXiv:2206.08204.\n\nCatav, A., Fu, B., Zoabi, Y., Meilik, A. L. W., Shomron, N., Ernst, J., ... & Gilad-Bachrach, R. (2021, July). Marginal contribution feature importance - an axiomatic approach for explaining data. In International Conference on Machine Learning (pp. 1324-1335). PMLR.\n", " **Transformations**\n\nWhen evaluating the importance of feature $x_i$, we seek to transform the feature set to be independent of $x_i$ while minimizing distortion. Different features of interest $x$ will necessitate different transformations $S^F_x$. We have no reason to believe that these different transformations will lead to inconsistent importance scores. In fact, we have demonstrated theoretically and experimentally that UMFI consistently exhibits desirable properties.
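As a minimal illustration of one such transformation, the linear regression variant can be sketched as follows (names are ours for illustration; note that residuals are only guaranteed to be uncorrelated with $x_i$ in general, with full independence holding under the Gaussian conditions of Supplement C):\n\n```\n# Illustrative linear-regression preprocessing (assumes numeric features)\nremove_dependence_lr <- function(F_mat, x) {\n  S <- F_mat\n  for (j in seq_len(ncol(F_mat))) {\n    S[, j] <- resid(lm(F_mat[, j] ~ x)) # residual is uncorrelated with x\n  }\n  S\n}\n```\n\n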
Transformations are discussed in Lines 118-130, and implementation details for optimal transport and linear regression are presented in Supplement E. Line 2 of Algorithm 1 details the need for a transformation, but we do not assert a specific method in this line of the algorithm because UMFI is just a framework. There are many different algorithms one can use to transform the feature set, so we leave this part open to the user.\n\n**Blood relation axiom** \n\nGiven the causal graph $Y \gets X_1 \to X_2$, $Y$ is blood related to $X_2$ because, when viewed as a family tree, $Y$ and $X_2$ are siblings as they share a parent $X_1$. We argue that $X_2$ must be given non-zero importance for several reasons. First, as you correctly point out, $X_1$ may not be measured, so in this case, $X_2$ should be assigned importance. Second, in genome-wide association studies, scientists want a list of all genes associated with some disease. Indeed, two variables are statistically associated iff they are blood related (see Williams et al., 2018; Greenland et al., 1999). Third, in Earth sciences, we often rely on proxy variables that do not directly cause the response, but that are blood related to the response. For example, in hydrology, we know that low flow characteristics are directly caused by snow fraction. However, snow fraction is not reliably available globally, so snow persistence (which is caused by snow fraction) can be used as a valid proxy because it can be easily measured from satellite observations. If one were to use a dataset with both features, it would be misleading to conclude that snow persistence is not important, as this may lead researchers not to consider snow persistence as a valid proxy in future work. Finally, even if one believes that $X_2$ should have zero importance in the graph $Y \gets X_1 \to X_2$, this graph is Markov equivalent to the causal graph $Y \gets X_1 \gets X_2$, so when we have access to the data, but not the causal graph, $X_2$ must be found to be important in an explanatory setting (see Section 7 of Gromping, 2009).\n\n**Collider** \n\nWe did not say that the importance of $G$ should be zero. We said that $E$ should have zero importance given the causal graph $Y \gets S \to G \gets E$. Indeed, $G$ is blood related to $Y$ through their common parent $S$. So by the blood relation axiom, $G$ should be given positive importance, but $E$ is not blood related to $Y$ and should be given zero importance. $G$ inherently contains information about $Y$, but this information is obscured by noise from $E$. Therefore, although $E$ can be used to denoise $G$ and predict $Y$ better, only $G$ should be given importance when explaining the data, and indeed, only $G$ is blood related to $Y$. We have revised this passage of the Supplement to clarify our reasoning.\n\nWe believe that the causal discussions add significant value to the paper and that UMFI obeying the blood relation axiom under many circumstances, both theoretically and experimentally, is a beautiful and impactful result. We include all relevant assumptions for discussing causality in Supplement C, and in the revised text, we refer the reader to additional papers for more information about relevant causal graph concepts.\n\n**Variability**\n\nThe variability shown in the experiments comes from the stochasticity of ML methods. Additional variability appears due to subsampling the data at each iteration.
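Concretely, the aggregation can be sketched as follows, with umfi_once() standing in for one pass of Algorithm 1 on a random subsample (an assumed helper, not part of the released code):\n\n```\n# Repeat the estimate over refits/subsamples and aggregate (sketch)\numfi_median <- function(dat, feature, n_iter = 100) {\n  scores <- replicate(n_iter, {\n    sub <- dat[sample(nrow(dat), floor(0.8 * nrow(dat))), ]\n    umfi_once(sub, feature)\n  })\n  median(scores)\n}\n```\n\n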
We use the median because UMFI can give exactly zero importance, and taking the median of 200 iterations where 190 of them are 0 and 10 of them are 0.001 produces clear results.\n\nWe agree that the usage of multiple iterations should be emphasized in Section 3, and we have revised the text accordingly. We did not revise this in the algorithm since UMFI can also be computed using methods such as HSIC (see reviewer qdSP), which could result in little-to-no variability. \n\nIn Figure 3, the importance of each feature in the current dataset is calculated once, and we have revised the caption to state this. The time for transformations is included in this experiment, as stated in lines 253-261. \n\n**References**\n\nGreenland, S., Pearl, J., & Robins, J. M. (1999). Causal diagrams for epidemiologic research. Epidemiology, 37-48.\n\nGrömping, U. (2009). Variable importance assessment in regression: linear regression versus random forest. The American Statistician, 63(4), 308-319.\n\nWilliams, T. C., et al. (2018). Directed acyclic graphs: a tool for causal studies in paediatrics. Pediatric research, 84(4), 487-493.\n\n\n", " This is a question to the authors.\n\nIn the introduction (lines 50-54), three limitations of MCI are presented. While the first one is provided with a supporting argument (the time complexity is due to the need to retrain the model), the other limitations are provided without such a supporting argument. Can you please add the justification for these statements? This can be a reference to another paper that already demonstrated it, or otherwise an example or a mathematical proof.\n", " Thanks for answering my queries and revising the paper. \n\nI have some follow-up questions (discussions).\n\n“Information subsets are introduced in order to allow for independent representations of data, which are transformations of the feature set, and therefore not generally part of the raw feature set.”\n\nTransformation is very important in UMFI. My question is whether it is done for each $X_i$? If so, will the different transformations for different features lead to inconsistency in the feature importance estimates of different features? \n\nTransformation should be explained briefly in the main text and also be included in the algorithm, so readers can understand the complete process of UMFI. \n\n\n\n“This motivated our introduction of the blood relation axiom in Section 2, which stipulates that if data is generated from a causal graph, then a feature importance method should give non-zero importance to a feature f if and only if f is blood related to Y in the causal graph.”\n\nThe authors’ blood relation axiom is likely incorrect. “Two vertices in a causal graph are said to be blood related if … or if there is a backdoor path between them via a common ancestor.” For example, in the causal graph $X_1 \to X_2$ and $X_1 \to Y$, $X_2$ is blood related to $Y$. Note that there is not a causal relationship between $X_2$ and $Y$, since there is no edge between them. We usually say that the relationship between them is spurious. I do not believe that such a relationship should be called a blood relationship. I understand that the authors may argue that $X_2$ is necessary when $X_1$ is unmeasured. However, this is not clear in the axiom. \n\nAnother note: I do not agree with the authors' explanation of the collider example of Harel et al. [25]. In the causal graph, $Y \leftarrow S \to G \leftarrow E$ where $S$ is unmeasured (Lines 83-90 in the Appendix).
The authors think that the feature importance of $G$ on $Y$ should be zero since $G$ has no causal relationship with $Y$. Here, the role of $E$ is the same as the role of $X_2$ in the above example, since neither has a causal relationship with $Y$; both are proxies for unmeasured causes of $Y$, i.e., $X_1$ and $S$. So, the feature importance of $G$ on $Y$ should not be zero. If $G$'s feature importance is zero, which variable explains $Y$ in the graph? \n\nI suggest that the authors remove the causal discussions, since causal definitions need quite a few assumptions and basic concepts, for example, $d$-separation and backdoor paths. Without the assumptions and basic concepts, discussions can be confusing. \n\n\n\n“in the real data study, presented in subsection 4.2 and Supplement G.2, we calculate the feature importance 200-5000 times. Indeed, we would not recommend that anyone put great trust in a single estimate of UMFI.”\n\nIt will be helpful to discuss the sources of the variability. My question is: what randomisation mechanism ensures that the median feature importance approximates the optimal estimate? \n\nThe authors do not recommend a single estimate of UMFI because of its higher variability than MCI, and hence the multiple iterations should be included in the algorithm to make this clear. \n\nIn Figure 3 (computational time), what is the number of iterations being considered in the time calculation of UMFI? Has the time for transformations been included?\n", " **Reply #6 and #13**\n\nAuthors: Thank you for following up with us on this topic and referring us to some interesting papers. To answer your question about how we would classify standard permutation importance, please refer to Subsection G.1.3 in the Supplement, where we note that permutation importance is exactly in the middle of true-to-model and true-to-data methods with regard to its treatment of correlated features. This is because when two features are duplicates and equal to the response (let's say $x_1=x_2=Y$), then the squared correlation, MCI, and UMFI all give full importance to $x_1$ and $x_2$, permutation importance gives half of the importance to $x_1$ and half to $x_2$ (because $x_1$ will appear in approximately half the model and $x_2$ will be in the other half), and conditional permutation importance will give zero importance to both $x_1$ and $x_2$.\n\nWe recognize the reviewer's perspective of marginal vs. conditional in the distributional sense. **We have decided to relabel this division in the paper as true-to-data vs true-to-model** in order to completely disambiguate it from the marginal vs. conditional distributional divide as understood in other papers.\n\n**Reply #10**\n\nAuthors: We agree that under most circumstances, out-of-sample data should be used to compute $\nu$. Indeed, in all of our experiments, we use out-of-sample data to compute $\nu$ (the out-of-bag error comes from out-of-sample data). Using out-of-sample data is a very important step if $\nu$ is assessed using machine learning models (e.g. random forests or extremely randomized trees), as the reviewer correctly points out that training accuracy can be heavily biased. However, even while using out-of-sample data, we may observe circumstances where $\nu(S,x)>\nu(S)$ when $x$ is random. If $S$ only contains one feature, there could be huge amounts of overfitting if the trained model is large (as is the case in random forests, xgboost, extremely randomized trees, etc.), leading to $\nu(S)$ being very small.
Even if $x$ is completely random, it could reduce overfitting, leading to a bit of an increase in $\\nu(S,x)$ compared to $\\nu(S)$ when it is calculated using out-of-sample data. We have seen this happen in many experiments. We also note that out-of-sample data is not always needed. As reviewer qdSP pointed out, $\\nu$ could be calculated with HSIC, which may not need out-of-sample data since no model is fit to the data, so overfitting is impossible. As a second example, suppose we could exactly calculate the mutual information $I(Y;S,x)$ and $I(Y;S)$, then again, no out-of-sample data would be needed. \n\n**Reply #16**\n\nAuthors: We agree that it would be undesirable for $x_1$ and $x_2$ to be assigned equal value in the example given by the reviewer, but we disagree that MCI and UMFI would in fact assign $x_1$ and $x_2$ equal value. $x_1=y$, so $x_1$ would have maximal importance, given by $\\nu(x_1)$, for both MCI and UMFI. Since $x_2=y+\\epsilon$ and $x_3=y-\\epsilon$, if we have both in a model, we would also predict $y$ perfectly, so $\\nu(x_2, x_3)= \\nu(x_1)$. As the variance of $\\epsilon$ goes to infinity, we observe that $MCI(x_2) \\to MCI(x_1)$, since $MCI(x_2)= \\nu(x_2, x_3) - \\nu(x_3) \\to \\nu(x_1) = MCI(x_1)$ since $\\nu(x_3) \\to 0$ as $var(\\epsilon) \\to \\infty$. So within MCI, the importance of $x_1$ is larger than the importance of $x_2$, but this difference can be made arbitrarily small. UMFI avoids this problem entirely because we preprocess the data by removing dependencies on the feature of interest. For instance, when removing dependencies on $x_2$ from $x_3$, the transformed variable $S^F_2$ would still have significant correlation with $y$, and hence, $U_\\nu(x_2)=\\nu(x_2,S^F_2)-\\nu(S^F_2) < \\nu(x_1)=U_\\nu(x_1)$. After running simulated experiments following your example, we observed that UMFI gives large importance to $x_1$ and small importance to $x_2$ across different scales of variances for $\\epsilon$. One way of seeing this is that since $\\epsilon$ is random, it is not blood related to $y$, and hence has zero UMFI importance. Then, as the variance of $\\epsilon$ increases, $x_2$ and $x_3$ become more and more similar to $\\epsilon$, which reduces their respective UMFI scores. \n", " Thanks to the reviewers for their detailed response. I'm still examining the updates made on the theoretical side, but in the meantime there are a couple simpler topics to discuss. I'll use the authors' numbering of topics so it's clear what we're talking about.\n\n**Reply #6.** I see, I'm not as familiar with the terminology from Gromping and Strobl. In the two examples you mentioned, squared correlations and conditional permutation tests, I see that there's a significant difference between the two approaches. As a clarifying question, I'd ask where a (standard) permutation test falls on this spectrum - is it \"marginal\" or \"conditional?\" In the sense of how they deal with held-out features, I'm tempted to say a conditional permutation test is conditional whereas a standard permutation test is marginal, because one literally samples from the conditional distribution while the other samples from the marginal distribution. But I'm not sure what your categorization would say. When methods can vary in multiple dimensions (how they handle held-out features, and whether they consider a feature's effect in isolation or with all other features considered), I'm not sure the differences are adequately described by a single spectrum spanning marginal and conditional methods. 
\n\nI suppose the terminology I'm advocating for is best reflected in [1], where various feature importance methods are categorized across 3 axes: how they handle held-out features, what model behavior they focus on (loss vs. single prediction), and how they summarize each feature's contribution (e.g., deleting a single feature). Much of today's feature importance literature seems to be concerned with that first choice, where the key difference between methods is often whether held-out features are sampled from their conditional or marginal distribution (I can provide examples if helpful). This seems like a more concrete definition of conditional vs. marginal methods, but I now see that this isn't what the authors meant when alluding to the \"conditional vs. marginal divide.\" I'm not sure what the resolution is, but after the authors response I still find that the framing of UMFI/MCI as marginal methods isn't very helpful and could at the very least be explained better.\n\n[1] Covert et al., \"Explaining by removing: a unified framework for model explanation\" (2021)\n\n**Reply #10.** I think I follow the authors' argument, but this prompts a follow-up question. For a random noise feature $x$, you could *maybe* observe $\\nu(S, x) > \\nu(S)$ for a very small dataset and when only examining in-sample (training) data, but the effect should disappear when evaluating the model on out-of-sample data. My follow-up is, were the experiments in the paper conducted using in-sample data? This seems problematic because training accuracy is heavily biased, and the large importance values for irrelevant features under MCI might vanish if the experiments were performed with out-of-sample data. \n\n**Reply #13.** Apologies, the authors are right about the paper not using the term marginal. There's been an unfortunate rebranding of sampling from the marginal distribution as an \"interventional\" approach, even though it's not causal and there are other methods that are actually are (see [2] based on the underlying causal graph in the data). [3] is a more recent paper that replaces the \"interventional\" term with \"marginal.\" Anyway, what the authors call \"interventional\" in Chen et al. is in fact sampling held-out features from their marginal distribution, like how standard permutation tests do. So the trade-offs here are in fact between what I understand as \"marginal\" and \"conditional\" methods, i.e., those that sample held-out features from their marginal or conditional distributions.\n\n[2] Heskes et al., \"Causal Shapley values: exploiting causal knowledge to explain individual predictions of complex models\" (2020)\n\n[3] Chen et al., \"Algorithms to estimate Shapley value feature attributions\" (2022)\n\n**Reply #16.** The issue in the example I described is that $x_1$ is informative on its own and therefore (arguably) more important than $x_2$, which requires $x_3$ to be informative. For example, if we have a response variable $y$, then a noiseless feature $x_1 = y$ is more useful than a noisy feature $x_2 = y + \\epsilon$ for $\\epsilon \\sim N(0, 100)$. However, $x_2$ yields a similar accuracy improvement to $x_1$ when it is introduced with $x_3 = y - \\epsilon$ already present, so MCI/UMFI will assign $x_1$ and $x_2$ equal value. What do the authors think, do they agree that this seems undesirable? ", " Reply #1: We thank the reviewer for pointing out some weaknesses in the paper. 
We agree with many of the points that the reviewer made, and provide a revised version of the text that improves upon these weaknesses.\n\n**Reviewer: The main ingredients were already all in the original paper introducing marginal feature importance. From this perspective the novelty of the work is not dramatic.**\n\nReply #2: Due to finding more examples where MCI does not accurately describe data, but UMFI does, we move further away from the original MCI method in the revised version of the text. In particular, we explain that MCI’s marginal contribution axiom is at odds with providing appropriate importance scores in some causal settings (See general response to all reviewers), and this is explored experimentally in subsection 4.1.4. With these clarifications, the addition of our newly proposed axioms, and corresponding proofs provided in Supplement C, we believe that our paper presents novel insights about explaining relationships in data using feature importance.\n\n**Reviewer: There is a lack of intuition in introducing the ultra-marginal feature importance criterion. It would be useful to discuss the role of g and its use in dependency removal when introducing it, possibly providing an illustrative example. Overall, the methodological section is very shallow and should be expanded to better highlight the rationale and relevance of the contribution.**\n\nReply #3: We agree that the methodological section was lacking and have significantly altered it in the new revision. This includes an explicit definition of optimal preprocessings (achieved by a function $g(F)$). Illustrative examples of such functions $g$ used for dependency removal can be found in Supplement E. Further, we improve the theoretical motivations behind UMFI with axioms in Section 2 and proofs of some desired properties in Supplement C.\n\n**Reviewer: I suggest to better contextualize the related work or move it to the end of the paper, as it's unclear why certain topics are being discussed (e.g. orthogonal predictors).**\n\nReply #4: We agree that the previous formulation of the related work was slightly confusing. In the updated paper, we combine the MCI and related works section and make it more clear how orthogonal predictors relate to our work. In essence, we aim to find a representation of the data that is independent of some other feature. Orthogonality is a weaker version of independence, thus the two are related. We add this explanation to make the link more clear (Lines 69-74).\n\n**Reviewer: In the proof of Theorem 3.1, you seem to assume that the maximizer of eq. 3 is unique. But what if $\\tilde{S}^\\*$ is also a maximizer of 3? I guess for your scope it is enough that $S^\\*$ is one of the maximizers, but this should be clarified for the sake of soundness.**\n\nReply #5: We agree that it is important to clearly explain the non-uniqueness of $S^\\*$, and we believe we did a poor job of explaining this. In our revised version of the text, we provide a rigorous definition for an optimal preprocessing $S^\\*$ (now $S^F_x$) in Definition 1, and we specify its non-uniqueness in multiple parts of the paper (Lines 120-132).\n", " Reply #1: We thank the reviewer for their questions and suggestions. Overall, it seems that the main issues pointed out by this reviewer come from the theoretical framework presented in Section 3 of the original submission . 
To ensure that the theory is equally as strong as the experiments or even stronger, we have rewritten Sections 2 and 3 to make our framework more rigorous and substantive.\n\n**Reviewer: The authors do not justify why an information subset is necessary.**\n\nReply #2: Information subsets are introduced in order to allow for independent representations of data, which are transformations of the feature set, and therefore not generally part of the raw feature set. These concepts are tied together explicitly in Section 3 of the revision. The integration of independent representations of data into our method enables UMFI to have some desirable properties, as proved in Supplement C, as well as better performance, as shown in the experiments.\n\n**Reviewer: If there is no such an $S^\\*$, how is the importance of f evaluated? This is real when all features are correlated.**\n\nReply #3: In the setting where all features are perfectly correlated, we note that $S^\\*$ would be a constant predictor bearing no information about $Y$, and hence the importance of $x$ would be given by $\\nu(x)$. It is difficult to say if the optimal preprocessing will always exist, however, we prove in the Gaussian case that it exists via linear regression (see Theorem C.1 in the Supplement). Further, UMFI is always approximated in practice, so even if an optimal preprocessing does not exist, we approximate something close to it. In our experimental results, we have shown that UMFI produces very good results even though the preprocessings were not optimal.\n\n**Reviewer: I need to be convinced to use the largest information subset $S^\\*$...**\n\nReply #4: We do not remove the effect of $S^\\*$ on $Y$ when estimating the effect of feature $f$ on $Y$. As shown in Definition 2, we effectively measure the difference in prediction power towards $Y$ of $f$ on top of $S^\\*$. We strongly agree that feature importance should be related to the underlying causal graph as well. This motivated our introduction of the blood relation axiom in Section 2, which stipulates that if data is generated from a causal graph, then a feature importance method should give non-zero importance to a feature $f$ if and only if $f$ is blood related to $Y$ in the causal graph. We note that a feature $x$ being blood related to $Y$ is equivalent to $x$ being statistically associated with $Y$ (Williams et al. 2018). The formulation of UMFI with $S^\\*$ enables it to satisfy the blood relation axiom in various settings (see Supplement C and Subsection 4.1.4 in the revision). The other tested metrics, which include MCI, ablation, permutation importance, and conditional permutation importance, all fail in this respect (see Supplement G.1.4 and Section 4.1.4).\n\n**Reviewer: By reading the algorithm, the importance of a feature is related to a model…**\n\nReply #5: No, the goal of UMFI is to explain hidden relationships in data. We have made several revisions to make this more clear. First, as pointed out by Reviewer qdSP, a model need not even be involved in UMFI. All that is required is a measure of dependence (e.g., HSIC or mutual information). We have clarified this point in the revised version of Section 3. Second, even when UMFI is implemented with models, we train two independent models, so if one were to argue that our feature importance method explains a model, we would have to ask “which model?”. Instead, UMFI explains the data because we can learn something about the causal graph from it. 
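As a quick numerical illustration of the blood relation point above (our own toy setup; the unit coefficients are arbitrary choices):\n\n```\n# Toy data from the graph Y <- X1 -> X2 (coefficients are arbitrary)\nn <- 5000\nx1 <- rnorm(n)\nx2 <- x1 + rnorm(n) # child of x1, hence blood related to y\ny <- x1 + rnorm(n)\ncor(x2, y) # clearly non-zero, so the blood relation axiom assigns x2 importance\n```\n\n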
We have clarified this with our axioms in Section 2, further discussion in Supplement A.2, and corresponding proofs in Supplement C.\n\n**Reviewer: The predictions of a model depend on the parameters in practice…**\n\nReply #6: Indeed, predictions and model accuracy can change due to slight changes in model parameters caused by different data samples or simply by different random seeds. In our experiments, we fight against this possible limitation by finding the median importance value over many estimates of the feature importance, which is practical due to the moderate time complexity required for computing UMFI scores. In the simulation studies in subsection 4.1, we calculate the feature importance 100 times, and in the real data study, presented in subsection 4.2 and Supplement G.2, we calculate the feature importance 200-5000 times. Indeed, we would not recommend that anyone put great trust in a single estimate of UMFI. Instead, calculating UMFI and estimating its median over many initializations would be wise. We emphasize this point more in our revised text (Lines 143-144).\n\nReferences:\n\nWilliams, T. C., Bach, C. C., Matthiesen, N. B., Henriksen, T. B., & Gagliardi, L. (2018). Directed acyclic graphs: a tool for causal studies in paediatrics. Pediatric research, 84(4), 487-493.", " **Reviewer: The notation in this work was confusing at times**\n\nReply #11: We believe we have resolved the notational issues in accordance with your suggestions in the resubmission. \n\n**Reviewer: Nits**\n\nReply #12: Thank you for pointing out SPVIM to us, as we had not seen this paper before. We added this reference to the introduction/related works. From our understanding, SAGE is in fact a subset-based method, as can be seen in Covert et al. (2020), Section 3.3. You are correct that permutation importance is not a subset-based method. We make this part of the introduction more clear in the updated text.\n\nReply #13: It seems that Chen does not mention the word “marginal” in the “true to the data paper”. Were you thinking of another paper?\n\n**Reviewer: Questions**\n\nReply #14: The domain and codomain of $g$ are not important, as long as $g$ can act on $F$. We have removed this from the resubmission to improve clarity.\n\nReply #15: We think we have clarified all of the bullet points in this section in our above responses and the general response, but if anything is still unclear to you, please feel free to ask.\n\n**Reviewer: Limitations**\n\nReply #16: We do not necessarily see any downsides to the example you laid out. As long as the variance contributions to the response are approximately equal, $x_1$ and $x_2$ should have around the same importance. But your intuition that the maximization framework could pose problems is correct. We address these in the general comments to all reviewers.\n\nReferences:\nCovert, I., Lundberg, S. M., & Lee, S. I. (2020). Understanding global feature contributions with additive importance measures. Advances in Neural Information Processing Systems, 33, 17212-17223.\n\nDebeer, D., & Strobl, C. (2020). Conditional permutation importance revisited. BMC bioinformatics, 21(1), 1-30.\n\nGrömping, U. (2009). Variable importance assessment in regression: linear regression versus random forest. The American Statistician, 63(4), 308-319.\n", " Reply #1: We thank reviewer sCQt for their response. It is clear that the reviewer spent a great deal of time trying to understand our paper thoroughly and providing many useful suggestions.
We are especially grateful for their suggestion to make our theoretical points more rigorous. Please see our revised paper and specific responses to the points raised below. We look forward to discussing these topics further with you.\n\n**Reviewer: Weaknesses about the method**\n\nReply #2: In the revised work, we defined what it means to optimally remove dependencies with three criteria in Definition 2. Still, the optimal preprocessed feature set is not necessarily unique, thus we added an explicit sentence about the non-uniqueness of $S^F_x$ in the updated text (Lines 127-128).\n\nReply #3: We understand your concern, but over time, we have grown quite fond of the term “information subsets”, but if you have any better suggestions for a name that is intuitive/concise/clear, we would be willing to change it.\n\nReply #4: It is not correlated features that necessarily cause issues in MCI, but rather correlated features that carry synergistic information about the response (e.g., $cor(x_1,x_2)=0.8$ and $Y=x_1+x_1*x_2+x_3$). In essence, when MCI is calculated for $x_1$, if $x_2$ shares a lot of information with $x_1$ then it cannot usually be included in the subset that maximizes the difference in the evaluation function, **even if the two features contain synergistic information (interaction effects) about the response**. We further clarified this in lines 181-189, but if it is still not clear, let us know and we would be more than happy to continue the discussion.\n\nReply #5: As summarized in the general response, UMFI values do not necessarily dominate MCI values. We removed this aspect in the revision. As for why UMFI does not share the flaw of giving non-zero importance to unrelated features, please see Reply #10, or refer to the proofs of UMFI satisfying the blood relation axiom in certain settings, found in Supplement C, as well as relevant experimental results (Subsection 4.1.4 and G.1.4).\n\nReply #6: We agree with the reviewer’s comments on the marginal vs conditional framing to an extent. However, our intuition for viewing UMFI and MCI as marginal methods comes from papers by Gromping and Strobl, where marginal and conditional frameworks are said to define the two extremes for feature importance, and that they differ only in the face of dependent features. At one extreme, there is pairwise squared correlations (marginal) and at the other extreme, there is conditional permutation importance (conditional). Suppose that the feature set is composed of only two features, which are duplicates of each other, and they are correlated with the response with $r=0.5$. Then, the feature importance given by squared correlation is 0.25 for both features, but the importance given by conditional permutation importance is 0 for both features. If the evaluation function is $R^2$, both MCI and UMFI would give an importance of 0.25 to both features. Thus, MCI and UMFI act very similarly to the squared correlation, which is an extreme marginal method, and the way in which these methods treat correlated features is opposite to conditional permutation importance. This is a very interesting discussion and we would love to discuss this further, so feel free to argue with us on these points. We would like to make these distinctions more clear in our text.\n\n**Reviewer: Some potential issues with the experiments**\n\nReply #7: We included the unmodified BRCA experiment in the supplement. Also, we included an experiment on a real dataset from hydrology in the supplement. 
Both of these are also in the revised version of the paper.\n\nReply #8: The predictive performance of models is not a concern for us. The goal of the UMFI framework is to accurately rank features based on their association to the response within the data, not to optimize the model.\n\nReply #9: In the supplementary material, we already examined many other baseline methods including ablation, permutation importance, and conditional permutation importance. In the revised version of the main text, we clarify that these baselines are tested in the Supplement (Lines 154-158).\n\nReply #10: MCI giving high importance to randomized features is not a mistake. This is due to the fact that many ML models have poor estimation of dependence when given a small feature set, i.e. if $|S|=1$, then $\\nu(S)<\\nu(S,x)$ even if $x$ is a randomized feature. This issue was shown in the original MCI paper on page 9 of the MCI paper’s supplement. MCI gives highly significant importance to all features, whereas the baselines give zero or negative importance to about 5-15 of the features.", " Reply #1: We thank reviewer qdSP for the kind review, and more importantly for pointing out the HSIC method, which we were unfamiliar with. Please see the updated version of our paper as well as our more specific responses below.\n\n**Reviewer: One weakness is the authors have found a dataset where their choice of optimal transport/linear regression for dependence removal failed, but it is relegated to supplementary materials.**\n\nReply #2: Indeed we put the hydrology experiment in the supplement, but we do point out this limitation in the conclusion of the main text. We put the hydrology experiment in the supplement because of space constraints, and we wanted to be consistent with the previous feature importance papers which used BRCA as their main “real data” experiment. We would like to emphasize that UMFI is just a framework. An innumerable amount of different methods are available for removing dependencies, but we just choose two simple ones to demonstrate the potential capabilities of our framework. Even when the removal of dependencies did not occur optimally, as was the case in the hydrology example, the feature importance outputs were still reasonable, which we find encouraging. We emphasize that the reasonable results in hydrology in the face of nonoptimal dependency removal are encouraging in the revised version of the conclusion (Lines 282-291).\n\n**Reviewer: minor: line 301 is missing the word \"features\"**\n\nReply #3: Thanks! This is fixed in the new version.\n\n**Reviewer: It's clear many methods could be used as $\\nu$, and is discussed in the conclusion, but no alternative is explored in the empirical results. It's quite natural to wonder how critical the choice of supervised learning methods is: have this been explored?**\n\nReply #4: Yes, many methods can be used to calculate $\\nu$, and we think this is one of the beautiful properties of UMFI and MCI. In the original supplement, we ran the same simulation studies comparing MCI and UMFI with extremely randomized trees instead of random forests. We found that the main conclusions of these simulation results do not change for random forests vs extremely randomized trees. The hydrology example was run with extremely randomized trees instead of random forests as well. 
All of these experiments are also in the revised text.\n\n**Reviewer: The choice of $\\nu$ can also be more generic dependency measures, so what about easy to estimate measures of dependence like HSIC?**\n\nReply #5: We had not heard of HSIC before you pointed this out. Thank you for informing us of this method as we had previously wondered if such a method existed. We add this method in our discussion of applicable methods for UMFI (Lines 141-142) and may implement it for future work (Lines 303-304). \n", " We thank the reviewers for their thoughtful comments. In this reply, we will address the experimental issues laid out by several of the reviewers. Please see the revised version of the text that we just submitted. \n\nSeveral reviewers pointed out the need for additional experiments. For example, sCQt requests us to “present additional results with the unmodified BRCA data”. This was already done in the original submission (Supplement G.3). Others also suggested additional experiments with different evaluation functions $\\nu$ and additional baselines. In a question relating to our experiments, rqHi asks us how we “avoid the effect of parameter variation on evaluating the importance of a feature” when UMFI is used in practice, sCQt requested that we should “include other baselines'', and qdSP asks \"it's quite natural to wonder how critical the choice of supervised learning methods is: have this been explored?\". This was also already done in the supplement under Appendix G by rerunning the experiments with extremely randomized trees instead of random forests for comparing MCI vs UMFI and running the same experiments over permutation importance, conditional permutation importance, and ablation for further baseline comparisons.\n\nTo clarify why MCI fails to detect correlated interactions as requested by sCQt, we slightly change the nonlinear interaction study to make it more comparable to the correlated interaction study. Further, though it was not directly requested, we decided to run an additional simulation study based on the collider example from Harel et. al (2022) to test the ability of methods to return importance scores that are consistent with the causal structure of the data. This was first tested on MCI, UMFI_LR, and UMFI_OT in Section 4.1, and then on additional methods in Supplement G. **The results show that both implementations of UMFI succeed in giving non-blood related features 0 importance while giving significant importance to blood related features.** We note that this experiment was performed in a synergistic and non-Gaussian setting, which further demonstrates the power of UMFI to obey the blood relation axiom outside the scope of the conditions that we have proved thus far. The other tested metrics, which include MCI, ablation, permutation importance, and conditional permutation importance, all fail in this respect. **In all, we believe that the resubmission makes the experimental evidence supporting UMFI even stronger.**\n\n\nHarel, N., Gilad-Bachrach, R., & Obolski, U. (2022). Inherent Inconsistencies of Feature Importance. arXiv preprint arXiv:2206.08204.\n", " We thank the reviewers for their thoughtful comments. In this reply, we will address the theoretical issues laid out by several of the reviewers. Please see the revised version of the text that we just submitted.\n\nMany of the reviewers pointed out that our initial submission had weak theoretical justifications. 
For example, 3eYw states that “there is a lack of intuition in introducing” UMFI, and the methods section is “very shallow and should be expanded to better highlight the rationale and relevance”. Also, rqHi is unsure if we are truly explaining the data and states that we “do not justify why an information subset is necessary”. **We have deepened the theory of UMFI by introducing three axioms, which we argue capture intuitive and useful properties for explaining the data. These axioms are presented in Section 2 of the resubmission, and proofs of UMFI satisfying these axioms are provided in Supplement C.** We note that dependency removal is critical to the proofs of UMFI satisfying the desired axioms.\n\nHarel et al. (2022), coauthored by one of the authors of the MCI paper, demonstrate inherent problems with the marginal contribution axiom, which was one of the axioms introduced in the MCI paper. The key example to consider is the causal graph with a collider ($C \gets S \to G \gets E$) given in Subsection 3.3 of Harel et al. (2022). Because of this issue, we choose to improve the MCI framework, rather than generalize it, by accepting the elimination axiom and a generalization of the duplication invariance property from the MCI paper as our first two axioms and by replacing the marginal contribution axiom with our blood relation axiom. **With the blood relation axiom, we prove that UMFI can detect part of the structure of the underlying causal graph, and UMFI gives non-zero importance iff the feature is statistically associated with the response.**\n\n\nThree out of the four reviewers questioned our use of S*. 3eYw questions its uniqueness, rqHi questions its existence, and sCQt questions the definition and rigor of removing dependencies and S*. In the new revision, we provide an explicit definition for an optimal preprocessing S* (see Definition 1 of the revised text). We proved the existence of optimal preprocessings in the multivariate normal setting (see Supplement C in the revised text). Although we are interested in proving the existence of optimal preprocessings in broader settings, non-existence is not typically a concern in practice, since the experimental results demonstrate that UMFI achieves strong results when computed using non-optimal preprocessings. We also clarify that optimal preprocessings, when they exist, are not unique. Indeed, we may adjust optimal preprocessings via constant factors without violating independence or the information content. This is further clarified in Section 3 of the revised text.\n\n\nsCQt correctly points out that we did not properly justify how UMFI can better detect unrelated features if UMFI is “strictly larger” than MCI by solving a more general maximization problem. **This is an important criticism, as it turns out that the actual computed UMFI score for a feature $x$, given by $U^{F,Y}_\nu(x)=\nu(S^F_x,x)-\nu(S^F_x)$ ($S^F_x$ was previously denoted $S^\*$), does not necessarily solve Equation (3) of our paper, and therefore UMFI does not generalize the maximization problem from MCI.** Although no reviewers commented on this, we realized that **our proof of Theorem 3.1 was flawed since we misused monotonicity in line 138 of the original submission**. Indeed, removing information pertaining to a feature $x$ from a set of features $A$ may increase the mutual information between $A$ and the response $Y$.
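As a toy construction of our own that makes this concrete: let $Y$ and $x$ be independent fair bits and let $A = Y \oplus x$. Then $I(A;Y)=0$, yet the preprocessing $g(A,x)=A\oplus x=Y$ is independent of $x$ and satisfies $I(g(A,x);Y)=H(Y)=1$ bit, so removing the dependence on $x$ strictly increases the mutual information with $Y$.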
**We apologize for this mistake, and we remove the maximization formulation of UMFI in our new revision, clarifying that UMFI is defined by the direct computation $U^{F,Y}_\nu(x)=\nu(S^F_x,x)-\nu(S^F_x)$.**\n\nAlthough we found that UMFI does not generalize the framework of MCI as closely as we first expected, **we emphasize that this does not alter the previous experimental results in any way, since we always use the direct computation, $\nu(S^F_x,x)-\nu(S^F_x)$, rather than the solution to the maximization problem in Equation (3). In fact, the theoretical and experimental results are improved with new axioms and an additional causality experiment that shows that UMFI is superior to MCI and other baseline methods.** The true formulation for UMFI is given by $U^{F,Y}_\nu(x)=\nu(S^F_x,x)-\nu(S^F_x)$, and Equation (3) in the original text was only meant to link UMFI with MCI, so removing this link does not hinder our paper; in fact, moving further away from MCI was seen as a potential improvement by 3eYw. **UMFI remains a strong true-to-the-data importance metric that performs better than competing methods across diverse settings, while requiring a fraction of the runtime of MCI. In all, we believe that the resubmission greatly improves upon the theoretical justifications for UMFI.**\n", " The authors provide a simple yet effective extension of the recently proposed marginal feature importance criterion that allows a fast and accurate estimate of the marginal importance of each feature in the presence of feature correlation and non-linear interactions.\n Strengths\n\nA simple but clever extension of the marginal feature importance criterion allows one to circumvent the need for enumerating all potential feature subsets, substantially increasing the practical applicability of the approach.\n\nWeaknesses:\n\nThe main ingredients were already all in the original paper introducing marginal feature importance. From this perspective, the novelty of the work is not dramatic.\n\nThere is a lack of intuition in introducing the ultra-marginal feature importance criterion. It would be useful to discuss the role of g and its use in dependency removal when introducing it, possibly providing an illustrative example. Overall, the methodological section is very shallow and should be expanded to better highlight the rationale and relevance of the contribution.\n\nMinor: \n\nIn Theorem 3.1, S should be orthogonal to f, not to F.\n\nI suggest to better contextualize the related work or move it to the end of the paper, as it's unclear why certain topics are being discussed (e.g. orthogonal predictors).\n\nAFTER REBUTTAL\nThe authors improved the clarity of the manuscript and better highlighted the significance of the contribution. \n In the proof of Theorem 3.1, you seem to assume that the maximizer of eq. 3 is unique. But what if $\tilde{S}^*$ is also a maximizer of eq. 3? I guess for your scope it is enough that $S^*$ is one of the maximizers, but this should be clarified for the sake of soundness.\n\n The limitations of the approach are exhaustively discussed in the conclusion of the paper. ", " This paper proposes an ultra-marginal feature importance (UMFI) method by extending marginal contribution importance (MCI) methods for evaluating feature importance. Some experiments have been conducted on simulated and real-world data sets.\n This paper studies an important problem. The authors have reviewed the related work well.
\n\nThe extension of the work from MCI methods is the introduction of an information subset into feature importance evaluation. The authors do not justify why an information subset is necessary. I have a few questions about the proposed method. \n\nWhen evaluating the feature importance of f, one needs to find the largest information subset S* such that f and S* are independent. My question is: if there is no such S*, how is the importance of f evaluated? This can happen when all features are correlated. \n\nI need to be convinced to use the largest information subset S* in evaluating the importance of f. I use causal effect estimation to understand the solution, since feature importance is related to the causal effect of f on Y. When S* is independent of f, it does not ``confound'' f from the viewpoint of causal effect estimation. Why should we remove the effect of S* on Y when estimating the effect of f on Y? \n\nBy reading the algorithm, the importance of a feature is related to a model (see Lines 3-5). Do the authors mean to explain feature importance in the model? This is not clear in the paper. \n\nThe predictions of a model depend on the parameters in practice. Based on Lines 4 and 5 in Algorithm 1, the importance of a feature can be dependent on the parameters. How do the authors avoid the effect of parameter variation when evaluating the importance of a feature? \n\n---\n\nAfter discussions.\n\nThe authors answered my queries, and their revisions have addressed my major concerns, so I raise my overall rating.\n See above. Limitation statements are given. \n", " This work considers how to quantify the importance of different features in an ML model (for each feature $f$ from the set of all features $F$) to the response variable $Y$. To do so, they modify a method known as \"marginal contribution feature importance\" (MCFI). In simple terms, rather than measuring the maximum contribution that a feature makes to a subset of other features (where contribution roughly means how much it improves predictive performance), \"ultra marginal feature importance\" (UMFI) measures the maximum contribution to any random variable that is a function of the full feature set. \n\nThis sounds like it could make computation harder, but in fact the authors prove that it can be computed efficiently by removing $f$'s dependencies/signal from $F$ (in a sense that isn't precisely defined, in my view), and then measuring how the newly trained model's performance degrades. Thus, the computational cost is significantly lower than computing the maximum contribution over $2^{d-1}$ feature subsets (where there are a total of $d$ features).\n\nThe authors' proof rests on several assumptions that won't hold in practice, but the method should still give reasonable results. And there is no gold-standard way of removing dependencies, but the authors demonstrate the method with a couple viable options (e.g., residualizing out with linear regression).\n\n### Strengths\n\n- This work presents a new perspective on how to define feature importance in ML models. Rather than removing/corrupting subsets of features, it considers the maximum contribution that each feature $f$ makes to any random variable that's a function of the full feature set.
Leave-one-out or Shapley value approaches wouldn't make sense in the context of arbitrary functions of the features (the value function $\nu$ is no longer a set function, but a function of any random variable), but it's a neat result that the random variable to which a feature $f$ contributes most is somewhat easy to approximate.\n- UMFI seems to provide reasonable results in the experiments.\n\n### Weaknesses\n\nAbout the method:\n- The idea of removing dependencies of one random variable from another does not seem to be precisely defined anywhere in the paper. It seems like there is not a unique way to do this - for example, I can remove $f_1$'s dependencies from $f_2$ by setting $f_2$ to a constant. If the only requirement in this work is that the modified $f_2$ variable becomes independent from $f_1$, and the non-uniqueness of the transformation isn't a problem, that should be discussed more prominently.\n- The name \"information subsets\" for $I(F)$ was a bit confusing; it doesn't seem like there are subsets involved here (the way they are in MCI or SAGE). It seems to mean the set of all possible functions applied to the features/input variables. I guess I get the spirit of the name, but it's confusing.\n- On lines 55-57, one of the stated issues with MCI is that it underestimates the importance of correlated features. I didn't quite understand how MCI gets correlated features wrong, or how UMFI is supposed to fix the problem - can you elaborate on this? This point doesn't seem very well supported in theory, and it's only supported in a limited sense by the experiments.\n- On lines 57-58, one of the stated issues with MCI is that it can give non-zero importance to features unrelated to the model. As UMFI yields strictly larger importance than MCI (this is mentioned later in the paper), UMFI should share this flaw rather than fixing it. The results in Figure 2 don't seem to reflect this, which is very strange.\n- I believe the authors are potentially confusing two different meanings of the term \"marginal.\" The authors labeled both MCI and UMFI as \"marginal methods\", but I don't think they are if you look at what the \"marginal vs. conditional divide\" is about. The divide is about whether a feature importance method handles held-out features with their marginal or conditional distribution; e.g., KernelSHAP often uses the marginal distribution, and SAGE advocates for using the conditional. Because MCI retrains models for each subset of features, it's actually a *conditional* method, at least approximately (if it's helpful, I can point to a paper that proves how retraining and sampling held-out features from their conditional distribution are approximately equivalent). UMFI on the other hand is *neither* - the models are trained on completely different features so there's no question of re-using a model and sampling replacement values for held-out features. For MCI, \"marginal\" refers to \"marginal contribution\" from the game theory context, which has nothing to do with the marginal vs. conditional label. UMFI is also roughly about marginal contributions (even though $\nu$ isn't a cooperative game), so I could see calling it a \"marginal contribution\" method. But calling both of these \"marginal methods\" isn't very helpful given the ambiguity.\n\nSome potential issues with the experiments:\n- The experiments don't seem to use any real datasets. Even the BRCA experiment synthetically modifies certain features to ensure that they're unimportant to the response variable.
Would it be possible to use a real dataset, or at least present additional results with the unmodified BRCA data?\n- Related to the request for a real dataset, another type of metric would be to measure the predictive performance of models trained with the most important features - and this would be simple to run even with real datasets. Would the authors consider adding something like this?\n- Would it be possible to include other baselines in the experiments? For example, the methods examined in MCI or SAGE? \n- MCI giving large importance to randomized features in Figure 2a is a bit hard to believe; it's very counterintuitive. Can you verify that there isn't a mistake here, or explain why this occurs? \n\nThe notation in this work was confusing at times. A couple choices that made this paper difficult to read were:\n- $f$ denotes a function in most ML papers, but here it denotes a feature. Many papers instead represent the features as $x = (x_1, \ldots, x_d)$ when there are $d$ features, and the model inputs/outputs are usually written as $(x, y)$. $y$ is retained here, but not $x$ for some reason\n- It would be helpful to write explicitly that $F$ is simultaneously used as both a set (e.g., $S \subseteq F$) and a random variable (e.g., $I(Y; F)$). Similar to the above, the notation that's often used is $S \subseteq D = \{1, \ldots, d\}$ to denote the feature indices and $x_S$ to represent a random variable that concatenates the set of features\n- It would be helpful to say explicitly that $g(A)$ represents a random variable resulting from applying a function $g$ to a subset of features (towards the beginning of section 3)\n- The symbol used to denote independence (see theorem 3.1) is never defined and likely unfamiliar to some readers\n- $g$ is overloaded to denote two different types of functions: one is a predictive model for $Y$ (equation 2) and one is an arbitrary function on the feature space (definition of $I(F)$)\n\nNits: \n- On line 49, there are methods that suggest training models with many feature subsets (another one is SPVIM, Williamson & Feng 2020), but several do not - including SAGE and permutation tests. There are a variety of tricks for handling the held-out features, including setting to the mean, sampling replacement values (from the dataset or from a generative model), training a model to handle missing features, etc. (see \"Explaining by removing: a unified framework for model explanation\" by Covert et al.)\n- On lines 29-30, marginal (rather than conditional) methods are not typically thought of as being for interpreting the data. See \"True to the model or true to the data\" by Chen et al.; it discusses how marginal methods are better suited for understanding the model rather than the data\n- I think there's a typo in the subscript of the union operator on line 134 - shouldn't it be $f$ rather than $F$? On line 120, is it important that $g$'s domain is $A \subseteq F$ rather than simply $F$? I don't see why it can't simply be $F$. And is it important that the co-domain be $\mathbb{R}^{|A|}$ rather than some higher- or lower-dimensional Euclidean space?
\n\nA couple points from above that would be helpful to comment on:\n- Non-uniqueness of dependency-removing transformation\n- How MCI handles correlated features wrong, and how UMFI is supposed to fix the problem\n- Why MCI gives non-zero importance to unrelated features but UMFI somehow does not\n- Various points about the experiments The authors included a nice discussion of limitations towards the end of the paper. One additional point that I would have liked to see covered, besides the main challenge of properly removing dependencies: whether there are any downsides to finding the maximum contribution. For example, are there cases where finding the max would yield equal importance for two features $x_1$ and $x_2$, where $x_1$ is useful on its own but $x_2$ is only useful when present along with a complementary feature $x_3$? This seems like a potential issue for both MCI and UMFI (depending on what one wants out of their importance values). ", " The problem of estimating feature importance is considered, specifically\nestimating marginal contribution importance (MCI). In essence, MCI estimates the\nvalue of a feature based on the performance drop when the feature is excluded.\nThis paper studies an extension which the authors dub ultra-marginal feature\nimportance (UMFI) that maximises over \"information subsets\" rather than all\nsubsets of features (like the original MCI).\n\nIt turns out this slight change provides large computational benefits and\nimproved performance to boot. The authors prove a key result that allows the\nmaximising information subset to be approximated using dependency removal\ntechniques such as those from the fair learning literature. This reduction in\ncomplexity allows scaling to previously unobtainable numbers of features.\n This is a great extension to the MCI framework as it is simple, novel, and has a\nlarge practical impact. The presentation is excellent, theorem proof\nstraightforward, and empirical experiments well chosen. Limitations of the\nstudy are well discussed.\n\nOne weakness is that the authors have found a dataset where their choice of optimal\ntransport/linear regression for dependence removal failed, but it is relegated\nto supplementary materials.\n\nminor: line 301 is missing the word \"features\"\n It's clear many methods could be used as ν, and is discussed in the conclusion, but\nno alternative is explored in the empirical results. It's quite natural to wonder how\ncritical the choice of supervised learning methods is: has this been explored?\n\nThe choice of ν can also be a more generic dependency measure, so what about\neasy to estimate measures of dependence like HSIC?\n It's clear many methods could be used as ν, and is discussed in the conclusion, but\nno alternative is explored in the empirical results. It's quite natural to wonder how\ncritical the choice of supervised learning methods is: has this been explored?\n\nThe choice of ν can also be a more generic dependency measure, so what about easy\nto estimate measures of dependence like HSIC? Exploration of supermodularity\nmight be more straightforward.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "ewha3HILv8", "ntW74zE5Qe", "XK_W3Rlwho5", "VtIYzDrjPs", "_GnTpbI5Aw8", "d7R91_ghF9j", "9RQz7cHKntc", "nips_2022_8ow4YReXH9j", "GWYg-6tCCK6", "NwuyOR4Mu_y", "DVgoiRwG8kW", "BGO8ffog3jG", "OoWGp8aDyUP", "qQqno9Q_V28", "qQqno9Q_V28", "ffXFHK96AyV", "nips_2022_8ow4YReXH9j", "nips_2022_8ow4YReXH9j", "nips_2022_8ow4YReXH9j", "nips_2022_8ow4YReXH9j", "nips_2022_8ow4YReXH9j", "nips_2022_8ow4YReXH9j" ]
nips_2022_OcNoF7qA4t
Non-Linear Coordination Graphs
Value decomposition multi-agent reinforcement learning methods learn the global value function as a mixing of each agent's individual utility functions. Coordination graphs (CGs) represent a higher-order decomposition by incorporating pairwise payoff functions and thus are supposed to have a more powerful representational capacity. However, CGs decompose the global value function linearly over local value functions, severely limiting the complexity of the value function class that can be represented. In this paper, we propose the first non-linear coordination graph by extending CG value decomposition beyond the linear case. One major challenge is to conduct greedy action selections in this new function class to which commonly adopted DCOP algorithms are no longer applicable. We study how to solve this problem when mixing networks with LeakyReLU activation are used. An enumeration method with a global optimality guarantee is proposed and motivates an efficient iterative optimization method with a local optimality guarantee. We find that our method can achieve superior performance on challenging multi-agent coordination tasks like MACO.
Accept
This paper is a very clear accept. The reviews had only minor quibbles, which I trust the authors will address in their final version.
train
[ "21tERp8EpxK", "wmCSWCNUu9", "SvcOYpYq0ec", "b1G5FVsTqK5", "6sAii9JjuZH", "ZvPvEEuQm_P", "5jPzI5gwRpW", "ltWhrJF0i8c", "xZ8avovdPOp", "HENoABMXRy", "O1atqOkoHr4_", "w36tsjttoP6", "WmjhtPuMGPtG", "uvcjTJ87OWd", "B9Ta4WYArJ", "L7-ipKKD1SWx", "qG4ndEzbKgG", "01OSMHkARYl", "FF0UzYZx5H", "axYDqevqpc5", "KO6QiPjGG_" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the reviewer's work!", " Thanks for the reviewer's work!", " Thanks for the reviewer's work.", " Thanks for the reviewer's work!", " I appreciate your explanations, the point is clearer now. I would like to thanks the authors, I do not have any further concern or doubt.", " Dear authors,\n\nThanks for the authors' hard work on the responses. After reading the authors' response and revised paper. Most of my concerns are addressed. I raised the score.\n", " Thanks very much for your feedback! We further modify our paper according to your comments.\n\n> This should be clearly stated in the paper to clarify this. (For both DCG and our method, we use the complete graphs for all experiments in the paper.)\n\nThanks for this good suggestion. We go through our paper and state this point clearly in the beginning of the method section (line 123), the discussion subsection (line 216-225), and the experiment setup descriptions (line 267-268).\n\n> Please clearly state that we stop iteration when $n_{max}$ slope configurations are visited.\n\nWe now state this explicitly in the experiment setup descriptions (line 270).\n\n> About Figure 5.\n\n- The usage of the term global optimality is sloppy. The reviewer is right that we cannot guarantee this due to possibly insufficient representational power and the usage of deep networks. \n\n- We now modify Figure 5 and use ground-truth Q values (averaged Monte Carlo returns) as the reference point as suggested by the reviewer. Specifically, \n\n 1. We compare the ground truth Q estimates of DCG and NL-CG (embed=3, iterative optimization, $n_{max}=4$). The result shows that NL-CG learns a policy with higher value. \n\n 2. We compare the $Q_{tot}$ value of our enumerative and iterative optimization methods. The result demonstrates that the local optimum solutions found by our iterative method with simulated annealing typically have the same value as the solutions found by the enumerative method.\n\n 3. We compare the $Q_{tot}$ values estimated by NL-CG (embed=3, iterative, $n_{max}=4$) against ground truth Q estimates. The estimation errors on all tested state-action pairs are less than 20\\%.\n\n- We now explain why some q-values with iterative NL-CG are better than those of enumerative NL-CG. In practice, we use Max-Sum on individual linear pieces, which is an approximation algorithm. Examples like the following one can be constructed to demonstrate that, in such situation, it is possible that enumerative NL-CG may find a sub-optimal solution.\n\n Assume that there are 4 linear pieces $\\rho_{1}$, $\\rho_{2}$, $\\rho_{3}$, $\\rho_{4}$ and the true maximum $Q_{tot}$ values on them are 10, 11, 12, 13. By applying Max-Sum on each linear piece we get actions $\\boldsymbol a_1$, $\\boldsymbol a_2$, $\\boldsymbol a_3$, and $\\boldsymbol a_4$, respectively. Since Max-Sum is not accurate, the obtained actions may not be local optimizer. Assume that the values of these actions are $\\rho_{1}(\\boldsymbol q(\\boldsymbol a_1))=6$, $\\rho_{2}(\\boldsymbol q(\\boldsymbol a_2))=9$, $\\rho_{3}(\\boldsymbol q(\\boldsymbol a_3))=8$, and $\\rho_{4}(\\boldsymbol q(\\boldsymbol a_4))=7$.\n\n When we enumerate all the pieces, we will select $\\rho_{2}$ and get the final $Q_{tot}$=9. \n\n Let's consider a case where the iterative search stops at $\\rho_{3}$ after $n_{max}$ iterations. Suppose that Max-Sum on $\\rho_{3}$ returns $\\boldsymbol a_3$ that actually falls in the cell of $\\rho_{4}$, and $\\rho_{4}(\\boldsymbol q(\\boldsymbol a_3))=13>9$. 
In this case, the solution found by iterative NL-CG is better than that by enumerative NL-CG.", " Thanks a lot for your feedback and pointing out the problem of Figure 5.\n\nWe modify Figure 5 in the revised paper and introduce ground truth Q value estimates (averaged Monte Carlo returns) as suggested by the reviewer. \n\nSpecifically, \n\n1. We compare the ground truth Q estimates of DCG and NL-CG (embed=3, iterative optimization, $n_{max}=4$). The result shows that NL-CG learns a policy with higher value. \n\n2. We compare the $Q_{tot}$ value of our enumerative and iterative optimization methods. The result demonstrates that the local optimum solutions found by our iterative method with simulated annealing typically have the same value as the solutions found by the enumerative method.\n\n3. We compare the $Q_{tot}$ values estimated by NL-CG (embed=3, iterative, $n_{max}=4$) against ground truth Q estimates. The estimation errors on all randomly selected state-action pairs are less than 20\\%.", " Thanks to the authors for their answers.", " Thank you for the clarifications, I am still a bit confused by the usefulness of figure 5. By comparing the estimated Qtot, how could you distinguish between the case where the method is indeed learning a policy with higher value, and the case where the method is simply over-estimating the value? \nIt seems that it lacks a ground truth value estimate. ", " The reviewer would like to thank the authors for their useful comments. Let me now try and discuss a bit on some of these (I generally acknowledge the corrections and clarifications that you applied straight away, without the need to explicitly mention these here):\n\n> - For both DCG and our method, we use the complete graphs for all experiments in the paper.\n>\n> - This is because complete graphs are the most basic topology and do not need to be pre-defined. Given that we aim to establish the basic concepts of non-linear CGs in this paper, complete graphs are a good choice.\n\nFine, but this should be clearly stated in the paper to clarify this.\n\n> We provide additional experimental results showing the the iterative method is not enumerating all possible slope configurations.\n\nThis is a nice addition and is definitely going to improve the overall quality of the work.\n\n> We guarantee this by directly stop iteration when $16$ slope configurations are visited.\n\nAgain, please clearly state this.\n\n> Yes, the reference in Figure 5 is the $Q_{tot}$ found by NL-CG with enumeration. This is because this method has a global optimality guarantee.\n\nThis puzzles me a bit. First of all, how can you say that your method has **global optimality** guarantees? This is a condition that is hard to attain in general, and the CG you are using (although non-linear) may be still an approximation of the true problem structure (that may indeed not be a decomposable one by its own properties). Finally, you are using neural networks, that are known to invalidate the convergence guarantees of most RL methods. 
How are these global optimality guarantees derived?\n\nMoreover, the fact that $Q_{tot}$ is taken from NL-CG, although with complete enumeration, makes the fact that the best results come from NL-CG with iterative search a bit less interesting and surprising: the learned representation is probably the closest one to that used as a reference, so it is probable that its results are the best.\n\nAlso, how can some $\\mathbf{q}$-values be better than those of the NL-CG with complete enumeration then?", " \n> **Question 5 & Quality 1** What are the CG topologies for the different experiments for both your method and DCG? Why have you come up with such topologies?\n\n- For both DCG and our method, we use the complete graphs for all experiments in the paper. \n\n- This is because complete graphs are the most basic topology and do not need to be pre-defined. Given that we aim to establish the basic concepts of non-linear CGs in this paper, complete graphs are a good choice.\n\n- Sparse or dynamic topologies may have some interactions (which are now still unclear) with non-linear CGs and may further improve the performance. We believe it is an interesting topic to be studied in future work.\n\n- Loopy graph topologies influence some analyses in the paper, so we add a subsection at the end of the method section (line 216-225) to discuss the influence in detail.\n\n> **Quality 2** Is there any evidence that the iterative algorithm is performing well because it is indeed a good approximation and not because it is simply solving the constraint at every possible slope configuration?\n\nWe provide additional experimental results showing that the iterative method is not enumerating all possible slope configurations. On task *Aloha*, with a mixing network of width 10, 12.756 slope configurations are visited on average out of all 1024 configurations. Moreover, 2.647 and 4.280 configurations are visited on average when the width is 3 and 4. This kind of approximation can often lead to nearly optimal results, as in enumerating all configurations.\n\n> **Question 1** Castellini et al., 2021, is such an extremely relevant citation here.\n\nThanks for pointing out this important work. In the revised paper, we discuss Castellini et al., 2021 in the related work section.\n\n> **Question 2** Your explanation of relative overgeneralization is a bit not-on-point.\n\nWe changed our explanation of relative overgeneralization to \"relative overgeneralization embodies that, due to the concurrent learning and exploration of other agents, the employed utility function may not be able to express optimal decentralized policies and prefer suboptimal actions that give higher returns on average.\" (l.29 - l.31)\n\n> **Question 3** It may not be immediately clear how you decompose LeakyReLU($\\boldsymbol{o}_i$) as $\\boldsymbol{c}_i \\circ \\boldsymbol{o}_i$.\n\nThanks for this comment! We add clarification about what $\\boldsymbol{c}$ is and how we get it at l.152.\n\n> **Question 4 & Clarity 1** what do you mean exactly by a joint action $\\boldsymbol{a}$ falling outside a given piece $\\rho_k$ on line 164? Perhaps you mean that the $\\boldsymbol{q}$s generated by the joint action $\\boldsymbol{a}$ generate a different slope configuration?\n\n- Yes, the reviewer is correct, and we mean the joint action yields a different slope configuration. 
\n\n- We improve the presentation of this part by defining the *cell* $P_k$ of an affine function $\\rho_k$ where $\\boldsymbol{q}\\in P_k$ actually yields $\\boldsymbol{c}^k$ in a forward pass. Other parts of the method section are also modified, using this concept to eliminate possible ambiguity.\n\n> **Question 6** In Figure 2, when using NL-CG with an embedding of 4 units, is there any evidence that the iterative algorithm is performing well because it is indeed a good approximation and not because it is simply solving the constraint at every possible slope configuration (thus reducing to the enumeration algorithm in practice)?\n\nYes. On task *Aloha* we count the average iterations that is needed before converging to a local optima, and the number is 4.28 for NL-CG with an embedding of 4 units. Moreover, 2.647 and 12.756 configurations are visited on average before converging with an embedding of 3 and 10 units, respectively.\n\n> **Question 7** In Figure 4, how do you guarantee that the iterative method is only solving the constrained problem for 16 slope configurations and not more?\n\nWe guarantee this by directly stop iteration when $16$ slope configurations are visited. \n\n> **Question 8** In Figure 5, what $Q_{tot}$ are you using as a reference? Are you using the one from NL-CG with enumeration? Or perhaps you are computing some sort of ground-truth Q-function?\n\nYes, the reference in Figure 5 is the $Q_{tot}$ found by NL-CG with enumeration. This is because this method has a global optimality guarantee.\n", " > **Limitation 1** The weighted max-sum could be moved to appendix.\n\nAs suggested by the reviewer, we move this algorithm to Appendix B of the revised paper.\n\n> **Question 1** Is there an application that would lead to similar dynamics as the problem illustrated in section 4?\n\nYes. This task actually features relative overgeneralization. The actions of other agents may shadow the better choice (State 2B) with their exploration, rendering it less attractive than a worse choice (State 2A).\n\nThis example shows that DCG cannot address some cases featuring relative overgeneralization.\n\n> **Question 2** The graph structure is barely discussed, in the original max-plus there are issues with graphs presenting cycles, how would your algorithm be affected?\n\nThanks for this important question! We assume that max-plus has an error rate of $e$ in loopy graphs. From the empirical study in [Wang et al. 2022], $e$ is typically smaller than 5\\%.\n\n- Lemma 1 is not affected because it is a property of LeakyReLU Networks. \n\n- For Lemma 2, the maximum of solutions found by message passing in all slope configurations is the global optimum with a probability of $1-e$. An error occurs when message passing cannot find the right solution on the piece where the global optimum is located. \n\n- Our iterative method may stop earlier when message passing returns a wrong solution located in the current cell. The probability of this situation is less than $e$. Thus we have at least a probability of $(1-e)^{n}$ ($n$ is the number of iterations) to find the piece where the local optimum is located, and the final probability of finding the local optimum is larger than $(1-e)^{n+1}$.\n\nIn the revised paper, we add a paragraph discussing the influence of loopy graph structures at the end of the method section (line 216-225). 
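As a quick numeric sanity check on this bound (our own back-of-the-envelope illustration, using the $e \\approx 5\\%$ error rate quoted above rather than any number from the paper): with $n = 4$ iterations,

$$(1-e)^{n+1} = 0.95^{5} \\approx 0.774,$$

and even with $n = 12$ iterations (roughly the average number of configurations visited at width 10 on *Aloha*), $0.95^{13} \\approx 0.513$, so the iterative method would still land on the locally optimal piece more often than not.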
\n\n> **Question 3** Why are the NL-CG method starting at higher position than the other methods in figure 2?\n\n- The first point is the performance after training with around 20K samples. NL-CG can already learn something using these samples. \n\n- We further show results on the predator-prey task in the revised paper. Similarly, our method requires very few (20K-30K) samples to achieve DCG's performance after converges.\n\n> **Question 4** The performance of QMIX is surprisingly low, why is that? It would have been useful to compare in another MARL where QMIX is not so bad e.g. starcraft.\n\n- MACO benchmark features tasks that require sophisticate coordination among agents. Not only QMIX, most fully decomposed value function methods (e.g., DICG in Figure 5) cannot perform well on these tasks.\n\n- On a super-hard scenario, MMM2, from the SMAC benchmark, our method still outperforms QMIX by a large margin.\n\n\n> **Question 5** In figure 5, how can DCG be compared with the method since they should use different Qtot (linear vs non linear)?\n\nAlthough DCG and NL-CG uses different network structures and optimization methods, they are learning under the same environments, and thus the same reward settings. The maximum expected accumulated rewards should be the same.\n\n> **Limitation** The authors do not explicitly mention any limitations of their method.\n\n- As suggested by the reviewer, a major limitation of our method is possible failure case in loopy graphs. We add related discussion in the revised paper.\n\n- Another limitation is that we only consider complete coordination graphs in this paper. The interaction between sparse topologies and non-linear mixing function is quite interesting but remains largely unknown.\n\nReference:\n\n[Wang et al. 2022] Wang, T., Zeng, L., Dong, W., Yang, Q., Yu, Y. and Zhang, C., 2021, September. Context-Aware Sparse Deep Coordination Graphs. In International Conference on Learning Representations.\n", " \n> **Strengths And Weaknesses 1**\n> \n> The authors should better distinguish between the domain of a function and the inputs that \"correspond\" to a function.\n\n- As suggested by the reviewer, we differentiate these two concepts by defining the *cell* $P_k$ of an affine function $\\rho_k$ where $\\boldsymbol{q}\\in P_k$ yields $\\boldsymbol{c}^k$ in a forward pass. The presentation of the method section is also modified, using this concept to eliminate possible ambiguity. \n\n> **Strengths And Weaknesses 2**\n>\n> The authors should mention that their restrictions on the mixing network is almost identical to input-convex neural nets\n\n- Thank you very much for pointing out the relationship to ICNN. We mention ICNN at l.134 of the revised paper and discuss how Proposition 1 in the ICNN paper relates to the representational power of our mixing network.\n\n> **Detailed comments 1** eq.4+5: as the mixing network depends on $s$, so must be $Q_{tot}(s,a)$. \n\n- We changed $Q_{tot}(\\boldsymbol\\tau, \\boldsymbol a)$ in Eq. 
4 and 5 to $Q_{tot}(s, \\boldsymbol a)$.\n\n> **Detailed comments 2** l.132: \"extended to other activation functions like ReLU\" -- ReLU is already a special case of LeakyReLU \n\n- We removed \"extended to other activation functions like ReLU\".\n\n> **Detailed comments 3** l.150: the outputs $o_i$ can easily be confused with the observations $o_i$.\n\n- Throughout the revised paper, we use a different notation $\\boldsymbol{z}_i$ for the output to improve our presentation.\n\n\n> **Detailed comments 4** l.162ff: you should differentiate between the domain of affine function $\\rho_k$ and the inputs that \"correspond\" to $\\rho_k$.\n\n- See Strengths and Weaknesses 1.\n\n> **Detailed comments 5** Lemma 1 is should be expressed more precisely.\n\n- We improved the expression of Lemma 1 by specifying the function class for which it holds, giving the definition of linear pieces, and removing the restriction $r\\ne s$.\n\n> **Detailed comments 6** l. 169: it took the reviewer some time to believe the claim $h^r_{ij}\\ge h^s_{ij}$.\n\n- To make why $h^r_{ij}\\ge h^s_{ij}$ more clear, we added that the output $o_{ij}$ (in the revised notation, $z_{ij}$) is the same and explicitly showed the comparison between $h^r_{ij}$ and $h^s_{ij}$ by expanding them in a new equation 6.\n\n> **Detailed comments 7** l.184: the word \"infeasible\" is unclear here.\n\n- We made the meaning of \"infeasible\" clear by incorporating the terminology *cell* defined above.\n\n> **Detailed comments 8** l.200: as you mentioned later, using the condition $c_{real} = c_p$ can lead to loops. Why don't you just use $\\rho_{real}(q(a_{real})) = \\rho_{p}(q(a_{p}))$?\n\n- This is because, in practice, we run Max-Sum in each piece, which may be inaccurate in loopy graphs, and exact equality between Q values is a strict condition.\n\n> **Detailed comments 9** l.217: the \"reward is invariant to the identity\" only for the second decision.\n\n- l.217: This also holds for the first decision, because the reward depends only on the number of agents that take Action B for both decisions.\n\n> **Detailed comments 10** l.221ff: the example is very unclear. How do the 5 equations look like? Why do you set $q_i(s_{2B}, A)=0$? All together the reviewer did not see a large benefit of Section 4.\n\n- We specify the system of equations in Appendix C of the revised paper. This system has no solution because the rank of its augmented matrix is larger than its coefficient matrix. \n\n> **Detailed comments 11** l.252: why 4 slopes?\n\n- l.252: Sorry this is a typo, there are 8 slope configurations to enumerate.\n\n> **Detailed comments 12** l.252: it states that $n_{max}=4$, but the Figure states $n_{max}=8$.\n\n- l.252: we updated our paper and $n_{max}$ should be 8.\n\n> **Detailed comments 13** l.272f: it is unclear how you compare NL-CG with DCG in Figure 5. How does DCG get different entries on the x-axis?\n\n- l.272: One point in Figure 5 corresponds to one timestep (and thus a graph instance) in the game. Its x-value is $Q_{tot}$ of the solution found by DCG, while its y-value is $Q_{tot}$ of the solution found by NL-CG.\n", " \n> **Question 1 \\& Limitation 1**\n>\n> Is the wall-clock time of NL-CG simply DCG\\_time times num\\_iterations, or does the initialization with the optimal actions of previous iterations speed up the message passing?\n\n- The initialization with the optimal actions of previous iterations speed up the message passing. \n\n- On task *Aloha*, we test the runtime of our method. 
With an embedding of 2, 4, and 10, our iterative method needs 2.647, 4.28, and 12.7 iterations on average to converge to local optimum. These iterations consumes 0.12 $ms$, 0.15 $ms$, and 0.3 $ms$ together. So the average time for one Max-Sum drops from 0.0453 $ms$ to 0.0350 $ms$, and further to 0.0236 $ms$ when the iteration number increases.\n\n> **Question 2**\n>\n> Did you follow all the design decisions of the DCG implementation, or did you leave out some things (e.g. the state-dependend bias)?\n\nWe follow all the design decisions of DCG. Both NL-CG and DCG are tested with the state-dependent bias.\n\n> **Limitation 2**\n>\n> It is unclear how well the algorithm scales to realistic applicatons with large action spaces, like the StarCraft2 experiments in the DCG paper.\n\n- In the revised paper, we test our method on predator-prey tasks and SMAC. The results are shown in Figure 5.\n\n- On predator-prey, our method requires very few (20K-30K) samples to achieve DCG's performance after converges.\n\n- On a super-hard scenario, MMM2, from the SMAC benchmark, our method achieves a win rate of 80\\% after 3M training steps, while DCG achieves around 60\\%.\n\nThese results demonstrate the effectiveness of our method in complex scenarios.\n", " > **Weakness 1**\n>\n> Using a mixing network to represent the coordination graph is not new. Previous work [1] uses GNN, which can seem like a mixing network, and ReLU or LeakyReLU can also be used in GNN.\n\n- The concept of *coordination graphs* is different in [1] and our work. [1] mixes **individual** utility functions $q_i(a_i)$. In contrast, we additionally mixes **pairwise** payoff functions, which represent a higher order decomposition of the global value function.\n\n- The existence of pairwise payoff functions makes a big difference, because the value-maximizing actions of local utility functions are no longer global value-maximizers, and we have to develop new DCOP algorithms for greedy action selection.\n\n- The above claim can be supported by experimental results in Figure 5. We can see that DICG has similar performance to QMIX on task predator-prey and MMM2.\n\n> **Weakness 2**\n>\n> The experiments are not strong, CASEC [2] is not compared.\n\n- CASEC is orthogonal to our work. Our non-linear coordination graphs can also use the technique in CASEC to exploit the benefits of sparse graph topologies, which will further improve the performance of our methods. \n\n- This paper aims to develop basic and general concepts/properties of non-linear coordination graphs. In our humble opinion, the interaction between sparse (or any other) topologies and non-linear mixing functions deserves in-depth studies in the future work.\n\n> **Weakness 3**\n>\n> More scenarios should be tested. As this paper considers the representation capacity issue, more experiments in complex scenarios, for example, the predator-prey task and SMAC in [3], should be conducted to show the merit of the method.\n\nAs suggested by the reviewer, we test our method on predator-prey tasks and SMAC. The results are shown in Figure 5 of the revised paper.\n\n- On predator-prey, our method requires very few (20K-30K) samples to achieve DCG's performance after converges.\n\n- On a super-hard scenario, MMM2, from the SMAC benchmark, our method achieves a win rate of 80\\% after 3M training steps, while DCG achieves around 60\\%.\n\nThese results demonstrate the effectiveness of our method in complex scenarios.\n\n> **Weakness 4**\n>\n> Some curves are not fully shown. 
For example, in Fig 3, the curves of NL-CG are not fully shown.\n\n- In Fig. 3, we didn't show the full curves of the enumerating method with a mixing network of width 10. This is because we can draw a conclusion from the partial results -- a wide mixing network can help improve performance in some tasks (e.g., Hallway).\n\n- Besides performance, another dimension to compare the two versions of our method is time complexity. Running these incomplete curves is extremely time-consuming because they are enumerating all possible slope configurations. For comparison, with a mixing network of the same width, our iterative method runs much faster (Figure 4). Therefore, these incomplete results are in line with our motivation to develop the iterative optimization method.\n\n> **Question 1**\n>\n> In lines 112-113, why is a DNN with piece-wise linear (PWL) activation functions (e.g. ReLU, LeakyReLU, PReLU) is equivalent to a PWL function? Did it motivate you to investigate the problem of the non-linear coordination graph?\n\nThe property of DNNs with piece-wise linear activation functions is well studied. We refer to [Chu et al. 2018] for detailed discussion. Our method is based on this property, which indeed provides an opportunity of extending coordination graphs to the non-linear case.\n\n> **Question 2**\n>\n> In lines 141-142, when the mixing network is non-linear, maximizing $Q_tot$ is NP-hard. Can you elaborate more? As far as I know, deep networks have capacities to learn good models.\n\n- $Q_tot$ is defined over the space of joint actions. When the mixing network is non-linear, to maximize $Q_tot$, one needs to enumerate all joint actions. The number of joint actions grows exponentially with the number of agents, and thus the problem is NP-hard.\n\n- Deep networks can learn good models, but the problem here is non-convex optimization problem over the *input* (instead of parameters) of a network in a exponentially growing space.\n\n> **Question 3**\n>\n> Can you highlight your contributions in Alg. 1, 2 and 3?\n\nAs stated in the answer of the previous question, maximizing $Q_tot$ with a non-linear mixing network needs an enumeration over a space growing exponentially with the number of agents. Fortunately, we find that if the mixing network has a specific feature, i.e., if they use ReLU or LeakyReLU activation, the problem can be solved efficiently by two algorithms (Alg. 2 and 3). Our contribution is the procedure of Alg. 2 and 3. Alg. 1 is a sub-module for implementing Alg. 2 and 3, which extends the classic Max-Sum algorithm to weighted cases.\n", " Reference:\n\n[Chu et al. 2018] Lingyang Chu, Xia Hu, Juhua Hu, Lanjun Wang, and Jian Pei. Exact and consistent interpretation for piecewise linear neural networks: A closed form solution. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 1244–1253, 2018.", " This paper investigates the problem of learning non-linear coordination graphs in multi-agent reinforcement learning. It proposes the first non-linear coordination graph by extending CG value decomposition beyond the linear case. It proposes the weighted max-sum algorithm to solve the greedy-action selection problem in the non-linear coordination graph. Strengths: \n\nThis paper investigates a very important problem in coordination graphs. Previous methods focus on the linear case, which has limited representation capacity on Q values. 
This paper proposes a novel method by using a mixing network, the active function is LeakyReLU.\n\n\nWeaknesses: \n\n1. Using a mixing network to represent the coordination graph is not new. Previous work [1] uses GNN, which can seem like a mixing network, and ReLU or LeakyReLU can also be used in GNN.\n\n2. The experiments are not strong, CASEC [2] is not compared.\n\n3. More scenarios should be tested. As this paper considers the representation capacity issue, more experiments in complex scenarios, for example, the predator-prey task and SMAC in [3], should be conducted to show the merit of the method.\n\n4. Some curves are not fully shown. For example, in Fig 3, the curves of NL-CG are not fully shown.\n\n5. This paper is hard for me to follow. The writing can be improved.\n\n[1] Deep Implicit Coordination Graphs for Multi-agent Reinforcement Learning\n\n[2] CONTEXT-AWARE SPARSE COORDINATION GRAPHS\n\n[3] Deep Coordination Graphs\n\n I have the following questions:\n\n1. In lines 112-113, why is a DNN with piece-wise linear (PWL) activation functions (e.g. ReLU, LeakyReLU, PReLU) is equivalent to a PWL function? Did it motivate you to investigate the problem of the non-linear coordination graph?\n\n2. In lines 141-142, when the mixing network is non-linear, maximizing Q_tot is NP-hard. Can you elaborate more? As far as I know, deep networks have capacities to learn good models.\n\n3. Can you highlight your contributions in Alg. 1, 2 and 3? \n Please see the above comments.", " The paper introduces a convex mixing network, that is learned using a QMIX-style hyper-network, for coordination graphs. The authors prove that a maximum over the piecewise-linear parts of the mixing network corresponds to the global maximum and introduce an iterative optimization method for larger mixing networks that converges to a local maximum piecewise-linear solution. The resulting algorithms are compared with DCG on some MACO benchmark, and show impressive improvements on some. The authors also compared the enumeration over all parts with their iterative method in ablation studies. The reviewer liked the paper a lot, especially the theoretical part. Although the authors should better distinguish between the domain of a function and the inputs that \"correspond\" to a function (see detailed comments), the reviewer has checked the proofs thoroughly and believes they are correct. The authors should mention that their restrictions on the mixing network is almost identical to input-convex neural nets (ICNN, Amos et al., 2016). ICNN are a bit more general, but the theoretical analysis should still hold. In particular Lemma 1 looks as if it could have useful applications in other fields using ICNN.\n\nSection 4 is not particularly illustrative, but the experimental experimental validation looks very well done and the results are very good on Aloha and Sensor. The biggest open question is scalability: a mixing network of size 10 might still be considered small. The authors could provide a better runtime comparison of their algorithm in comparison to DCG (see questions), and with number of network size and number of iterations of the approximate algorithm. 
\n\n**Detailed comments**\n\n- eq.4+5: as the mixing network $f_n$ depends on $s$, so must $Q_{tot}$\n- l.132: \"extended to other activation functions like ReLU\" -- ReLU is already a special case of LeakyReLU with $\\alpha=0$\n- l.150: the outputs $o_i$ can easily be confused with the observations $o_i$\n- l.162ff: you should differentiate between the domain of affine function $\\rho_k$, which is the entire input space (otherwise one could not compute the output), and the inputs $q$ that \"correspond\" to $\\rho_k$, that is, where the forward pass produces $c^k$. The two words are currently nowhere properly defined and used synonymous (e.g. Lemma 1 or l.172). Maybe define the set $q \\in Q_k$ in which the forward pass yields $c^k$ to be precise.\n- Lemma 1 is really interesting, but should be expressed more precisely: for which class of functions does it hold, and how the $\\rho$ functions are defined. This will allow easier transfer to other fields. You can also remove the $s\\neq r$ restriction, as for $s=r$ the inequality still holds.\n- l. 169: it took the reviewer some time to believe the claim $h^r_{ij} \\geq h^s_{ij}$. It would help to emphasize that the two slope configurations are identical, except for entry ${ij}$, and that the output $o_{ij}$ is therefore the same in both functions. A sentence about how the inequality holds might also help.\n- l.184: the word \"infeasible\" is unclear here; you need to establish the terminology as suggested above\n- l.200: as you mentioned later, using the condition $c_{real} = c_p$ can lead to loops. Why don't you just use $\\rho_{real}(q(a_{real})) = \\rho_p(q(a_p))$?\n- l.217: the \"reward is invariant to the identity\" only for the second decision\n- l.221ff: the example is very unclear. How do the 5 equations look like? Why do you set $q_i(s_{2B}, A)=0$ (doesn't this already used up the equation for 0 action B's)? All together the reviewer did not see a large benefit of Section 4. \n- l.252: why 4 slopes? Shouldn't it be 3^2=8 slope configurations?\n- l.252: it states that \"$n_{max}=4$\", but the Figure states $n_{max}=8$\n- l.272f: it is unclear how you compare NL-CG with DCG in Figure 5. How does DCG get different entries on the x-axis? \n \n**References**\n\nAmos et al., 2016: Input Convex Neural Networks; https://arxiv.org/abs/1609.07152 1) Is the wall-clock time of NL-CG simply DCG_time times num_iterations, or does the initialization with the optimal actions of previous iterations speed up the message passing?\n\n2) Did you follow all the design decisions of the DCG implementation, or did you leave out some things (e.g. the state-dependend bias)?\n\n3) Do you have an explanation why NL-CG performed so much better than DCG in Aloah and Sensor (but not in Gather)?\n - A wallclock-time comparison to DCG would have been nice.\n- It is unclear how well the algorithm scales to realistic applicatons with large action spaces, like the StarCraft2 experiments in the DCG paper.\n", " This paper addressed the problem of cooperative multi-agent reinforcement learning by proposing an improvement over value decomposition method. They extend the concept of coordination graph to non-linear combination of value functions. \n\nThe core of the method relies on coordination graph. In coordination graphs, an edge is drawn between two interacting agent and a joint value function is learned through reinforcement learning. Given such graph, a globally optimal joint action can be computed to maximize a combination of the edge value functions. 
Traditionally, those value functions are summed and we can use algorithms such as max-plus to find the joint action. The authors extend this process to non-linear combinations with deep networks using LeakyReLU activations. \n\nThey extend the max-plus algorithm to this class of graphs by decomposing the problem as piecewise linear combinations. The first proposed algorithm has an exponential time complexity in the width of the mixing network due to enumerating all possible slope configurations, but the authors propose an approximation which converges to a local optimum and relies on an annealing strategy to try to escape it. \n\nThe experiments compare the performance of the whole algorithm on the MACO benchmark against two baselines: QMIX and DCG. \nA second set of experiments compare the optimality of the action selection only between the two proposed algorithms, DCG and a random baseline.\n The problem of cooperative multi-agent RL has many societal applications and there is still a lot of progress to be done in being able to solve it efficiently. This paper contributes towards this goal by extending state of the art techniques based on value decomposition methods. \n\nExtending CGs to non-linear combinations does intuitively increase the representation capability of the global value function (similarly to QMIX vs. VDN); I believe the idea is sound. The authors demonstrate this through the use of an example in section 4 and further validate it empirically. \n\nOverall the paper is well written and quite easy to follow. The algorithms are sound and clearly explained.\n\nWhen extending the max-plus algorithm to piecewise linear combinations, they clearly explain the results and the derivation seems correct. \n\nThe weighted max-sum could be moved to appendix. \n\nThe plots from figure 5 and 6 are quite hard to follow, and the axes are not clear at all. Even if the authors try to explain their meaning in the caption, it would be wise to choose another type of representation that is more intuitive for the reader. Furthermore I believe there is some unclarity about the correctness of this experiment regarding the choice of Qtot (see questions). \n 1. Is there an application that would lead to similar dynamics as the problem illustrated in section 4? It would be useful to provide an example to further motivate why we would need the extra complexity. \n2. The graph structure is barely discussed, in the original max-plus there are issues with graphs presenting cycles, how would your algorithm be affected? \n3. Why is the NL-CG method starting at a higher position than the other methods in figure 2? \n4. The performance of QMIX is surprisingly low, why is that? It would have been useful to compare in another MARL where QMIX is not so bad e.g. starcraft. \n5. In figure 5, how can DCG be compared with the method since they should use different Qtot (linear vs non linear)? \n The authors do not explicitly mention any limitations of their method. I believe it would be valuable to add a paragraph. An obvious limitation to this family of methods is that they require the graph structure to be known. Discussing different types of graph structures or sparsity would have been useful. ", " The paper proposes to extend the coordination graph framework to allow for non-linear mixing of agent payoffs/pairwise utilities, in a way similar to how the QMIX algorithm extends VDN by means of a mixing network. 
The problem of being able to solve the subsequent decentralized constraint optimization required to compute the optimal joint action is tackled by resorting to piece-wise neural network analysis, noting that it suffices to solve a linear DCOP for each possible slope configuration in order to find such an optimal action. Moreover, an iterative version of this algorithm is proposed, possibly trading off optimality of the selected joint action with a reduced number of DCOPs to solve. **Originality:** To the best of the reviewer's knowledge, the idea of using a non-linear function to combine components in a coordination graph is novel, and so is the algorithm proposed to identify the optimal joint action in such a non-linear representation. \n\n**Quality:** The paper is in general technically sound, and the methodological claims are well supported by experimental results. The set of experiments and analysis proposed is good, and is really capable of addressing most of the concerns a reader may come up with. I only have some remarks for the authors, mainly concerning some more in-depth explanation of experimental details or to provide support of some claims. For example, it is never explained what CG topologies are used for the proposed experiments (also valid for the DCG baseline), while I think this is a relevant detail that should be reported. Or, where did you get the $Q_{tot}$ used in Figure 5? This is never explicitly explained, but is important information to understand what the figure is representing.\n\nAlso, I have some small concerns about the performance of the iterative algorithm. For example, when using NL-CG with an embedding of 4 units in Figure 2, is there any evidence that the iterative algorithm is performing well because it is indeed a good approximation and not because it is simply solving the constraint at every possible slope configuration? This should be assessed before claiming that it is reducing the number of solved slope configurations as you are doing.\n\n**Clarity:** The work is in general clearly written, and gives the reader all of the required information to correctly understand both the proposed methodology and the experimental results, with just some minor exceptions (see **Questions** below).\n\nPerhaps the explanation of the adaptation of Max-Sum to the non-linear DCOP setting may be a bit improved: what do you mean exactly by a joint action $\\mathbf{a}$ falling outside a given piece $\\rho_k$ on line 164? Perhaps you mean that the $q_i,q_{ij}$s generated by the joint action $\\mathbf{a}_k$ (I think these are $\\mathbf{q}_k$ in your notation) generate a different slope configuration $\\mathbf{c}^{m\\not=k}$?\n\n**Significance:** The presented idea is indeed significant, and advances our understanding of the field. With the great interest that both value-decomposition methods (with a growing impact of higher-order decompositions) and non-linear mixing architectures are gaining, this is a valuable contribution and could possibly allow for even better application of mixing techniques to coordination graph learning (that has proved indeed useful in learning better representations [Castellini et al., 2021](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwj92tifi634AhU6RPEDHUIuBXUQFnoECBoQAQ&url=https%3A%2F%2Fresearch.tudelft.nl%2Ffiles%2F94310595%2FCastellini2021_Article_AnalysingFactorizationsOfActio.pdf&usg=AOvVaw0pSTZXIivp-1y3apJsQhMH). 
- [Castellini et al., 2021](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwj92tifi634AhU6RPEDHUIuBXUQFnoECBoQAQ&url=https%3A%2F%2Fresearch.tudelft.nl%2Ffiles%2F94310595%2FCastellini2021_Article_AnalysingFactorizationsOfActio.pdf&usg=AOvVaw0pSTZXIivp-1y3apJsQhMH), is such an extremely relevant citation here, showing the benefits of higher-order factorizations on tackling an exponential number of joint actions over both centralized controllers and independent learners.\n- Your explanation of relative overgeneralization is a bit not-on-point (indeed, each agents is not supposed to know the actions of the others). Relative overgeneralization occurs when agents are pushed towards favouring a suboptimal behaviour because this gives, on average, a higher reward than the optimal coordinated behaviour. This happens because the concurrent learning process of the other agents shadows the optimal strategy with its exploration (equilibrium shadowing), rendering it less attractive than a suboptimal one.\n- It may not be immediately clear how you decompose LeakyReLU$(\\mathbf{o}_i)$ as $\\mathbf{c}_i\\circ \\mathbf{o}_i$, where $c_i$ is simply the multiplication coefficient of the LeakyReLU (so either $\\alpha$ or $1$). Therefore a reader may be a bit surprired of the definition of the slope configuration $\\mathbf{c}\\in\\{\\alpha,1\\}^m$. Please clarify what $\\mathbf{c}_i$ is and where do you get it.\n- What do you mean exactly by a joint action $\\mathbf{a}$ falling outside a given piece $\\rho_k$ on line 164? Perhaps you mean that the $q_i,q_{ij}$s generated by the joint action $\\mathbf{a}_k$ (I think these are $\\mathbf{q}_k$ in your notation) generates a different slope configuration $\\mathbf{c}^{m\\not=k}$? This should be better clarified.\n- What are the CG topologies for the different experiments for both your method and DCG? Why have you come up with such topologies?\n- In Figure 2, when using NL-CG with an embedding of 4 units, is there any evidence that the iterative algorithm is performing well because it is indeed a good approximation and not because it is simply solving the constraint at every possible slope configuration (thus reducing to the enumeration algorithm in practice)? This should be shown somewhere, possibly in an Appendix if the space constraints are too limiting (although I would prefer to see such a figure rather than the very lengthy Algorithms 1, 2 and 3, that are not really adding much to the method explanation, that is already quite clear).\n- In Figure 4, how do you guarantee that the iterative method is only solving the constrained problem for 16 slope configurations and not more?\n- In Figure 5, what $Q_{tot}$ are you using as a reference? Are you using the one from NL-CG with enumeration? Or perhaps you are computing some sort of ground-truth Q-function? That should be specified for a reader to clearly understand and appreciate the results. The possible limitations of the proposed method are actually never discussed in depth. The authors should make an effort and address this, trying to identify possible situations in which the proposed method could not perform as expected, and the reasons for this being the case." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 5 ]
[ "6sAii9JjuZH", "HENoABMXRy", "xZ8avovdPOp", "ZvPvEEuQm_P", "5jPzI5gwRpW", "01OSMHkARYl", "O1atqOkoHr4_", "HENoABMXRy", "B9Ta4WYArJ", "WmjhtPuMGPtG", "w36tsjttoP6", "KO6QiPjGG_", "axYDqevqpc5", "FF0UzYZx5H", "FF0UzYZx5H", "01OSMHkARYl", "01OSMHkARYl", "nips_2022_OcNoF7qA4t", "nips_2022_OcNoF7qA4t", "nips_2022_OcNoF7qA4t", "nips_2022_OcNoF7qA4t" ]
nips_2022_0xbhGxgzd1t
ComGAN: Unsupervised Disentanglement and Segmentation via Image Composition
We propose ComGAN, a simple unsupervised generative model, which simultaneously generates realistic images and high-quality semantic masks under an adversarial loss and a binary regularization. In this paper, we first investigate two kinds of trivial solutions in the compositional generation process, and demonstrate that their source is vanishing gradients on the mask. Then, we resolve these trivial solutions from the perspective of network architecture. Furthermore, we redesign two fully unsupervised modules based on ComGAN (DS-ComGAN), where the disentanglement module associates the foreground, background and mask with three independent variables, and the segmentation module learns object segmentation. Experimental results show that (i) ComGAN's network architecture effectively avoids trivial solutions without any supervised information and regularization; (ii) DS-ComGAN achieves remarkable results and outperforms existing semi-supervised and weakly supervised methods by a large margin in both the image disentanglement and unsupervised segmentation tasks. This implies that the redesign of ComGAN is a possible direction for future unsupervised work.
Accept
The paper proposes a compositional GAN model with a novel network architecture that solves the vanishing gradient problem underlying trivial solutions. The proposed model achieves strong results on image disentanglement and unsupervised segmentation tasks. The rebuttals by the authors have successfully addressed most of the concerns of the reviewers. All the reviewers are positive about this paper. Reviewer Tkuw's main concerns regarding the evaluation and the clarity of the method were addressed. The reviewer raised the rating. Reviewer 19KW felt positive about section 3.1 in the revised version and the additional empirical results regarding the gradient values observed during a training stage as given in Figure 5. The reviewer also updated the initial rating. Reviewer rWMN's concerns have also been addressed. The reviewer appreciates the additional detailed theoretical analysis on the problem.
train
[ "Q5FzOXK9bxU", "ckyEektziO8", "ftycyuBW4FE", "7-sKLOpgu4S", "xtsai1wwrkL", "1EizKaEfZRmX", "Jf2S7GrKlw", "THgGlMRDfV9", "7XiG-cjyLN", "_FjnonuxRS", "HNcjZEda-a", "2rT2J8hnJ9C", "gOR6FOs7E-s", "O0FJYU9yOI8", "MT0a2UoUuy9" ]
[ "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response and appreciation. We are glad that our feedback addressed your concerns. We will be encouraged if you would like to raise your rating accordingly. Please let us know if there is anything else we can do to make the paper better.", " Thank you for your comments. My main concerns regarding the evaluation and the clarity of the method were addressed. I think Sec. 3.1 is much more readable now and Remark 1 is very helpful to understand the proposed changes in relation to existing works (minor suggestion: maybe include the examples for priors works that use $\\Pi_1$, $\\Pi_2$ that you gave in the answer to another reviewer, too). I raised my rating to 6.", " Thanks for the authors’ effort in the feedback. My concerns have been addressed. I appreciate the additional detailed theoretical analysis on the problem.", " Dear Authors,\n\nThank you for the additional work!\nYour response did resolve most of my concerns, so I adjusted my initial evaluation.\nThe section 3.1 in the revised version seems more convincing than the prior one.\nI also like the additional empirical results regarding the gradient values observed during a training stage as given in Figure 5!", " Dear Reviewer rWMN,\n\nThanks again for your valued advice! We have responded to your initial comments. We are looking forward to your feedback and will be happy to answer any further questions you may have.\n\nThank you, author\n", " Dear Reviewer 19KW,\n\nThanks again for your valued advice! We have responded to your initial comments. We are looking forward to your feedback and will be happy to answer any further questions you may have.\n\nThank you, author\n", " Dear Reviewer Tkuw,\n\nThanks again for your valued advice! We have responded to your initial comments. We are looking forward to your feedback and will be happy to answer any further questions you may have.\n\nThank you, author", " We sincerely thank you for your kind words and thoughtful comments. Below, we provide point-to-point responses to address the questions.\n\n**Comment 1**: The presentation of the method part is not clear enough. Please check.\n\n**Response 1**: Thank you for your comment. The main revisions of the method part can be summarized as follows. First, we have significantly revised sub-section 3.1 (see response 2) to improve readability. Second, we added the step of adding global features and the reason for generating $x_z$ in sub-section 3.2. Third, we completed the description of the mask distribution alignment, which includes the specific operations and effects of the alignment in sub-section 3.3.\n\n**Comment 2**: The description in sub-section 3.1 is not that readable. [...]\n\n**Response 2**: Thank you for your suggestion. To improve the readability of sub-section 3.1, the revisions are detailed as follows. First, Remark 1 is added to show that the proposed ComGAN is a generic method and alleviates the shortcomings of the two typical methods. Second, we remove Proposition 1 and present Theorem 1 with more theoretical details. Theorem 1 proves that if the synthetic image satisfies Equation (2) and a set of constraints is satisfied, then there exists a lower bound on the gradient norm of the mask generation process. Third, we perform a simple discussion, i.e., the model avoids the first trivial solutions if the lower bound > 0. Fourth, we present Corollary 1, which maps the lower bound to a restriction on the modules in ComGAN. 
Eventually, if the synthetic image satisfies Equation (2) and the modules meet the restrictions in Corollary 1, then the model can effectively avoid the trivial solution from the network architecture perspective.\n\nEquation (2) represents the image synthesized by ComGAN. For your convenience, it is quoted as follows,\n$$\\bar{x}={F}(\\Phi(z)) \\odot {M}(\\Phi(z))+{B}(\\Phi(z)) \\odot(1-{M}(\\Phi(z)))$$.\nWe tried to describe our contribution briefly with illustrations, but due to the page limit (within 9 pages), we finally chose to describe the contribution with theoretical details. \nFor more details about how to avoid vanishing gradients from the perspective of network structure, please see response 1 in the official review by reviewer 19KW, or the 3.1 sub-section in the revised paper.\nThank you for your careful reading. This is a typo, we have corrected all the typos and carefully examined the whole manuscript. The variable $f$ in Equation (9) and the variable $F$ in Figure 3 represent the same thing, i.e., the foreground variable. DS-ComGAN maximizes the mutual information between foreground variable $f$ and foreground image, so that controllable image generation is achieved (Please see Figure 6 in the revised paper).\n\n**Comment 3**: The description of the mask distribution alignment could also be improved. [...]\n\n**Response 3**: Thank you for your comment. Yes, you understand correctly that the segmentation network does not require any segmented data input to the network. We add additional content to enhance the description of the mask distribution alignment in the revised paper.\n\n**Comment 4**: The learning process is not clear. Does DS-ComGAN need to be trained in two stages? Besides, the overall objective is missing and the loss term $\\beta L_{\\text {binary }}$ is introduced in the experiment section instead of the method section.\n\n**Response 4**: Thank you for your comment. We present the learning objectives for ComGAN and DS-ComGAN respectively.\n\n- ComGAN is trained by adversarial loss and binary regularization, and the learning objective is as follows,\n\n $$ L_{\\text {all}}= \\min _{\\Phi, \\mathcal{F}, \\mathcal{B}, \\mathcal{M}} \\max _{D} \\mathcal{L} _D^{a d v} + \\min _{\\Phi, \\mathcal{F}, \\mathcal{B}, \\mathcal{M}}\\beta \\mathcal{L} _{\\text {binary}}$$\n\n- DS-ComGAN performs two unsupervised tasks, image disentanglement and object segmentation. \n\n$$ L_{\\text {all}} = \\max _{D_f, D_b} L _{\\text {info}} + \\min _{G, \\mathcal{F}, \\mathcal{B}, \\mathcal{M}} \\max _{D_z} \\mathcal{L} _{D_z}^{a d v} + \\min _{S} \\max _{D_m} \\mathcal{L} _{D_m}^{a d v} + \\min _{S} \\lambda\\mathcal{L} _{\\text {cons}}$$\n\nThe disentangled image generation task is performed in a single stage, and its overall loss function is $\\max _{D_f, D_b} L _{\\text {info}} + \\min _{G, \\mathcal{F}, \\mathcal{B}, \\mathcal{M}} \\max _{D_z} \\mathcal{L} _{D_z}^{a d v} $. The segmentation network needs the images and the semantic mask generated by the disentanglement module. Therefore, the segmentation task needs to be trained in two stages and its overall loss function is $\\min _{G, \\mathcal{F}, \\mathcal{B}, \\mathcal{M}} \\max _{D_z} \\mathcal{L} _{D_z}^{a d v} + \\min _{S} \\max _{D_m} \\mathcal{L} _{D_m}^{a d v} + \\min _{S} \\lambda\\mathcal{L} _{\\text {cons}}$.\n\nUnfortunately, due to page limits, we could not place it in the methods section, but rather in B.2 and B.3 in the revised supporting material.
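To help readers connect the pieces above, here is a compact PyTorch-style sketch (ours, purely illustrative: `Phi`, `F`, `B` and `M` are assumed handles to the paper's sub-networks, and the binary penalty shown is just one common form of such a term -- the paper's exact $\\mathcal{L}_{\\text{binary}}$ may differ):

```python
import torch

def comgan_forward(z, Phi, F, B, M):
    phi = Phi(z)                                 # shared features
    m = M(phi)                                   # soft mask in [0, 1]
    x_bar = F(phi) * m + B(phi) * (1.0 - m)      # composition of Equation (2)
    return x_bar, m

def binary_reg(m):
    # Pushes mask values towards {0, 1}; an illustrative choice of penalty.
    return torch.minimum(m, 1.0 - m).mean()

def comgan_generator_loss(adv_loss, m, beta):
    # ComGAN's L_all: the adversarial term plus the beta-weighted binary term.
    return adv_loss + beta * binary_reg(m)
```

The DS-ComGAN objective above is assembled in the same additive fashion, with the mutual-information, mask-alignment and $\\lambda$-weighted consistency terms taking the corresponding places in their respective stages.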
\n\nExisting methods rely on additional supervised information to learn the distinction between image regions. Notice that ComGAN achieves foreground-background disentanglement in an unsupervised way. As a result, we add more global information to the shared features in ComGAN and maximize the mutual information between variables and images. Our image disentanglement method is unsupervised, simplifies the previous hierarchical generative network, and outperforms the SOTA semi-supervised and weakly supervised image disentanglement methods.\n\n- Differences in unsupervised segmentation methods. \n\nExisting methods rely on strong assumptions, such as that the foreground and background are largely independent, which limits their applicability. Different from these methods, we train a segmentation network using the images and semantic masks synthesized by the ComGAN variants. By an adversarial training strategy, the image distribution and the mask distribution are aligned. Furthermore, a consistency regularization is introduced to ensure that the predicted masks are consistent with the input images. Our unsupervised segmentation method relies only on the mild Assumption 1 and outperforms the SOTA unsupervised segmentation methods.\n\n*All four of these aspects are added to the revised paper, so as to highlight the differences between our methods and the previous methods.*\n\n**Comment 6**: The requirements for Proposition 1 regarding what it means for an architecture to be \"similar to ComGAN\" are not stated clearly, [...]\n\n**Response 6**: Thank you for your suggestion. To clearly demonstrate that the proposed architecture can help the model avoid trivial solutions, we remove Proposition 1 and present Remark 1, Theorem 1 and Corollary 1 in the revised paper. \nIn the revised paper, we have replaced \"similar to ComGAN\" with \"the generated images satisfy $\\bar x = {F}(\\Phi(z))\\odot{M}(\\Phi(z))+{B}(\\Phi(z))\\odot(1-{M}(\\Phi(z)))$\" for clarity. \nAs stated in Corollary 1, a minimal ${M}$ can help the model avoid trivial solutions, because if ${M}$ is a deep neural network, it may fail to map the fluctuations of shared features to the mask, i.e., ${M}(\\phi) \\approx {M}(\\phi +\\triangle\\phi)$. If we fix ${M}$ as a sigmoid layer, although the model can avoid trivial solutions, the generalization of the model decreases dramatically. Therefore, we perform the experiment of relaxing the constraints on ${M}$ (please see C.3 in the revised supporting material). We improve the capacity of ${M}$ by adding residual blocks. The experimental results show that the model is robust to the capacity of ${M}$, and the ${M}$ with one residual block improves the model performance.\n\nWe place restrictions on each module in Corollary 1. For your convenience, it is quoted as follows:\n\n$\\mathbf{Corollary 1}$: The following module restrictions help the model avoid trivial solutions: ${F}$ and ${B}$ are lightweight and differentiable, ${M}$ is a shallow network, and the capacity of $\\Phi$ is sufficient. \n\nFineGAN can be written as model $\\Pi_{1}$ in Remark 1. Furthermore, in the revised paper's experiments, we plot the gradient norm figure, where model $\\Pi_{1}$ is taken from the official code of FineGAN. 
From the Figure 5, we intuitively observe that the gradients of both the mask and foreground network of model $\\Pi_{1}$ converge to 0.\n\n**Comment 7**: The lemmas, propositions and proofs seem a bit vague as they do not clearly state assumptions or define all involved quantities, [...]\n\n**Response 7**: Thank you for your comment. This is a typo in I. 160-161, we have corrected all the typos and carefully examined the whole manuscript. Your inference is correct when the $F$ and $B$ contain a large number of parameters and layers. However, we get a lower bound in Theorem 1 that is related to the parameters of $F$ and $B$. One way to improve this lower bound is to reduce the parameters of $F$ and $B$. Thus, it is possible that this statement is correct only in Corollary 1, i.e., \"Any change in the foreground or background affects the mask. \"\n\n**Comment 8**: Motivation for DS-ComGAN architecture is unclear: Why is the $x_z$ output needed in addition to the output composited from foreground, background and mask?\n\n**Response 8**: Thank you for your comment. The role of $x_z$ is to add global information to the features of $G$ via adversarial training. Since the features in $G$ are fed into three subnetworks, if the features contain much local information, the model focuses on the abrupt regions. The operation is necessary for specific datasets such as Stanford Cars, because it permits the mask to capture the across-the-board distinction between the foreground and background. Please see D.2 in the revised supporting material for the ablation study on global feature extraction.", " We sincerely thank you for your time and constructive comments. We hope our detailed responses below would resolve the remaining questions you have.\n\n**Comment 1**: l. 47-60 could be moved to the related work section.\n\n**Response 1**: Thank you for your suggestion. We move l. 47-60 to the related work section and rewrite the introduction and related work section in the revised manuscript. In the introduction, we describe the negative role of trivial solutions on two related tasks and highlight the differences between DS-ComGAN and previous methods on both tasks.\n\n**Comment 2**: l. 294 refers to the wrong figure.\n\n**Response 2**: Thank you for the careful reading. We have corrected the index and double-checked the entire manuscript.\n\n**Comment 3**: l. 298: how does fine-tuning mitigate masks that are inconsistent with foreground images? [...]\n\n**Response 3**: Thank you for your comment. We think that this inconsistency is caused by the adversarial training strategy, which is similar to a model collapse. However, we alleviate the inconsistency issue by increasing the weight of $L_{cons}$, i.e., $\\lambda$ to force the segmentation network to learn more mask features. The main effect of mask distribution alignment is that the model learns the low dimensional manifold of the masks, so that the predicted segmentation masks contain more details and are clearer. \nPlease see Fig. 8 in the revised paper, the predicted masks segment visual details precisely, such as the legs of birds and the rearview mirrors of cars. For more segmentation results, please see D.4 in the revised supporting material. The $L_{cons}$ in Eq. (13) is a pixel-wise loss. This kind of loss reduces the correlation between pixels. 
If we train the segmentation model $S$ with the $L_{cons}$ only, the predicted segmentation masks may be blurred and lack details.\n\n**Comment 4**: It would be helpful to use the same evaluation protocol as in FineGAN (based on 30k samples) for Tab. 3. The reported Inception Scores seem very low.\n\n**Response 4**: Thank you for your suggestion. We adopt the same evaluation protocol as SSG-GAN, instead of FineGAN. According to the official code of SSG-GAN, they generate 20K samples to evaluate the model performance. The benefits of choosing SSG-GAN as our evaluation protocol are as follows: 1) SSG-GAN compares a wide range of GAN-based generation models, including weakly supervised models such as FineGAN and semi-supervised models such as Triangle-GAN. 2) SSG-GAN is the SOTA semi-supervised model in the image disentanglement generation task. Therefore, it is clear from Table 3 that our unsupervised method outperforms the SSG-GAN, which highlights the advantages of our model, i.e., our model is unsupervised and outperforms SOTA. The reported Inception Scores is taken from Table 2 in SSG-GAN. Moreover, according to the official code of FineGAN, they use the fine-tuned version for computing the Inception score, where the inception model is fine-tuned on all the 200 categories (for birds). This may be the reason why the reported Inception Scores seem to be low.\n\n**Comment 5**: The core idea and differences to previous approaches are not clearly stated.\n\n**Response 5**: Thank you for your comment. Our core idea can be summarized in a sentence, i.e., \"find the causes of trivial solutions, solve trivial solutions via the perspective of architecture and generalize the model to downstream tasks.\" One of our main contributions is to focus on solving trivial solutions from the perspective of architecture. The key points of this method are as follows: 1) the model is a generic image compositional generation method. 2) the model presents a lower bound on the gradient 2-norm of the mask generation process. 3) the module restrictions help the model to better avoid trivial solutions. For more details, please see response 1 in the official review by reviewer 19KW.\n\nThe differences with the previous methods are summarized into four aspects, which are 1) differences in alleviating trivial solutions; 2) differences in generative models; 3) differences in image disentanglement methods; 4) differences in unsupervised segmentation methods.\n\n- Differences in alleviating trivial solutions.\n\nTo the best of our knowledge, no previous work has indicated that the source of trivial solutions is vanishing gradients on the mask. Our method is also the first to solve trivial solutions from the perspective of architecture. Existing work alleviates trivial solutions in two ways. One way is to add supervised information, such as CGN avoids trivial solutions by adding pre-trained U2-Net. Another is to design clever regularization and fine-tune the parameters.\n\n- Differences in generative models.\n\nPlease see response 1 in the official review by reviewer 19KW.\n\n", " **Comment 2**: The connection between the theoretical analysis and [...].\n\n**Response 2**: Thank you for your comment. We give the connection between the proposed model and the theoretical analysis in Response 1. In addition, we give a visualization to show that the proposed architecture helps the model not to fall into trivial solutions. 
In the revised paper, we trace the gradient norms of Model $\\Pi_1$, Model $\\Pi_2$ and our model and plot them in Figure 5.\nIt is clear from Figure 5 that the gradient norms of the mask networks in model $\\Pi_1$ and model $\\Pi_2$ converge to zero and the models degrade to a raw GAN, while our method effectively avoids vanishing gradients.\n\n**Comment 3**: Why was the performance of PerturbGAN not compared?\n\n**Response 3**: Thank you for your comment. As reported by IEM+SegNet, IEM+SegNet outperforms PerturbGAN by a big margin, while our method outperforms IEM+SegNet. Therefore, we did not compare our method with PerturbGAN. The suggestion improves the persuasiveness of this paper; in the revised paper, we compare the performance of PerturbGAN and add it to Table 5.\n\n**Comment 4**: Maybe simplegan --> finegan at the line 221?\n\n**Response 4**: Thank you for your comment. Our baseline model is SimpleGAN, not FineGAN. Notice that SimpleGAN is also the baseline model and backbone of FineGAN. For a fair comparison, we build the baseline model based on SimpleGAN, which generates global features as a feature decoder.\n\n**Comment 5**: Authors did not address the limitations and potential negative societal impact of their work.\n\n**Response 5**: Thank you for your comment. We describe the limitations of our work in the Conclusion, which is quoted as follows:\n\n\"As a limitation of the method, DS-ComGAN has struggled to achieve the desired performance when the foreground object features are highly diverse (e.g., HKU-IS).\"\n\nTo clarify the potential negative societal impact of our work, some comments have been added to the Conclusion of the revised manuscript. For your convenience, it is quoted as follows:\n\n\"DS-ComGAN performs excellently in controlled image synthesis tasks, which may increase the incidence of image falsification.\"\n\n**Comment 6**: I want to know the authors' opinion on how difficult it would be to apply this method to coarse-grained datasets like ImageNet.\n\n**Response 6**: Thanks for this good question! \nHistorically, methods that perform well on fine-grained datasets have been difficult to migrate to coarse-grained datasets. The issue is related not only to the model capacity, but also to dataset-sensitive regularization and hyperparameters. It is notable that our method contains few hyperparameters and is robust to them. Hence, along these lines, we migrated DS-ComGAN directly to the coarse-grained dataset CIFAR-10 for experiments (compared with ImageNet, we trained DS-ComGAN on CIFAR-10 within 24 hours). \nWe set the dimensionality of the latents to $N=10$, the binary regularization weight to $\\beta=0.5$, and scale all the images to 64 $\\times$ 64 pixels. Although the model does not exhibit performance as strong as on the fine-grained datasets, it shows similar behavior on CIFAR-10 without further fine-tuning; that is, DS-ComGAN synthesizes the images and the corresponding semantic masks. More visualized experimental results are added to the supplementary material (please see D.3 in the revised supplementary materials). Experiments on CIFAR-10 once again demonstrate that our method is flexible and robust to different datasets. Returning to the question, we consider that there are at least two difficulties in applying our method to the ImageNet dataset. First, the capacity of the model should be improved to learn more features. 
Second, since ImageNet has a large number of categories, the model needs additional regularization or supervised information to learn the significant category variations.", " We sincerely thank you for your time and constructive comments. Below, we provide point-to-point replies to your comments in order, and we hope they resolve the remaining questions you have.\n\n**Comment 1:** Why is the proposed network helpful for avoiding the vanishing gradient issue?\n\n**Response 1:** Thank you for your comment. We summarize the following three points to explain this issue.\n- Our model is a generic image compositional generation method.\n\nWe recall that the image synthesized by ComGAN can be written as follows:\n $$\\bar x = {F}(\\Phi(z))\\odot{M}(\\Phi(z))+{B}(\\Phi(z))\\odot(1-{M}(\\Phi(z)))$$\n\n$\\mathbf{Remark 1}$:\n*This form generalizes two typical image compositional generation methods:*\n\n *- If only $\\Phi(\\cdot)$ in ${B}$ is an identity map, i.e., ${B}(\\Phi(z))={B}(z)$, then this form is equivalent to model $\\Pi_{1}$, that is, two independent generators synthesize a composite image where shared features exist in the foreground and mask generation. To our knowledge, FineGAN, MixNMatch, OneGAN, C3-GAN and Labels4Free, etc. can be written as model $\\Pi_{1}$.* \n\n*- If $\\Phi(\\cdot)$ is an identity map, i.e., $\\Phi(z)=z$, then this form is equivalent to model $\\Pi_{2}$, that is, three independent generators synthesize the foreground, background and mask respectively. To our knowledge, PerturbGAN and CGN, etc. can be written as model $\\Pi_{2}$.* \n\nFrom the above observations, we notice that the two typical methods share a common shortcoming.\nBoth models $\\Pi_{1}$ and $\\Pi_{2}$ contain an independent background generation process, which not only ignores the feature connection between foreground, background, and mask, but also may be limited by shortcomings of GANs, such as mode collapse.\n\n- Our model presents a lower bound on the gradient 2-norm.\n\n$\\mathbf{Theorem 1}$: Given a generator $G_{\\theta}$ composed of a decoder $\\Phi_{\\theta_\\phi}: \\mathcal{Z} \\rightarrow \\phi$ and three subnetworks $F_{\\theta_f}, B_{\\theta_b}$ and $M_{\\theta_m}$: $\\phi \\rightarrow\\mathcal{X}$, let $D$ be a discriminator and $D^*(G^*(\\cdot))$ be the Nash equilibrium. If the generated images satisfy $\\bar x = {F}(\\Phi(z))\\odot{M}(\\Phi(z))+{B}(\\Phi(z))\\odot(1-{M}(\\Phi(z)))$, $\\lVert D(G(\\cdot))- D^*(G^*(\\cdot)) \\rVert <\\epsilon$, $\\max$ { $\\mathbb{E} _{ \\phi\\sim p(\\Phi(z))} \\lVert J _{\\theta_f} F ( \\phi ) \\rVert$, $\\mathbb{E} _{\\phi \\sim p(\\Phi(z))} \\lVert J _{\\theta_b} {B} ( \\phi ) \\rVert $} $\\leq \\delta ^{2}$ and $\\lVert L^{adv} _D \\rVert \\geq \\sigma$, then\n\n$$\\lVert \\nabla_{(\\theta_{\\phi}, \\theta_m)} \\mathbb{E} _{z \\sim p(z)} \\log(1-D(F(\\Phi(z))))\\rVert^2_2 \\geq \\sigma^2 -\\delta^{2} \\frac{\\epsilon^{2}}{(1/2 - \\epsilon)^{2}}$$\n\n*Proof: Please see Theorem 1 in the revised paper.* \n\nWe denote $\\rho=\\sigma^2 - \\delta^{2} \\frac{\\epsilon^{2}}{(1/2-\\epsilon)^{2}}$ and the trivial masks as\n $\\bar x^*_m ={M} _{\\theta^*_m} (\\Phi _{\\theta ^*_\\phi}(z))$. If $\\rho>0$, then $\\Phi$ and ${M}$ are updated. The updated masks are $\\bar x ^+ _m ={M} _{\\theta ^+ _m} ( \\Phi _{\\theta ^+_\\phi}(z))$, which means that the model can escape from the first trivial solution, i.e. 
$ \\bar{x}^+_m \\neq \\bar x^*_m$ and $\\partial L ^{adv} _D/ \\partial \\bar{x}^+_m\\neq0$.\n\n- Module restrictions in our model.\n\n$\\mathbf{Corollary 1}$: The following module restrictions help the model avoid trivial solutions: ${F}$ and ${B}$ are lightweight and differentiable, ${M}$ is a shallow network, and the capacity of $\\Phi$ is sufficient.\n\n*Proof: By Theorem 1, the larger the value of $\\rho$, the easier it is for the model to escape from the trivial solution. We observe that $\\max$ { $\\mathbb{E} _{ \\phi \\sim p( \\Phi ( z ) ) } \\lVert J _{\\theta_f} F ( \\phi ) \\rVert$, $\\mathbb{E} _{\\phi \\sim p(\\Phi(z))} \\lVert J _{\\theta_b} {B} ( \\phi ) \\rVert $} $\\leq \\delta ^{2}$, which means that reducing the parameters of $\\theta_f$ and $\\theta_b$ can effectively increase $\\rho$. Notice that when $\\theta_f$ and $\\theta_b$ have too few parameters, or even no parameters, our model degrades into a raw GAN. Therefore, $F$ and $B$ should be designed as lightweight modules (e.g., adding residual connections), so as to enhance the capacity of the modules with fewer parameters. Furthermore, $\\rho$ is actually the gradient norm from the decoder $\\Phi$, so it is natural to increase $\\rho$ by raising the capacity of $\\Phi$.\nIn addition, an update of $\\Phi$ does not necessarily mean an update of $M$. If $M$ is a deep neural network, it may not be able to map the fluctuations of features to the mask space, i.e., ${M}(\\phi) \\approx {M}(\\phi +\\triangle\\phi)$. Therefore, $M$ should be designed as a shallow neural network. As for the second trivial solution, $\\bar x_f = \\bar x_b$ is a fragile equilibrium. We can break the identical mapping, i.e., ${F} _{\\theta_f}(\\phi) = {B} _{\\theta_b}(\\phi)$, by changing the parameters $\\theta_f$ and $\\theta_b$ or modifying the structure of ${F}$ and ${B}$.*", " The paper considers the task of learning GANs that decompose the image formation process into foreground, background and mask generation and composition. Compared to previous methods, the proposed ComGAN aims to avoid trivial solutions (where masks do not correspond to foreground objects) mainly through the network architecture instead of regularizations, which often require extensive hyperparameter searches for suitable regularization strengths. - Strengths\n - The proposed approach simplifies the design of compositional GANs compared to previous methods and demonstrates improved performance in terms of synthesis quality as well as unsupervised segmentation performance.\n - Compared to similar compositional generative models like FineGAN [25], the supervision requirements regarding weak background supervision are further reduced.\n - Finding a compositional generator architecture that is stable to train has many applications beyond GANs. Thus, the work is potentially interesting for a larger audience.\n - Although there are still a few hyperparameters (mask consistency loss weight, binary regularization weight, dimensionality of latents, relative sizes of subnetworks), experiments demonstrate some robustness to these parameters.\n- Weaknesses\n - The core idea and differences to previous approaches are not clearly stated.\n - The requirements for Proposition 1 regarding what it means for an architecture to be \"similar to ComGAN\" are not stated clearly. I assume the key point is that M consists of only a sigmoid layer. However, the formulation in l. 
149 seems to be the only place where this is stated, and even there it remains vague and could be interpreted as containing the same layers as F and B plus an additional sigmoid layer. The latter interpretation is also what Fig. 3 suggests (albeit with one residual block less). If this (a shared decoder with a minimal mask decoder) is the key idea of the paper, it should be communicated more clearly, and probably also at a higher level already in the introduction. Without clear restrictions on the architecture, one could also think that FineGAN satisfies the requirements with G set to the identity.\n - The lemmas, propositions and proofs seem a bit vague as they do not clearly state assumptions or define all involved quantities. In l. 160-161 it is not clear what is meant by $\\bar{x}_m = M^{-1}(G(z))$ - why would the input of M equal its output? Intuitively, I also don't see how \"it is clear that any change of foreground or background affects the mask, [...]\". Since F and B do contain additional layers, foreground and background could be affected by changes in those layers even though G(z) and hence the mask would remain unaffected, no? The other way, that any change of the mask affects both foreground and background, seems to be true.\n - Motivation for the DS-ComGAN architecture is unclear: Why is the $\\bar{x}_z$ output needed in addition to the output composited from foreground, background and mask? - l. 47-60 could be moved to the related work section\n- l. 294 refers to the wrong figure\n- l. 298: how does fine-tuning mitigate masks that are inconsistent with foreground images? Also, what is the effect of the mask distribution alignment (l. 206)? Why not train the segmentation model S with Eq. (13) only?\n- It would be helpful to use the same evaluation protocol as in FineGAN (based on 30k samples) for Tab. 3. The reported Inception Scores seem very low.\n- See also weaknesses for questions\nLimitations and potential societal impact have been addressed adequately.", " The authors point out, in a mathematical analysis, two factors that lead a scene decomposition model to fall into trivial solutions.\nThese are related to the vanishing gradient phenomenon affecting the mask generator.\nTo avoid this, they propose a novel network architecture, where the features for generating the decomposed scene elements are derived from the ones used for generating the entire scene at once. \nWith this architecture, they achieve SOTA scores on both the mask prediction and the image quality evaluation metrics. Strengths\n- The authors tackle an important problem in scene component generation models.\n- They propose a novel architecture for robust mask generation based on a theoretical analysis of the problem.\n- The authors did a thorough ablation study to show that each proposed element is effective.\n- The comparison of both quantitative and qualitative results with previous works implies that the proposed method is effective in boosting scene decomposition performance.\n\nWeaknesses\n- The connection between the theoretical analysis and the proposed architecture design is not well established. More details are needed to understand why the proposed architecture helps the model not fall into the vanishing gradient phenomenon.\n Questions\n- Why is the proposed network helpful for avoiding the vanishing gradient issue?\n- Why was the performance of PerturbGAN not compared?\n\nSuggestions\n- Maybe simplegan —> finegan at the line 221? 
Authors did not address the limitations and potential negative societal impact of their work.\n\nSuggestions\n- I want to know the authors' opinion on how difficult it would be to apply this method to coarse-grained datasets like ImageNet.", " This work analyses the reason for trivial solutions during mask learning in image composition GANs, and introduces a new model architecture, ComGAN, to solve the trivial solution issue. Furthermore, an unsupervised object segmentation module is also involved to construct the DS-ComGAN model. DS-ComGAN can perform both disentangled image generation and object segmentation, and outperforms semi-supervised and weakly supervised baselines. Strengths\n\n- It is claimed that this work is the first to solve the trivial solution in disentangled image generation by changing the network architecture. The change is simple to apply and obtains significant improvements. I believe this technique can also be useful for other models and tasks.\n\n- Both disentangled image generation and object segmentation tasks can be performed in a single framework. More importantly, the learning can be achieved in an unsupervised way by a carefully designed adversarial learning strategy. \n\n- Sufficient experiments and analyses have been done to demonstrate the effectiveness of the proposed method.\n\nWeaknesses\n\nMy main concerns are about the description.\n\n- The description in sub-section 3.1 is not that readable. I suggest improving the description with a more intuitive illustration to point out the key contribution: how to avoid vanishing gradients from the network architecture perspective. Besides, some symbols seem inconsistent. Do the variable $f$ in Equation (9) and the variable $F$ in Figure 3 represent the same thing? What does the variable mean?\n\n- The description of the mask distribution alignment could also be improved. If I understand correctly, the proposed segmentation network $S$ does not need any paired/unpaired segmentation data. For $D_m$, $\\bar{x}_m$ is regarded as the real and $\\hat{x}_m$ and $x_m$ are regarded as the fake.\n\n- The learning process is not clear. Does DS-ComGAN need to be trained in two stages?
 Besides, the overall objective is missing and the loss term $\\beta L_{\\text{binary}}$ is introduced in the experiment section instead of the method section.\n The presentation of the method part is not clear enough. Please check the questions in the weaknesses part. The limitations and potential negative societal impact have been well described." ]
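The three reviews and responses above keep returning to the compositional form in Equation (2): a shared decoder $\Phi$, lightweight foreground/background heads $F$ and $B$, and a shallow sigmoid mask head $M$. A few lines of PyTorch make that architectural claim concrete. This is only an editorial sketch: the layer counts and channel widths are placeholders, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ComGANGenerator(nn.Module):
    """Sketch of the composition in Equation (2); all sizes are illustrative."""
    def __init__(self, z_dim=128, ch=64):
        super().__init__()
        # Phi: shared decoder mapping z to a shared feature map
        self.shared = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch, 4), nn.ReLU(),
            nn.ConvTranspose2d(ch, ch, 4, 2, 1), nn.ReLU(),
        )
        self.fg = nn.Conv2d(ch, 3, 3, padding=1)   # F: lightweight foreground head
        self.bg = nn.Conv2d(ch, 3, 3, padding=1)   # B: lightweight background head
        self.mask = nn.Sequential(                 # M: shallow head ending in a sigmoid
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        phi = self.shared(z[:, :, None, None])     # all three heads read the same phi
        f, b, m = self.fg(phi), self.bg(phi), self.mask(phi)
        return f * m + b * (1 - m), f, b, m        # composite image plus components

g = ComGANGenerator()
x, f, b, m = g(torch.randn(2, 128))                # x: (2, 3, 8, 8) composite images
```

Because $F$, $B$ and $M$ all read the same $\Phi(z)$, adversarial gradients from the foreground term also update $\Phi$, so the mask output $M(\Phi(z))$ keeps changing; this is the mechanism that Theorem 1 lower-bounds and Corollary 1 turns into module restrictions.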
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3 ]
[ "ftycyuBW4FE", "7XiG-cjyLN", "THgGlMRDfV9", "HNcjZEda-a", "MT0a2UoUuy9", "O0FJYU9yOI8", "gOR6FOs7E-s", "MT0a2UoUuy9", "_FjnonuxRS", "gOR6FOs7E-s", "2rT2J8hnJ9C", "O0FJYU9yOI8", "nips_2022_0xbhGxgzd1t", "nips_2022_0xbhGxgzd1t", "nips_2022_0xbhGxgzd1t" ]
nips_2022_9TsP2Gg0CM
Homomorphic Matrix Completion
In recommendation systems, global positioning, system identification and mobile social networks, it is a fundamental routine that a server completes a low-rank matrix from an observed subset of its entries. However, sending data to a cloud server raises data privacy concerns due to eavesdropping attacks and the single-point failure problem; e.g., the Netflix prize contest was canceled after a privacy lawsuit. In this paper, we propose a homomorphic matrix completion algorithm for privacy-preserving data completion. First, we formulate a \textit{homomorphic matrix completion} problem in which a server performs matrix completion on cyphertexts, and we propose an encryption scheme that is fast and easy to implement. Secondly, we prove that the proposed scheme satisfies the \textit{homomorphism property}: decrypting the matrix recovered from cyphertexts yields the target complete matrix in plaintext. Thirdly, we prove that the proposed scheme satisfies an $(\epsilon, \delta)$-differential privacy property. With a similar level of privacy guarantee, we improve the best-known error bound $O(\sqrt[10]{n_1^3n_2})$ to EXACT recovery at the price of more samples. Finally, on numerical and real-world data, we show that both homomorphic nuclear-norm minimization and alternating minimization algorithms achieve accurate recoveries on cyphertexts, verifying the homomorphism property.
Accept
This paper concerns privacy-preserving matrix completion in a distributed manner. The communication security is based on homomorphic encryption, while the notion of privacy is defined as subspace-aware joint differential privacy. The paper received a mixed evaluation from the reviewers, ranging from accept (7) to reject (3), and the reviewers who gave these scores decided to keep them after the rebuttal and the following discussion. The strengths of the paper mentioned by the reviewers were: focusing on an interesting and important problem; providing an algorithm that guarantees exact recovery, as opposed to sacrificing accuracy as in the prior work; and solid experimental results. On the other hand, the identified weaknesses were: the DP side does not seem very interesting, just involving the Gaussian mechanism; the necessity to know at least an estimate of the rank of M before the homomorphic encryption (a 2-phase solution to that is sketched in the rebuttal); the paper not being self-contained; and some technical issues, which (I believe) were clarified in the feedback. Despite the weaknesses mentioned above, I lean toward acceptance with my recommendation, although with limited confidence.
train
[ "qeuIRXvvywy", "YCPArOklwc8", "sbcS4S0de8G", "RXAdDqTY7VY", "L1woWzBytdQ", "lF5WNN1THmu", "71-6MD5EA1", "GXvsEnkc-A", "DjkNb73cJ4", "IRj2ANJxif", "yjvzL6L9Tv", "b7y5nQsj1_" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The reviewer (Reviewer UWAB) raised an interesting question about the “block distributed” setting, which is a special case covered by the proposed homomorphic encryption framework, and the provided theoretical results naturally apply.\n\nAbout ​​parameter k and the true rank r of data matrix M, the authors restate the two-round scheme (given in the Appx. B1) in the responses, which does not require prior knowledge of rank r or proper selection of parameter k.\n\nWould like to know whether Reviewer UWAB thinks that the EXACT recovery (the novel homomorphism property) of matrix completion is straightforward.\n\nThanks very much!", " The authors' response has adequately addressed my concerns and questions.\nFor some reason, I am not able to see the revised version of the paper or the appendix within OpenReview, but I am quite satisfied with the responses provided.\nI wish to thank them for their insightful answer to my question regarding the choice of $k$ via the two-round scheme.\nIt's also nice to know that the results have been extended to a block-distributed setting in a longer (journal ?) version of this paper.\nEven after considering all other reviewer feedback, my positive rating on this paper remains unchanged.\n\n\n", " The authors sincerely thank all reviewers and area chair. To recap, this work has made the following major contributions.\n\n1. A novel homomorphic encryption framework for the distributed matrix completion problem. In contrast to conventional homomorphic encryption methods that are slow, the proposed scheme is quite fast.\n2. With a similar level of $(\\epsilon, \\delta)$ differential privacy, this paper improves the existing best-known error bound $O(\\sqrt[10]{n_1^3n_2})$ in [14] to EXACT recovery at a cost of more samples. It changes the conventional utility (accuracy)-privacy tradeoff to a novel #samples-privacy tradeoff.\n3. The reviewer (Reviewer UWAB) raised an interesting question about the “block distributed” setting, which is a special case covered by the proposed homomorphic encryption framework, and the provided theoretical results naturally apply.\n4. Both Reviewer WYT8 and Reviewer UWAB raise questions about ​​parameter $k$ and the true rank $r$ of data matrix $M$. The authors restate the two-round scheme (given in the Appx. B1) in the responses, which does not require prior knowledge of rank $r$ or proper selection of parameter $k$. It will be made clear in the paper.\n\nReviewer UWAB is right about the underlying idea of Alg. 1 and Alg. 2 that replaces random noise of differential privacy as the product of public key and random noise. However, the EXACT recovery (the novel homomorphism property) of matrix completion is non-trivial. ", " > The parameter $k$ is a design parameter of the algorithm but the authors simply set it to $10$ in all experiments. Shouldn't it vary with rank $r$ or $p$?\n\nThe impact of the parameter $k$ can be observed in Fig. 3 directly. The homomorphic matrix completion essentially hides the plaintext $M$ with rank $r$ into a larger space $\\overline{M}$ with rank $\\overline{r}$, where $\\overline{r} \\leq r + k$ and $k$ is the dimension of the keys. Therefore, the impact of the parameter $k$ can be observed by varying the rank $r$ shown in Fig. 3, in which a larger $k$ requires more observed entries to achieve exact recovery. 
An interpretation is also available in Lines 293~296.\n\nFurthermore, we would like to refer to a relevant response to a question raised by Reviewer UWAB:\n\n> > One key assumption in matrix completion is low-rankness. However, the transformation M+KR in the server is determined by the public key K to guarantee low rankness. In practice, how do we choose the dimension k since we have no idea the true rank r of M?\n> \n> This is an interesting question. It would be quite interesting to point out how the two-round encryption scheme in Appx. B1 provides a smart way to address the reviewer’s concern. \n> \n> * In the first round of encryption, one can choose an arbitrary value $k' \\geq 1$. The server obtains an encrypted $\\overline M1$ and estimates a column subspace $K2 = \\overline U$ with dimension $k = k' + r$. \n> * In the second round of encryption, the server sends both $K2 = \\overline U$ and $k= k'+r$ to all user nodes. \n> \n> Using the above method, our scheme does not require the true rank $r$ of the data matrix $M$.\n> \n> We would like to further discuss real-world applications where the data matrix $M$ is approximately low-rank, i.e., not strictly low-rank with a true rank $r$. Some works [1, 2, 3] derived a sample complexity with a polynomial dependence on the condition number $\\kappa = \\sigma_1 / \\sigma_r$. \n> \n> If the maximum value of the generated random numbers is larger than $\\sigma_1$, or the minimum value is smaller than $\\sigma_r$, or both cases hold simultaneously, then those analytical results in [1, 2, 3] need to use a new $\\kappa$ value.\n> \n> * [1] Jain, Prateek, Praneeth Netrapalli, and Sujay Sanghavi. \"Low-rank matrix completion using alternating minimization.\" Proceedings of the forty-fifth annual ACM symposium on Theory of computing. 2013. \n> * [2] Hardt, Moritz. \"Understanding alternating minimization for matrix completion.\" 2014 IEEE 55th Annual Symposium on Foundations of Computer Science. IEEE, 2014. \n> * [3] Keshavan, Raghunandan Hulikal. Efficient algorithms for collaborative filtering. Stanford University, 2012.", " Thank you for your positive and thoughtful comments. We would like to address your concerns and answer your questions in the following.\n\n> The experimental results as summarized above appear to be quite solid. The observation regarding the significant drop in error for HNN in Sec. 6.3 is also quite interesting.\n\nSuch a significant drop in error is closely related to the phase transition phenomena of matrix completion. It is worth further investigation. We are planning to provide more numerical results in an appendix, for completeness purposes.\n\n> The authors provide geometric intuition for Lemma 1 regarding why $M$ is the exact solution to Eq (3) when a certain condition holds. It appears that Lemma 1 and this geometric interpretation is original and not already present in reference [3] or [6].\n\nBoth Lemma 1 and the geometric interpretation are not from [3, 6], and yes, they can be considered original. But we would like to credit them back to [3, 6], since the fundamental ingredients come from [3, 6], through some fruitful discussion with those authors.\n\nAlso, Theorem 2 is a homomorphic version of the Rudelson selection estimation theorem. The proof of Theorem 3 in Appx. A3 reveals the essence of the homomorphism property of the matrix completion problem. The underlying beauty motivates the algorithmic design of Alg. 1 and Alg. 2, which turns out to be quite simple and quite effective. 
\n\n> Homomorphic encryption is generally supposed to be slow, but the proposed encryption / decryption method using public / private random matrices seems to be quite fast.\n\nThanks for your positive comment. Yes, general fully homomorphic encryption is quite slow, whereas our homomorphic version of matrix completion is fast. \n\n\n> This reviewer found the math in the paper hard to follow and hard to constantly refer to [3] or other references because the paper didn't seem to be self-contained. The tangent cone mentioned after eq (12) is not defined in the paper and neither is the incoherence parameter used in (13).\n\nSorry, we should have clearly pointed out that the tangent cone $T$ is just the linear space spanned by matrices with the same column space or row space as $M$. Its definition was put in equation (20) in Appx. A2, which should be moved to right after (12). It was a mistake. \n\nAlso, we should move the definition of the incoherence parameter, Def. 6 in Appx. A5, to before (13). \n\n\n> Both constants $C$ and $c$ are present in Lemma 3 and Corollary 1. It seems to this reviewer that $C=c$.\n\nIn Lemma 3 and Corollary 1, the uppercase $C$ stands for an absolute constant (in ref. [4]), whose value remains the same throughout the proofs. The lowercase $c$ is a normal numerical constant, which may take varying values across different appearances, since the interest is the decaying trend of the probability. This confusion is mainly due to the original theoretical papers on matrix completion problems, like ref. [4] and several others.\n\n\n> The parameter $\\xi$ in Theorem 4 doesn't seem to be defined anywhere.\n\nThanks very much! This parameter $\\xi$ enters the probability bound $1- \\xi - \\delta$ as an error probability. In the proof of Theorem 4, we employed a two-round encryption; thus we used both $\\xi$ and $\\delta$. We should have made it clear.\n\n\n> Typo: \"Rank-decent\" should be \"Rank-descent\" after eq (12).\n\nThanks for the careful reading, and the typo is fixed in the revised version.", " > Could the authors numerically provide the privacy and recovery trade-off in different privacy levels?\n\nThis is a challenging question! The primary finding is that a low-rank matrix does not exhibit a privacy-recovery tradeoff, but a privacy-sample tradeoff, because the proposed homomorphic framework can achieve EXACT recovery at a cost of more samples. \n\nHowever, numerical experiments may still help investigate the privacy-recovery tradeoff at different privacy levels:\n1. Given different privacy levels, according to Theorem 4, the epsilon-delta parameters will determine the standard deviation of the Gaussian distribution.\n2. The added Gaussian random numbers will affect the recovery via the condition number. As described above, if the maximum value of the generated random numbers is larger than $\\sigma_1$, or the minimum value is smaller than $\\sigma_r$, or both cases hold simultaneously, then those analytical results in [1, 2, 3] need to use a new $\\kappa$ value. \n\nWe are considering adding some numerical results in an appendix, in order to give readers a better understanding of whether the privacy-recovery trade-off has fully vanished.\n\n> Overall, this work follows a simple idea of differential privacy by replacing random noise as the product of public key and random noise. The guarantee of differential privacy and completion is relatively standard in existing works.\n\nYour summary is accurate. \n\nMoreover, the authors would like to say that Alg. 
1~2 presents an add-on scheme to existing matrix completion methods, which is relatively easy to implement. Quite unexpectedly, it guarantees both EXACT recovery and the DP property. Therefore, we would like to share this finding with the community. ", " Thank you for your insightful and detailed comments. We would like to address your concerns and your questions in the following.\n\n> It is worth discussing whether the holomorphic matrix completion can work in that distributed setting.\n\nThanks for suggesting that paper. Yes, our homomorphic framework naturally extends to such a block distributed setting. Actually, we already included that case in our long version (not included in this submission, in order to keep it simple). A block distributed setting, in Fig. 1(a), would have each node (e.g., an edge server in mobile computing) holding multiple columns.\n\nIn the following, the authors would like to discuss how the current results include the “block distributed” scenario as a special case. We will add several sentences (remarks) to clarify it in the revised version.\n* First, for recovery, it does not require any change to Theorems 2/3 and Lemma 3 at all. Consider $t$ nodes where each node has $\\ell$ columns, with $n_2 = t * \\ell$, as in the paper by Mackey, Talwalkar, and Jordan (2015, JMLR). These variables have the same values: column space $U$, coherence $\\mu$ in (13), rank $r$, and $n_2 = t * \\ell$. \n* Second, for DP privacy, it may require some change to Theorem 4. In the first scenario, where one wants to guarantee user-level privacy, there is no change to Theorem 4. In the second scenario, where one wants to guarantee node-level privacy (a node can be an edge server that holds multiple users’ feature vectors), the sensitivity $\\delta$ becomes the maximum Frobenius norm of the $n_2$ submatrices, where each submatrix has size $n_1 \\times \\ell$. \n\n\n> One key assumption in matrix completion is low-rankness. However, the transformation M+KR in the server is determined by the public key K to guarantee low rankness. In practice, how do we choose the dimension k since we have no idea the true rank r of M?\n\nThis is an interesting question. It would be quite interesting to point out how the two-round encryption scheme in Appx. B1 provides a smart way to address the reviewer’s concern. \n\n* In the first round of encryption, one can choose an arbitrary value $k' \\geq 1$. The server obtains an encrypted $\\overline M1$ and estimates a column subspace $K2 = \\overline U$ with dimension $k = k' + r$. \n* In the second round of encryption, the server sends both $K2 = \\overline U$ and $k= k'+r$ to all user nodes. \n\nUsing the above method, our scheme does not require the true rank $r$ of the data matrix $M$.\n\nWe would like to further discuss real-world applications where the data matrix $M$ is approximately low-rank, i.e., not strictly low-rank with a true rank $r$. Some works [1, 2, 3] derived a sample complexity with a polynomial dependence on the condition number $\\kappa = \\sigma_1 / \\sigma_r$. \n\nIf the maximum value of the generated random numbers is larger than $\\sigma_1$, or the minimum value is smaller than $\\sigma_r$, or both cases hold simultaneously, then those analytical results in [1, 2, 3] need to use a new $\\kappa$ value.\n \n* [1] Jain, Prateek, Praneeth Netrapalli, and Sujay Sanghavi. \"Low-rank matrix completion using alternating minimization.\" Proceedings of the forty-fifth annual ACM symposium on Theory of computing. 2013. \n* [2] Hardt, Moritz. 
\"Understanding alternating minimization for matrix completion.\" 2014 IEEE 55th Annual Symposium on Foundations of Computer Science. IEEE, 2014. \n* [3] Keshavan, Raghunandan Hulikal. Efficient algorithms for collaborative filtering. Stanford University, 2012.\n\n\n> The transformation g(M)=M+KR is too standard in DP. Is that possible to apply the non-linear transformation to improve privacy?\n\nWe appreciate your insightful comment. The linear transformation $g(M)=M+KR$ is quite standard in DP.\n\nConsider $g(M)$ where $g$ is a nonlinear function, and $M$’s SVD decomposition $M = USV^T$. Then, we have $g(M) = g(USV^T)$. To be concrete, we can set $g = <f, h>$ where both $f$ and $h$ are linear transforms, i.e.g, $f(U) = AU$ and $h(V) = BV$, thus the result $g$ is bi-linear transforms, since $g(USV^T) = AUSV^TB^T$. \n1. When $A$ and $B$ are Guassian, their product is “close” to Gaussian [4].\n2. Projecting $A$ and $B$ into the subspace of $UV^T$ may be equivalent to adding a Gaussian vector to the diagonal matrix $S$. \n\nTherefore, it is possible to apply non-linear transformation and still satisfies the notion of DP privacy. However, the mathematical analysis would be more challenging, which we would like to explore in our long journal version in the future. Other non-linear transformations besides bi-linear transformations are unclear to us yet. \n* [4] Li, Yi, and David P. Woodruff. \"The Product of Gaussian Matrices Is Close to Gaussian.\" Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2021). Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2021.", " > What is the running time of the scheme compared to prior work?\n\nWe appreciate your question and would be happy to clarify it. The encryption method can be simply understood as $g(M)=M+KR$, whose computation involves the following four steps:\n1. A server generates a matrix $K$;\n1. Each node locally generates a random vector and performs one matrix multiplication $KR$ and one matrix addition $M + KR$; \n1. A server performs a matrix completion on $M + KR$ with missing entries indicated by $\\Omega$; \n1. Each node locally performs a matrix subtraction $M - KR$.\n\nThe above step (3) uses conventional matrix completion methods, which may take a long running time for large matrices. Our method introduces Step (1), (2) and (4), whose computations are relatively simple. Also, in our experiments, our method almost does not introduce extra running time.\n\n\n> I couldn't find where the authors discuss the limitations of their work. Could I be pointed to that, please?\n\nWe think the following positions can be some potential limitations of this work:\n1. Please compare Def. 3 with Def. 2 and explanations in Lines 124~130. Here, we propose to relax the conventional privacy notation to a subspace-aware counterpart. We hold the assumption that the shared column subspace is common information that may not need to be protected. Also, this is possible since we are targeting a distributed matrix completion problem. \n2. Similarly, the target problem of distributed matrix completion may not fit many real-world recommendation systems. For example, for current recommendation systems that collect data from all users and then perform data recovery; here, our framework proposes that future recommendation systems should take a distributed structure, where a user’s local app has her own data vector and only sends encrypted data to a central server. 
Such a shift is not well-supported by existing systems yet. \n\n\n> I feel that the writing quality could be improved by adding more detailed preliminaries for the main problem and the cryptographic primitives (even in the supplement).\n\nWe appreciate your feedback very much and will polish the writing in the revised version.\n\n> In Equation 13, what is $\\mu(U)$?\n\n$\\mu(U)$ is the coherence measure of $U$, where $U$ is the $r$ left singular vector of $M$. We have added the definition of the coherence measure in the revised version (Line 201~204).\n\n> There seems to be some typo at the end of Line 130.\n\nThanks for the careful reading, and the typo is fixed in the revised version.\n\n> Lines 114-115: What does $A-j(D)$ mean? How is this different from running $A$ on $D_{-j}$?\n\n$A−j(D)$ means that excluding the $j$-th output of algorithm $A$, while running $A$ on $D−j$ means excluding the $j$-th input of algorithm $A$.\n\n> Lines 102-106: there are both bullets and numbering.\n\nThe format is corrected in the revised version.", " Thank you for your thoughtful comments. We would like to address your concerns and your questions in the following.\n\n> I wish the writing could be a bit more clear in terms of the cryptographic tools and the preliminaries related to matrix completion problem.\n\nAppreciate your feedback. We plan to add a background of cryptographic tools and the matrix completion problem in both problem setting and appendix, to make the paper easy to follow.\n\n> The DP side of the scheme doesn't seem very interesting as it just involves adding some Gaussian noise. I don't see any assumptions about the data range (except that the $\\ell_2$ norm is bounded by $L$), or how they're being removed. I would have liked to see more of that. This problem might have just required something that simplistic, but I certainly wouldn't highlight its DP property as a huge selling-point for this reason.\n\nWe agree with your evaluation. Regarding DP property, there are two points: \n1. We propose a variant of DP notation for low-rank matrices, since we believe it is necessary to exclude the shared column subspace, which is treated as common information; \n2. Yes, the DP side of Alg. 1~2 is simply the standard Gaussian mechanism (additive Gaussian noise), and it is not a selling-point.\n\nMoreover, the authors would like to say that Alg. 1~2 presents an add-on scheme to existing matrix completion methods, which is relatively easy to implement. Quite unexpected is that it guarantees both EXACT recovery and DP property. Therefore, we would like to share this finding to the community. \n\n\n> Could I know what the assumptions behind the data are? \n\n Thanks for your questions. They are important in practical usage! Our responses are drawn from two aspects: \n1. Algorithmic requirement to guarantee data recovery; \n1. Algorithmic requirement to achieve a relatively high level of data privacy.\n \nRegarding data assumption, the underlying assumptions are \n1. The actual data matrix $M$ is low-rank, which means that truncating the SVD decomposition of $M$ using a small rank (say $r$ is one order smaller than $n$, if $n=10000$, then $r = 1000$) can get a good estimate.\n1. A relatively small coherence value $\\mu$, which means that a user’s preference ratings are not concentrated on a few entries but spread out across relatively many entries.\n\n\n> I would like to see what the sensitivity for the added noise is. 
\n\nThe sensitivity for the matrix completion problem is the norm bound $L = ||M_j ||$ (the $L2$ norm of a matrix column), since the algorithm is expected to output the true data matrix; then the sensitivity (under DP notion) of the algorithm is, $\\delta = argmax ||A(D) - A(D’)||_2 = L$, since the $L2$ difference between $A(D)$ and $A(D’)$ is upper bounded by the maximum norm of $M$’s columns. \n\n> How big is the norm bound $L$ in practice, and how is it affecting the accuracy of your scheme?\n\nWe would like to elaborate more about $L$’s value in practical scenarios and how it affects the scheme performance. We checked the following datasets in ref. [14, 28].\n\nRecommendation systems: \n1. MovieLens (Top 400) [14], or $10^5$ , $10^6$ and $10^7$ in [28];\n1. Netflix (Top 400) in [14];\n1. Jester;\n1. Yahoo! Music.\n\nIn most recommendation systems, the rating matrix takes values in {1, 2, 3, 4, 5}. So, in practice, the norm bound $L$ depends on the size of rows (user’s feature vector), say $5 \\sqrt{n_1}$, which is $500$ for $n_1 = 10,000$. \n \nFurthermore, $L$ together with $\\epsilon$ and $\\delta$ will directly determine the standard deviation of the Gaussian distribution, and thus may affect the accuracy of the proposed scheme through a condition number of the encrypted matrix. It would be good to include more numerical results in the appendix, in order to provide readers a sense of how $L$ affects the algorithms’ performance. The relationship is as follows:\n\n1. Some works [1, 2, 3] derived a sample complexity with a polynomial dependence on the conditional number $\\kappa = \\sigma_1 / \\sigma_r$. \n1. If the maximum value of the generated random numbers is larger than $\\sigma_1$, or the minimum value is smaller than $\\sigma_r$, or both cases hold simultaneously, then those analytical results in [1, 2, 3] need to use a new $\\kappa$ value. \n \n* [1] Jain, Prateek, Praneeth Netrapalli, and Sujay Sanghavi. \"Low-rank matrix completion using alternating minimization.\" Proceedings of the forty-fifth annual ACM symposium on Theory of computing. 2013. \n* [2] Hardt, Moritz. \"Understanding alternating minimization for matrix completion.\" 2014 IEEE 55th Annual Symposium on Foundations of Computer Science. IEEE, 2014. \n* [3] Keshavan, Raghunandan Hulikal. Efficient algorithms for collaborative filtering. Stanford University, 2012.", " This paper presents a private and secure matrix completion scheme, where the data gets sent to a server for the task. The communication security is based on homomorphic encryption, and the privacy notion here is $(\\varepsilon,\\delta)$-differential privacy (DP). The authors work with a relaxed notion of joint DP.\n\nBoth theoretical and experimental results are provided for completeness. The focus is on exact recovery in this paper.\n\nEdit: Score updated. Strengths:\nThe paper does provide a scheme that seems to satisfy both the communication security and DP requirements. This paper actually focuses on exact recovery, as opposed to sacrificing accuracy in the prior work. The experimental results indicate compatibility with two matrix completion algorithms, NN and AM.\n\nWeaknesses:\n1. I wish the writing could be a bit more clear in terms of the cryptographic tools and the preliminaries related to matrix completion problem.\n2. The DP side of the scheme doesn't seem very interesting as it just involves adding some Gaussian noise. 
I don't see any assumptions about the data range (except that the $\\ell_2$ norm is bounded by $L$), or how they're being removed. I would have liked to see more of that. This problem might have just required something that simplistic, but I certainly wouldn't highlight its DP property as a huge selling-point for this reason. I don't necessarily have many questions. Could I know what the assumptions behind the data are? I would like to see what the sensitivity for the added noise is. How big is the norm bound $L$ in practice, and how is it affecting the accuracy of your scheme?\n\nWhat is the running time of the scheme compared to prior work? I couldn't find where the authors discuss the limitations of their work. Could I be pointed to that, please?\n\nI feel that the writing quality could be improved by adding more detailed preliminaries for the main problem and the cryptographic primitives (even in the supplement).\n\nIn Equation 13, what is $\\mu(U)$?\n\nThere seems to be some typo at the end of Line 130.\n\nLines 114-115: What does $\\mathcal{A}-j(D)$ mean? How is this different from running $\\mathcal{A}$ on $D_{-j}$?\n\nLines 102-106: there are both bullets and numbering.", " This paper studies the problem of privacy-preserving data completion in a distributed manner with the homomorphic matrix completion problem and propose a homomorphic encryption-decryption scheme. Strengths: The targeted problem is interesting and important. This paper is well organized and easy to follow. Some theoretical results are provided to guarantee recovery and differential privacy.\n\nWeakness:\n(1) The centralized server framework in Figure 1(a) does not seem to be a well-known distributed framework in distributed matrix completion. There are too many nodes increasing with the dimension n2. In distributed matrix completion works, say, Mackey, Talwalkar, and Jordan (2015, JMLR), people distribute the matrix into blocks and complete them in parallel. It is worth discussing whether the holomorphic matrix completion can work in that distributed setting.\n\n(2) One key assumption in matrix completion is low-rankness. However, the transformation M+KR in the server is determined by the public key K to guarantee low rankness. In practice, how do we choose the dimension k since we have no idea the true rank r of M?\n\n(3) The transformation g(M)=M+KR is too standard in DP. Is that possible to apply the non-linear transformation to improve privacy?\n\n(4) Could the authors numerically provide the privacy and recovery trade-off in different privacy levels?\n\n(5) Overall, this work follows a simple idea of differential privacy by replacing random noise as the product of public key and random noise. The guarantee of differential privacy and completion is relatively standard in existing works. (1) It is worth discussing whether the holomorphic matrix completion can work in that distributed setting.\n\n(2) One key assumption in matrix completion is low-rankness. However, the transformation M+KR in the server is determined by the public key K to guarantee low rankness. In practice, how do we choose the dimension k since we have no idea the true rank r of M?\n\n(3) The transformation g(M)=M+KR is too standard in DP. 
Is it possible to apply a non-linear transformation to improve privacy?\n\n(4) Could the authors numerically provide the privacy and recovery trade-off at different privacy levels?\n\n(5) Overall, this work follows a simple idea of differential privacy by replacing random noise with the product of a public key and random noise. The guarantee of differential privacy and completion is relatively standard in existing works. Yes", " The authors derive a novel homomorphic matrix completion algorithm with a proof that the homomorphism property holds provided certain technical conditions are satisfied, including a probabilistic bound on the number of observed entries required.\nThey also prove that the novel algorithm satisfies differential privacy constraints.\nThe authors' scheme solves the matrix completion problem on the server with homomorphically encrypted matrix entries while employing a higher rank constraint using any standard matrix completion method. The proof for the homomorphism property relies upon a homomorphic version of the Rudelson selection estimation theorem from [3].\n\nExperimental results on the Netflix and MovieLens datasets indicate that the homomorphic counterparts of nuclear norm (NN) minimization, dubbed HNN, and alternating minimization (AM), dubbed HAM, are only slightly worse than the original ones and that the new schemes outperform the differentially private Frank-Wolfe (FW) scheme.\n Strengths\n\n1. The theoretical guarantees in the paper as summarized above appear to be quite strong.\n\n2. The experimental results as summarized above appear to be quite solid. The observation regarding the significant drop in error for HNN in Sec. 6.3 is also quite interesting.\n\n3. The authors provide geometric intuition for Lemma 1 regarding why $M$ is the exact solution to Eq (3) when a certain condition holds. It appears that Lemma 1 and this geometric interpretation are original and not already present in reference [3] or [6].\n\n4. Homomorphic encryption is generally supposed to be slow, but the proposed encryption / decryption method using public / private random matrices seems to be quite fast. \n\nWeaknesses\n\n1. This reviewer found the math in the paper hard to follow and had to constantly refer to [3] or other references because the paper didn't seem to be self-contained.\nThe tangent cone mentioned after eq (12) is not defined in the paper and neither is the incoherence parameter used in (13).\n\n2. Both constants $C$ and $c$ are present in Lemma 3 and Corollary 1. It seems to this reviewer that $c = C$. \n\n3. The parameter $\\zeta$ in Theorem 4 doesn't seem to be defined anywhere.\n\n4. Typo: \"Rank-decent\" should be \"Rank-descent\" after eq (12).\n The parameter $k$ is a design parameter of the algorithm but the authors simply set it to $10$ in all experiments.\nShouldn't it vary with rank $r$ or $p$?\n The limitations of the proposed method seem to be the trade-off between accuracy and privacy / number of samples, as discussed in the Conclusion. 
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 3, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "lF5WNN1THmu", "RXAdDqTY7VY", "nips_2022_9TsP2Gg0CM", "L1woWzBytdQ", "b7y5nQsj1_", "71-6MD5EA1", "yjvzL6L9Tv", "DjkNb73cJ4", "IRj2ANJxif", "nips_2022_9TsP2Gg0CM", "nips_2022_9TsP2Gg0CM", "nips_2022_9TsP2Gg0CM" ]
nips_2022_hqtSdpAK39W
Cluster Randomized Designs for One-Sided Bipartite Experiments
The conclusions of randomized controlled trials may be biased when the outcome of one unit depends on the treatment status of other units, a problem known as \textit{interference}. In this work, we study interference in the setting of one-sided bipartite experiments in which the experimental units---where treatments are randomized and outcomes are measured---do not interact directly. Instead, their interactions are mediated through their connections to \textit{interference units} on the other side of the graph. Examples of this type of interference are common in marketplaces and two-sided platforms. The \textit{cluster-randomized design} is a popular method to mitigate interference when the graph is known, but it has not been well-studied in the one-sided bipartite experiment setting. In this work, we formalize a natural model for interference in one-sided bipartite experiments using the exposure mapping framework. We first exhibit settings under which existing cluster-randomized designs fail to properly mitigate interference under this model. We then show that minimizing the bias of the difference-in-means estimator under our model results in a balanced partitioning clustering objective with a natural interpretation. We further prove that our design is minimax optimal over the class of linear potential outcomes models with bounded interference. We conclude by providing theoretical and experimental evidence of the robustness of our design to a variety of interference graphs and potential outcomes models.
Accept
This well-written paper proposes a possibly-new experiment-design problem where there is interference. This interference is modeled by a bipartite graph where one side has the \"experimental\" units and the other has \"interference\" units. The purpose of the interference units is to facilitate interactions between the experimental units. The goal is to assign the experimental units to \"treatment\" or \"control\" in order to estimate the total treatment effect on the experiment units. Specifically, for each experiment i, we can assign either \"control\" (Z_i = -1) or \"treat\" (Z_i = 1). The outcome for this experiment is the value Y_i(Z) = alpha_i + beta_i Z_i + gamma_i e_i(Z)---where e_i(Z) is called the \"exposure mapping\" that captures the interference of the (Z_j: j distinct from i) values with the outcome for i. This setting is motivated by marketplace experiments where buyers interact with sellers. This primarily-theoretical paper studies the performance of difference-in-means estimators for cluster-randomized balanced designs, under a \"linear exposure\" model. One key result is a min-max optimal equivalence between treatment effect estimation and identifying a good clustering: the clustering that minimizes the maximum bias will basically yield a partitioning of a weighted graph, with weights representing the strength of interference between the units. Simulation results are also given. There are concerns about the novelty of the clustering objective, and the supplementary material has poor formatting, but the paper's contributions are appreciated. The authors are asked to carefully incorporate the referees' comments.
train
[ "lpflBoVxhEi", "Ds8456iKEn", "Bqgya1mITDe", "oGrinrWPGBG", "Mtt4B5m5_W3", "Cv5RlMzJArJ", "gVibI99Gia2", "TGeWQVdjtW1", "XWmONu0QnN", "5B91TnILfca", "5PG7XLEYZwu", "w1jd2o4Nw0a", "2K3lZFbswDy", "YJqrSamKLIL", "BwJHhGdpo9C", "RmV-sbJ-x-P" ]
[ "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for their comments. It would be great if you can comment on our response addressing the reviewer's major concerns. Thanks.", " Thank you for the response. We will include the power-law graph experiments and the discussion of the choice of balanced design in our revision.\n\nThank you also for clarifying your comment on the values Gamma0 and Gamma1. We now understand that you were asking for alternative notation to remove the clutter caused by these two dummy variables. We agree with you that this is a good change that will improve the readability of the result, and we will change the lemma to read $\\\\gamma_i = O(1)$. We will also make the corresponding notational change in the proof.", " I thank the authors for answering my questions with great clarity -- I am raising my score.\n\nI would suggest the authors to include your additional experiment on power-law graphs and your response to the choice of balanced design, appropriately in the revised version.\n\nYes, I went through the proof of Lemma 3.2 in the appendix. It was clear from the proof about the values of gamma1 and gamma0. However, as stated in the main version, these variables do not seem to have any specific meaning. Is it possible to rephrase it using order notation?\n", " We thank the reviewer for their careful reading of our work. We respond to their questions below.\n\n## Comparison to two-sided experimental design\n\nWe thank the reviewer for the question about comparisons to the two-sided randomization designs of [27, 28], in which experimental and interference units are each randomized to treatment and control, but treatment is only applied when a treated unit interacts with another treated unit. The major benefit of this design is that it allows for the quantification of spillover effects by comparing the fully controlled interactions (in which both sides of the interaction were controlled) to the interactions that experience interference (in which only one side of the interaction was assigned to control). In the absence of interference, these interaction types should have the same average response; any deviation from equality indicates interference that can be quantified.\n\nThe idea of measuring and correcting for interference is very appealing, especially when interference can be measured using the same experiment that measures the global treatment effect. We think that this line of work is an important part of mitigating interference, and that the two-sided randomization design is a particularly clever way of approaching the problem. One challenge with TSR designs, as discussed by [28, Section 6], is that without further assumptions on the interference model, TSR with treatment fraction p is only able to estimate the global effect of treating p-fraction of the units vs. treating no units, instead of the global effect of treating all units vs. treating no units. [28, Section 8.1] suggests randomizing at the interaction level to allow estimation of the spillover effect at multiple levels of interaction, which could then be extrapolated to estimate the global treatment effect - this of course starts relying on some model for the extrapolation. Ultimately, then, one difference between the TSR design and the cluster-randomized design is that the latter tries to get as many units in “pure treatment” and “pure control” as possible, to estimate the global treatment effect while limiting the need for such extrapolation. 
As [27, Section 8] suggests, cluster-randomization is likely to outperform TSR when the underlying graph is well-clusterable, but TSR is likely to do better when no cluster-based method can achieve near-complete treatment or control of units.\n\nIt would be interesting to think about combinations of TSR and cluster-based designs, particularly in the context of the extrapolation ideas proposed in [28, Section 8.1]. One option would be to cluster-randomize the buyers and sellers individually, in a way that maximizes the variance of the unit-level exposures to treatment under a TSR design (echoing the objective of [18, 19]). The TSR design would allow for the estimation of spillover effects at various levels of interference, while maximizing the variance among the realized levels of interference would improve the accuracy of the extrapolation. We would be happy to mention these interesting research directions at the intersection of the clustering and TSR approaches, and we thank the reviewer for asking this question.\n \n## How much does balancedness help reduce variance?\n\nPlease see our discussion of balancedness in the response to all reviewers.\n\n## Implementation details\n\nWe thank the reviewer for pointing out that our citation of the balanced partitioning algorithm was unclear. We did indeed use the balanced partitioning algorithm of [34], and we would be happy to provide a summary of the algorithm in an appendix for completeness.\n\nThe reviewer also asked about the enforcement of the balancedness constraint. The balanced partitioning algorithm of [34] gives strongly balanced clusters. However the Exposure-Design algorithm of [19] only gives weakly balanced clusters, which we addressed by tuning that algorithm’s hyperparameter to encourage balancedness.\n", " ## It certainly would be an important addition to strengthen the current work by considering confidence interval bounds for the bias of the considered estimator\n\nSince confidence intervals are taken with respect to some source of randomness, while the bias is an expectation, we interpret this question as asking about confidence intervals with respect to some underlying random model on the parameters. For example, if we assumed that the parameters $\\\\gamma_i$ in the linear model were drawn from N(0,1), then the bias would be a weighted sum of Gaussians, and we could use concentration inequalities on such sums to put confidence intervals on the bias. We would be happy to mention this in the text.\n\n## The authors restrict themselves to balanced designs -- it is not immediately clear if this is necessary. They claim that it reduces the variance in the estimator. It would be helpful to understand the exact dependence of parameters here.\n\nPlease see our discussion of balanced designs in the response to all reviewers.\n\n## Moreover, the theoretical claims in the paper do not compare directly with respect to a simple baseline, such as unit-level randomization. As stated, they show the equivalence in terms of the exact clustering, but not the objective values themselves.\n\nThank you for pointing this out. 
Please see our discussion above in the response to all reviewers.\n\n## Although min-max optimal results are useful, it would be interesting to look at specific cases, for which one can do better than off-the-shelf graph partitioning (clustering) algorithms from prior literature.\n\nWe agree with the reviewer that it is informative to consider how our clustering algorithm compares with prior literature on specific graph instances. We have provided two such examples in Appendix C: C.1 demonstrates a graph on which our clustering algorithm outperforms the naive balanced partitioning that treats the bipartite graph as a non-bipartite graph, while C.2 demonstrates a graph on which our clustering algorithm outperforms the bipartite clustering algorithm of [18, 19] that is designed for two-sided bipartite experiments.\n", " We thank the reviewer for their careful review of our paper, and would like to provide the following responses to the points they raised, copied from their review:\n\n## Can the exposure model be tied into some of the earlier literature works?\n\nThank you; we agree with the reviewer that Section 2.2 would benefit from in-line citations to the relevant papers, which are currently only cited in the related works section. The exposure model we consider is related to the linear “dose” function of [18,19] for two-sided bipartite graphs, and also to the model of [9]. We would be happy to update the section to include these citations.\n\n## Isn't there double counting with respect to edge weights? (It suggests that the model was chosen after discovering the correct clustering objective)\n\nWe believe the question is about the clustering objective H(C), which sums over all pairs i, j from distinct clusters, thus counting both the interference of i on j and the interference of j on i. We observe that these two directed interferences may in fact be of different magnitude, so that counting both directions is important to reducing bias. This occurs because the normalization factors may be different for each unit; for example, unit i’s only connection may be to interference unit s, which is one of many neighbors of unit j. In this scenario, $Z_j$ would have a stronger influence on $e_i$ than $Z_i$ would have on $e_j$.\n\nIn Lemma 3.2, are the terms corresponding to gamma0 and gamma1 missing? Although the proof in the appendix states it clearly, I would suggest fixing it to avoid any confusion.\nThe terms gamma0 and gamma1 can be seen in the first max of the equation after line 223. The key proof idea here is that H(C) is linear in the $\\\\gamma_i$, so that maximizing over all $\\\\gamma_i$ simultaneously will set each $\\\\gamma_i$ to the same constant, which can be removed from the argmax. This is why the gamma0 and gamma1 disappear in the middle and right hand sides of that same equation.\nWe would like to ensure that our results are as clear as possible, and it is possible that we misunderstood what the reviewer is asking. Please let us know what we can do to clarify this further, if any questions remain.\n\n## It would be interesting to see the differences in the relative bias (Table 1) with respect to Tr(Var(d)) and H(C) for other synthetic graphs, such as power-law, which exhibit skewed distributions (unlike the stochastic block model used for simulations).\n\nThank you for this great suggestion. 
We have performed a new experiment on a bipartite power-law graph with latent classes, generated by combining the bipartite preferential attachment model in [1] with the “preferential attachment in graphs with affinities” model in [2].\n\n[1] Guillaume, Jean-Loup, and Matthieu Latapy. \"Bipartite graphs as models of complex networks.\" Physica A: Statistical Mechanics and its Applications 371.2 (2006): 795-813.\n[2] Lee, Jay, et al. \"Preferential attachment in graphs with affinities.\" Artificial Intelligence and Statistics. PMLR, 2015.\n\nIn our experiment both the Tr(Var(d)) and the H(C) objectives outperformed the unit-level randomization. We have the following results for relative bias (as in our other experiments, variance was inconsequential; standard deviation was 0.1 for all results):\n\n$$\n\\\\begin{array}{l|r|r}\n \\\\text{Design} & \\\\text{Graph with strong latent structure} & \\\\text{Graph with weak latent structure}\\\\\\\\\\\\hline\n \\\\text{H(C)} & 1.9 & 3.5\\\\\\\\\n\\\\text{Tr(Var(d))} & 2.0 & 3.7 \\\\\\\\\n\\\\text{Unit-level} & 4.9 & 5.4 \\\\\\\\\n\\\\text{True Clusters} & 2.9 & 5.2\n\\\\end{array}\n$$\n\nInterestingly, in this setting we see that the cluster-randomized designs outperform the true latent clusters - this is possible because the interference occurs with respect to a single draw of the random graph, which the clustering algorithm gets to see. We speculate that factors specific to the power-law graphs, notably the existence of vertices with very high degree, might cause the optimal clustering for a given draw of the graph to be very different from the optimal clustering for the graph on average. This could have negative consequences for experimental design when only a random draw of the edges is observed, but the interference occurs according to the underlying latent structure.\nWe were also surprised that the Tr(Var(d)) objective performed on par with the H(C) objective; we had expected the former to do worse in a setting where the degree distribution was very different between units. It is possible that the small size of the graph we used (100 experimental units, with degrees distributed Zipf(3)) led to an insufficiently broad range of normalizations to distinguish between the two objectives.\nWe thank the reviewer for suggesting this experiment, and we would be happy to include it in an appendix if the reviewer thinks it would be helpful.\n", " We would like to thank the reviewer for their close reading of our work, and offer the following responses to the points they raised.\n\n## The one-sided bipartite graph implies a straightforward way to construct the exposure graph\n\nPlease see our discussion in the combined reviewer response.\n\n## Relationship to paper [1]\n\nWe thank the reviewer for drawing our attention to [1], which we had not seen and which is indeed an important piece of related work that should be included in our paper. One notable difference between our model and the Linear-In-Means model of [1] is that our model allows the parameters $\\\\alpha_i$, $\\\\beta_i$, and $\\\\gamma_i$ to vary among the units i, whereas the model of [1] uses common parameters. This distinction allows [1] to estimate the parameters of the (common) linear model, and use these estimates to extrapolate to the “fully treated” or “fully controlled” setting.\n\nWe had considered modeling interference in the same way as [1], using common linear model parameters across all units i. 
In this setting, it would even be possible to use the principles of optimal linear design to choose a clustering that minimizes the variance (and thus the MSE) of the ordinary least squares estimator. However, we ultimately decided to proceed with our model because it did not rely on any shared response structure across the units. The agnostic perspective of [1] is particularly interesting to us because it provides a rationale for using the LIM model even when we don’t believe the generative model itself is linear - precisely the concern we had about relying on the assumption of common parameters across units. We again thank the reviewer for pointing us to this work; we will both include it in the paper discussion and use it to inform our future work in this area.\n\n## Discussion of the design under less favorable data generating processes\n\nThe reviewer makes a good point about studying our design under less favorable data-generating processes. The field has developed several tools to address interference, and different tools work better in different settings. Cluster randomized designs work best when the underlying graph has some clustering structure, so that it is possible to assign treatments in such a way that some units are near-completely controlled while others are near-completely treated. If that is not possible, then a cluster-randomized design will not provide a meaningful benefit over unit-randomized design, and other tools must be used instead. For example, a stronger model assumption on the potential outcomes lets us estimate the effect of complete treatment or control while only ever observing units with partial exposure. Another alternative is to change the mechanism of randomization itself, such as with two-sided randomized designs and the associated adjusted estimators (see our references [27, 28]). We appreciate the reviewer’s interest in this point, and believe that understanding how to choose between the methods available to address interference, or even combining them, is a useful area of future work.\n\n## The role of cluster size in the estimator’s variance\n\nPlease see our discussion under \"Motivation from Variance Reduction\" in the main reviewer response. The reviewer makes a good point about hyperparameter selection to define the number of clusters, and we would be happy to incorporate this discussion into the text.\n", " We would like to thank the reviewer for their careful reading of our paper. Our responses to the three points raised by the reviewer are as follows:\n\n## Comparison to unit-level randomization \n\nPlease see our discussion in the combined reviewer response.\n\n## Novelty of the clustering idea\n\nPlease see our discussion in the combined reviewer response.\n\n## Supplementary material formatting\n\nWe apologize for the extended line lengths, and we will fix the typesetting in the supplement.\n", " Reviewers qE7Y and nu8Q inquired to what extent our suggested clustering objective is not a simple extension of existing ideas surrounding cluster-randomized designs. Our paper is certainly not the first to suggest cluster-randomized designs (see our review in lines 74-87). However, there are many possible choices of clustering algorithms, so we believe that the choice of the clustering objective is an important contribution. 
\n\nIn particular, in Section 3.1 we describe the natural extensions of two existing cluster-randomized designs to our bipartite setting, and we provide counterexamples showing that these two objectives fail to minimize the bias. First, the approach of directly clustering the bipartite graph (ignoring the bipartite structure) fails because it considers only one-hop neighbors, but interference in a bipartite graph is fundamentally at least a two-hop phenomenon. Second, an existing cluster randomization objective for the two-sided bipartite design, used in [18, 19], fails in our one-sided design setting because it can myopically focus on only the cluster assignment of the highest-weighted experimental units. See Section 3.1 and Appendix C for further discussion. We believe this illustrates the importance and nontriviality of choosing the correct clustering objective.\n", " Reviewers qE7Y and Vi8h both asked for a comparison between the biases of the cluster-randomized design and the unit-randomized design. Unit-level randomization can be thought of as a special case of balanced cluster-randomized design in which there are N clusters of one unit each. As a result, the bias of the unit-level randomized design can be derived from Lemma 3.2 by replacing K with N and $j \\not\\in C(i)$ with $j \\neq i$. Letting $X_{ij}$ denote the influence of $Z_j$ on $e_i$ as given in our Eqn (2), we see that the improvement in bias from clustering is given by the average over N terms of the form:\n\n$\\frac{1}{N} \\sum_{i\\in [N]} \\gamma_i \\left(\\sum_{j\\not\\in C(i)} X_{ij} \\left(\\frac{N}{N-1} - \\frac{K}{K-1}\\right) + \\sum_{j\\in C(i), j\\neq i} X_{ij} \\frac{N}{N-1}\\right)$\n\nEach of the N terms is composed of two terms of opposite signs. The first term captures the fact that, under cluster-level randomization with a fixed number of treated clusters, an element j belonging to a different cluster than unit i is more likely to satisfy $Z_i = Z_j$ when the number of clusters is large. This effect becomes negligible for large experiments, going to 1/N as N goes to infinity if K scales with N. The second term captures the fact that a unit j that belonged to i’s cluster under the cluster randomized design will now contribute to the bias of the unit-randomized design. The cluster-randomized design will have lower bias than the unit-randomized design to the extent that this term is large - i.e. that the interference values $X_{ij}$ within a single cluster are large on average. \n\nWe would be happy to add this as either a corollary or an appendix to the paper, since multiple reviewers were interested in such a result.\n", " Reviewers nu8Q, Vi8h and pzrY had questions about our restriction to balanced designs, and specifically how balancedness relates to the estimator’s variance. The use of balanced clustering designs has several motivations, which we expand upon here.\n\n## Motivation from Variance Reduction\n\nFrom the theoretical side, a balanced design provides control of the variance by ensuring that the treated fraction of units is roughly constant across randomizations. If the clusters are highly imbalanced and a fixed number $K_T$ of clusters are assigned to treatment, then the actual fraction of units assigned to treatment could vary significantly, increasing the variance of the treatment effect estimator. This idea is further formalized in Section 4.2 of [9]; the short simulation sketch below makes it concrete. 
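\n\nTo illustrate, here is a minimal Monte-Carlo sketch (our illustration only; the 20-cluster setup and cluster sizes are hypothetical, not taken from the paper) showing how imbalanced cluster sizes inflate the variability of the treated fraction when exactly $K_T$ of $K$ clusters are treated:\n\n```python
import numpy as np

rng = np.random.default_rng(0)
K, K_T, n_draws = 20, 10, 10_000

# Two hypothetical clusterings of the same 1000 units.
balanced = np.full(K, 50)                                      # 20 clusters of 50 units
imbalanced = rng.multinomial(1000, rng.dirichlet(np.ones(K)))  # same total, skewed sizes

for name, sizes in [("balanced", balanced), ("imbalanced", imbalanced)]:
    fracs = []
    for _ in range(n_draws):
        treated = rng.choice(K, size=K_T, replace=False)  # treat K_T of the K clusters
        fracs.append(sizes[treated].sum() / sizes.sum())
    print(name, "std of treated fraction:", float(np.std(fracs)))
```\n\nUnder the balanced design the treated fraction is exactly $K_T/K$ in every randomization, so this source of variance vanishes; with skewed cluster sizes it does not.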
\n\nAs Reviewer nu8Q correctly points out, balanced clustering only reduces the variance under some assumptions about the interference effects gamma_i. In Appendix D we describe how clustering may actually increase variance if the gamma_i are correlated among clusters. We believe that developing methods to quantify the benefit of balanced clustering for a specific problem instance, or to choose the correct number of clusters, is an important area of future work.\n\nIn some experimental settings, pre-experiment data is predictive of the estimator variance. For example, in a setting where the variance of outcomes before the experiment starts is much larger than the anticipated magnitude of effects, pre-experiment data can be used to evaluate the variance characteristics of a given clustering (e.g. using an A/A test). In this way, the practitioner can estimate the effect on the variance of a given clustering for their data set, in order to determine the value of a balanced clustering over unbalanced, or even no clustering.\n\n## Implementation Considerations\n\nPractitioners may appreciate two benefits of balanced designs beyond the variance reduction mentioned above. First, when exactly K_T of K clusters are treated, balancing the clusters ensures control over the fraction of units that are treated. Controlling this fraction is important when we want to balance the scientific value of experimentation with potential negative effects on the treated units (i.e. staying within the experimental budget). Secondly, it has been our experience that many clustering algorithms that do not control for balancedness and cardinality sometimes produce many singleton clusters, when clustering incentives are not strong enough to group these units with other units. This is what we observed with the Exposure-Design objective of [18, 19] when its hyperparameters are not tuned properly. When datasets are quite large, the large number of singleton clusters produced by these algorithms can slow down certain data analysis pipelines that work better in low cardinality settings. If these singleton clusters are to be clustered to reduce cardinality without improving any \"cut\"-like objective, it may make sense to do so in a balanced way for any of the reasons listed above. In other words, the occasional practical need to control for cluster cardinality is extra motivation to maintain balance instead of an arbitrary grouping of isolated nodes.\n\nWe would be happy to include further discussion around balancedness in the main text or in an appendix in an updated version of the paper.\n", " The paper studies cluster randomized designs for one-sided bipartite experiments under network interference. The paper discusses many different models, but I will only define the main one studied. There is a weighted bipartite graph between a set of experiments and a set of interference units, with the weight w_{is} between the experiment i and interference unit s indicating the relationship between i and s. For each experiment i, we can assign either \"control\" (Z_i = -1) or \"treat\" (Z_i = 1) to i. The outcome for the i-th experiment is a real number Y_i(Z) = alpha_i + beta_i Z_i + gamma_i e_i(Z). 
The alpha_i + beta_i Z_i is a linear function of the treatment Z_i for i, and e_i is called the exposure, which captures the interference of the (Z_j)_{j\neq i} values on the outcome for i. e_i(Z) is called the exposure mapping.\n\nThe paper proposes a bipartite analogue of the neighborhood-based exposure mapping. In this model, the dose of an interference unit s is computed as the weighted average of the treatment assignment Z_i among the neighboring experiments i of s, where weights are w_{is} values. Then, the exposure e_i is the weighted average of the doses of neighboring interference units s of i, again using w_{is} values.\n\nThe average total treatment effect tau is defined as 1/N * sum_{i \in [N]} (Y_i(vector Z = all 1 vector) - Y_i(vector Z = all -1 vector)). In the difference-in-means (DIM) estimator for tau, we randomly choose N_T experiments to treat, and N_C experiments to control, where N_T and N_C are numbers chosen upfront. The estimate given by the DIM estimator is \hat \tau_DIM := \sum_{i:Z_i = 1} Y_i / N_T - \sum_{i:Z_i = -1} Y_i / N_C. The bias of the estimate is defined as \tau - \hat \tau_DIM.\n\nThe main goal of the paper is to use balanced clustering to reduce the mean squared error (MSE) of the DIM estimator. Let C be a partition of the N experiments into K equally sized clusters. A balanced K-cluster randomized design D(C) is a distribution over vectors Z \in {-1, 1}^N, such that experiments in the same cluster have the same +-1 values, and there are K_T clusters with +1 values. The authors gave the formula for the bias under the cluster design, presented some robustness results, and conducted experiments on the effectiveness of the method. Evaluations: The main contribution of the paper is the use of the one-sided cluster design algorithm to reduce the bias of the DIM estimator. However, after giving the formula and the optimization problem in Lemma 3.2, I fail to see why this algorithm is better than the standard one without the clustering. A lemma about some typical cases or even an example suggesting this would be helpful. Moreover, the clustering idea was not new. It is used in cases where all units are experimental units, and where there are experimental and interference units, but the outcomes are measured in the interference units. It is not so hard to extend the idea to the setting studied in this paper. \n\nI am also annoyed by the bad format of the supplementary material. There are so many places where the mathematical formulas exceed the width of a line by a lot. The authors may have submitted the paper (at least the supplementary material) in a hurry. It would be good to compare the clustering-based algorithm and the original one (without the clustering used) theoretically. This could be done using an example, or a theorem that covers typical cases that arise in practice. Not applicable. ", " The authors examine a setting of experimentation they refer to as \"one-sided bipartite experimentation\", in which treatment and outcomes are measured on one set of units, but connections between units (and, therefore, interference) operate through a distinct set of units. 
The authors provide theoretical results and simulations which demonstrate that clustering on a special objective function can effectively allow for inference in this setting with minimal bias.\n Originality:\n- The theoretical analyses are not particularly novel.\n- The setting, while interesting, is pretty straightforward: the one-sided bipartite graph just implies a straightforward way to construct an exposure graph. The methods thereafter are fairly straightforward.\n\nQuality:\n+ The authors do a good job of drawing out the implications of their main clustering result.\n+ The focus on design rather than analysis is very welcome.\n\nClarity:\n+ Found the paper quite easy to follow.\n\nSignificance:\n+ The problem setting is interesting and relevant to many practitioners.\n\nSection 2.1 defines a model which is, essentially, the \"Linear-in-Means\" model of [1]. This important related work should be discussed. In particular, [1] draws out an agnostic basis for this model. Section 2.2 also has a discussion of exposure maps which greatly parallels section 3.1 of [1]. This is not an accusation of plagiarism, to be clear, but the paper would be improved by drawing more heavily on this.\n\nGreater discussion of the estimand under less favorable data generating processes (in the vein of [2]) could be an interesting direction to make the work more useful in practice. The simple SBM of section 5.2, for example, obviously aligns nicely with a clustering-based design.\n\n[1] A. Chin, Regression adjustments for estimating the global treatment effect in experiments with interference, Journal of Causal Inference, May 2019. https://arxiv.org/abs/1808.08683\n[2] Fredrik Sävje, Peter M. Aronow, Michael G. Hudgens. \"Average treatment effects in the presence of unknown interference.\" Ann. Statist. 49 (2) 673 - 701, April 2021. https://doi.org/10.1214/20-AOS1973 The authors claim \"variance is inconsequential in this setting\". Surely there are scope conditions or assumptions necessary for this statement to hold. I think the authors may be assuming that the methods compared are methods of clustered-RA (or unit-level RA where bias is extremely large)? If all clustering methods use the same number of clusters, this would be true, but I don't see why that would necessarily be the case if one were choosing the number of clusters as a practitioner would (e.g. through finding an elbow, etc.). The appropriate number of clusters could look quite different depending on the clustering objective. In any event, I'd appreciate the authors clarifying this (also in the text). Methods like this which allow for better and more accurate experimentation are a key way to detect and analyze biases in complex real-world systems.", " \nIn this work, the authors study a new experimental design problem under interference. The underlying graph capturing the interference is a bipartite graph with two types of units (corresponding to the two partitions): experimental and interference units. The goal is to assign the experimental units to treatment or control to estimate the total treatment effect on the experiment units. The only purpose of the interference units is to facilitate interactions between experimental units. The authors motivate this setting with marketplace experiments, where buyers interact with sellers.\n\nFor this problem, they study the performance of difference-in-means estimators for cluster-randomized balanced designs, under a suitable linear exposure model. 
The main claim is a min-max optimal equivalence between treatment effect estimation and identifying a good clustering. In other words, they showed that the clustering that minimizes the maximum bias will result in identifying a partitioning of a weighted graph, with weights representing the strength of interference between the units. They demonstrate the performance of their approaches empirically, along with a robustness study, when the underlying data deviates from the studied exposure model.\n\n\n The presentation in the paper is excellent, containing extensive discussions on their choice of experimental designs and estimators. \n\nThey relate two important quantities of interest to a graph partitioning objective: bias of the estimator and robustness of the design. The robustness is captured using covariance of the exposure vector (under the interference model) and the design vector. From the proofs in the appendix, it is evident that the bias is a constant factor away from the graph partitioning objective, which is computationally challenging to solve exactly, and for which many approximation algorithms are already known in the literature. These new relationships/equivalences are novel and could be helpful in furthering our understanding of experimental designs under interference.\n\nIt certainly would be an important addition to strengthen the current work by considering confidence interval bounds for the bias of the considered estimator. \n\nThe authors restrict themselves to balanced designs -- it is not immediately clear if this is necessary. They claim that it reduces the variance in the estimator. It would be helpful to understand the exact dependence of parameters here. Moreover, the theoretical claims in the paper do not compare directly with respect to a simple baseline, such as unit-level randomization. As stated, they show the equivalence in terms of the exact clustering, but not the objective values themselves.\n\nAlthough min-max optimal results are useful, it would be interesting to look at specific cases for which one can do better than off-the-shelf graph partitioning (clustering) algorithms from prior literature.\n Can the exposure model be tied into some of the earlier literature works? Isn't there double counting with respect to edge weights? (It suggests that the model was chosen after discovering the correct clustering objective)\n\nIn Lemma 3.2, are the terms corresponding to gamma0 and gamma1 missing? Although the proof in the appendix states it clearly, I would suggest fixing it to avoid any confusion.\n\nIt would be interesting to see the differences in the relative bias (Table 1) with respect to Tr(Var(d)) and H(C) for other synthetic graphs, such as power-law, which exhibit skewed distributions (unlike the stochastic block model used for simulations).\n\n As this is a theoretical paper, with simulated experiments, they did not discuss the potential negative societal impact of their work. ", " The paper studies cluster randomized designs for one-sided bipartite experiments. The authors assume a linear potential outcome model, and a normalized exposure mapping model, and focus on difference-in-means estimators to estimate the average total treatment effect. They propose to find the design that minimizes the bias among balanced K-cluster randomized designs. The authors then demonstrate the robustness of the design to deviations from the assumed potential outcome model/the exposure mapping model. 
Finally, simulation studies are conducted to show the empirical performance of the design. Strengths\n\n-\tThe paper is very well-written. The proposed design is motivated and explained in a very nice way. \n\n-\tThe paper addresses an interesting and important question: estimating the total treatment effect in the setting of one-sided bipartite experiments. \n\n-\tThe proposed design appears to have good performance both theoretically and empirically. \n\n-\tThe paper does a great job discussing robustness to model misspecifications both theoretically and empirically.\n\nWeaknesses\n\n-\tI would love to see more detailed discussions on 1. Comparison with two-sided designs. 2. By how much balancedness helps reduce variance. 3. Implementation details (See questions section for more details)\n -\tI’m curious to see how the proposed design compares to the two-sided designs proposed in [27, 28] (I used the same references as in the paper). Do you believe the proposed design can outperform the two-sided ones? I understand that in practice, it would require different platforms to run one-sided experiments/two-sided ones, so it is not 100% comparable, but I’d love to see some more comparisons and discussions. Thank you!\n\n-\tLines 179-184. The authors argue that balancedness helps reduce variance. By how much does balancedness help reduce variance? It will be helpful to see some theoretical/empirical results on this. Or even some more detailed references will be very helpful. \n\n-\tLines 238-241. Is the proposed method implemented with the algorithm in [34]? I don’t think this is made 100% clear except a brief mention in line 284, which seems to be referring to how the benchmark methods are implemented. It will be helpful to clarify what specific algorithms are used. Is exact balancedness achieved? If only partial balancedness is achieved, how close is this to exact balancedness? \n The authors have adequately addressed the limitations and potential negative societal impact of their work. \n\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 3 ]
[ "TGeWQVdjtW1", "Bqgya1mITDe", "Cv5RlMzJArJ", "RmV-sbJ-x-P", "BwJHhGdpo9C", "BwJHhGdpo9C", "YJqrSamKLIL", "2K3lZFbswDy", "w1jd2o4Nw0a", "w1jd2o4Nw0a", "w1jd2o4Nw0a", "nips_2022_hqtSdpAK39W", "nips_2022_hqtSdpAK39W", "nips_2022_hqtSdpAK39W", "nips_2022_hqtSdpAK39W", "nips_2022_hqtSdpAK39W" ]
nips_2022_Qy1D9JyMBg0
Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts
Large sparsely-activated models have obtained excellent performance in multiple domains. However, such models are typically trained on a single modality at a time. We present the Language-Image MoE, LIMoE, a sparse mixture of experts model capable of multimodal learning. LIMoE accepts both images and text simultaneously, while being trained using a contrastive loss. MoEs are a natural fit for a multimodal backbone, since expert layers can learn an appropriate partitioning of modalities. However, new challenges arise; in particular, training stability and balanced expert utilization, for which we propose an entropy-based regularization scheme. Across multiple scales, we demonstrate performance improvement over dense models of equivalent computational cost. LIMoE-L/16 trained comparably to CLIP-L/14 achieves 77.9% zero-shot ImageNet accuracy (vs. 76.2%), and when further scaled to H/14 (with additional data) it achieves 83.8%, approaching state-of-the-art methods which use custom per-modality backbones and pre-training schemes. We analyse the quantitative and qualitative behavior of LIMoE, and demonstrate phenomena such as differing treatment of the modalities and the emergence of modality-specific experts.
Accept
The authors use a mixture-of-experts model in a multimodal setting. The reviewers consider the work technically strong and interesting; the AC concurs.
val
[ "B6ZCZYC1T7", "PmbzIETIF1b", "RvOiFmC6rI", "ef5EsWLCWx2", "JRS04qAEjY2", "ArHoIMwFjFG", "sIMwbn_zz63", "TyKsUg5SFtl" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the reviews and responses from all reviewers and personally thank the authors for the clarifications and answers to my individual questions. I find them satisfactory and increase 1 point to support this paper for that reason. ", " Many thanks for the time spent reviewing and your feedback, and for noting those typos/wording suggestions - we will amend the text accordingly.\n\nRegarding the losses, there are some prior works (e.g.[1]) that use entropy based approaches to regularise expert routing distributions. We don’t know of any prior works that specifically use mutual information, or use entropy to regularise multimodal MoE models.\n\nWe discussed the connection between the global/local loss combination, and the mutual information (end of Section 2.2.2). Note that due to the threshold used on the global loss, which enables some modality-wise specialization of experts, we are not actually using the mutual information loss. In section 4.1 we present results which do use the mutual information loss (i.e. local + global entropy without threshold) and found the performance was worse at B/16 scale. We also found it did not scale stably to L/16.\n\nRegarding whether this solution is trivial, we first note that these auxiliary losses were not the only key of the solution, with other aspects such as Batch Priority Routing being key (see Figure 5). Given the modification to mutual information, the lack of proceeding works using it as an auxiliary loss (and indeed any works which study multimodal per-token routing at all), and the other aspects of our proposed solution, we believe our approach is indeed novel and non-trivial.\n\nAs a final note, if you know of any prior works which use mutual information or entropy to regularise per-example conditional computation models, we would love to read them & cite them accordingly!\n\n[1]Multi-Source Domain Adaptation with Mixture of Experts, J. Guo, D. J. Shah, R. Barzilay, [arxiv:1809.02256](https://arxiv.org/abs/1809.02256)", " Regarding the last question: `how do you identify the negative example`\n\nFor a given example, we consider all other examples in the batch to be negatives. \nIn general, we have a batch of image and text pairs, for which we compute image representations $u_{1:N}$ and text representations $v_{1:N}$. Note $u_i$ and $v_i$ come from a single image-text pair (e.g. image and its caption). We want $u_i \\cdot v_i$ to be high (aligned representations), but $u_i \\cdot v_j$ to be low for every $j \\neq i$.\n\nThis is accomplished via the contrastive loss, which consists of two components as described in Equation 1 (Section 2.1). In the image-to-text component, every other image in the batch is a negative. In the text-to-image component, every other text in the batch is a negative. This formulation follows prior works, but we see that this loss could be noisy (if other elements in the batch are actually quite similar, but we encourage alignment to be low), and could possibly be improved by adapting methods e.g. from hard-negative sampling in other areas. Also, note this implies that the batch size may severely impact the learning process and its performance (higher batch size tends to be better, as we’ll have more negatives¹).\n\n¹ Combined Scaling for Open-Vocabulary Image Classification, H. Pham, Z. Dai, G. Ghiasi, K. Kawaguchi, H. Liu, A. W. Yu, J. Yu, Y. Chen, M. Luong, Y. Wu, M. Tan, Q. V. Le, [arxiv:2111.10050](https://arxiv.org/abs/2111.10050)", " Many thanks for the time spent reviewing the paper. 
We would first like to clarify the nature of LIMoE - our paper specifically studies learned conditional computation. This relates to your question `how the routing for text and images is done`: For LIMoE, and preceding works in NLP and Vision, examples are embedded into a sequence of tokens, as per typical transformer-based approaches. In the FFN layer, instead of a single FFN, we have $N$ FFNs - each an "expert". Learned routers decide which experts should process which tokens. The router is a simple dense layer which, combined with a softmax, predicts from each token's representation a routing distribution over $N$ experts.\n\nFor LIMoE, the tokens can come from either image or text examples. You are precisely correct that we do not separate the modalities for routing, and do not route in pairs*. The router just sees a big bag of tokens which need to go to experts - it doesn't know which example each token comes from or what modality that example is. We were motivated to study this as we believe this modality-agnostic approach will scale better as one increasingly adds modalities. Table 7 in the appendix shows an ablation with two alternative variants: where either the router knows what kind of token it is handling (text or image) or we directly have two independent routers, one per modality. None of the above clearly outperformed our simpler and more general approach.\n\n**You mentioned that Figure 1 does not show clearly that routing is done independently; if you have suggestions on how it can be clarified and improved, we would appreciate it!*\n\nWe will now address some of the weaknesses/questions:\n\n\n1. `The paper claims to be the first multimodal sparse MoE model, which is probably incorrect/too strong (line 299), e.g. [0] was first submitted in Nov 2021.`\n* Thanks for pointing us towards this paper! We will definitely add it as a reference, especially for one-tower models for contrastive learning for which there is very little literature.\n* As noted, we specifically study learned conditional computation, similar to e.g. the Switch Transformer or V-MoE. Though the referenced paper has per-modality FFNs, the structure is otherwise fundamentally different from the class of models we develop here: the FFN is decided in advance for each modality, and all tokens from a given example will always go to the same FFN based on its modality. It is also not sparse; there is no way to scale up the number of experts while keeping computational cost constant. In contrast, LIMoE's routers learn to assign experts to each token. \n* Overall, VLMO does not have experts, sparsity or routing of the type studied here or developed by relevant prior works. This style of conditional computation is often (somewhat ambiguously) called "sparse MoEs", which is why the claim may seem too strong; we will update wording in a camera-ready version to make this distinction clearer.\n\n2. `The fact that this paper centers around image-text data leads to the need of showing results of image-text benchmarks e.g. in [0], while the paper focuses on zero-shot`\n* We first note that zero-shot classification as studied in this work and prior work does necessitate good image *and* language understanding. Furthermore, we also present results on COCO image-text retrieval, a classic image-text task, similar to the referenced paper and prior works. 
Though we do not finetune on downstream tasks, zero-shot evaluation is sufficient to evaluate the models’ ability to learn multimodal representations.\n\n* `Even so why the main results in Table 1 doesn’t have other results for other scales of the model` We study scale quite extensively in Figure 3 (same results also provided in Table 4 of the appendix).\n* `For tasks other than Imagenet classification, it would be more convincing if the paper presents the comparison with some current baselines, in addition to the ablations` At the largest scale we compare against numbers reported by other recent competitive works (Table 1). For completeness, we also include the original CLIP numbers (which are roughly comparable to the L/16 models from Figure 3/Table 4)\n3. `Some acronyms might need further explanation such as the ones in Figure 3 for different models, e.g. B.16, S.16, …`\n* Thanks for pointing this out! You are correct - we have Table 4 in the appendix which defines the hyperparameters for each model, but we will amend the text, cite where these acronyms have been previously defined and point to the appendix for further information.\n4. `No code is provided. `\n* Agreed - we hope to open source the code used to train models on LAION400M in Table 8 in time for the conference. \n\n(the final question will be addressed in the next comment)\n", " Many thanks for the time spent understanding and reviewing our paper! We’ve attempted to answer the questions below; please let us know if something doesn’t make sense, or if there’s something we can amend in the paper to make these clearer in the text.\n\n\n1. `Why in an imbalanced context all of the tokens from the minority modality get assigned to a single expert?`\n\n\n* We discussed this a little bit in Section 2.2.1. In general, rather than necessarily a single expert, we observed that minority-modality tokens tend to choose a very small number of experts - fewer than their “fair share” of experts according to the token distributions. For example, with 64 experts and an image:text ratio of 12:1 (roughly matching the B/16 setup), one might expect $64*\\frac{1}{12 + 1} \\approx 5$ text experts. In practice, we would see only 1 or 2 experts being used for text. Note that this was empirically the case - as for why exactly this happens, it is as of yet unclear, and should definitely be further studied.\n\n* Assigning all tokens of the minority modality to a few experts isn’t necessarily catastrophic if the experts have a large enough buffer size to process them. However, what we further noticed (Figure 2), is that the preferences of the minority modality tend to be quite unstable - e.g. at the start of training there will be 1 or 2 clear “text” experts, and midway through training, they suddenly become image experts and some other experts start processing a lot of text. This sudden switch hurts the training/validation performance. Alongside examining why this extreme per-modality expert assignment occurs, further study should be done to understand why these assignments are unstable without the right aux losses.\n\n\n2. `Some effects such as the behavior of global entropy threshold (text and image) are attributed to the imbalance dataset. How different would it be in the case of a balanced dataset?`\n\n* We explored this somewhat in Figure 4, where we adjusted image sequence length by varying the number of patches per image. Our default setup has more image than text tokens (for B/16, image:text ratio is 12.2:1). 
In Figure 4, even when the ratio is 1:1 or there is more text than image, we find that the commonly used setup does not work, whereas our proposed training setup with BPR + auxiliary losses trains consistently stable models. Thus, we do not fully attribute the auxiliary losses' functionality to modality imbalance. \n\n* In Section 2.2.2 and Figure 2 we break down what the losses are accomplishing. For example, they serve to "stabilize" the learned expert assignments. For instance, with the local loss, if a given expert learns to process 70% images and 30% text, it will continue with this balance and not suddenly change. This is a useful behavior independent of the ratio of image:text tokens.\n", " Along the line of mixture of experts that add expert layers to the Transformer architecture (e.g., ST-MoE in NLP and Vision MoE for vision), this paper introduces Language+Image MoE. Similar to other MoEs, the single FeedForwardNetwork is replaced by an expert layer that contains many parallel FFNs, each of which is an expert. The difference is that given a sequence of tokens to process, a simple router learns to predict which experts should handle which tokens. Pros:\n- Model size can increase while keeping computational cost constant (compared to BASIC, the number of params is almost half for LIMoE H/14, while number of params per token - for inference - is less than 50%)\n- As a sparse model, LIMoE avoids negative interference and catastrophic forgetting.\n- Proposed new auxiliary losses (local/global entropy loss) and routing prioritization to prevent sending all tokens [from different modalities] to the same expert.\n- Studying the scaling of LIMoE and ablations for various design decisions, e.g. loss, router architecture, number of experts, etc. \n\n\n\n\n > 1) Why in an imbalanced context all of the tokens from the minority modality get assigned to a single expert?\n> 2) Some effects such as the behavior of global entropy threshold (text and image) are attributed to the imbalance dataset. How different would it be in the case of a balanced dataset? -", " The paper addresses an important problem, in which multimodal data is required for a model, by designing a unified architecture that accepts images and text concurrently with the use of the contrastive learning method for multimodal representation, trained on a paired Image-Text dataset. In addition, the paper addresses load balancing, which is one of the most important problems in the MoE setting, with an entropy-based regularization technique. Strength: \n1. The paper is well written. \n2. The paper offers many studies that are useful for understanding many aspects it mentions.\n \nWeakness:\n1. The paper claims to be the first multimodal sparse MoE model, which is probably incorrect/too strong (line 299), e.g. [0] was first submitted in Nov 2021. \n2. The fact that this paper centers around image-text data leads to the need of showing results of image-text benchmarks e.g. in [0], while the paper focuses on zero-shot. Even so why the main results in Table 1 doesn’t have other results for other scales of the model (likewise, if you have an even larger model having comparable parameters, would it beat the performances of those baselines?). For tasks other than Imagenet classification, it would be more convincing if the paper presents the comparison with some current baselines, in addition to the ablations. \n3. Some acronyms might need further explanation such as the ones in Figure 3 for different models, e.g. B.16, S.16, … \n4. No code is provided. 
\n\n\n[0] VLMO: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts (arXiv:2111.02358)\n I am not sure if I miss/misunderstand something, including in the Strengths/Weaknesses above. Still, it is not clear to me how the routing for text and images is done. Using image-text pairs, it is said that it’s possible that all text can go to a single expert while image tokens are distributed almost equally. So they are routed not in pairs but individually (which is somewhat dissimilar to the illustration in Figure 1). Please explain why you do, or do not, separate the 2 modalities at routing. \n\n\nAnother somewhat related question: for contrastive training, how do you identify the negative examples? \n \nMaybe not very relevant, since the paper addresses the system-related level, and it is thus hard to judge those impacts. \n", " This paper proposes a new multimodal contrastive learning framework named LIMoE. This is the first large-scale mixture-of-experts model for multiple modalities. Specifically, this paper designs a modality-agnostic model which is not explicitly conditioned on modality. To address the training stability issue and to balance expert utilization, two entropy-based regularization losses are introduced. The effectiveness of the proposed model is quantitatively and qualitatively evaluated across multiple scales and network architectures. ---\n\nOriginality: the design of the multimodal mixture-of-experts (MoE) model is novel. To the best of my knowledge, previous MoE models are designed for a single modality. The two regularization losses are widely used in previous studies, but the combination with MoE is valuable.\n\n---\n\nQuality: \n\nStrengths: this work is technically sound. The claims are well supported by comprehensive empirical studies. The proposed regularization losses are reasonable and are expected to stabilize the training, and in turn bring performance improvement. \n\nWeakness: The two proposed losses (as shown in Equation 2), which are also known as information maximization losses, are widely used in previous studies. It is quite straightforward to use this loss to mitigate the collapse issue in expert utilization. Therefore, this work is more like an experimental study.\n\n---\n\nClarity: this paper is well written and well organized. It is easy to follow. \n\nSignificance: this work is a good effort in introducing MoE into multimodal contrastive learning. Although the proposed solution is trivial, it brings some insights to the community. \n\n--- 1. Table 2: the caption contains two "for"s, which should be a typo\n2. It is suggested to use "zero-shot" rather than "0shot"\n Not applicable" ]
[ -1, -1, -1, -1, -1, 8, 6, 7 ]
[ -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "RvOiFmC6rI", "TyKsUg5SFtl", "ef5EsWLCWx2", "sIMwbn_zz63", "ArHoIMwFjFG", "nips_2022_Qy1D9JyMBg0", "nips_2022_Qy1D9JyMBg0", "nips_2022_Qy1D9JyMBg0" ]
nips_2022_Rym8_qTIB7o
Node-oriented Spectral Filtering for Graph Neural Networks
Graph neural networks (GNNs) have shown remarkable performance on homophilic graph data while being far less impressive when handling non-homophilic graph data, due to the inherent low-pass filtering property of GNNs. In general, since real-world graphs are often a complex mixture of diverse subgraph patterns, learning a universal spectral filter on the graph from the global perspective, as in most current works, may still be difficult to adapt to the variation of local patterns. On the basis of the theoretical analysis of local patterns, we rethink the existing spectral filtering methods and propose the \underline{N}ode-oriented spectral \underline{F}iltering for Graph Neural Network (namely NFGNN). By estimating the node-oriented spectral filter for each node, NFGNN is provided with the capability of precise local node positioning via the generalized translated operator, thus adaptively discriminating the variations of local homophily patterns. Furthermore, the utilization of re-parameterization brings a trade-off between global consistency and local sensibility for learning the node-oriented spectral filters. Meanwhile, we theoretically analyze the localization property of NFGNN, demonstrating that the signal after adaptive filtering is still positioned around the corresponding node. Extensive experimental results demonstrate that the proposed NFGNN achieves more favorable performance.
Reject
The paper received mixed reviews. While some reviewers feel that the paper is novel and interesting, other reviewers think that additional experiments are needed to justify the proposed method and that the proposed method is somewhat incremental. The paper would benefit from another revision that addresses the raised concerns.
train
[ "FS9foMR8Wz", "QXTP63OZxC", "ks5-7wuJ8B", "ixB1MKd8-GR", "_L8QqRMfafo", "LZJoSAwS9xI", "rHtGfij-K4g", "_rZL1bqc-BE", "LXaEolx23ON", "EiKcx175i_F", "OT1d5ifU-i", "ccbc0uLqqW", "UzrZf0pywO4", "HqPnXEnOh3", "vq7ip3M49W" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " \n We sincerely thank the reviewers for their comments and regret not having completely eliminated your concerns about the ability of our NFGNN model to handle heterophilic graphs. In the new comment, the reviewer argues that new methods should be strong enough to handle challenging complex graphs that mix homophily and heterophily.\nWe partly agree with the reviewer on this view of evaluating work, and in practice, our model also achieves good performance and even significant improvement compared with the popularly adopted baselines, which has been well presented in the original manuscript and our replies to the previous comments.\nHowever, we are not quite convinced that the substantial contributions of a work submitted to an academic conference, e.g., NeurIPS, should be evaluated simply and solely from a performance improvement perspective. In fact, in addition to the concerns about performance improvement, more insightful comments on the aspects of novelty and theoretical or technical contributions may be more constructive and appreciated.\n\nAs emphasized in the previous responses, our proposal is not only concerned with the performance of the proposed model on heterophily graphs, but also opens up a promising new way for spectral-based GNNs to learn local filters. Besides, the proposed NFGNN not only maintains the existing good performance on homophily graphs, but also makes considerable progress on heterophily graphs compared with existing spectral-based GNNs. Meanwhile, the theoretical and empirical analysis of the homophily and heterophily of the datasets presented in this paper is also one of the contributions that can’t be ignored. Therefore, we argue that this paper provides a noteworthy contribution to the academic community in addition to performance improvements. \n\nIn addition, even as a new strong baseline model, LINKX doesn’t show universally excellent performance as expected, e.g., on the datasets proposed by Pei et al. Under the more realistic semi-supervised learning setting, our NFGNN still outperforms LINKX overall, as we can see from Table R4-3. As we know, at least for now, it is objectively difficult for any one model to perform superbly in all situations. To promote progress in dealing with complex graph data, it should be encouraged to explore new ideas and means, both in theory and in practice (or performance), to solve the challenges in graph learning, which is also the goal of this paper.\n", " Thank you for the update. Though the sparse splitting gives a decent performance, I am still not very impressed, since heterophily is a challenging problem and new methods should be strong enough to handle it.", " We totally agree that the datasets proposed in [Lim et al. 2021a, 2021b] are more challenging than the existing heterophily graph datasets. As we mentioned in the previous reply, the new benchmark datasets are an excellent complement to the traditionally used ones, which can further promote the research of heterophily graphs in the academic community.\n\nAs presented in Table R4-2, for the results on the new datasets under the supervised learning setting of [Lim et al. 2021a], the performance of NFGNN is indeed weaker than LINKX on the arxiv-year, snap-patents, and twitch-gamers datasets. However, we also noticed that our NFGNN is comparable to the strong baseline LINKX on the three other datasets: penn94, pokec, and genius. 
In addition, NFGNN also shows a significant improvement over GPRGNN, which is also one of the SOTA spectral-based GNNs.\n\nBesides, compared with the datasets proposed by Pei et al., one of the characteristics of the new datasets [Lim et al. 2021a,b] is their much larger scale. Generally, only a few labels are available for such large-scale graphs in real application scenarios. Therefore, we also conduct semi-supervised learning experiments on these datasets [Lim et al. 2021a,b]. Under the semi-supervised learning setting with sparse splitting (2.5% for training, 2.5% for validation, 95% for testing), the performances of NFGNN and LINKX are reported in Table R4-3. Specifically, the optimal hyperparameters of LINKX are obtained in the same way as in fully supervised learning, and the hyperparameters of NFGNN are the same as the settings for fully supervised learning, without further tuning due to the time limitation.\nIt can be seen from Table R4-3 that the gap between the two approaches is further narrowed compared to Table R4-2, and NFGNN even surpasses LINKX on 4 of 6 datasets. Particularly, on the snap-patents dataset, NFGNN is worse than LINKX under the fully supervised learning setting, while it outperforms LINKX under the semi-supervised learning setting. Overall, the performance of NFGNN degrades less than that of LINKX with fewer labels. In practice, if we could spend more time tuning the hyperparameters, better performance could be expected for our NFGNN.\n\nRegarding generality, just as you mentioned, it is really a challenge to develop models that can perform well on new datasets. As shown in Tables R4-1, R4-2, and R4-3, GPRGNN achieves strong performance on the datasets proposed by Pei et al., but it fails to perform as well on the new datasets [Lim et al.]. \nContrary to GPRGNN, LINKX performs well on the new datasets [Lim et al.] but is not satisfactory on the datasets proposed by Pei et al. Unlike them, NFGNN not only achieves excellent performance on the datasets [Pei et al.], but is also comparable to LINKX and significantly better than GPRGNN on the new datasets [Lim et al.]. The above observations show that the proposed NFGNN indeed has good generality to some extent. Besides, we also note that LINKX explicitly utilizes the adjacency matrix as additional features and achieves good performance on the new datasets without further using aggregation operations like conventional GNNs. Inspired by this, it may be worthwhile to incorporate this idea into other aggregation-based GNN models, including our NFGNN, to further improve performance on heterophily graphs.\n\n**Table R4-3**: \nResults on the datasets proposed by [Lim et al., 2021a] under the semi-supervised learning setting with sparse dataset splitting (2.5% for training, 2.5% for validation, 95% for testing).\n|Models|Penn94|pokec|arXiv-year|snap-patents|genius|twitch-gamers|\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|\n|LINKX|64.85±0.79|64.99±2.36|41.04±0.55|40.38±1.12|88.15±0.16|63.83±0.44|\n|NFGNN|66.50±2.14|74.86±0.16|37.77±0.40|43.54±0.32|90.41±0.19|62.91±0.79|", " Thanks to the authors for their detailed responses to my review.\n\nIn particular, the additional experiments on graph classification are appreciated. The results are compelling - it would be interesting to analyze the potential intrinsic advantage GIN might have on those two datasets.\n\nFurther analysis on the local structures represented after training would be very interesting! 
I will be on the lookout for further work on the benefits of localized spectral approaches.", " Thank you for the reply. However, I am a bit concerned with the results from the proposed method on [Lim et al. 2021b]. Many existing methods work well on the non-homophilous data by Pei et al., and extensions such as NFGNN naturally work well on these datasets. The challenge is to develop models that can do well with data such as [Lim et al. 2021b].", " We thank the reviewer for the constructive comments. The responses are listed below:\n\n **Q1: Highly incremental work. Very closely related to ChebNet and related methods.**\n\nA1: As one of the spectral-based methods, we agree that there inevitably exist some close connections between the proposed NFGNN model and other spectral-based GNNs, including ChebNet. However, as pointed out by reviewer 3, we believe that the proposed node-oriented filtering paves a new perspective for local filter learning with good theoretical properties. It also allows spectral-based GNNs to no longer be limited to learning globally shared filters. To some extent, the proposed work cannot simply be seen as merely incremental. Essentially, it can help improve the generalizability of the family of spectral GNNs, which is very necessary in many practical situations. \n\n**Q2: Homophily definition is not up to date.**\n\nA2: Thanks for your valuable suggestions. In this paper, we mainly follow the widely adopted homophily measure defined in [Pei et al., ICLR2019]. We are sorry for missing the new work by [Lim et al. 2021b]. After reading it, we totally agree that the new homophily measure has better properties, being less sensitive to the number of classes and the size of each class than the one by Pei et al. To further improve the rigor and comprehensiveness of our paper, we have included a discussion of this measure in the revised version.\n\n**Q3: The argument of 2-hops and 3-hops aggregation is a bit confusing. Methods such as GPRGNN and MixHop, which use higher-order convolutions, do not impose such limits. Can the authors elaborate on this?**\n\nA3: We think what we stated is not in conflict with your comments. Here, both the 2-hop and 3-hop aggregation refer to the aggregation of classical GNNs, which employ averaging or positive weighted averaging for neighbor aggregation, such as GCN, GAT, and GraphSAGE. As is well known, when simply stacking multiple propagation layers, this type of aggregation has been shown to be prone to the serious over-smoothing problem. Therefore, such GNNs are generally designed as shallow networks and thus lack the ability to capture long-distance neighborhood information. In addition, combined with our analysis of the homophily of neighborhoods given in Sect.3.2, these GNN models also have difficulty capturing enough information in non-homophilous graphs. \n\nAs you mentioned, GPRGNN and MixHop were also proposed to address this problem faced by the classical GNNs, from different perspectives. In fact, GPRGNN learns GPR weights to aggregate representations between different layers, and MixHop adopts concatenation to mix multiple powers of the adjacency matrix. Neither of them is limited to near-neighbor aggregation, even up to 3 or more hops, and thus they can handle non-homophilous graphs better than the classical GNNs.
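To make this contrast concrete, a rough sketch of GPR-style multi-hop aggregation is given below (this is our own illustration, not code from either paper; `A_hat` stands for a normalized adjacency matrix, and all names are ours):

```python
import numpy as np

def gpr_style_propagation(A_hat, X, gammas):
    # Aggregate K hops of propagation with learned scalar weights gammas[k].
    # Because the weights can be negative, information from distant hops is
    # not washed out the way repeated positive averaging washes it out in a
    # deep, vanilla GCN.
    H = X
    out = gammas[0] * X
    for gamma_k in gammas[1:]:
        H = A_hat @ H          # one more hop of propagation
        out = out + gamma_k * H
    return out

# Toy usage: 5 nodes, 3 features, weights over 4 hops (values are arbitrary).
rng = np.random.default_rng(0)
A_hat = rng.random((5, 5))
X = rng.random((5, 3))
Z = gpr_style_propagation(A_hat, X, gammas=[0.5, 0.3, -0.2, 0.1])
```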
\n\n**Q4: Can the authors add these two benchmark methods and LINKX [Lim et al., 2021a] in the paper?**\n\nA4: Thanks for the insightful suggestions. In essence, both LINK and the recently proposed LINKX can be seen as non-GCN-like benchmark methods and deserve comparison, as suggested. We added the performances of LINK and LINKX on both homophily and non-homophily datasets in Table 1 (i.e., the following Table R4-2) of the revised version. Besides, the results of MLP had already been reported in Table 1 of the original version. As we can see, as a simple baseline, LINK achieves the best performance on the Squirrel dataset. Besides, all three baselines being compared perform poorly on homophily graphs under semi-supervised learning settings.\n\n**Q5: Can the authors provide a few more experiments with the new datasets by [Lim et al., 2021a]?**\n\nA5: Following your suggestion, we also conduct additional experiments on the new large-scale non-homophilic datasets proposed by [Lim et al. 2021a] to further verify the generalization ability of our NFGNN. We strictly follow the experimental setup in [Lim et al., 2021a] and present the results in Table R4-2. Although our method achieves slightly inferior performance compared to LINKX, it still shows a significant improvement over GPRGNN. Meanwhile, the results in Tables R4-1 and R4-2 also illustrate that, unlike on the traditionally used datasets in Table R4-1, LINKX achieves the best performance among the compared baselines on the newly released datasets. This means the new benchmark datasets are also a good complement to the traditionally used ones.\n\n**Q6: Limitations are not well discussed.**\n\nA6: For the proposed NFGNN, the scalability of spectral convolution and the inductive learning setting remain key issues, as in other spectral-based GNNs. Considering the transferability to real applications, some further studies are also needed to address these issues. We added discussions of the above limitations in Sect.6 of the revised version. 
In fact, for the 10 commonly used benchmark datasets in GNN evaluation, many SOTA works have already been validated on them, which makes large performance improvements on them very challenging. Even so, in addition to the Squirrel and Texas datasets, there are also improvements of more than 1% over the best baselines on both the Computers and Chameleon datasets, showing that our method is competitive. \n\n**Q2: How exactly the localized filter can better address the variation of homophily characteristics is confirmed to some extent by the results, but should be motivated up front.**\n\nA2: Following your suggestion, we first explain more intuitively in Sect.4 why localized filters can better address the variation of homophily characteristics compared to a global filter, and then the experimental results in Sect.5 are used to further confirm the effectiveness of the localized filter. In this way, the motivation for the node-oriented localized filter is further strengthened. \n\n**Q3: I'm a little confused by the wording of line 224 - it's stated as if Ψ should be included in Eq. 9 somewhere?**\n\nA3: Thanks for pointing the writing mistake out. Eq.(9) has been modified properly.\n\n**Q4: How it affects the full graph classification task. Would your hypothesis be that it doesn't improve/alter global representations much in that setting?**\n\nA4: The same valuable suggestion was also pointed out by Reviewer 1. As we know, most GNNs based on spectral graph theory were generally developed for the node classification task due to their intrinsically good interpretability. Unlike node classification, full graph classification has mostly been studied from the perspective of graph isomorphism. To the best of our knowledge, only a few works have focused on the connections between graph spectra and graph isomorphism [1,2,3]. Although there is no straightforward explanation, our intuition is that expressive node representations would implicitly facilitate the learning of the full graph representation.\n\nFollowing your kind suggestion, the widely used TU benchmark, containing 5 datasets for graph classification, is adopted to validate the performance of NFGNN on graph classification tasks. To ensure the fairness of the experiment, we strictly follow the experimental setup in [4] and use the baselines therein for comparison, which include GCN, GraphSAGE, GIN, and GAT. Besides, we additionally choose GPRGNN and BernNet for the graph classification task. As shown in Table R3-1, our NFGNN is also competitive on the graph classification task. \n\nTable R3-1. Results on TU datasets: Mean accuracy (%) ± standard deviation.\n|Methods|D&D|MUTAG|PROTEINS|PTC_MR|ENZYMES|\n|:-|:-:|:-:|:-:|:-:|:-:|\n|GCN|71.6±2.8|73.4±10.8|71.7±4.7|56.4±7.1|27.3±5.5|\n|GraphSAGE|71.6±3.0|74.0±8.8|71.2±5.2|57.0±5.5|30.7±6.3|\n|GIN|70.5±3.9|**84.5**±8.9|70.6±4.3|51.2±9.2|**38.3**±6.4|\n|GAT|71.0±4.4|73.9±10.7|72.0±3.3|57.0±7.3|30.2±4.2|\n|GPRGNN|75.3±5.6|75.0±8.0|72.1±3.3|60.3±4.0|27.8±5.6|\n|BernNet|74.7±4.4|74.5±7.4|72.2±2.6|58.3±7.6|27.2±5.4|\n|NFGNN|**75.9±5.5**|75.6±8.1|**72.4**±3.6|**62.5**±9.9|28.3±7.1|\n\n[1] Edwin et al. Which graphs are determined by their spectrum? Linear Algebra and its Applications, 373:241–272, 2003.\n\n[2] Rattan et al. Weisfeiler--Leman, Graph Spectra, and Random Walks. arXiv:2103.02972, 2021.\n\n[3] Fiori et al. On spectral properties for graph matching and graph isomorphism problems. 
Information and Inference: A Journal of the IMA. 2015.\n\n[4] Zhang et al. Nested Graph Neural Networks. NeurIPS2021.\n\n**Q7: Barely a limitation, more a comment, that Table 1 is not as interesting/relevant to me as Fig 3 and Table 2. The direct analysis on the ability of the NF method to address heterophily should be the focus of the empirical section, even extended if possible.**\n\nA7: Thanks for your insightful comments. Although we have endeavored to give a direct analysis of the ability of the NF method to address heterophily through Fig 3 and Table 2, your comment inspires us to rethink it further from other aspects. For example, by observing the localized parameter matrix $\Psi$ of the polynomial, we can try to find out what kinds of local patterns NFGNN has learned. After the rebuttal period, we will carry out some empirical studies on this as soon as possible. ", " We thank the reviewer for the constructive comment and positive recognition of our work. The responses are listed below:\n\n**Q1: What if using 3 or more hop neighborhood aggregation methods? Standard multi-layer GNN models implicitly perform multi-hop neighbor aggregation. A discussion on this point is needed.**\n\nA1: Thanks for your insightful suggestion. As you mentioned, standard multi-layer GNN models can perform multi-hop neighbor aggregation by stacking multiple layers. However, \nthe averaging or positive weighted averaging aggregation employed by classical GNNs has been shown to cause the expressive power of the model to degrade severely, i.e., over-smoothing, as the number of layers increases [1]. Therefore, classical GNNs, such as GCN, usually adopt shallow network architectures, which makes it difficult to effectively utilize information from distant neighborhoods. Meanwhile, as we discussed in Sect.3.2, the reason for the poor performance of classical GNNs on non-homophily graphs might be the insufficient homophilic information captured from the heterophily-preferred near-neighbors. Hence, we think that implementing effective long-range neighborhood aggregation, so as to fully utilize the information from long-distance nodes, will greatly help improve the performance of GNNs on non-homophily graphs.\n\n[1] Oono et al. Graph Neural Networks Exponentially Lose Expressive Power for Node Classification. ICLR2020.\n\n**Q2: Will using learnable node-oriented graph spectral filtering involve more trainable parameters? Will this incur more severe overfitting?**\n\nA2: Yes, as you mentioned, it would inevitably involve more trainable parameters if we trained them directly. But fortunately, thanks to the reparameterization of the localized parameter matrix via the proposed low-rank decomposition into node-dependent and node-agnostic matrices, it only increases the parameters by a very small and controllable amount. In our case, since the rank-one approximation is used, i.e., d=1, the increased number of parameters with $\Psi$ is only K+1+f in practice, which is completely negligible compared to the total amount of model parameters. Overall, this means no more severe overfitting will be incurred. 
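To make the parameter count concrete, here is a minimal sketch of the rank-d factorization (our own illustration of the idea, not the official implementation; the variable names are ours, and the node-dependent factor is realized as a simple linear map of the node features):

```python
import numpy as np

n, f, K, d = 1000, 64, 10, 1       # nodes, feature dim, filter order, rank
rng = np.random.default_rng(0)
X = rng.random((n, f))             # node features

# Learnable pieces: a feature-to-rank map W (f*d parameters) and a
# node-agnostic coefficient matrix Gamma (d*(K+1) parameters).
W = rng.random((f, d))
Gamma = rng.random((d, K + 1))

H = X @ W                          # node-dependent factor, shape (n, d)
Psi = H @ Gamma                    # per-node filter coefficients, (n, K+1)

# Learning Psi directly would need n*(K+1) = 11000 parameters; the
# factorization needs only d*(K+1+f) = 75 when d = 1.
print(Psi.shape, W.size + Gamma.size)   # (1000, 11) 75
```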
\n\n**Q3: It would be better to give a clearer comparison between the proposed filtering method and the prior methods in Section 4.1 (i.e., give more equations) to better highlight the difference and novelty of the proposed method.**\n\nA3: Thanks for your valuable comments. In the supplementary material, we have analyzed the connections and differences between several existing GNN models and ours, and summarized their polynomial filtering forms. Following your suggestion, we will try to introduce this part into the formal paper if space permits. Meanwhile, we have first added a concise formula to briefly explain our NFGNN in Section 4.1 of the revised version, so as to better highlight the motivation and novelty. ", " We thank the reviewer for the constructive comment. The responses are listed below:\n\n**Q1: My main concern is the performance of the proposed method as compared to existing baselines and FAGCN.**\n\nA1: Compared to the existing baselines, although our NFGNN achieves better performance on the whole, the performance gains in Table 1 are not as significant as expected. In fact, for the 10 commonly used benchmark datasets in GNN evaluation, many SOTA works have already been validated on them, which makes large performance improvements on them very challenging. Even so, among the 10 datasets for evaluation, NFGNN achieves the best results on 7 datasets and second-best or comparable results on the 3 other datasets. Therefore, we think that NFGNN is competitive with existing baselines. \n\nFAGCN also employs frequency information and achieves good performance. However, due to the different settings of dataset construction and splitting, the results reported in the original paper of FAGCN cannot be compared directly with NFGNN. As you suggested, under the same experimental setup, we re-ran FAGCN using the available source code. We also added the comparison with FAGCN in Table 1 of the revised version (and the following Table R1-1). As we can see, our NFGNN outperforms FAGCN on 9 of the 10 datasets, with Cora being the only exception. \n\n**Q2: Additional experiments on graph classification tasks will further strengthen the paper.**\n\nA2: This is a valuable suggestion that was also pointed out by Reviewer 3. Although we are mainly oriented towards node classification, additional graph classification experiments could indeed help to evaluate our model more comprehensively. \n\nTo verify the performance of NFGNN on full graph classification, the TU datasets, the widely used graph classification benchmarks, are adopted. We strictly follow the experimental setup in [1] and also use the same baselines as in it, including GCN, GraphSAGE, GIN, and GAT. Meanwhile, we also choose GPRGNN and BernNet as additional comparisons on the graph classification task. As shown in Table R1-2, our NFGNN is also competitive on the graph classification task and outperforms the other baselines on 3 datasets. Particularly, NFGNN achieves the best results among the spectral-based methods (GCN, GPRGNN, BernNet, and NFGNN) on all datasets.\n\n**Q3: Even though Sect. 6 is well motivated, the content does not justify it to be a separate section. Citation method is not proper.**\n\nA3: Thanks for your thoughtful comments. We have integrated Sect.6 and Sect.7 into one section to improve the completeness of the description. 
Besides, more analysis on the connection of existing GNNs with our NFGNN from a polynomial filtering perspective is provided in the supplementary material, Sect. A.3.\n\nA4: We have changed the citations to the correct format in the revised version.\n\n**Q5: The purpose of reparameterization with low-rank approximation against the polynomial.**\n\nA5: We are sorry for the unclear description of it. The purposes of the reparameterization of the localized parameter matrix $\Psi$ of the polynomial via the low-rank approximation are as follows:\n\n1. It can significantly reduce the parameter complexity from O(n×(K+1)) to O(d×(K+1+f)). It also provides a way to flexibly adjust the capacity of the model by varying the rank d.\n\n2. Using matrices $\mathbf{H}$ and $\Gamma$ to approximate $\Psi$ explicitly establishes a bridge between node-oriented localized filtering and globally shared filtering. If we only use the node-agnostic matrix $\Gamma$, then NFGNN simply becomes globally shared filtering. \n\n3. As we can see from Eq.(9), if we learn $\Psi$ directly in a real implementation, only the gradients from $z_i$ will be used to update the parameters of the localized filter corresponding to node i, i.e., $\Psi_{i,:}$. In other words, only $x_i$ will participate in the optimization of $\Psi_{i,:}$, which will inevitably lead to inefficient optimization of $\Psi$. The re-parameterization trick allows us to elegantly solve this issue of optimizing $\Psi$.\n\n**Q6: How does the generalized translated operator $\mathbf{T}_i$ help in adapting to local patterns? What is the intuition of $\mathbf{T}_i$ in the spatial domain?**\n\nA6: As pointed out in [2], a K-order polynomial spectral filter is strictly localized in the K-hop neighborhood $N_{<K}(i)$ of node i. Meanwhile, as shown in Definition 4.1, $\mathbf{T}_i$ has the capability of centering the filter at a specified node, which also means it can help in adapting to local patterns. Specifically, we can learn a set of optimal filter parameters that are specific to node $i$ by locating a K-order polynomial filter in $N_{<K}(i)$ through $\mathbf{T}_i$. In essence, as given in Definition 4.1, $\mathbf{T}_i$ can intuitively be seen as convolution with a delta function centered at node i in the spatial domain [2].
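For the reader's convenience, the generalized translation of a kernel $g$ to node $i$ defined in [2] can be written as follows (we reproduce it here with our own notation, following [2]):

$$(\mathbf{T}_i g)(n) = \sqrt{N}\,(g \ast \delta_i)(n) = \sqrt{N}\sum_{\ell=0}^{N-1} \hat{g}(\lambda_\ell)\, u_\ell^{*}(i)\, u_\ell(n),$$

where $(\lambda_\ell, u_\ell)$ are the eigenpairs of the graph Laplacian, $\delta_i$ is the delta function at node $i$, and $N$ is the number of nodes.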
\n\n[1] Zhang et al. Nested Graph Neural Networks. NeurIPS2021.\n\n[2] Shuman et al. Vertex-frequency analysis on graphs. ACHA, 2011.", " Table R1-1: Comparison to FAGCN under the experimental setup of this paper: Mean accuracy (%) ± 95% confidence interval.\n|Methods|Cora|Cite.|Pub.|Comp.|Photo|Cham.|Actor|Squi.|Texas|Corn.|\n|:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|\n|FAGCN|**78.10±0.21**|66.77±0.18|84.09±0.02|82.11±1.55|90.39±1.34|61.59±1.98|39.08±0.65|44.41±0.62|89.61±1.52|88.52±1.33|\n|NFGNN|77.69±0.91|**67.74±0.52**|**85.07±0.40**|**84.18±0.40**|**92.16±0.82**|**72.52±0.59**|**40.62±0.38**|**58.90±0.35**|**94.03±0.82**|**91.90±0.91**|\n\nTable R1-2: Results on TU datasets: Mean accuracy (%) ± standard deviation.\n|Methods|D&D|MUTAG|PROTEINS|PTC_MR|ENZYMES|\n|:-|:-:|:-:|:-:|:-:|:-:|\n|GCN|71.6±2.8|73.4±10.8|71.7±4.7|56.4±7.1|27.3±5.5|\n|GraphSAGE|71.6±3.0|74.0±8.8|71.2±5.2|57.0±5.5|30.7±6.3|\n|GIN|70.5±3.9|**84.5**±8.9|70.6±4.3|51.2±9.2|**38.3**±6.4|\n|GAT|71.0±4.4|73.9±10.7|72.0±3.3|57.0±7.3|30.2±4.2|\n|GPRGNN|75.3±5.6|75.0±8.0|72.1±3.3|60.3±4.0|27.8±5.6|\n|BernNet|74.7±4.4|74.5±7.4|72.2±2.6|58.3±7.6|27.2±5.4|\n|NFGNN|**75.9±5.5**|75.6±8.1|**72.4**±3.6|**62.5**±9.9|28.3±7.1|", " The paper proposes a method called NFGNN which utilizes a generalized translation operator to learn local patterns contributing to homophily/heterophily.\n Strengths:\n1. A new GNN method with the ability to capture local patterns.\n2. Well organized and easy to follow (in general).\n\n\nWeaknesses/Questions:\n\n1. My main concern is the performance of the proposed method as compared to existing baselines. \nAlso, FAGCN addresses heterophily by considering low- as well as high-frequency information. \nThe numbers reported in FAGCN are superior to those of the proposed method. \nTherefore it is difficult to establish the usefulness of the proposed method.\n\n2. The evaluation on only the node classification task seems limited to me. \nAdditional experiments on graph classification tasks will further strengthen the paper. \n\n3. Even though Section 6 is well motivated, the content does not justify it being a separate section.\n\n4. The citation method is not proper. (Please use \citep{} or \citet{} accordingly.)\n\n5. The purpose of reparameterization with low-rank approximation against a first (or higher) order polynomial is not clear to me. \n\n6. How does the generalized translated operator help in adapting to local patterns? It is defined through the spectral domain. \nWhat is the intuition of this translation operator in the spatial domain when a filter is translated to a node?\n\n Please see the section above. n/a", " This work proposes a new graph neural network by designing a node-oriented spectral filtering method. The developed algorithm is motivated by the observation that the subgraphs centered at different nodes have different properties. Experimental results demonstrate superior performance compared to baseline models. Strengths:\n\n1. This paper points out the fact that real-world graphs are often a mixture of diverse subgraph patterns.\n2. This paper shows that the standard near-neighborhood aggregation mechanisms fail to work for non-homophily graphs.\n3. This paper proposes a novel node-oriented graph spectral filtering method, which gives rise to a more powerful GNN model.\n\nWeakness:\n\nOverall, I am satisfied with the experimental findings and observations. The proposed model seems to be novel and useful. Some minor issues are as follows:\n\n1. What if using 3 or more hop neighborhood aggregation methods? Standard multi-layer GNN models implicitly perform multi-hop neighbor aggregation. A discussion on this point is needed.\n\n2. 
Will using learnable node-oriented graph spectral filtering involve more trainable parameters? Will this incur more severe overfitting?\n\n3. It would be better to give a clearer comparison between the proposed filtering method and the prior methods in Section 4.1 (i.e., give more equations) to better highlight the difference and novelty of the proposed method. See weakness section. See weakness section.", " In this work the authors address the problem of local structure variation and non-homophily by introducing a generalized translation operator in order to implement a node-centered spectral filter for graph neural networks. They demonstrate the empirical effectiveness of their method by confirming the realization of the filter localization in practice and showing performance gains on datasets with different homophily ratios. ### Strengths\n\n**Quality/Clarity**: The preliminaries and technical details of their proposed method are sound. The motivations section is particularly in depth, as it provides not only theoretical but also empirical analysis of the homophily versus heterophily characteristics of the datasets to be used in the evaluation.\n\n**Significance**: Though the modification is small, it fundamentally increases the adaptiveness of the family of spectral GNN approaches that, while having some nice theoretical properties, in their standard formulation lack fine-grained expressiveness or the ability to scale to large-scale graphs where global filters are likely not as meaningful. Figure 3 and Table 2 provide targeted evidence of the effectiveness of the translation operator/NF modification.\n\n### Weaknesses\n\n**Significance**: The margins of improvement in Table 1 are not very significant except in Squirrel and Texas. \n\n**Clarity**: There is not adequate discussion, prior to Section 5, in the preliminaries or the other theoretical prose, of how exactly the localized filter is expected to better address variations in homophily characteristics. This is somewhat borne out by the results anyway, but it should be motivated earlier on.\n\n**Originality**: Also relating to the significance, the fact that the core modification itself is quite compact is noted, and thus the likelihood that the approach is completely novel is potentially low. But this isn't a focus of this review. 1. I'm a little confused by the wording of line 224 - it's stated as if $\Psi$ should be included in Eq. 9 somewhere? \n2. It is somewhat out of scope I'm sure, but as a convincing sanity check, it would be nice to have the _complement_ to the demonstration that the NF method indeed improves node-oriented predictions, the complement being how it affects the full graph classification task. Would your hypothesis be that it doesn't improve/alter global representations much in that setting? (This would be a meta ablation of sorts.) 1. Barely a limitation, more a comment, that Table 1 is not as interesting/relevant to me as Fig 3 and Table 2. The direct analysis on the ability of the NF method to address heterophily should be the focus of the empirical section, even extended if possible.\n", " This paper discusses node-oriented spectral filtering as an extension to Chebyshev polynomials of the normalized adjacency matrix. As motivation, the paper reasons about the aggregation of neighbors of labeled nodes. The paper argues for superiority on non-homophilic graphs through experiments. Strengths:\n1.) Experimental results are promising\n\nWeaknesses:\n1.) Highly incremental work. 
Very closely related to ChebNet and related methods.\n2.) Homophily definition is not up to date and experiments are not adequate.\n The argument of 2-hop and 3-hop aggregation is a bit confusing. It is mainly based on experiments using a small set of datasets. To my knowledge, methods such as GPRGNN and MixHop, which use higher-order convolutions (acting as convolution from multiple hops), do not impose such limits. Can the authors elaborate on this? \n\n\nThis paper only contains experiments with the graph data by [Pei et al]. Further, the homophily measure described in the paper is a bit dated.\nA new homophily measure and new non-homophilic benchmark datasets were introduced by [Lim et al., 2021a, Lim et al., 2021b].\nIt would be interesting to know how the proposed method performs with some of the new non-homophilic data. Can the authors provide a few more experiments with the new datasets?\n\nTwo important non-GCN/GNN benchmark methods that are important for comparison are the MLP and LINK [Zheleva and Getoor, 2009]. MLP would allow us to assess the learning ability of the node features themselves, and LINK tells us about direct learning from the graph data only (adjacency matrix). Can the authors add these two benchmark methods in the paper? Another benchmark method that is useful for comparison is LINKX [Lim et al., 2021a].\n\n\nReference\n \n[Zheleva and Getoor, 2009] Zheleva, E. and Getoor, L. (2009). To join or not to join: The illusion of privacy in social networks with mixed public and private user profiles. WWW '09.\n \n[Lim et al., 2021a] Lim, D., Hohne, F. M., Li, X., Huang, S. L., Gupta, V., Bhalerao, O. P., and Lim, S.-N. (2021a). Large scale learning on non-homophilous graphs: New benchmarks and strong simple methods. In NeurIPS.\n \n[Lim et al., 2021b] Lim, D., Li, X., Hohne, F., and Lim, S.-N. (2021b). New benchmarks for learning on non-homophilous graphs. Workshop on Graph Learning Benchmarks, WWW 2021. Limitations are not well discussed. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 3 ]
[ "QXTP63OZxC", "ks5-7wuJ8B", "_L8QqRMfafo", "HqPnXEnOh3", "rHtGfij-K4g", "vq7ip3M49W", "vq7ip3M49W", "HqPnXEnOh3", "UzrZf0pywO4", "ccbc0uLqqW", "ccbc0uLqqW", "nips_2022_Rym8_qTIB7o", "nips_2022_Rym8_qTIB7o", "nips_2022_Rym8_qTIB7o", "nips_2022_Rym8_qTIB7o" ]
nips_2022_vaxPmiHE3S
EGRU: Event-based GRU for activity-sparse inference and learning
The scalability of recurrent neural networks (RNNs) is hindered by the sequential dependence of each time step's computation on the previous time step's output. Therefore, one way to speed up and scale RNNs is to reduce the computation required at each time step, independent of model size and task. In this paper, we propose a model that reformulates Gated Recurrent Units (GRU) as an event-based activity-sparse model that we call the Event-based GRU (EGRU), where units compute updates only on receipt of input events (event-based) from other units. When combined with having only a small fraction of the units active at a time (activity-sparse), this model has the potential to be vastly more compute efficient than current RNNs. Notably, activity-sparsity in our model also translates into sparse parameter updates during gradient descent, extending this compute efficiency to the training phase. We show that the EGRU demonstrates competitive performance compared to state-of-the-art recurrent network models in real-world tasks, including language modeling, while maintaining high activity sparsity naturally during inference and training. This sets the stage for the next generation of recurrent networks that are scalable and more suitable for novel neuromorphic hardware.
Reject
This paper introduces an event-based GRU to obtain an efficient continuous-time RNN. Although the method is sound and can work on a series of small sequence modeling tasks, there are multiple issues with the significance of the results of the paper, which I point out in the following: 1) There have been significant advances in recurrent neural networks designed to efficiently model sequences, and their scaling properties, which are overlooked in this paper. In particular, structured state-space models [1], diagonal state-space models [2], LSSL [3], closed-form continuous-time networks [4], efficient memorization via polynomial projection [5], and Neural Rough DEs [6] are currently shaping the state-of-the-art sequence modeling frameworks that efficiently model tasks with long-range dependencies while significantly outperforming Transformers and their variants. Therefore, from the perspective of representation learning capabilities, there is a significant gap between what EGRU (proposed in this paper) can achieve and the state-of-the-art sequence modeling tools powered by recurrent networks. Before getting published, it is essential to compare performance and speed to these models on proper benchmarks such as Long Range Arena [7]. [1] Gu, A., Goel, K., & Ré, C. (2021). Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396. [2] Gupta, A. (2022). Diagonal State Spaces are as Effective as Structured State Spaces. arXiv preprint arXiv:2203.14343. [3] Gu, A., Johnson, I., Goel, K., Saab, K., Dao, T., Rudra, A., & Ré, C. (2021). Combining recurrent, convolutional, and continuous-time models with linear state space layers. Advances in Neural Information Processing Systems, 34, 572-585. [4] Hasani, R., Lechner, M., Amini, A., Liebenwein, L., Tschaikowski, M., Teschl, G., & Rus, D. (2021). Closed-form continuous-depth models. arXiv preprint arXiv:2106.13898. [5] Gu, A., Dao, T., Ermon, S., Rudra, A., & Ré, C. (2020). HiPPO: Recurrent memory with optimal polynomial projections. Advances in Neural Information Processing Systems, 33, 1474-1487. [6] Morrill, J., Salvi, C., Kidger, P., & Foster, J. (2021, July). Neural rough differential equations for long time series. In International Conference on Machine Learning (pp. 7829-7838). PMLR. [7] Tay, Y., Dehghani, M., Abnar, S., Shen, Y., Bahri, D., Pham, P., ... & Metzler, D. (2020, September). Long Range Arena: A Benchmark for Efficient Transformers. In International Conference on Learning Representations. 2) When it comes to the efficiency of computations on spatiotemporal tasks, especially when using a benchmark such as DVS gesture detection, efficient models such as spiking networks must be accounted for. For instance, in [8], the authors outperform EGRU on the DVS task with over 10x fewer parameters. In this case, EGRU+DA achieves 97% accuracy with 15.75M parameters (10.77M MAC), while the method proposed in [8] achieves 98% accuracy with 1.1M parameters. I believe it is essential to compare results with appropriate efficient models, both in terms of computational efficiency and performance. [8] She, X., Dash, S., Mukhopadhyay, S.: Sequence approximation using feedforward spiking neural network for spatiotemporal learning: Theory and optimization methods. In: International Conference on Learning Representations (2022), https://openreview.net/forum?id=bp-LJ4y_XC 3) Selected benchmarks for testing EGRU are not appropriate. 
For instance, sMNIST is already a solved problem, with CoRNN reaching 99% accuracy and even outperforming EGRU (we need more clarification on this as well). Even DVS is almost solved (98% from [8]). Instead, I suggest the authors try more challenging and up-to-date benchmarks such as Long Range Arena [7], audio datasets, and larger language benchmarks. For the above fundamental reasons, I vote for the rejection of this paper and encourage the authors to incorporate these critical points in their next submission.
train
[ "2qR1aNY8z5C", "m7IegRZyBOO", "Qkb8-2reMPX", "Ny3-JOMj2gA", "2WoExm0S3l", "4mcsrXG5v7P", "YdPqvrVoZkz", "GB1jLG9W1WU", "4MS-onvnKnQ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have raised my score; the authors have done good work in the rebuttal.", " We would like to thank all the reviewers for their constructive comments and questions. Please note that we have uploaded an updated version of our main text as well as the supplement. We have indicated all major changes using blue text. We have also responded to each reviewer separately and in detail. \n\nThe updates are summarized as follows:\n\n1. We have added pointers to the ablation studies and statistics over multiple runs in the main text.\n2. We have added additional runs in the supplement for the continuous-time EGRU with different settings (Table S1), and comparing the memory ability of the discrete-time EGRU with the GRU (as requested by egpJ).\n3. We have added additional experiments to the DVS and PTB tasks in the supplement, to further demonstrate the performance and robustness of the EGRU.\n4. We will add the references suggested by reviewers eSbV and RdzD in the final version (due to space constraints we are unable to add them right now). \n5. We have added details in the supplement about how we performed the hyper-parameter search for our experiments, as requested by reviewer eSbV.", " The authors would like to thank the reviewer for recognizing the novelty of our approach and the robustness of our empirical results.\n\n> For the delay-copy task, the authors only show a simple example. How about increasing the number of EGRU units? Besides, did the event-generating mechanism improve the memory ability, compared to the GRU model?\n\nThe simple version of the delay-copy task was chosen for the sake of illustration (Fig.1C,D). EGRU of course also works on more complex tasks. To demonstrate this, we have added additional experiments for the delay-copy task with different numbers of EGRU units in the supplement. \n\nThe EGRU has memory ability comparable to the GRU model. We have added further experiments comparing the discrete-time EGRU with the GRU on memory ability in Table S2 of the supplement to show this.\n\n> In gesture prediction, the authors downsample frames to 32x32, which may affect the performance. I wonder how the input size affects the prediction accuracy with the EGRU model. In general, higher-resolution images contain more detailed information, which could be helpful for classification.\n\nThe down-sampling was done here to get a fair comparison with previous work that used the same preprocessing. We have now added additional experiments in Table S3 for input size 128x128. The performance drops slightly (by 2%) with the larger input size due to the noisy nature of the dataset.\n\n> For language modeling experiments, the performance decreases as the number of EGRU units increases. Can the authors give some explanation of the trend of performance related to the number of EGRU units?\n\nThe observation of decreasing model performance with model size is due to overfitting. We resolved this issue in the current version of the paper. Our experiments demonstrate that the support of the pseudo-derivative (epsilon) acts as a regularizing parameter (see Figure S5). Overfitting is a known issue in language modeling, and regularization is the major component in successful LSTM language models (Merity et al., Melis et al.).\n\n> Besides, the authors apply EGRU in some special cases. 
Since LSTM/GRU are applied widely in CV/NLP applications, can they be replaced by EGRU to reduce the computational complexity in general?\n\nIn our paper, we test the EGRU on gesture recognition using data from dynamic vision sensors, sequential MNIST, and Penn Treebank, all of which broadly fall in the domains of CV and NLP. Therefore, we do expect that we can replace LSTM/GRU with EGRU in these domains to reduce the computational complexity.\n\n> As GRU is simplified from LSTM, can the event-generating mechanism transfer to the LSTM model?\n\nYes, the event-generating mechanism can be transferred to the LSTM quite easily by adding the thresholding function after the output gate. We haven't performed any experiments on this setup yet, but plan to do so in the future.", " The authors would like to thank the reviewer for the positive review and for recognising that the paper is well written and the approach novel.\n\n> In terms of activity sparsity during inference, how does it compare against pruning a GRU, such as the one mentioned in "EXPLORING SPARSITY IN RECURRENT NEURAL NETWORKS"?\n\nThe authors would like to thank the reviewer for pointing out this reference. This reference primarily deals with parameter sparsity rather than activity sparsity, which is the topic of the present paper. But the authors agree with the reviewer that this paper should be discussed in the related work, and we plan to do so in our final version. \n\nThe suggested reference implements a weight magnitude pruning method and applies it to the GRU, where they report 88.6% sparsity resulting in a 13.8% drop in performance for a GRU with 2560 units. We train smaller models, and see a smaller drop in performance with high levels of activity sparsity.\n\nThe authors would like to stress that activity and parameter sparsity are orthogonal and could be combined in future work, which would likely lead to further resource reductions beyond what each individual method could achieve alone (although we do not combine them in our paper to avoid enlarging the scope of the paper too much).", " > Hyper-parameter optimization - No detail about hyper-parameter optimization; the authors should provide reasoning for how these parameters were chosen.\n\nWe have now added additional information about our hyper-parameter optimization to the supplement.\n\n> Authors have mentioned that EGRU is computationally efficient; can the authors provide numbers in FLOPs for both training and inference compared with other sparse and dense RNNs?\n\nThe authors would like to point out that the initial submission included an analysis in terms of reduction in MAC operations, which is a commonly used performance indicator for algorithms run on GPUs. Tables 1-3 include effective MAC operations combined with reported activity sparsity levels. There is a close relationship between MACs and FLOPs: in the most general case, 1 MAC = 2 FLOPs. On some platforms that support the Fused Multiply-Add (FMA) operation, 1 MAC = 1 FLOP. In both cases, our MAC counts can be directly converted to FLOPs for comparison. The only additional operation that we introduce on top of a GRU is the thresholding function, which uses $k$ FLOPs (comparisons) at every timestep. \n\nFor training, BPTT uses $O(Tk+p)$ memory and $O(k^2+p)$ compute, whereas SnAp-1 uses $O(k+dp)$ memory and $O(d(k^2+p))$ compute, and SnAp-2 uses $O(k+d^2kp)$ memory and $O(d(k^2+d^2k^2p))$ compute, where $T$ is the length of the sequence, $k$ is the number of neurons, $p$ is the number of parameters, and $d$ is the density of parameters (sparsity is $1-d$).
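As an unofficial illustration of where the inference-time savings come from, the following NumPy sketch shows a thresholded GRU-style step; the gating and the names are ours and simplified relative to the actual EGRU dynamics in the paper (for instance, the state reset after an event is omitted):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def egru_like_step(x, y_prev, c_prev, W, U, b, threshold=0.5):
    # One simplified, thresholded GRU-style step. Units whose internal state
    # stays below `threshold` emit no event (output exactly zero), so the
    # recurrent matrix products at the next step only need the active columns.
    z = sigmoid(W[0] @ x + U[0] @ y_prev + b[0])             # update gate
    r = sigmoid(W[1] @ x + U[1] @ y_prev + b[1])             # reset gate
    c_tilde = np.tanh(W[2] @ x + U[2] @ (r * y_prev) + b[2])
    c = (1 - z) * c_prev + z * c_tilde                       # internal state
    events = (c > threshold).astype(c.dtype)                 # Heaviside
    y = events * c                                           # sparse output
    return y, c, events

k = 512
rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(3, k, k))
U = 0.1 * rng.normal(size=(3, k, k))
b = np.zeros((3, k))
y, c, ev = egru_like_step(rng.normal(size=k), np.zeros(k), np.zeros(k), W, U, b)
print("active fraction:", ev.mean())   # effective MACs scale with this
```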
\n\nIn the case of the continuous-time EGRU, all operations are further multiplied by the event sparsity $\alpha$, with everything else remaining the same. In this sense, parameter and activity sparsity are composable.", " The authors would like to thank the reviewer for the valuable comments and for recognizing that the paper is well written and the approach novel. \n\nAddressing the questions of the reviewer:\n\n> Statistical significance: Experiments should be conducted for K trials, and the average performance and the standard error should be reported.\n\n> Can the authors comment on the robustness and stability of the given model? Based on different seeds and settings, is the model stable compared to dense models trained using BPTT?\n\nWe would like to point the reviewer to the supplement, which already contained statistics over multiple independent random seeds and various different hyper-parameters in the submitted version of the paper. We have added additional studies that demonstrate that the model is robust against modifications and parameter variations (see Tables S3, S5, S6 and Figure S5).\n\nAn ablation study of the model features also *had already been included* in the submitted version of the paper (see supplement section D.1.1 and Table S4). In the updated version, we have further extended the ablation study (see Table S4).\n\nThe supplementary text also included all details of the experiments and training procedure.\n\nWe have also added more pointers to these additional experiments in the main text to make them more visible.\n\n> The comparison should be made with at least two models from [1-5]: the first [1] proposed sparse RTRL to train neural networks, which can potentially solve the scalability issues shown by BPTT; the other [2] focuses on a different recurrent architecture and learning approach to solve this issue.\n\nThe authors would like to thank the reviewer for pointing out these missing references. Although all these references concern parameter sparsity rather than activity sparsity, which is the topic of the present paper, the authors agree that a fair comparison with the state of the art should include these prior methods. We plan to include these papers in the related work of our final version. \n\nThe evaluations on Penn Treebank done in [4] and [5] are comparable to our study (Table 3), demonstrating similar performance (our best test perplexity is 63.5) and high levels of activity sparsity. The authors would like to stress that activity and parameter sparsity are orthogonal and could be combined in future work, which would likely lead to further resource savings beyond what each individual method could achieve alone (although we do not combine them in our paper to avoid enlarging the scope of the paper too much).\n\nIn comparison to our proposed model, paper [1] combines truncated forward-mode-differentiation-based learning with parameter sparsity, showing that they are able to get an advantage for small truncation lengths, which are approximations to RTRL. With larger truncation lengths, or when using exact gradient descent with RTRL without truncation, their method does not achieve any sparsity during learning. In contrast, the activity sparsity in our method is independent of the truncation length of BPTT, and only depends on the number of events in the network.\n\nPaper [2] proposes a modification to the LSTM to improve its memory capabilities, but doesn't directly deal with activity or parameter sparsity. 
Our work does not focus on memory capacity, and the EGRU has about the same memory capacity as the GRU on which it's based, as our experimental results suggest.\n\nPaper [3] proposes a form of parameter sparsity in LSTMs and is able to train the network with parameter sparsity throughout the whole training process. In our case, we have activity-sparsity throughout the training process.\n\nPaper [4] proposes a method for parameter sparsity in a general RNN and the ability to keep a fixed FLOP budget using pruning and regrowth in every iteration. The results reported for LSTMs in [4] can be closely compared to our results. The EGRU achieves a better model perplexity on the Penn Treebank (see Table 3 and Table S6) at high levels of activity sparsity. The level of sparsity is similar to the reduction in FLOPs that would be expected for EGRU, but the EGRU doesn't provide the ability to maintain a fixed FLOP budget (fluctuations around some mean value due to the number of events are expected).\n\nPaper [5] introduces a sparse-to-sparse training algorithm that is built around parameter sparsity. In their case, the best stacked LSTM approach achieves 73.5 perplexity on PTB at a sparsity of 62%, whereas the best EGRU model achieves 63.5 test perplexity with high activity sparsity.\n\nWe plan to add references and discussions about these related works to the final paper.\n\n", " In this paper, the authors have introduced an event-based continuous-time variant of the gated recurrent model and derived an event-based form of backprop to train the system. Extensive experiments on a wide variety of benchmarks show that the proposed model EGRU can achieve comparable performance with SOTA recurrent models and exhibits high levels of activity-sparsity during the training and inference phases. # Strengths\n* Approach is novel\n* Well-written paper\n* Experiments are good\n\n# Weaknesses\n* Ablation study missing\n* Experimental section is missing key details\n* Missing some key citations\n\n* The comparison should be made with at least two models from [1-5]: [1] proposed sparse RTRL to train neural networks, which can potentially solve the scalability issues of BPTT, while [2] focuses on another recurrent architecture and learning approach to address this issue. \n\n* For models only based on BPTT, prior work has shown that sparse versions of recurrent models have the potential to outperform dense versions. Hence it is important to show how efficient the current approach is compared to other sparse RNNs.\n\n* Hyper-parameter optimization: no details about hyper-parameter optimization are given; the authors should explain how these parameters were chosen. \n\n* Statistical significance: experiments should be conducted for K trials, and the average performance and standard error should be reported.\n* Can the authors comment on the robustness and stability of the given model? Based on different seeds and settings, is the model stable compared to dense models trained using BPTT?\n\n* The authors have mentioned that EGRU is computationally efficient; can the authors provide numbers in FLOPs for both training and inference compared with other sparse and dense RNNs?\n\n[1] https://arxiv.org/abs/2006.07232\n[2] https://arxiv.org/abs/1602.03032\n[3] https://arxiv.org/pdf/1901.09208.pdf\n[4] https://link.springer.com/article/10.1007/s00521-021-05727-y\n[5] https://proceedings.mlr.press/v139/liu21p.html\n Highlighted above", " This paper proposes EGRU, an event-based GRU for activity-sparse inference and learning. 
The proposed model is more efficient, as only a select set of units is active at any given time. **Originality**:\n\n* To my knowledge, the idea proposed in this article is novel and original. \n\n**Quality**:\n\n* The paper is technically sound and its claims are well supported by theoretical analysis and experimental results.\n* Related works are covered and discussed.\n* Experiments are conducted extensively on different types of tasks, and the results are discussed thoroughly and compared in terms of accuracy, #MAC, and sparsity. \n\n\n**Clarity**:\n\n* This paper is well written and organised.\n\n\n**Significance**:\n* The proposed EGRU, as the authors have suggested, can be more energy-efficient and scalable, as it reduces the required compute for both inference and learning. \n * In terms of activity sparsity during inference, how does it compare against pruning a GRU, such as the one mentioned in \"EXPLORING SPARSITY IN RECURRENT NEURAL NETWORKS\"? The authors have discussed the negative societal impacts and risks inherited from the GRU. ", " This paper proposes an event-based continuous-time variant of the GRU model, named EGRU, motivated by building scalable, energy-efficient deep recurrent models. The EGRU unit only outputs events when its internal state reaches a trainable threshold, which makes it naturally activity-sparse. Furthermore, the authors derive a corresponding BP algorithm for updating EGRU and extend it to a discrete-time version that is easier to implement in popular machine learning frameworks. As the experiments show, the EGRU model exhibits performance comparable to state-of-the-art recurrent network architectures on several tasks. This paper is well written, consistent, and produces good results. By introducing the event-generating mechanism, the EGRU model not only significantly reduces the multiply-accumulate operations but also preserves performance comparable to the more computationally expensive GRU-based model. \n\nStrengths:\n+ The event-generating mechanism is a novel module inserted into the GRU model, which is shown to effectively reduce the computation of the original model.\n+ The authors conduct three downstream applications to validate the effectiveness of the EGRU model, which are convincing and achieve performance comparable to previous methods.\n\nWeaknesses:\n- For the delay-copy task, the authors only show a simple example. How about increasing the number of EGRU units? Besides, did the event-generating mechanism improve the memory ability compared to the GRU model?\n- In gesture prediction, the authors downsample frames to 32x32, which may affect the performance. I wonder how the input size affects the prediction accuracy with the EGRU model. In general, higher-resolution images contain more detailed information, which could be helpful for classification. \n- For the language modeling experiments, the performance decreases as the number of EGRU units increases. Can the authors give some explanation or characterize the trend of performance with respect to the number of EGRU units?\n This paper presents a novel GRU architecture, which balances performance and computational complexity. \n- However, in the downstream applications with EGRU, some experimental setups are unclear, as listed under Weaknesses. I expect the authors to clarify these points. \n- Besides, the authors apply EGRU in some special cases. Since LSTMs/GRUs are widely applied in CV/NLP applications, can they be replaced by EGRU to reduce the computational complexity in general? 
\n- As the GRU is a simplification of the LSTM, can the event-generating mechanism transfer to the LSTM model?\n The authors have addressed the limitations and potential negative societal impact of their work in the discussion section." ]
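The event-generating mechanism debated in the reviews above can be illustrated with a short sketch. This is not the authors' exact EGRU formulation (their model is continuous-time and comes with a derived event-based backprop; a trainable version would also need a surrogate gradient through the hard threshold, which this sketch omits). It only shows the thresholded-output idea and how the active fraction drives the MAC savings discussed in the rebuttal:

```python
import torch
import torch.nn as nn

class EventGatedGRU(nn.Module):
    """Toy event-gated GRU: a unit contributes output only when its hidden
    state exceeds a per-unit trainable threshold; all other units emit zero."""

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.cell = nn.GRUCell(input_size, hidden_size)
        self.threshold = nn.Parameter(torch.full((hidden_size,), 0.5))

    def forward(self, x_seq):  # x_seq: (T, batch, input_size)
        h = x_seq.new_zeros(x_seq.size(1), self.cell.hidden_size)
        outputs, active = [], []
        for x_t in x_seq:
            h = self.cell(x_t, h)
            events = (h > self.threshold).float()  # hard gate; no surrogate here
            outputs.append(h * events)             # inactive units emit nothing
            active.append(events.mean())
        # Mean active fraction plays the role of the event-sparsity alpha in the
        # rebuttal: expected MACs ~ dense MACs x parameter density d x alpha.
        return torch.stack(outputs), torch.stack(active).mean()

seq = torch.randn(12, 4, 8)
out, alpha = EventGatedGRU(8, 16)(seq)
print(out.shape, float(alpha))  # torch.Size([12, 4, 16]) and the active fraction
```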
[ -1, -1, -1, -1, -1, -1, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 4, 2, 4 ]
[ "2WoExm0S3l", "nips_2022_vaxPmiHE3S", "4MS-onvnKnQ", "GB1jLG9W1WU", "4mcsrXG5v7P", "YdPqvrVoZkz", "nips_2022_vaxPmiHE3S", "nips_2022_vaxPmiHE3S", "nips_2022_vaxPmiHE3S" ]
nips_2022_mkEPog9HiV
Structure-Preserving 3D Garment Modeling with Neural Sewing Machines
3D garment modeling is a critical and challenging topic in the area of computer vision and graphics, with increasing attention focused on garment representation learning, garment reconstruction, and controllable garment manipulation. However, existing methods are constrained to modeling garments under specific categories or with simple topology, and they fail to learn reconstructable and manipulable representations. In this paper, we propose a novel Neural Sewing Machine (NSM), a learning-based framework for structure-preserving 3D garment modeling, which is capable of modeling and learning representations for garments with diverse shapes and topologies and is successfully applied to 3D garment reconstruction and controllable manipulation. To model generic garments, we first obtain a sewing pattern embedding via a unified sewing pattern encoding module, as the sewing pattern can accurately describe the intrinsic structure and the topology of the 3D garment. Then we use a 3D garment decoder to decode the sewing pattern embedding into a 3D garment using UV-position maps with masks. To preserve the intrinsic structure of the predicted 3D garment, we introduce an inner-panel structure-preserving loss, an inter-panel structure-preserving loss, and a surface-normal loss in the learning process of our framework. We evaluate NSM on a public 3D garment dataset with sewing patterns covering diverse garment shapes and categories. Extensive experiments demonstrate that the proposed NSM is capable of representing 3D garments under diverse garment shapes and topologies, realistically reconstructing 3D garments from 2D images with preserved structure, and accurately manipulating 3D garment categories, shapes, and topologies, outperforming state-of-the-art methods by a clear margin.
Accept
This paper was reviewed by four experts in the field. Based on the reviewers' feedback, the decision is to recommend the paper for acceptance to NeurIPS 2022. The reviewers did raise some valuable concerns that should be addressed in the final camera-ready version of the paper. For example, 1) the evaluation on real-world datasets can be incorporated, 2) more discussion can be added on the reconstruction of garments with non-canonical poses. The authors are encouraged to make the necessary changes to the best of their ability. We congratulate the authors on the acceptance of their paper!
train
[ "2jQovDItzOC", "ysq4Uf1ZsQC", "VWN7tqlNT7m", "q7_JWf_J18J", "jO4MgvZOeU", "LUKbgSFm0j", "9KPvE188mEt", "oELCji1O1n", "SC4ePiMzEMv", "h--w5bBkMuU", "s32zVx52JP7", "z2YxgGZpZT8", "Ku8DI5Frk6", "DmpNUG2NkR" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The authors have addressed my concerns thoroughly. After reading other reviews, I decided to change my score to 6. If accepted, please consider adding the details above into the appendix so that the paper is self-contained.", " The authors have tried to address most of my concerns and I appreciate the hard work that they have put forth in coming up with the rebuttal. I am satisfied with their response and have upgraded my score. I still feel that the paper in its current form falls short of being a solid contribution to the community owing to the limitations I and other reviewers have raised. Nevertheless, the authors propose a clear hypothesis and solution and provide sufficient validations to back up their claims. ", " \nWe thank reviewer WXEe for the thoughtful comments. We are encouraged that the reviewer found our approach valuable to the community. Please see our responses below.\n\n> Concern 1: The experiments are all done on a synthetic dataset, which might not be enough to fully prove its practicality when facing real-world data. \n\n\n\n**A1:** Thanks a lot! As advised, **we used the real-captured in-the-wild images for evaluation**. Given an image of a T-pose person, we first estimate the camera parameters using [1] and a 2D cloth semantic segmentation using [2], then we fit the trained NSM to the image to obtain the 3D garment. We set the input embedding for the NSM decoder as learnable variables and fixed the NSM decoder parameters. We optimized the projection of the predicted garment to match the cloth segmentation on the image. The results will come soon these days because we spent lots of time running the experiments posed by Reviewer HS7n in the past week. (We have updated the visualization results in Sec5 of our revised supplementary)\n\n\n\n> Concern 2: The sewing patterns/topologies of several basic panels are solely represented by the predicted UV-position maps. It might lead to two problems. a) For two corresponding edges, the predicted UV position is not guaranteed to be perfectly aligned, which would give broken geometry when stitching panels. b) For problem a, a solution would be to find correspondences and force them to align spatially. \n\n**A2:** Thanks for your concern on this issue! \nWe appreciate your interest in our implemental details! Actually, the sewing patterns/topologies of several basic panels are **NOT** solely represented by the predicted UV-position maps. Instead, they should be stitched together.\n\n\n We first clarify several concepts as follows.\n\n**Sewing pattern:** A sewing pattern consists of the stitch information for which two edges from different panels should be stitched together.\n\n**Raw output of NSM:** The raw output of our NSM consists of the UV-position map and the panel contour for each panel.\n\n\nWith these concepts, we can easily describe how the position maps compose the final 3D mesh.\n\n**Step 1: Construct the triangulated mesh for each panel.** The grid points on the UV-position maps are chosen as the inner vertices if they are inside the panel contour. Then we sample the points on the panel contour as the edge vertices. We uniformly sample a fixed number of vertices on each contour edge. Then we construct the triangulated mesh from these vertices on a 2D plane by an automatic triangulation algorithm. The 3D coordinate of each vertex is obtained by bilinear interpolation on the UV-position map. 
\n\n**Step 2: Construct the inter-panel triangulated mesh.**\nAs the stitching information is known, we first determine which two panel edges are stitched. In general, the panels are stitched uniformly along the edge, as we have uniformly sampled a fixed number of vertices on each edge. We also apply the automatic triangulation algorithm to obtain the inter-panel triangulated mesh. \nAs we have introduced the inter-panel structure-preserving loss, the deviation between two stitched edges is relatively small on most predicted 3D garments. Results without post-processing are shown in Figure 7 in the main paper, in Figure 3, and in the demo video in the supplementary material. \n\n**Step 3: Post-processing (optional).** Post-processing can also be introduced to further prevent artifacts, using Laplacian mesh editing or physics-based simulation in MAYA (see A3 to Reviewer 1Ro2).\n\n\n\n> Concern 3: It seems like NSM is only capable of reconstructing clothes in the canonical space. A simple synthetic dataset for this would be Cloth3D.\n\n**A3:** Thanks for this imaginative question! The Cloth3D dataset you mentioned does not provide sewing pattern annotations, which prevents us from training our model on it. The reason why we only reconstruct garments with the canonical pose is attributed to benchmark availability, i.e., existing sewing-pattern benchmarks only contain canonical-pose data. We will expand NSM to multi-pose bodies in our future work, depending on benchmark availability. One potential solution is to use \"virtual bones\" techniques (see [3]) for modeling loose garment deformation, which is beyond the scope of this paper.\n", " \n\nWe are very grateful to Reviewer RjUE for recognizing the originality of this article and its importance to the community. Thank you so much for your insightful comments! We try to address all of your concerns and clarify some technical details. Please see our responses below.\n\n> Concern 1: The method does not model the tightness of the cloth. \n\n**A1:** Thanks for this visionary insight! Unfortunately, we are unable to model the tightness of the cloth due to dataset unavailability. One possible solution is to generate data where each garment sample has multiple sizes, such as small, medium, and large, and the garment can be draped on different body sizes, such as slim, medium, and large. However, this is beyond the scope of this paper and will be discussed as a limitation in our revision. \n\nNote that our NSM is promising for modeling garment tightness because it represents garments with sewing patterns, allowing us to represent garments with diverse shapes and topologies. This differs from template-based methods, which use a template mesh with a fixed vertex number and cannot represent garments with diverse shapes \\& topologies to model tightness.\n\n\n\n> Concern 2: It is unclear how the position maps compose the final 3D mesh. \n\n**A2:** We appreciate your interest in our implementation details! 
We first clarify several concepts as follows.\n\n**Sewing pattern:** A sewing pattern consists of the stitching information specifying which two edges from different panels should be stitched together.\n\n**Raw output of NSM:** The raw output of our NSM consists of the UV-position map and the panel contour for each panel.\n\n\nWith these concepts, we can easily describe how the position maps compose the final 3D mesh.\n\n**Step 1: Construct the triangulated mesh for each panel.** The grid points on the UV-position maps are chosen as the inner vertices if they are inside the panel contour. Then we sample points on the panel contour as the edge vertices, uniformly sampling a fixed number of vertices on each contour edge. We then construct the triangulated mesh from these vertices on a 2D plane with an automatic triangulation algorithm. The 3D coordinate of each vertex is obtained by bilinear interpolation on the UV-position map. \n\n**Step 2: Construct the inter-panel triangulated mesh.**\nAs the stitching information is known, we first determine which two panel edges are stitched. In general, the panels are stitched uniformly along the edge, as we have uniformly sampled a fixed number of vertices on each edge. We also apply the automatic triangulation algorithm to obtain the inter-panel triangulated mesh. \nAs we have introduced the inter-panel structure-preserving loss, the deviation between two stitched edges is relatively small on most predicted 3D garments. Results without post-processing are shown in Figure 7 in the main paper and in the demo video in the supplementary material. \n\n**Step 3: Post-processing (optional).** Post-processing can also be introduced to further prevent artifacts, using Laplacian mesh editing or physics-based simulation in MAYA (see A3 to Reviewer 1Ro2).\n\n", " \n\nWe believe that Reviewer HS7n is an **excellent** reviewer who, while raising many concerns, also took considerable space to appreciate the strengths of our article. Thank you so much for your valuable suggestions! Please see our responses to your concerns below.\n\n\n> Concern 1: The authors should do a better job at describing the related work. They mention them all in a single line L112, L115, but that's pretty much what the reader gets. At the present moment, it is hard to properly contextualize the contribution of this paper with respect to existing literature.\n\n**A1:** Thank you for this valuable suggestion! We clarify L112 and L115 here and add more discussion \\& comparison to the related work in the revision. Specifically, [1,2] model 3D garments as displacements on the SMPL model, but they have limitations in modeling loose garments with large wrinkles, because garment folding may cause a one-to-many mapping between SMPL models and garment surfaces. Another line of work [3,4] registers garments with different shapes and topologies to a template mesh with a fixed vertex number and topology. These garments with a unified representation are then mapped to low-dimensional representations that are used for reconstructing 3D garments from images/videos. However, these methods have limitations in representing garments with diverse variations, because large garment shape variation is non-linear even on a T-pose human body and thus cannot be expressed by a template with a fixed number of vertices. \nAlso, the large wrinkles on the garment surface make registration unreliable for loose garments, which may cause incorrect garment topology after registration. 
In contrast to the above works, sewing patterns describe the intrinsic structure of garments and are thus capable of representing garments with diverse shapes. \n\n\n> Concern 2: The authors show a comparison with the general purpose and \"freeform\" baselines such as pixel2mesh and anchorUDF, which is good to know but by no means is a fair comparison, in my opinion. Such weak baselines tell little about the efficacy of the proposed method. \n\n\n\n**A2:** Thanks for this critical advice! As suggested, in this rebuttal, we compare our NSM with BCNet, a state-of-the-art template-based model, on the 3D garment dataset [5]. This implementation took up a lot of our rebuttal time. Specifically, we register the garments in each category to a template garment mesh using the open-source non-rigid registration method [6]. After registration, we use PCA to map the garment meshes to a subspace and obtain a low-dimensional representation for each garment. Then we train one network to predict the garment category and another to predict the low-dimensional representation from an image, from which a 3D garment is recovered with the inverse PCA transform. We evaluate the accuracy of the predicted garment geometry and topology with the evaluation metrics of Chamfer distance, P2S, and MGLE. The results in the table below show that our NSM far outperforms BCNet on the two metrics Chamfer and P2S and is on par with BCNet on MGLE. These comparisons demonstrate that our NSM outperforms template-based BCNet on reconstruction tasks with diverse shapes, and that some template-based methods may also have a certain structure-preserving ability, but at the cost of losing multi-shape reconstruction robustness.\n\n\n| Methods | Chamfer | P2S | MGLE |\n| ---- | ---- | ---- | ---- |\n| Pixel2Mesh[7] | 5.23 | 3.84 | 10.81 |\n| AnchorUDF[8] | 3.31 | 4.17 | 4.61 |\n| BCNet[3] | 4.69 | 4.33 | 3.42 |\n| NSM | 2.08 | 1.90 | 3.73 |\n\n\n> Concern 3: I am curious to see an ablation for the UV position maps as an intermediary representation. How well does it compare against those approaches that used an alternative image-based representation?\n\n**A3:** Many thanks! This ablation is already in Section 4.1. \nSpecifically, our NSM is the method that **uses the UV position maps as an intermediary representation**, because it first predicts the latent embedding and panel classes from images and then decodes the latent embedding to the UV position maps, from which the 3D garment is recovered. AnchorUDF is the method that **uses an alternative image-based representation**, because it predicts the 3D garment shape as an implicit function from the corresponding image-based representation without modeling the UV position maps. As shown, using the UV position maps as an intermediary representation significantly outperforms using an alternative image-based representation.\n\n\n> Concern 4: The paper would benefit if the authors spent some time making the figures more explanatory (e.g., Figure 6).\n\n**A4:** Thank you for your careful reading and advice! We made Fig 6 more explanatory and readable by adding more indicators and zoom-ins to identify each sub-image and the artifacts in the revision. 
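Returning to the mesh-construction procedure in A2 above: since each stitched edge carries the same fixed number of uniformly sampled vertices, Step 2 can be read as zipping two vertex rows into a triangle strip. The actual inter-panel triangulation routine is not specified in the thread, so the sketch below is only an assumption of that reading:

```python
import numpy as np

def stitch_edges(edge_a, edge_b):
    """Connect two stitched edges, given as equal-length vertex-index arrays
    ordered consistently along the seam, with a strip of triangles."""
    assert len(edge_a) == len(edge_b)
    faces = []
    for i in range(len(edge_a) - 1):
        faces.append([edge_a[i], edge_b[i], edge_b[i + 1]])
        faces.append([edge_a[i], edge_b[i + 1], edge_a[i + 1]])
    return np.asarray(faces)

# Example: seam between vertex rows 0..7 of one panel and 100..107 of another.
print(stitch_edges(np.arange(8), np.arange(100, 108)).shape)  # (14, 3)
```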
\n\n> Concern 6: The authors should have provided more analysis for the behavior of MGLE, such as the effect of the number of samples K. I suspect that for different values of sample K, the results will be very different. I hope to see results with a much larger value of K (on the order of a few 1000s at least).\n\n**A6:** Thanks for this insightful suggestion! As advised, we conduct sensitivity analysis to K in the MGLE metric. We use the task of 3D garment reconstruction from the sewing pattern for validation. The table below shows that the performance is not sensitive to K. Moreover, we can see that K greatly affects the evaluation time. For example, when K=1,000, the evaluation time is already more than 10 hours. Hence, we didn't provide the results of Ks greater than 1,000 due to the computational cost and unnecessity.\n\n\n\n\n| K | Time(h) | MGLE |\n| ---- | ---- | ---- |\n| 2 | 1.38 | 3.52 |\n| 10 | 1.8 | 3.53 |\n| 20 | 2.3 | 3.54 |\n| 40 | 3.6 | 3.53 |\n| 80 | 7.6 | 3.52 |\n| 1000 | 10.4 | 3.51 |\n\n\n> Concern 7: L59: \"Our NSM guarantees …\" -- I don't understand how this statement is true. It's not that the 2D UV map is an isometric parameterization of any kind. Maybe I am missing something?\n\n**A7:** Thanks for this insightful question! Generally, a 2D UV map is not an isometric parameterization of 3D shapes in general scenes (e.g., maps or globes). But Garments have a prior that is different from general scenes, i.e., garments are constructed by cutting the panels from flat fabric and then sewing them into 3D space. Hence, 2D UV maps and sewing patterns are closely isometric parameterizations of the 3D garments when there are limited stretches on the 3D garment. Such priors are reasonable as we usually include these priors for face and human body reconstruction.\n\n\n> Concern 8: I have absolutely no clue how using physically based simulators for interacting with the clothes is a “contemporary line of work” to the proposed method, as the two are completely different items. What's the context or argument here? Also, the authors claim that their method creates garments that can be manipulated but nowhere do they qualify or provide any evidence for this one.\n\n**A8:** We really appreciate your profound insights! We make comments about physics-based simulators for two reasons. **First**, we refer to physics-based simulators to show that our NSM is more computationally efficient than physics-based simulators. Precisely, physical-based simulators require about 20~30s for a sample, while our NSM takes about 0.6s for each sample. **Second**, we highlight physical-based simulators because they can generate more robust and realistic results than geometry-based methods and can be used to correct the artifacts introduced by geometry-based methods. Our NSM can also benefit from physical-based simulators (see A3 to Reviewer 1Ro2).\n\n", " \n\n> Concern 4: In Section 4.2, what is the input for the reconstruction of the sewing pattern?\n\n**A4:** Sorry for the confusion! In Section 4.2, the input is a sewing pattern, and the output is a predicted 3D garment mesh. Specifically, given a sewing pattern at the inference stage, we first use the PCA transform obtained at the training stage to map it into a latent embedding. Then it is fed into our NSM garment decoder to predict the UV-position maps, from which a 3D garment mesh is a read-out. 
We evaluate the reconstruction error between the predicted 3D garment mesh and the ground-truth 3D garment mesh.\n\n> Concern 5: Why does the inner-panel structure-preserving loss learn an isometric mapping between UV and 3D coordinates?\n\n**A5:** Thank you so much for your careful reading! The inner-panel structure-preserving loss constrains the distance between two nearby points on the UV map to be maintained after the transformation to 3D, up to a constant ratio, since the measure of length is not exactly the same between the UV maps and the 3D shape. This loss approximately learns an isometric mapping when two points are sampled very close to each other, since a 3D garment can be regarded as a 2D manifold embedded in 3D that \"locally\" resembles Euclidean space. \n\n> Concern 6: For the controllable garment editing, can we draw the panel shape by hand?\n\n**A6:** Many thanks! Garment editing can be **either automatic or manual**. If you prefer manual drawing, you can of course draw the panel shape by hand. Otherwise, garment editing can be realized by designing a GUI for simply dragging the positions of the vertices or changing the curvature between two vertices on the panels. Thereafter, our model will quickly return the edited 3D garment. \n\n> Concern 7: The limitation is not discussed. Please discuss more on the limitation. Please see the above 'Weakness' section for my comments on the proposed garment modeling.\n\n**A7:** Thanks! We will discuss more limitations (especially your concerns) in the revision.\n\n> Concern 8: Missing References.\n\n**A8:** Thank you so much for recommending these important related works to us! We have updated the missing references in our revision. \n\n\n[1] Kocabas et al. \"Vibe: Video inference for human body pose and shape estimation.\", 2020.\n\n[2] Gong et al. \"Graphonomy: Universal human parsing via graph transfer learning.\", 2019.\n\n[3] Pan et al. \"Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks.\", 2022", " \n\n> Concern 3: The fabric material is not modeled. This is not a necessary component but should be mentioned in the future work section.\n\n**A3:** Thanks for this insightful and futuristic advice! We will add more discussion about material modeling to the limitations and future work. \n\n\n> Concern 4: It is not clear how the basic panel groups are defined, and it is not shown that what has been included would be sufficient to support generic garments. \n\n**A4:** The basic panel groups are defined as the **shared primitives across the training set, with each primitive having a similar function and panel structure**. For example, the sleeve panels are shared between the T-shirt and the jacket and can be considered one basic panel group. Specifically, there exist 12 common garment categories in the dataset used, and we extracted ten groups, including one sleeve panel group, two top panel groups, one hood panel group, one pant panel group, one belt panel group, and four skirt panel groups. Ideally, the set of basic panel groups can be extended continually as novel garment categories are included. The pockets on T-shirts or the lace/ribbon decorations on dresses can also be considered new basic panel groups and appended to the training set. When the training set is updated, we normally only need to fine-tune the trained model on it, learning to attach the newly appended groups to garments.\n\n> Concern 5: It seems in Fig. 8 that garment editing for the panel class is manual. 
It doesn't actually help too much because the user will have to specify the new pattern shape themselves, which could be tedious.\n\n**A5:** Many thanks! Garment editing can be **either automatic or manual**. Specifically, garment editing can be realized by designing a GUI for simply dragging the positions of the vertices or changing the curvature between two vertices on the panels. Thereafter, our model will quickly return the edited 3D garment. \n\n", " \n\n> Concern 9: In the dataset, the authors say L245 -- \"We use the first 90\\% part of the data for each base type … and remaining as the test set.\" Where is the validation set? If the authors fine-tuned their model on the test set, then this basically throws all of the experimental evaluation into jeopardy.\n\n**A9:** Thanks! We don't use the test set for searching hyperparameters or fine-tuning our model. We train and select our model based on the recovered garments in the training set and the convergence of the training loss. This is standard practice in common deep learning. For example, on ImageNet (and CIFAR and MNIST), only training and validation sets (sometimes called test sets) are currently used, and no distinction is made between test and validation sets.\n\n> Concern 10: I also didn't understand the section on controllable garment editing. It looks interesting, but I could not parse through the procedure involved in getting that.\n\n**A10:** Thanks for your interest! Controllable garment editing aims at editing a 3D garment. We first train PointNet [9] to predict a latent embedding from the 3D garment points, from which the sewing pattern is recovered by the inverse PCA transform. Then we can edit the panel shapes/categories by changing the panel shapes or adding/deleting panel types. Finally, the edited sewing patterns are fed into our NSM garment decoder to obtain the edited 3D garments. The introduction of the sewing pattern makes our NSM capable of controllable 3D garment shape editing under geometry and topology variation. We added more explanations and demonstrations in the revision to make this part more readable. \n\n> Concern 11: I am surprised that the authors didn't talk about the obvious limitations of their work, i.e., both technical and in terms of its social impact.\n\n**A11:** Thanks! We will discuss more limitations (including your concerns) in the revision.\n\n> Concern 12: Due to the choice of their representation, I doubt that their method generalizes to other clothing styles. The authors neither experiment with the generalization nor provide any comments on it.\n\n**A12:** Many thanks for this creative question! Our NSM **can generalize** to other clothing styles due to two advantages. **First**, combining the basic panel groups may result in new garment categories not shown in the training set. **Second**, introducing the basic panel groups also helps our NSM generalize to novel garment categories. Specifically, a new garment category can be decomposed into basic panel groups, and if some basic panel groups are not included in the training set, we can append them to our training set and fine-tune the model to predict the novel category. For example, a 3D T-shirt with a pocket can be appended to the training set by modeling the basic panel group of the pocket panel. We will add this discussion in the revision.\n\n> Concern 13: The dataset used in this paper contains clothing styles that are not very inclusive, e.g., excluding South Asian clothes.\n\n**A13:** Thanks for this imaginative comment! 
It is still difficult for our method to model 3D garments with irregular sewing patterns, as the irregular panels have unknown vertex and edge numbers and are hard to capture with the PCA algorithm. How to parameterize irregular garments via a unified representation is a challenging and important problem; we will explore this in future work. \n\n[1] Ma et al. \"Learning to dress 3d people in generative clothing.\", 2020.\n\n[2] Bhatnagar et al. \"Multi-garment net: Learning to dress 3d people from images.\", 2019.\n\n[3] Jiang et al. \"Bcnet: Learning body and cloth shape from a single image.\", 2020.\n\n[4] Hong et al. \"Garment4D: Garment reconstruction from point cloud sequences.\", 2021.\n\n[5] Korosteleva et al. \"Generating Datasets of 3D Garments with Sewing Patterns.\", 2021.\n\n[6] Yao et al. \"Quasi-Newton solver for robust non-rigid registration.\", 2020.\n\n[7] Wang et al. \"Pixel2mesh: Generating 3d mesh models from single rgb images.\", 2018.\n\n[8] Zhao et al. \"Learning anchored unsigned distance functions with gradient direction alignment for single-view garment reconstruction.\", 2021.\n\n[9] Qi et al. \"Pointnet: Deep learning on point sets for 3d classification and segmentation.\", 2017.\n\n", " \n\nWe thank reviewer 1Ro2 for the thoughtful comments. We are encouraged that the reviewer found our approach valuable to the community. Please see our responses below.\n> Concern 1: Maybe a few more examples with complicated or straight shapes can be provided.\n\n**A1:** Many thanks for this valuable suggestion! As suggested, **we added more examples with complicated and straight shapes to Figure 7 in the revision**. For example, the jacket in the second row has a more complex structure than the jumpsuit in the third row. These examples demonstrate that our NSM can better preserve garment topologies and maintain detailed 3D garment shapes, especially for loose garments with large wrinkles.\n\n> Concern 2: It would be nice if real-captured or even in-the-wild images could be used for evaluation.\n\n**A2:** Thanks a lot! As advised, **we used real-captured in-the-wild images for evaluation**. Given an image of a T-pose person, we first estimate the camera parameters using [1] and a 2D cloth semantic segmentation using [2]; then we fit the trained NSM to the image to obtain the 3D garment. We set the input embedding for the NSM decoder as learnable variables and fixed the NSM decoder parameters. We optimized the projection of the predicted garment to match the cloth segmentation on the image. The results will be available shortly, as we spent most of the past week running the experiments requested by Reviewer HS7n. (We have updated the visualization results in Sec. 5 of our revised supplementary material.)\n\n\n> Concern 3: This work can only reconstruct the garments with the canonical pose ... and many local details (e.g., wrinkles) are missing ... the possible penetrations between the reconstructed 3D garments and the 3D human body are not considered ...\n\n**A3:** Many thanks! \n**First**, the reason why we only reconstruct garments with the canonical pose is attributed to benchmark availability, i.e., existing sewing-pattern benchmarks only contain canonical-pose data. We will expand NSM to multi-pose bodies in our future work, depending on benchmark availability. One potential solution is to use \"virtual bones\" techniques (see [3]) for modeling loose garment deformation, which is beyond the scope of this paper. 
**Second**, the reason why wrinkles are missing is attributed to the complexity of loose garment deformation, which is driven by more than the human pose or shape. One solution is to model the detailed wrinkles with a generative model (see [4]). **Third**, to address possible penetration between a garment and a human body (and avoid missing details), we introduce in this rebuttal a post-process that simulates the garment with the human body via MAYA. Note that this differs from pure physics-based simulation (PBS) in two ways. On the one hand, thanks to the good representations learned by our NSM, our post-process is about 30 times faster than pure PBS (i.e., 1s vs. 30s, see [5]). On the other hand, our NSM explicitly models the consistency between sewing patterns and the 3D garment surfaces to preserve garment topologies, enabling rich downstream tasks in computer vision and computer graphics. The qualitative results are shown in Figure 3 in our revised supplementary material.\n\n> Concern 4: The method limitations have not been properly discussed.\n\n**A4:** Thanks! We will discuss more limitations (especially your concerns) in the revision.\n\n[1] Kocabas et al. \"Vibe: Video inference for human body pose and shape estimation.\", 2020.\n\n[2] Gong et al. \"Graphonomy: Universal human parsing via graph transfer learning.\", 2019.\n\n[3] Pan et al. \"Predicting Loose-Fitting Garment Deformations Using Bone-Driven Motion Networks.\", 2022\n\n[4] Zhu et al. \"Detailed wrinkle generation of virtual garments from a single image.\", 2021\n\n[5] Korosteleva et al. \"Generating Datasets of 3D Garments with Sewing Patterns.\", 2021\n", " This work focuses on modeling 3D garments with a novel neural sewing machine (NSM). The NSM is a learning framework that can effectively encode garment representations, which could facilitate 3D garment reconstruction and manipulation. Firstly, it utilizes a unified sewing pattern encoding module to embed sewing patterns. Then, the embeddings are decoded into a 3D garment by using the UV-position maps with masks. A few loss terms are introduced to preserve the inner-panel structures of the 3D garments. Extensive experiments validate its superiority over previous SoTA methods. Strengths:\n1. Using sewing patterns for 3D garment modeling is a good idea and also a nice contribution, which supports both 3D garment reconstruction and controllable editing.\n2. Extensive experiments show that the proposed method outperforms previous SoTA methods quantitatively and qualitatively.\n\nWeakness:\n1. This manuscript claims that previous methods failed to learn reconstructable and manipulable representations with complicated topology. Maybe a few more examples with complicated or straight shapes can be provided. \n\n2. Also, all the experiments are conducted on synthetic data, and it would be nice if real-captured or even in-the-wild images could be used for evaluation.\n\n3. It seems like this work can only reconstruct the garments with the canonical pose, and many local details (e.g., wrinkles) are missing even if they can be observed in the input image. And the possible penetrations between the reconstructed 3D garments and the 3D human body are not considered during reconstruction and manipulation. Generally, I think it is a nice paper, and maybe the authors can provide more real-captured images as I mentioned in the ``Weakness''. The manuscript has a section to discuss the limitations and broader impact. 
But the method limitations have not been properly discussed.", " This paper looks into the problem of 3D garment modeling and reconstruction, which is a challenging problem within the wider area of 3D generative modeling and reconstruction. The problem is especially challenging because it is unclear what constitutes a good representation for 3D clothes that captures the wide variety of garments we see around us. This paper proposes to use sewing patterns as a representation of 3D garments and proposes a learning-based framework - Neural Sewing Machine - capable of generating garments of varied topology. The authors discuss challenges and propose solutions to both encoding and decoding using this representation. They additionally propose three kinds of losses: inner-panel structure-preserving loss, inter-panel structure-preserving loss, and surface normal loss. The authors claim that their method outperforms the prior state of the art. \n Strengths: \n\nIn my evaluation, the proposal of this paper is very good. Indeed, representation of 3D garments is a challenge, and this paper provides an interesting sewing-pattern-based representation. The authors also do a great job of highlighting various design choices involved in both encoding and decoding this representation and thus building their proposed Neural Sewing Machine. In particular, I believe the following are the set of strengths of this paper: \n\n1. Unified Sewing Pattern Encoding: I like this encoding scheme as it allows for representing clothes of diverse categories, shapes, and topologies. The fact that the authors chose PCA to come up with the encoding is again very interesting - something well suited for this problem setting, as there is only so much variety in clothing. \n\n2. 3D Garment Decoding: The authors provide a principled approach to decode the encoding by learning a 2D UV map and a mask map which together represent the various panels. \n\n3. Structure-Preserving Losses: The proposed losses are intuitive and crucial for getting good results. The ablation that the authors provided validates their importance. \n\n4. Mean Geodesic Length Error: I particularly liked this metric to evaluate the quality of topology, and the authors show that their method gives good results on this metric. \n\nBefore I mention the weaknesses of the paper, I want to emphasize that the authors have presented an interesting method to model and generate 3D garments, and I personally feel that the idea has a lot of potential. However, I think that the authors have left out several important experiments, and their analysis does not adequately back the claims they are making. Moreover, I don't find their choice of baselines very convincing. I also find the paper hard to read in several places, with figures missing captions or not being clear enough. \n\n1. I think the authors should do a better job at describing the related work. They mention them all in a single line L112, L115, but that's pretty much what the reader gets. At the present moment, it is hard to properly contextualize the contribution of this paper with respect to existing literature. \n\n2. The authors show a comparison with general-purpose and 'freeform' baselines such as pixel2mesh and anchorUDF, which is good to know but by no means a fair comparison, in my opinion. Such weak baselines tell little about the efficacy of the proposed method. 
The authors dedicate half a page of the related work to template-based models and garment structure modeling, which clearly shows that there are plenty of closely related recent papers (with open-source code) that specifically target the garment generation problem. However, the authors don't provide any comparisons for these. Therefore I am not convinced that this paper is really state of the art (not that it matters). \n\n3. I am also curious to see an ablation for the UV position maps as an intermediary representation. Particularly, how well does it compare against those approaches that used an alternative image-based representation? \n\n4. I think the paper would benefit if the authors spent some time making the figures more explanatory. For example, in Fig 6 there are no captions as to which one is the prediction and which is the ground truth. Moreover, I qualitatively cannot make sense of the artifacts manifested by each loss-term ablation. Perhaps an inset focusing on the artifact would help. \n\n5. Novelty on surface normal loss: I think the authors should not claim novelty on this one, as it is pretty much well known in the area of 3D reconstruction. \n\n6. While the authors introduce a new metric, Mean Geodesic Length Error (MGLE), to measure the quality of topology, I believe that the authors should have provided more analysis for the behavior of this metric, such as the effect of the number of samples K. This is crucial because this metric seems to be important to show the superiority of the proposed method, and I suspect that for different values of the sample K, the results will be very different. I hope to see results with a much larger value of K (on the order of a few 1000s at least). \n 1. L59: \"Our NSM guarantees …\" - I don't understand how this statement is true. It's not that the 2D UV map is an isometric parameterization of any kind. Maybe I am missing something?\n\n2. The authors make comments about physics-based simulators in several places and believe that their method is superior to this line of work. I have absolutely no clue how using physically based simulators for interacting with the clothes is a “contemporary line of work” to the proposed method, as the two are completely different items. What's the context or argument here? Also, the authors claim that their method creates garments which can be manipulated, but nowhere do they qualify or provide any evidence for this one. \n\n3. In the dataset, the authors say L245 - \"We use the first 90% part of the data for each base type … and remaining as the test set\". Where is the validation set? If the authors fine-tuned their model on the test set, then this basically throws all of the experimental evaluation into jeopardy. \n\n4. I also didn't understand the section on controllable garment editing. It looks interesting, but I could not parse through the procedure involved in getting that. \n\nI firmly believe that the idea has a lot of potential, but I am not convinced by the experimental evaluation provided here. Unless the authors can address the concerns (primarily with comparisons to relevant methods), I don't think there is any merit to accepting the paper. This is simply because the main hypothesis of this paper is that using sewing patterns as a representation is inherently superior to other kinds of representation. And in the absence of state-of-the-art results on relevant comparisons, I don't see how else the authors would validate this hypothesis. 
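For concreteness, point 6 above can be probed with a rough stand-in for MGLE. The metric's exact definition is not given in this thread, so the sketch below is only one plausible reading: it compares graph-geodesic lengths of K random vertex pairs on predicted and ground-truth meshes that are assumed connected and to share face connectivity, and its per-pair shortest-path cost also illustrates why very large K is expensive to evaluate.

```python
import numpy as np
import networkx as nx

def mgle_stand_in(verts_pred, verts_gt, faces, K=40, seed=0):
    """Rough MGLE stand-in (assumed definition): mean absolute difference of
    graph-geodesic lengths over K random vertex pairs on two meshes that
    share connectivity. Requires a connected mesh."""
    def edge_graph(verts):
        g = nx.Graph()
        for f in faces:
            for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
                g.add_edge(int(a), int(b),
                           weight=float(np.linalg.norm(verts[a] - verts[b])))
        return g
    gp, gg = edge_graph(verts_pred), edge_graph(verts_gt)
    rng = np.random.default_rng(seed)
    pairs = rng.integers(0, len(verts_pred), size=(K, 2))
    errs = [abs(nx.dijkstra_path_length(gp, int(s), int(t)) -
                nx.dijkstra_path_length(gg, int(s), int(t)))
            for s, t in pairs]
    return float(np.mean(errs))
```

One Dijkstra run per sampled pair makes the cost grow roughly linearly in K, which is consistent with the evaluation times the authors report in their rebuttal table.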
I am surprised that the authors didn't talk about the obvious limitations of their work, both technical and in terms of its social impact. \n\nDue to the choice of their representation, I doubt that their method generalizes to other clothing styles, and the authors neither experiment with the generalization nor provide any comments on it. This to me seems like a severe limitation of the proposed work. However, I don't think this is any reason to reject the paper or the idea, as this is not the main hypothesis/claim of the paper. \n\nThe dataset used in this paper contains clothing styles which are not very inclusive. In particular, there are several garments found in South Asia, East Asia, and Africa which would be hard to capture with the PCA basis used in the paper, which primarily contains popular garments worn in Western countries. I believe that the lack of inclusivity of garments found in other (and often underrepresented) regions of the world fundamentally limits this work and moreover prevents even a discussion about the needs of such underrepresented regions and cultures as it relates to automatic 3D garment representation and generation. ", " This paper introduces a network model to reconstruct 3D garments based on sewing pattern input. The sewing patterns first go through PCA to obtain coefficients that compose the embedding. Later, masks and UV-position maps are generated using the coefficients. The pixels from the masked maps are used to reconstruct the 3D mesh. The paper introduces new loss functions to regularize the model behavior using deformation and sewing-edge alignment rules, as well as normal consistency. Results show that the proposed method works better in a single-view garment reconstruction task and is capable of controllable editing and interpolation. Strengths:\n - The paper is the first one that I know of that addresses the importance of modeling garments in the context of sewing patterns, which is the key to developing physically correct as well as easy-to-use garment generation models.\n\nWeakness:\n - The method does not model tightness of the cloth. What it really does is merely a *garment template mesh* generation model for a given sewing pattern shape. While it is not a critical component, the authors should discuss it either in future work or in the limitation section.\n - It is still not clear how the position maps compose the final 3D mesh. Although there are several sentences in the caption of the figure in the appendix that mention it, they do not give sufficient detail. How is the mesh triangulated around the sewing pattern boundary, where the edges are defined as lines but the inner vertices are defined as grid points? What does the algorithm do when the 3D coordinates of the sewing edges from two patterns are different? Is there any post-process to smooth out the neighboring vertex locations affected by the difference? If not, would that generate unnatural artifacts? There are a lot of critical details on this topic, but the authors did not discuss them in detail.\n - The fabric material is not modeled. This is not a necessary component but should be mentioned in the future work section.\n - It is not clear how the basic panel groups are defined, and it is not shown that what has been included would be sufficient to support *generic* garments. There seems to be only 9 basic panel groups. How are they chosen? Which factors are considered? Which garments can be supported by the 9 groups and which cannot? 
For example, can the model support T-shirts with pockets or dresses with lace/ribbon decorations using the 9 groups? It is a critical component of the paper, but it is not discussed in depth.\n - It seems in Fig. 8 that the garment editing for the panel class is manual. It doesn't actually help too much because the user will have to specify the new pattern shape themselves, which could be tedious. See above. See above.", " This paper proposes the Neural Sewing Machine (NSM) for structure-preserving 3D garment modeling. Previous methods mostly model garments in a class-specific way, which is neither flexible nor easily extendable. In this work, each garment is modeled as several basic panels and their stitching relations. The basic panels are encoded by PCA from a large-scale dataset. The stitching relations as well as the 3D structure of each panel are encoded in the UV-position maps. For each image input, NSM predicts PCA coefficients for each basic panel that corresponds to the observed garment. Based on the predicted PCA coefficients, a CNN-based network is trained to predict the UV-position maps. Four training targets are introduced to facilitate the supervised training. The reconstruction results are the best compared with previous baselines. Utilizing the structure-preserving modeling, controllable garment editing and interpolation are also shown to demonstrate the versatility of this kind of garment modeling. ## Strengths\n1. Incorporating the sewing pattern into the garment modeling process and unifying the modeling of basic garment classes are important contributions towards practical real-world garment modeling.\n2. Both qualitative and quantitative results surpass previous SOTA methods.\n3. Ablation study proves the effectiveness of the proposed training targets.\n4. The application of controllable garment editing is interesting.\n\n## Weaknesses\n1. The experiments are all done on a synthetic dataset, which might not be enough to fully prove its practicality when facing real-world data. For example, given a real photo of a clothed human, can NSM still give correct reconstructions of the clothes?\n2. It seems like the sewing pattern/topology of several basic panels is solely represented by the predicted UV-position maps. It might lead to two problems. a) For two corresponding edges, the predicted UV-positions are not guaranteed to be perfectly aligned, which would give broken geometry when stitching panels. b) For problem a, a solution would be to find correspondences and force them to align spatially. However, finding such correspondences is not trivial under the current modeling. The edge points in the predicted UV-position map are not necessarily spatially ordinal.\n3. It seems like NSM is only capable of reconstructing clothes in the canonical space. Can NSM take photos of a posed clothed human as input? Can NSM output posed clothes as observed in the input image? A simple synthetic dataset for this would be Cloth3D.\n\n## Typo\nTable 3, headers, four L_{rec}.\n\n## Missing References\n[1] Xiang, D., Prada, F., Wu, C., & Hodgins, J. (2020, November). Monoclothcap: Towards temporally coherent clothing capture from monocular rgb video. In 2020 International Conference on 3D Vision (3DV) (pp. 322-332). IEEE.\n\n[2] Hong, F., Pan, L., Cai, Z., & Liu, Z. (2021). Garment4D: Garment reconstruction from point cloud sequences. Advances in Neural Information Processing Systems, 34, 27940-27951.\n\n[3] Bhatnagar, B. L., Sminchisescu, C., Theobalt, C., & Pons-Moll, G. (2020, August). 
Combining implicit function learning and parametric models for 3d human reconstruction. In European Conference on Computer Vision (pp. 311-329). Springer, Cham.\n\n[4] Alldieck, T., Magnor, M., Bhatnagar, B. L., Theobalt, C., & Pons-Moll, G. (2019). Learning to reconstruct people in clothing from a single RGB camera. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1175-1186). 1. Section 4.2, what is the input for the reconstruction from the sewing pattern?\n2. Why does the inner-panel structure-preserving loss learn an isometric mapping between UV and 3D coordinates?\n3. For the controllable garment editing, can we draw the panel shape by hand? The limitation is not discussed. Please discuss the limitations in more detail. Please see the above 'Weakness' section for my comments on the proposed garment modeling." ]
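Two of the rebuttals above (A2 to Reviewers WXEe and 1Ro2) describe fitting the trained NSM to an in-the-wild image by freezing the decoder and optimizing only the input embedding against a 2D cloth segmentation. A hedged sketch of that loop follows, where `decoder`, `render_silhouette`, and `camera` are hypothetical stand-ins for components not shown in the thread:

```python
import torch

def fit_latent_to_image(decoder, render_silhouette, camera, seg_mask,
                        latent_dim=64, steps=500, lr=1e-2):
    """Optimize only the latent embedding; decoder weights stay frozen."""
    for p in decoder.parameters():
        p.requires_grad_(False)
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        garment = decoder(z)                      # embedding -> 3D garment
        sil = render_silhouette(garment, camera)  # differentiable projection
        loss = torch.nn.functional.binary_cross_entropy(sil, seg_mask)
        loss.backward()                           # gradients reach only z
        opt.step()
    return z.detach()
```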
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 4 ]
[ "oELCji1O1n", "z2YxgGZpZT8", "DmpNUG2NkR", "Ku8DI5Frk6", "z2YxgGZpZT8", "z2YxgGZpZT8", "DmpNUG2NkR", "Ku8DI5Frk6", "z2YxgGZpZT8", "s32zVx52JP7", "nips_2022_mkEPog9HiV", "nips_2022_mkEPog9HiV", "nips_2022_mkEPog9HiV", "nips_2022_mkEPog9HiV" ]
nips_2022_sQiEJLPt1Qh
Improved Bounds on Neural Complexity for Representing Piecewise Linear Functions
A deep neural network using rectified linear units represents a continuous piecewise linear (CPWL) function and vice versa. Recent results in the literature estimated that the number of neurons needed to exactly represent any CPWL function grows exponentially with the number of pieces or exponentially in terms of the factorial of the number of distinct linear components. Moreover, such growth is amplified linearly with the input dimension. These existing results seem to indicate that the cost of representing a CPWL function is high. In this paper, we propose much tighter bounds and establish a polynomial-time algorithm to find a network satisfying these bounds for any given CPWL function. We prove that the number of hidden neurons required to exactly represent any CPWL function is at most a quadratic function of the number of pieces. In contrast to all previous results, this upper bound is invariant to the input dimension. Besides the number of pieces, we also study the number of distinct linear components in CPWL functions. When such a number is also given, we prove that the quadratic complexity turns into bilinear, which implies a lower neural complexity because the number of distinct linear components is never greater than the minimum number of pieces in a CPWL function. When the number of pieces is unknown, we prove that, in terms of the number of distinct linear components, the neural complexity of any CPWL function is at most polynomial growth for low-dimensional inputs and factorial growth in the worst-case scenario, which is significantly better than existing results in the literature.
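As background for the bounds above (and not the paper's Algorithm 1), the one-dimensional case admits a classical exact construction: a univariate CPWL function with breakpoints b_1 < ... < b_m equals one hidden ReLU layer of m neurons plus an affine skip term, so in 1-D the hidden-neuron count grows only linearly with the number of pieces. A minimal sketch of this well-known fact:

```python
import numpy as np

def cpwl_as_relu_net(breakpoints, slopes, f_at_first):
    """Exact 1-D representation: with slope s_0 left of b_1 and s_i between
    b_i and b_{i+1},
        f(x) = f(b_1) + s_0*(x - b_1) + sum_i (s_i - s_{i-1}) * relu(x - b_i).
    len(slopes) must equal len(breakpoints) + 1."""
    b = np.asarray(breakpoints, dtype=float)
    s = np.asarray(slopes, dtype=float)
    def f(x):
        x = np.asarray(x, dtype=float)
        relu = np.maximum(x[..., None] - b, 0.0)   # hidden ReLU layer
        return f_at_first + s[0] * (x - b[0]) + relu @ np.diff(s)
    return f

# Example: |x| has one breakpoint at 0 with slopes (-1, +1) and f(0) = 0.
abs_net = cpwl_as_relu_net([0.0], [-1.0, 1.0], 0.0)
print(abs_net(np.array([-2.0, 0.5])))  # [2.  0.5]
```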
Accept
Three reviewers agree that this work meets the bar for acceptance, rating it weak accept, weak accept, and accept. The work provides bounds for approximating continuous piecewise linear functions by ReLU networks and an algorithm. Reviewers praised the novelty and significance, and were positive about clarifications offered during the discussion period, particularly about the time complexity of the algorithm. Hence I am recommending accept. I encourage the authors to still work on the items of the discussion and the promised additions such as the open source implementation of their algorithm for the final version of the manuscript.
val
[ "FuTgJ7JeAVL", "EmqG-Eodao", "t_4bcZQ1Udg", "omm9eSnT0_", "hrTaPpmcJnf", "6AUFJIbQ_m2", "3bocoUo6va8", "TcqHa7IFev", "t3PzKX8OvJY", "ts616n8krl", "ZcatGefNO0x", "Ae1S7CQgPdW", "f_wDlbG3OC7", "4bMck8ZCeIT" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for letting us know! Again, we thank the reviewer for taking the time and effort to review the paper and provide thoughtful comments.", " If you see it fit, please consider giving the paper a higher rating to ensure acceptance. Thank you!", " The authors have addressed all my concerns and I also appreciate the effort the authors have put in clarifying the algorithm under conditions when the number of linear components are unknown. Thank you! ", " Thank you very much for replying to our rebuttal. We are glad to know that the reviewer is happy with our responses. We will add an open source implementation of Algorithm 1 and measure its run time on a computer under different conditions such as different numbers of pieces and input dimensions. These extra materials will be added to the supplementary material in the camera-ready version when possible. We highly appreciate the reviewer’s clarification and the concrete example provided. Finally, we would like to express our gratitude to the reviewer for recommending the acceptance of the paper.", " I would like to thank the authors for their clarifications and for the time complexity of the algorithm.\n\nWhen I asked whether Algorithm 1 can be used in practice, I was actually not so much looking for possible applications, but I was more interested on whether the algorithm can be implemented and run for standard networks on a reasonably fast computer. E.g., if you take some arbitrary 100-dimensional CPWL with 1 million pieces, how long would the algorithm take to run on a computer?\n(It may not be possible to provide an answer to this question until the rebuttal deadline and this is not a problem for me. Maybe this information and an open source implementation of the algorithm could be added in a camera ready version. But since the paper's main application are the improved bounds and not the algorithm itself, this is not absolutely necessary.)\n\nI am happy with the authors' replies and recommend the acceptance of the paper.\n", " We thank the reviewer for providing valuable comments that help us strengthen the paper. Given the time constraint of the discussion period, we would like to let the reviewer know that we would be happy to answer any further questions if any. Thank you very much again for taking the time and effort in reviewing the paper and checking the proofs.", " We thank the reviewer for providing insightful suggestions that help us make the paper technically stronger. Given the time constraint of the discussion period, we would like to let the reviewer know that we would be happy to answer any further questions if any. Thank you very much again for taking the time and effort in reviewing the paper.", " We thank the reviewer for providing valuable suggestions that help us make the paper technically stronger. Given the time constraint of the discussion period, we would like to let the reviewer know that we would be happy to answer any further questions if any. Thank you very much again for taking the time and effort in reviewing the paper.", " First, we would like to express our gratitude to the reviewer for checking the proofs. Thank you very much for reviewing the paper and recognizing the significant contribution of the work.\n\n### Below we respond to the reviewer’s suggestion:\n>It would be interesting to extend the results to more general network architectures.\n\n**High-level response:**\n\nWe are encouraged by the reviewer’s suggestion. 
Depending on the architecture and the activation function, one may be able to derive some bounds that are better than the quadratic complexity. However, for architectures that impose more constraints such as residual networks, it is still not clear how we can derive better bounds for them.\n\n**Details and insights:**\n\nGiven a network architecture, one would be interested in finding a bound for the number of hidden neurons required to represent any given CPWL function. To shed more light on this, we take residual networks as an illustrating example and point out the hope and difficulty to obtain a better bound for the number of hidden neurons. Because a different architecture may rely on a very different technique, we limit the discussion on residual networks (ResNets) here. The network architecture we consider is defined by the following statement.\n\nLet $l$ be an even positive integer. A function $g:\\mathbb{R}^{k_0}\\to\\mathbb{R}^{k_{l}}$ is an $l$-layer ResNet if there exist weights $\\mathbf{W}\\_i\\in\\mathbb{R}^{k\\_{i}\\times k\\_{i-1}}$ and $\\mathbf{b}\\_i\\in\\mathbb{R}^{k\\_i}$ for $i\\in[l]$ such that the input-output relationship of the network satisfies\n $\n g(\\mathbf{x})=h_l(\\mathbf{x})\n $\n where $h_1(\\mathbf{x})=\\mathbf{W}\\_1\\mathbf{x}+\\mathbf{b}\\_1$, $h\\_l(\\mathbf{x})=\\mathbf{W}\\_{l}h\\_{l-1}(\\mathbf{x})+\\mathbf{b}\\_{l}$, and\n \\begin{equation}\n h\\_i(\\mathbf{x})=\\mathbf{W}\\_{i}\\sigma\\_{k\\_{i-1}}\\left(h\\_{i-1}(\\mathbf{x})\\right)+\\mathbf{b}\\_{i}\n \\end{equation}\nif $i$ is even, and\n\\begin{equation}\nh\\_i(\\mathbf{x})=h\\_{i-2}(\\mathbf{x})+\\mathbf{W}\\_{i}\\sigma\\_{k_{i-1}}\\left(h\\_{i-1}(\\mathbf{x})\\right)+\\mathbf{b}\\_{i}\n\\end{equation}\nif $i$ is odd for every $i\\in[l]\\setminus \\\\{1,l\\\\}$.\n\n**Hope:**\nBefore finding a bound, we first recognize that finding a bound may be possible due to the argument that “Any ReLU ResNet represents a CPWL function.” One can prove this statement by using the results in the plain ReLU network and the fact that (a) the addition of two CPWL functions is still CPWL and (b) the composition of two CPWL functions is still CPWL.\n\n**Difficulty:**\nIt is still not clear how we can find a bound that is better than the quadratic complexity. Because the skip connection for every residual block in a ResNet maintains the residual dimension in every block, the number of output neurons in the last layer of every block is the same. Such a constraint makes it difficult to fully utilize the max-min representation of CPWL functions in Eq. (12).\n\n---\n\nGiven the positive comments below:\n>“The novelty of this paper is high.”\n\n>“I think this is a significant contribution to the machine learning community.\"\n\n>“This paper is clearly written and well organized.”\n\n\nWe are surprised by a rating of 6. Based on the comments from the other reviewers, we added new results including the complexity analysis and the relaxation of the assumption in Algorithm 1 by adding a new algorithm ( Algorithm 6). We now feel the contribution of the paper has been further enhanced. We sincerely hope the reviewer will reconsider the rating based on the responses to the issues raised and the new results provided.", " Thank you very much for reviewing the paper. 
We are glad to hear that the reviewer really enjoyed reading the paper.\n\n### Below we respond to the reviewer’s suggestions:\n>However, this algorithm requires one to know how many linear components are present in the function beforehand\n\n>A possible drawback is that Algorithm 1 assumes that the number of linear components in the function is known beforehand. Is there a way that this assumption could be relaxed?\n\nWe thank the reviewer for this important comment. We agree with the reviewer that this is a drawback, and we believe relaxing such an assumption greatly improves the generality of Algorithm 1. Reflecting on this comment, we worked on Algorithm 1 and found a way to relax such an assumption by using the result from Lemma 12(a), which guarantees that the interior of each of the pieces is nonempty. In light of this, we have added a new algorithm in our revised manuscript, Algorithm 6 in Appendix C of the supplementary material, to find all linear components for Algorithm 1. Algorithm 1 now does not need to know any linear components or be given the number of linear components beforehand. It only needs to be given the pieces of the CPWL function $p$ and be able to observe the output of $p$ when fed an input. We also added an explanation for Algorithm 6 in the main text (see Line 252 to Line 257 in the revised manuscript). Algorithm 6 is given below:\n\n---\n### **Algorithm 6**\n\n**Input**: An unknown CPWL function $p$ whose output can be observed by feeding it inputs from $\mathbb{R}^n$. A center $\mathbf{c}_i$ and radius $\epsilon_i > 0$ of any closed $\epsilon_i$-radius ball $B(\mathbf{c}_i, \epsilon_i)$ such that $B(\mathbf{c}_i, \epsilon_i)\subset\mathcal{X}_i$ for $i=1,2,\cdots,q$, where $\mathcal{X}_1,\mathcal{X}_2,\cdots,\mathcal{X}_q$ are all pieces of $p$.\n\n**Output**: All distinct linear components of $p$, denoted by $\mathcal{F}$.\n\n$\quad$$\ \ $1: $\mathcal{F}\gets\emptyset$\n\n$\quad$$\ \ $2: **For** $i=1,2,\cdots,q$ **do**\n\n$\quad$$\ \ $3: $\quad$$\mathbf{x}_0 \gets \mathbf{c}_i$\n\n$\quad$$\ \ $4: $\quad$$y_0 \gets p(\mathbf{x}_0)$\n\n$\quad$$\ \ $5: $\quad$$\begin{bmatrix}\mathbf{s}_1&\mathbf{s}_2&\cdots&\mathbf{s}_n\end{bmatrix} \gets \epsilon_i\mathbf{I}_n$\n\n$\quad$$\ \ $6: $\quad$$\mathbf{S} \gets \begin{bmatrix}\mathbf{s}_1&\mathbf{s}_2&\cdots&\mathbf{s}_n\end{bmatrix}$\n\n$\quad$$\ \ $7: $\quad$$\mathbf{z} \gets \begin{bmatrix}p(\mathbf{s}_1+\mathbf{x}_0)-y_0 & p(\mathbf{s}_2+\mathbf{x}_0)-y_0 & \cdots & p(\mathbf{s}_n+\mathbf{x}_0)-y_0\end{bmatrix}^\mathsf{T}$\n\n$\quad$$\ \ $8: $\quad$$\mathbf{a} \gets \mathbf{S}^{-\mathsf{T}}\mathbf{z}$\n\n$\quad$$\ \ $9: $\quad$$b \gets y_0-\mathbf{a}^{\mathsf{T}}\mathbf{x}_0$\n\n$\quad$10: $\quad$$f \gets \mathbf{x}\mapsto\mathbf{a}^{\mathsf{T}}\mathbf{x}+b$\n\n$\quad$11: $\quad$**if** $f\not\in\mathcal{F}$ **then**\n\n$\quad$12: $\quad$$\quad$$\mathcal{F} \gets \mathcal{F}\bigcup\{f\}$\n\n$\quad$13: $\quad$**end if**\n\n$\quad$14: **end for**\n\n---\n\n(For concreteness, a minimal Python rendering of this pseudocode is given at the end of this reply.)\n\n>I think giving a sketch of what Algorithms 2, 3 and 4 do in the main text, even if it is just a sketch, will improve the readability of the paper.\n\nYes, we agree with the reviewer that a simple sketch for each algorithm is beneficial for the reader. Algorithms 2, 3, and 4 were originally described right before the introduction of Theorem 2 (Lines 253 to 257). As this might not have been clear enough in our previous manuscript, we have moved their description to the beginning of the paragraph (Lines 247 to 251) in the revised manuscript.
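As noted above, here is a minimal Python rendering of Algorithm 6. It is an illustrative sketch with our own variable names, not the implementation itself; in particular, the exact membership test of Line 11 is replaced by a numerical tolerance.

```python
import numpy as np

def algorithm6(p, centers, radii):
    """Recover the distinct linear components of a CPWL function p by probing
    it inside a ball B(c_i, eps_i) contained in the interior of each piece."""
    F = []  # each component is stored as a pair (a, b) with f(x) = a @ x + b
    for c, eps in zip(centers, radii):
        x0 = np.asarray(c, dtype=float)
        y0 = p(x0)
        # Lines 5-8 with S = eps * I_n: since S is symmetric, S^{-T} z = z / eps.
        z = np.array([p(x0 + eps * e) - y0 for e in np.eye(x0.size)])
        a = z / eps
        b = y0 - a @ x0
        # Line 11 of the pseudocode, with a tolerance instead of exact equality.
        if not any(np.allclose(a, a2) and np.isclose(b, b2) for (a2, b2) in F):
            F.append((a, b))
    return F

# Example: p(x) = max(x_1, x_2) has two pieces and two linear components.
p = lambda x: max(x[0], x[1])
print(algorithm6(p, centers=[(1.0, 0.0), (0.0, 1.0)], radii=[0.4, 0.4]))
```

On this example the sketch returns the two components $f_1(\mathbf{x}) = x_1$ and $f_2(\mathbf{x}) = x_2$, matching the max-affine structure of $p$.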
---\nBased on the new algorithm (Algorithm 6) and the relaxation of the assumption in Algorithm 1, the contribution of the paper has now been further enhanced. We sincerely hope the reviewer finds that the paper has been technically strengthened.", " We thank the reviewer for reviewing the paper and providing valuable comments. We have revised the manuscript based on your comments.\n\n### Below we address specific concerns and suggestions:\n>It is unclear whether Algorithm 1 can be used in practice… Can Algorithm 1 be used in practice?\n\n**Clarification of Algorithm 1**\n\nUpon reflecting on your comment and Reviewer WJTP’s comment, we realized we can relax the information needed for implementing Algorithm 1. Algorithm 1 only needs to be given a closed $\epsilon$-ball in the interior of every piece of a CPWL function $p$ and be able to observe the output of $p$ when fed an input. Algorithm 1 does not explicitly need to know any linear components.\n\n**Practical applications**\n1. **DNN pruning:** Theorem 1 says that the number of hidden neurons of a ReLU MLP representing a CPWL function $p$ can be bounded by a quadratic function of the number of pieces of $p$. Such a result can have an impact on DNN pruning. Taking a ReLU MLP as an example, one may want to estimate how many neurons can possibly be pruned away for an application. If the number of linear regions or pieces required is given, known, or estimated (based on loose bounds of activation patterns) for the application, then one can leverage the quadratic bound to know at least how many neurons can be pruned, or even directly construct a network with lower complexity by Algorithm 1.\n2. **Neural engines**: A compute-intensive application computing CPWL functions can potentially leverage specialized hardware like neural engines to make the computation energy-efficient, given that neural engines are highly optimized for neural networks. Our Algorithm 1 provides an important procedure to convert a CPWL function to a ReLU MLP. Meanwhile, it also guarantees that the computational resources are bounded by a quadratic function, which is a substantial improvement over the previous exponential bounds.\n\n>...the run time and the computational complexity of the algorithm are unclear… What is the computational complexity of the algorithm?\n\nWe have worked out the time complexity of Algorithm 1, and below we show that Algorithm 1 is a polynomial time algorithm: $\text{poly}(n,q)$ for Theorem 1 and $\text{poly}(n,k,q)$ for Theorem 2. In fact, the time complexity is $\mathcal{O}\left(q^3\max(n,k)^3\log_2 q\right)$. Based on the new results, we have updated several statements in the manuscript to reflect the time complexity.\n\n### Time complexity of Algorithm 1\n| Line | Operation count | Explanation |\n| :--- | :--- | :--- |\n| 1 | $\mathcal{O}(nq\max(n^2,q))$ | Algorithm 6 (see Table 6). |\n|2 | $\mathcal{O}(q)$ | Repeat Line 3 to Line 9 $q$ times. 
|\n|3 | $\\mathcal{O}(1)$ | Initialize an empty placeholder.|\n|4 | $\\mathcal{O}(k)$ | Repeat Line 5 to Line 7 $k$ times.|\n|5 | $\\mathcal{O}(n)$ | Check $n+1$ affinely independent vectors.|\n|6 | $\\mathcal{O}(1)$ | Add an index.|\n|7 | - | -|\n|8 | - | -|\n|9 | $\\mathcal{O}\\left(k^2\\max(k\\log_2k,n)\\right)$ | Algorithm 2 (see Table 2).|\n|10 | - | -|\n|11 | $\\mathcal{O}\\left(q\\max(n,k)^2\\max(n,k,q)\\log_2k\\right)$ | Algorithm 3 (see Table 3).|\n|12 | $\\mathcal{O}\\left(q^3\\log_2q\\right)$ | Algorithm 2 (see Table 2).|\n|13 | $\\mathcal{O}\\left(q^3\\max(n,k)^3\\log_2q\\right)$ | Algorithm 4 (see Table 4).|\n---\n\n>Definition 4: ReLU network should be called ReLU-Multi Layer Perceptron. A general ReLU network is a network with only ReLU activations. A ResNet can also be a ReLU network. Hence, in the Broader impact chapter, it’s confusing that...\n\nWe thank the reviewer for pointing this out. We primarily follow the terminology used by [Arora et al., 2018] and [He et al., 2020] (see the references below). We have added a sentence before Definition 4 to emphasize that the ReLU network considered in the paper is a ReLU multi-layer perceptron. We have also improved the broader impact section by clarifying the architecture used in the paper. We believe now the reader can clearly distinguish our network architecture from other advanced architectures.\n\n[^1]: R. Arora, A. Basu, P. Mianjy, and A. Mukherjee. Understanding deep neural networks with rectified linear units. In International Conference on Learning Representations, 2018.\n\n[^2]: J. He, L. Li, J. Xu, and C. Zheng. ReLU deep neural networks and linear finite elements. Journal of Computational Mathematics, 38(3):502–527, 2020.\n\n>L11: Why is the invariance wrt the input dimension more important than the quadratic bounds?\n\nWe strongly agree with the observation that one is not more important than the other and that in reality, they are not comparable. We have improved this by removing “more importantly.”\n\n>L100: “q is always not less than k” -> $k\\leq q$\n\nWe have adopted the reviewer’s suggestion and improved the sentence. Thank you!\n\n---\n\nBased on the revisions, now the contribution of the paper has been further enhanced. We sincerely hope the reviewer will reconsider the rating based on the responses to the issues raised and the new results provided.", " The paper focuses on neural networks with ReLU activations. For these networks, the paper proposes bounds on the number of neurons needed to represent continuous piecewise linear functions (CPWL). These bounds are derived in terms of the number of linear pieces and the number of distinct linear components of the CPWL. In contrast to prior work, which proposed exponential bounds, the paper shows that quadratic bounds in the number of pieces are enough. ### Strengths\n* The paper significantly improves previous bounds, which enables much more accurate estimates of the number of neurons needed.\n\n### Weaknesses\n* It is unclear whether Algorithm 1 can be used in practice. Also, the run time and the computational complexity of the algorithm are unclear.\n**Clarity**\n* Definition 4: ReLU network should be called ReLU-Multi Layer Perceptron. A general ReLU network is a network with only ReLU activations. A ResNet can also be a ReLU network. 
Hence, in the Broader impact chapter, it’s confusing that the text says “We focus on ReLU networks in this paper, but it is possible to derive bounds with similar asymptotic growth rates for other neural network architectures such as … residual networks …, densely connected networks …\n **For the rebuttal**\n* Can Algorithm 1 be used in practice? What is the computational complexity of the algorithm?\n\n**Questions**\n* L11: Why is the invariance wrt the input dimension more important than the quadratic bounds?\n* L100: “q is always not less than k” -> $k \leq q$\n Theoretical work, no direct negative societal impact.", " In this paper the authors provide bounds for approximating a continuous piecewise linear function by a ReLU network. \n\nWhen compared to previous work, the bounds provided by the authors are tighter. Furthermore, the bounds provided by the authors only depend upon the total number of linear components in the CPWL (unlike previous work). This helps when the growth in the number of dimensions is faster than the number of linear components. The authors are able to do this because they make use of a different representation of a CPWL function (as mentioned in equation 13). \n\nThe authors also provide an algorithm for finding a ReLU network that computes a given CPWL function. However, this algorithm requires one to know how many linear components are present in the function beforehand, and uses the fact that the final function is a combination of smaller ReLU functions approximating the linear components. The paper is very well written and motivated and provides a rigorous proof of the improved bounds for approximating CPWL functions with a neural network. The authors have done an amazing job at motivating and explaining their results as well as placing their work within the related work (which is very well explained). I really enjoyed reading the paper. \n\nA possible drawback is that Algorithm 1 assumes that the number of linear components in the function is known beforehand. Is there a way that this assumption could be relaxed? I think giving a sketch of what Algorithms 2, 3 and 4 do in the main text, even if it is just a sketch, will improve the readability of the paper. Yes", " Summary. The authors study the representation of continuous piecewise linear (CPWL) functions using neural networks with ReLU activations. They showed that any CPWL function with q pieces or k linear components can be represented by a ReLU network with a certain number of layers l, maximum width w, and number of hidden neurons h. The bounds in this paper are much better than in previous papers. The algorithm for constructing such networks is also given. Originality: The related works are adequately cited. The novelty of this paper is high. The results in this paper on the sharp bound on the number of hidden neurons needed to represent CPWL functions will certainly help us better understand the representation power of neural networks from a theoretical point of view. I have checked the technical parts and find that the proofs are solid. I think this is a significant contribution to the machine learning community. \n\nQuality: This paper is technically sound.\n\nClarity: This paper is clearly written and well organized. I find it easy to follow.\n\nSignificance: I think the results in this paper are significant, as explained above. It would be interesting to extend the results to more general network architectures. 
Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 3, 4 ]
[ "t_4bcZQ1Udg", "hrTaPpmcJnf", "ts616n8krl", "hrTaPpmcJnf", "ZcatGefNO0x", "4bMck8ZCeIT", "f_wDlbG3OC7", "Ae1S7CQgPdW", "4bMck8ZCeIT", "f_wDlbG3OC7", "Ae1S7CQgPdW", "nips_2022_sQiEJLPt1Qh", "nips_2022_sQiEJLPt1Qh", "nips_2022_sQiEJLPt1Qh" ]
nips_2022_hTxYJAKY85
Learning Graph-embedded Key-event Back-tracing for Object Tracking in Event Clouds
Event data-based object tracking is attracting attention increasingly. Unfortunately, the unusual data structure caused by the unique sensing mechanism poses great challenges in designing downstream algorithms. To tackle such challenges, existing methods usually re-organize raw event data (or event clouds) with the event frame/image representation to adapt to mature RGB data-based tracking paradigms, which compromises the high temporal resolution and sparse characteristics. By contrast, we advocate developing new designs/techniques tailored to the special data structure to realize object tracking. To this end, we make the first attempt to construct a new end-to-end learning-based paradigm that directly consumes event clouds. Specifically, to process a non-uniformly distributed large-scale event cloud efficiently, we propose a simple yet effective density-insensitive downsampling strategy to sample a subset called key-events. Then, we employ a graph-based network to embed the irregular spatio-temporal information of key-events into a high-dimensional feature space, and the resulting embeddings are utilized to predict their target likelihoods via semantic-driven Siamese-matching. Besides, we also propose motion-aware target likelihood prediction, which learns the motion flow to back-trace the potential initial positions of key-events and measures them with the previous proposal. Finally, we obtain the bounding box by adaptively fusing the two intermediate ones separately regressed from the weighted embeddings of key-events by the two types of predicted target likelihoods. Extensive experiments on both synthetic and real event datasets demonstrate the superiority of the proposed framework over state-of-the-art methods in terms of both the tracking accuracy and speed. The code is publicly available at https://github.com/ZHU-Zhiyu/Event-tracking.
Accept
The paper receives overall positive reviews, and the rebuttal has resolved the reviewers' concerns. The paper proposes a new framework that directly takes raw event clouds as input for object tracking. Reviewers agree that this innovation is inspiring. The AC agrees and recommends accepting the paper.
train
[ "ZTzl3iiP2IM", "141dAUsLyD", "ff4rgapF5Wt", "8m5L1iAkaer", "bzpXEMFL8aB", "bLsXVZ7bRf", "MOQWgaR8kjG", "8IMpOjhuT7O", "Y94VUVsi-Bn", "tGVOPVxnuFP", "Ly-sNealNJ", "1iYFgRJ8ADE", "xNVqYUty0g", "pRkDff6m_1", "TI5VfXtChsZ", "puXdzygeIhK", "Rd863boTA9", "3-P75O81wmb", "I4ycaaUPqfN", "XZnEKfIhyIP", "YNkQdJ4Y9Is" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear **Reviewer kchd**\n\nThanks for your time and efforts in reviewing our submission **3911**, as well as the recognition of our work. We think we have answered your questions clearly and directly. We are also glad to answer them if you have any more questions. Thanks.\n\nThe authors", " Dear **Reviewer Vqd3**\n\nThanks for your time and efforts in reviewing our submission **3911**, as well as the recognition of our work. We think we have answered your questions clearly and directly. We are also glad to answer them if you have any more questions. Thanks.\n\nThe authors", " Dear **Reviewer bkyP**,\n \nThanks for your time and efforts in reviewing our submission **3911**, as well as the recognition of our work. We think we have answered your questions clearly and directly. We are also glad to answer them if you have any more questions. Thanks.\n\nThe authors\n", " The authors greatly appreciate Reviewer mj7a for the thoughtful and valuable comments. The authors also believe that event data has strong potential for object tracking! Thanks very much!", " Thank you for your time and effort in this response. The additional analysis is helpful, and detailed explanations are convincing. Please incorporate the discussions in the final version if the paper is accepted. Thank you so much for the extraordinary efforts and for sharing your insights on handling event clouds for object tracking. I have updated my rating accordingly.", " The authors are glad to know that all your concerns have been resolved. Besides, thanks for your additional suggestion. Here, we also provide more explanations, which will be discussed in the camera-ready version. \n\n**Q1.** *Sorting out all factors (e.g., network architectures) that could potentially contribute to the performance gain to identify the \"actual\" gain from the sampling strategy.*\n\n**Response**: Currently, we demonstrate the effectiveness and advantage of the proposed sampling strategy based on the ablative study results shown in 4) and 6) of Table 3 of the manuscript, where in 4), random sampling is used to replace the proposed sampling strategy with the remaining settings exactly the same as 6), i.e., the full model. Qualitatively, the proposed sampling method may sample more regularly distributed key-events than the random sampling, which indeed may facilitate the following swin-layer embedding. \n\nMore importantly, following your suggestion, we also designed a **plain model** to **exclude** the potential effects of other architectures such that the effectiveness of the sampling layer can be uniquely identified. Specifically, **the plain model only contains the basic and necessary modules**, i.e., 4 hieratical graph convolution layers to embed event clouds, a self-attention layer to globally fuse embeddings, a cross-attention layer to match the embeddings from template to search key-events (the semantic-driven Siamese-matching branch), and a target proposal module same as the proposed method. We trained and tested the plain model with the same experiment settings but different sampling methods. As listed in the following table, it can be seen that the plain model with the proposed sampling strategy achieves 3.1% and 4.4% improvements in terms of RSR and RPR, respectively, compared with the plain model with random sampling. 
", " Thanks for the authors’ time and efforts in responding to my questions. \n\n1. As written in the “Questions” section, the reviewer’s main suggestion is to explore whether the existing 3D point cloud trackers can be applied to this task:\n - The authors describe the difference between the two data types (3D point clouds acquired by laser scanners vs. event clouds acquired by event cameras).\n - The authors implement the two recent point cloud trackers (i.e., [1] and [2]) on the event camera data.\n - The experiments show that the proposed tracker outperforms existing point cloud trackers by a large margin. \nEmpirically, the numbers show the importance of considering data characteristics. While the results are promising, the reviewer would encourage the authors to incorporate further discussions on what we could do to sort out all factors (e.g., network architectures) that could potentially contribute to the performance gain such that we can identify the \"actual\" gain given by the proposed sampling strategy.\n \n2. What is the importance of the proposed *semantic-aware target likelihood prediction*? Thanks for the additional ablative study. The experiment indeed shows that semantic-aware target likelihood prediction plays an essential role.\n3. Thanks for the clarification. My concern is resolved.", " Dear Reviewers\n \nThanks for your time and efforts in reviewing our submission **3911** and the valuable comments. We hope our detailed responses have addressed all your concerns. We are glad to answer any further questions.\n\nWe are looking forward to your responses.\n\n\nThe authors", " **Q4**. *Table 2 shows that the proposed algorithm has a higher FPS on Evt-LaSOT. The authors point out that the main reason is that this method directly consumes raw event data without pre-processing. Is there any other reason besides this? Can you design some ablation experiments to analyze the processing speed?*\n\n**Response**: We confirm that the high efficiency of our method is mainly credited to the fact that our method directly consumes raw event data without pre-processing. Specifically, we utilized the code officially released with FE108 for pre-processing, i.e., converting raw event data into event frames. The average time for pre-processing one event-frame is about 33.8 ms. For our method, such a pre-processing step is not required, and its total running time is about 39.7 ms, which validates the efficiency of our method. \n\n**Q5**. *The video demo's main screen intuitively shows this method's effect, but what do the three sub-screens on the right refer to? Please explain the specific meaning of each sub-screen.*\n\n**Response**: Sorry for the confusion caused. 
For the meaning of each sub-screen, we have denoted it in the first 2 seconds of the video demo, i.e., the main screen shows the tracking results (i.e., bounding boxes) of different methods, and the three sub-screens on the right side show, from top to bottom, the intermediate results of the motion-aware target likelihood, the semantic-driven target likelihood, and the motion flow.\n\n**Q6**. *Limitations: The authors discuss the public privacy issues in the additional material and argue that event-based vision poses a lower threat to public privacy than RGB images. However, I want to know how secure and robust event-based vision is. If we deploy the event-based scheme on self-driving systems, will it be safer than the current RGB image-based plan?*\n\n**Response**: The authors agree that RGB data-based tracking is indeed an important component of industrial applications. However, event data-based methods have demonstrated their potential and advantages in special environments, e.g., strong exposure and low light, and may further improve the performance and robustness of the whole system together with RGB data-based tracking. The authors think that current event data-based tracking is still at a very early stage compared with RGB data-based tracking, and more efforts and studies should be devoted to it. Note that on July 24, 2022, a Tesla Model 3 on Autopilot killed a motorcyclist on southbound I-15 at about 1 a.m., an accident that may have been caused by the poor lighting conditions in the early morning. \n", " Thanks for your time and efforts in reviewing our submission, as well as the valuable comments. The authors appreciate your recognition of our work. In the following, we will address all of your concerns point-by-point in detail.\n\n**Q1**. *What is the correspondence between subfigures in the upper and lower lines of Figure 3? Can you further explain how to understand the green arrows in the second row? (Number of arrows, consistency of direction, the thickness of arrows, etc.)*\n\n**Response**: Sorry for the confusion caused. The upper row compares the tracking accuracy of different methods on four different event cloud samples. The bottom row shows a zoomed-in view of the dashed box in the corresponding sub-figure of the upper row. The green arrows start from the spatial locations of key-events and point in the direction of the motion flow. We also clarify that the thickness difference among the arrows does not carry any meaning and is only caused by the different zoom ratios of the local views. We will update this figure in the camera-ready version. \n\n**Q2**. *In Section 4.1, what is the sequence length of the datasets used in this experiment (mean frame, min frame, max frame)? Does the sequence length have a greater impact on the experimental results?*\n\n**Response**: In the following table, we list the details of the test dataset. We also categorize the performance of our method based on the sequence length in the tables below. Besides, the results of the second-best method, Trans-T, are also provided for comparison.\n\n| |Min length|Max length|Avg length\n|----|----|-----|-----\n|FE108|641|2400|1864\n|Evt-LaSOT|1260|4499|2372\n\n\n|FE108|Method|$RSR$|$OP_{0.50}$|$OP_{0.75}$|$RPR$\n|----|----|-----|-----|-----|-----\n|Seqs. \#Frames < 1000|Trans-T|70.4|91.7|40.0|100.0\n|Seqs. \#Frames < 1000|Ours|70.5|93.1|39.8|100.0\n|Seqs. \#Frames > 2000|Trans-T|49.4|55.9|21.2|78.3\n|Seqs. \#Frames > 2000|Ours|54.2|63.9|21.4|85.4\n|All Seqs.|Trans-T|52.4|62.2|21.0|87.0\n|All Seqs.|Ours|54.9|65.8|21.4|85.9\n\n
|Evt-LaSOT|Method|$RSR$|$OP_{0.50}$|$RPR_{0.075}$\n|----|----|-----|-----|-----\n|Seqs. \#Frames < 1500|Trans-T|41.0|27.2|52.2\n|Seqs. \#Frames < 1500|Ours|39.4|23.3|47.9\n|Seqs. \#Frames > 3000|Trans-T|27.2|11.0|18.1\n|Seqs. \#Frames > 3000|Ours|31.0|21.5|34.1\n|All Seqs.|Trans-T|30.3|17.9|30.1\n|All Seqs.|Ours|32.3|22.1|35.9\n\nFrom the above tables, it can be seen that our method generally performs better on relatively short sequences than on relatively long sequences, which is a common phenomenon in object tracking. On the relatively long sequences, our method consistently outperforms Trans-T by a large margin, demonstrating the advantage of our method. Besides, in Lines 312 – 313 of the manuscript, we indeed discussed that more efforts and studies should be devoted to improving the performance on extremely long sequences, which is more challenging.\n\n**Q3**. *As far as I know, LaSOT contains 1400 sequences, and the sequence length varies from 1K to 10K frames, so how does the Evt-LaSOT constructed in this article filter data from the original LaSOT? Are there specific standards? Do the filtered sequences have the same length as the original sequences?*\n\n**Response**: We randomly selected 8 sequences from the original LaSOT dataset, i.e., bird-15, bird-3, crab-6, crab-18, cat-20, cat-3, crocodile-3, bear-4. The statistics of these sequences are provided in the above table (**Response to Q2**). In the camera-ready version, we will clarify this issue.\n", " **Q7**. *The performance on Evt-LaSOT (RPR 30+) is very low, while the top RGB trackers (e.g., OSTrack) have achieved performance higher than 70+ at real-time speed. It implies that the experiment on Evt-LaSOT tends to fail. SiamBAN is mentioned but is not compared in the experiments.*\n\n**Response**: As mentioned in the **Response to Q3**, the simulated event data may be less realistic (lacking sufficient information compared with real event data). As stated in Lines 277-280 of the manuscript, this issue may compromise the performance of event tracking methods and result in unfair comparisons with RGB-trackers. Therefore, in the experiments conducted on the simulated event dataset Evt-LaSOT, we only compared with event tracking methods in order to demonstrate the \"relative superiority\" of the proposed method for tracking **non-rigid** objects (note that all target objects in the real event dataset FE108 are rigid). Besides, the SOTA RGB-trackers utilize larger-scale training datasets, e.g., TrackingNet, LaSOT, COCO, and GOT-10K, which is also unfair to event-based methods. For SiamBAN, according to the results shown in Table 2 of Ref. [2], it can be seen that SiamBAN does not perform well on event data. The comparison with the more powerful Transformer-tracker [3] is more convincing for demonstrating the advantage of the proposed method. Actually, when preparing the manuscript, we removed SiamBAN due to the space limitation but forgot to delete the mention of it in the final manuscript. Sorry for the mistake. We will revise it in the camera-ready version.\n\n**Q8**. *The result of FENet in Table 1 seems to be worse than the one reported in the original paper.*\n\n**Response**: Note that FENet is a multi-modality method, which consumes both RGB and event data to achieve tracking. 
As mentioned in **Line 249** of our manuscript, for a fair comparison, we compared the proposed method only with the event branch of FENet (its performance is listed in Table 4 B of Ref. [2]). Besides, we also want to note that we retrained the event-branch of FENet and obtained a higher RSR (52.3) than that reported in the original paper (i.e., 52.0). Thus, we confirm the results of FENet are correct.\n\n[1] Gehrig D, Gehrig M, Hidalgo-Carrió J, et al. Video to events: Recycling video datasets for event cameras, in Proc. IEEE/CVF CVPR, 2020, pp. 3586-3595.\n\n[2] Zhang J., Yang X., Fu Y., et al. Object tracking by jointly exploiting frame and event domain, in Proc. IEEE/CVF ICCV, 2021, pp. 13043-13052.\n\n[3] Chen X, Yan B, Zhu J, et al. Transformer tracking, in Proc. IEEE/CVF CVPR, 2021, pp. 8126-8135.\n", " **Q3**. *In the experiment on synthetic data, would low fps of videos make the simulation less realistic and bring more gap between simulation and real-world data?*\n\n**Response**: The reviewer is correct. There indeed exists a gap between the simulated and real events. Specifically, to simulate event data from the corresponding RGB video, the simulator [1] generally interpolates input videos with low FPS to ones with high FPS using estimated optical flow. Then, an event calculator is applied to the videos with high FPS to output simulated event data. As the interpolated videos lack the HDR characteristic and the employed optical flow may not be completely accurate, the simulated event data are less realistic. Despite that the simulated event data have these drawbacks, their overall characteristic and distribution are consistent with real event data, and thus such a simulation setting has been widely adopted in the field of event data processing and analysis due to the lack of real event data.\n\n**Q4**. *What about comparison with other event representations such as voxel grid and time surface?*\n\n**Response**: Note that our method is specially designed for directly processing raw event data (or event clouds), and it **cannot** consume event data with other representations, e.g., voxel grid or time surface, via post-processing. Besides, Ref. [2] has experimentally validated that the event-frame-based representation produces better performance than time surfaces, event count, time-surface with linear time decay frames. We refer the reviewer to Table 4 J, K, L, M,and N of Ref. [2] for more details. \n\n**Q5**. *It is not clear how to effectively supervise the motion-aware branch to get the correct motion? Is there any intermediate supervision or regularization?*\n\n**Response**: Sorry for the confusion caused. We refer the reviewer to Sec. 3 of Supplementary Material for the details of the regularization term $L_{CD}$ involved in Eq. (5) of the manuscript, which is used for supervising the flow estimation module. The event cloud within a very tiny time interval at the beginning of each event cloud sample is utilized as the potential locations of the back-traced key-events. We use Chamfer Distance to measure the discrepancy between two point sets. Moreover, considering that some key-events may be missing, or some noisy events may be sampled in such a potential target set, which may disturb the learning process, we also truncate the accidental errors using a threshold.\n\n**Q6**. *In Fig. 3, what is the relationship between (a) (b) (c) (d)? In addition, it seems the motion vectors are noisy. 
Do the motion vectors can really reflect the movement triggering the corresponding key events?*\n\n**Response**: Sorry for the confusion caused. The four subfigures (a), (b), (c), and (d) indicate the tracking results of different methods on four event cloud samples from different sequences. We will update the figure caption in the camera-ready version. Besides, we also agree with the reviewer that there are indeed some noises (or errors) in the predicted motion vectors. However, the majority of the motion vectors are consistent with the movement direction of the target object (see the video demo contained in Supplementary Material). Besides, it is worth noting that most of the noisy motion vectors correspond to the noisy events. As the ground-truth motion vectors are unavailable for this dataset, we cannot evaluate the prediction quality quantitatively. We also want to notice that the predicted motion vectors are by-products, and although our method does not predict completely accurate motion vectors, such a motion flow estimation module **indeed makes a contribution** (see the ablation studies in 3) and 6) of Table 3 of the manuscript). Last but not least, Fig. 3 is mainly used to demonstrate that the motion flow estimation module indeed works as what we claim, i.e., predicting motion vectors. In the future, more advanced techniques could be investigated to predict more accurate motion vectors to improve tracking performance. \n", " Thanks for your time and efforts in reviewing our submission, as well as the valuable comments. The authors appreciate your recognition of our work. In the following, we will address all of your concerns point-by-point in detail.\n\n**Q1**. *The description about key-event sampling is not very clear. For grids with empty events, do they also try to find spatially closest events as key-events? In addition, what if there are two events that both are closest events to the grid? Does this operation heavily compress temporal information within event streams?*\n\n**Response**: As stated in Line 160 of the manuscript, for each grid point, we choose its spatially closest event as the key-event, no matter whether the grid is empty or not. For the special case that multiple events have the same distance to a typical grid point, we will randomly choose one as the key-event. Note that we use the 2D grid to sample only a few key-events efficiently and then rearrange the spatio-temporal embedding of these key-events to form a regular 2D matrix according to the indices of the grid points, which is fed into the subsequent transformer. Moreover, the learned embeddings always stay in their original positions for the subsequent key-event back-tracing and target likelihood prediction. For the potential temporal information compression, it may occur in the downsampling procedure because the spatial and temporal information is **coupled** (an event is denoted by (x, y, t)). However, it is **not serious** for the proposed framework because the GNN-based spatio-temporal embedding utilizes the adjacent information around the key-events. More importantly, we quantitatively investigated such a compression effect via counting the ratio of perceived events (including the key events and the utilized neighbor events) by the GNN in each time interval. The following table shows that our method can perceive about 90% of events in each time interval, preserving most temporal information. 
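For concreteness, the grid-based sampling rule described above can be sketched in a few lines of NumPy. This is an illustrative simplification with placeholder names, not our released implementation; in particular, `argmin` breaks ties by taking the first closest event rather than a random one.

```python
import numpy as np

def sample_key_events(events, grid_h, grid_w, H, W):
    """events: (N, 3) array of (x, y, t). For every point of a grid_h x grid_w
    grid laid over the H x W spatial plane, pick the spatially closest event."""
    gx, gy = np.meshgrid((np.arange(grid_w) + 0.5) * W / grid_w,
                         (np.arange(grid_h) + 0.5) * H / grid_h)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)   # (grid_h * grid_w, 2)
    # Brute-force nearest search; a k-d tree would be preferable at scale.
    d = np.linalg.norm(events[None, :, :2] - grid[:, None, :], axis=-1)
    return np.argmin(d, axis=1)   # one key-event index per grid point
```

Because the grid is fixed regardless of how the events are distributed, the number of sampled key-events is insensitive to the local event density, which is exactly the property discussed above.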
Moreover, the ablation studies, i.e., 4) and 6) of Table 3 of the manuscript, also validate the effectiveness of the designed sampling algorithm. \n\n|Normalized time range|[0,0.1)|[0.1,0.2)|[0.2,0.3)|[0.3,0.4)|[0.4,0.5)|[0.5,0.6)|[0.6,0.7)|[0.7,0.8)|[0.8,0.9)|[0.9,1.0]\n|----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----\n|\#Events in event clouds|100301|102215|101460|98821|99518|101035|100909|100071|97715|97955\n|\#Events perceived by the GNN|88176|92543|93668|92450|93803|95285|94575|92803|89008|86761\n|Ratio of perceived events (%)|87.91|90.54|92.32|93.55|92.26|94.31|93.72|92.74|91.09|88.57\n\n**Q2**. *Too many details are moved to the supplementary material, which causes difficulty in understanding the whole method. For example, GNN-based spatio-temporal embedding, target likelihood prediction via motion trajectory back-tracking, and confidence-based object proposals.*\n\n**Response**: Sorry for the confusion caused. As claimed in the manuscript, we make **the first attempt** to construct a new end-to-end learning-based paradigm that directly consumes event clouds. Accordingly, we build the whole framework **from scratch**, rather than on the basis of an existing pipeline. Thus, there are many technical details different from previous works. Due to space limitations, we mainly put the motivation, the function, and a brief implementation of each module in the manuscript so that the proposed paradigm can be grasped from a more global perspective. Regarding the detailed implementations, we had to move them to the supplementary material. Besides, to help the reviewers understand the method better, we also submitted the **source code**. Finally, according to the NeurIPS policy, i.e., \"If your submission is accepted, you will be allowed an additional content page for the camera-ready version,\" we will add more details in the camera-ready version to make the paper better understood. \n", " **Q3**. *When comparing with SOTA, why do you choose some RGB-based methods to train yourself instead of comparing with other event-based methods?*\n\n**Response**: To the best of our knowledge, at the time of submitting our manuscript, FENet [1], compared in Table 3 of the manuscript, was the only officially published end-to-end learning-based event tracking work whose source code is publicly available. \n\n- We did not compare with model-based (non-learning) methods because most of them did not release their source code. Besides, for model-based methods, the hyperparameters have to be tuned for each specific sequence, making it hard to achieve the desired performance on large-scale datasets.\n\n- RGB data-based tracking has been studied for many years with great effort devoted to it, and many mature techniques have been proposed, producing impressive performance. Thus, we believe it is convincing to construct event tracking baselines by modifying strong RGB-trackers to adapt to event data. Note that such a manner was also adopted in Ref. [1]. As shown in Table 3 of the manuscript, the modified event-trackers derived from RGB-trackers, i.e., E-PrDiMP and E-TransT, which consume event-frames, achieve reasonable performance. \n\n- Finally, we also note that the leftmost four methods in Table 3, i.e., ATOM, DiMP, SiamFC++, and PrDiMP, take the corresponding RGB data as input; they are provided to illustrate the differences between RGB data-based tracking and event data-based tracking. \n\n**Q4**. 
*The offset is derived from the projection of the key-event in the last step, and the key-events in the last step do not share the same timestamp either. Moreover, the time difference between the same key-event before and after is not fixed. Is it unreasonable to have a relationship between the displacement and the timestamp?*\n\n**Response**: Sorry for the confusion caused. The key-event back-tracing module is designed to back-trace each key-event to the place where it should be in the initial 2-D x-y plane, whose time is the initial time of the corresponding event cloud sample. Thus, we can compare the back-traced key-events with the previous bounding box, as those key-events have been aligned to the same timestamp as the bounding box. Besides, the time difference can be easily calculated, i.e., the current key-event timestamp minus the initial time of this event cloud sample.\n\n[1] Zhang J, Yang X, Fu Y, et al. Object tracking by jointly exploiting frame and event domain, in Proc. IEEE/CVF ICCV, 2021, pp. 13043-13052.\n
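To make the computation concrete, the back-tracing step can be written in two lines of NumPy. This is an illustrative sketch only; in particular, reading the predicted motion flow as a constant per-event velocity over the sample is an assumption of the sketch.

```python
import numpy as np

def back_trace(key_events, flow, t0):
    """key_events: (M, 3) array of (x, y, t); flow: (M, 2) predicted motion
    vectors, read as displacement per unit time; t0: the initial time of the
    event cloud sample."""
    dt = key_events[:, 2] - t0                     # time elapsed since t0
    return key_events[:, :2] - flow * dt[:, None]  # potential initial (x, y)
```

The returned positions share the timestamp of the previous bounding box, so they can be compared with it directly.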
But we believe the advantage of using raw event data could be validated to some extent, based on the following facts: the methods under comparison, including FENet [1], E-PrDiMP and E-TransT that handle **event-frame**, are built upon RGB data-based pipelines, which contain many **advanced and mature** techniques, while our framework is built **from scratch** and thus relatively plain. In this situation, our method still achieves better performance than the compared methods (see Table 1 of the manuscript), demonstrating the great potential of directly processing raw event data, which retains the unique characteristics of event data to the greatest extent. Besides, we conducted comprehensive ablative studies in Table 3 of the manuscript, which convincingly validate the effectiveness of each of the specially designed modules.\n", " **Q3**. *STOA comparison: In Table 1, the proposed method demonstrates inferior performance compared to E-PrDiMP on the HDR setting …. event-based generally surpass RGB-based methods under LL and HDR scenarios.*\n\n**Response**: Note that E-PrDiMP refers to the modified PrDiMP (an RGB-based tracker) to adapt to event data in the form of the frame representation. In the HDR (high dynamic range) scenario of the FE108 dataset, the objects have relatively slow movement, making that the edges of the objects in the event-frame are relatively clear. Thus, the event frame-based method E-PrDiMP may grasp the semantic meaning of objects better, leading to its higher tracking performance. Meanwhile, the higher tracking performance of E-PrDiMP than PrDiMP in the LL (low-light) and HDR scenarios also supports our statement that “event-based generally surpass RGB-based methods under LL and HDR scenarios”. Such an advantage is credited to the high dynamic range characteristic of the event camera, as mentioned in Lines 30 – 32 of our manuscript, i.e., it could capture pixel intensity variation under very poor light conditions, while traditional RGB cameras may even have no response under these conditions. We also refer the reviewer to the Fig.7 of Ref. [3] for the visual comparison between event and RGB data. We will add the corresponding discussion in the camera-ready version.\n\n[1] Qi H., Feng C., Cao Z., et al. P2b: Point-to-box network for 3d object tracking in point clouds, in Proc. IEEE/CVF CVPR, 2020, pp. 6329-6338.\n\n[2] Zhou C., Luo Z., Luo Y., et al. PTTR: Relational 3D Point Cloud Object Tracking with Transformer, in Proc. IEEE/CVF CVPR, 2022, pp. 8531-8540.\n\n[3] Zhang J., Yang X., Fu Y., et al. Object tracking by jointly exploiting frame and event domain, in Proc. IEEE/CVF ICCV, 2021, pp. 13043-13052.\n", " Thanks for your time and efforts in reviewing our submission, as well as the valuable comments. The authors appreciate your recognition of our work. In the following, we will address all of your concerns point-by-point in detail.\n\n**Q1**. *3D point cloud tracking: The proposed framework shares similarities with existing 3D point cloud tracking literature …. comparing the proposed framework with a state-of-the-art 3D point cloud tracker, which aims to show the importance of considering data characteristics.*\n\n**Response**: We agree with the reviewer that the raw event data (or event clouds) could be thought of as 3D point cloud (PC) data. However, they are still different due to the fact that these two kinds of data have different physical meanings. 
Specifically, 3D point clouds acquired by LiDAR sensors record the geometric information of 3D entities; they are generally collected by measuring how long the emitted pulses of infrared light take to come back after hitting nearby objects/scenes. Only the part of the object/scene visible to the LiDAR sensor will be perceived (one emitted pulse will generate at most one 3D point). Thus, the captured 3D points are distributed on the **surface** of the 3D entity. However, event data are acquired by measuring the pixel intensity variations in a short time period, and for a typical pixel, there may be multiple events (i.e., events with the same x and y coordinates but different t coordinates). Thus, the events are distributed **within a 3D cube**. Such a data distribution difference may make it **inappropriate** to apply PC-trackers to event data. Considering that the final object proposal for event data is in the 2D spatial domain, which is different from that of 3D point cloud data, we design a key-event sampling algorithm to enlarge the spatial receptive field, as mentioned in **Lines 137 – 139** of our manuscript, which is also experimentally validated in 4) and 6) of Table 3 of the manuscript.\n\nBesides, the high temporal resolution of event data naturally retains the object motion information. The proposed event-tracker is able to utilize such inherent motion information to back-trace the potential initial positions of key-events and learns a motion-aware target likelihood for boosting the classic semantic-driven Siamese-matching, which is also different from current PC-trackers. Moreover, to avoid accumulated errors from such a motion-aware target likelihood estimation module, we also design a robust training strategy by injecting noise into the fed bounding box, as mentioned in **Lines 137 – 139** of our manuscript (experimentally validated in 3) and 2) of Table 3 of the manuscript).\n\nFollowing your suggestion, we also conducted experiments to compare the proposed method with SOTA PC-trackers on the FE108 dataset. Specifically, we trained the PC-trackers with the same experiment settings as ours, and we also tried our best to adapt those methods to event data for optimal performance. As shown in the following table, the proposed method exceeds the PC-trackers [1, 2] by a large margin, i.e., 7.4% and 15.5% improvements in terms of the success rate and precision, respectively, which demonstrates the need for designing event-specialized tracking algorithms. We will add some discussions about the relation between event- and PC-trackers in the camera-ready version.\n\n|Methods|$RSR$|$OP_{0.50}$|$OP_{0.75}$|$RPR$\n|----|----|-----|-----|-----\n|P2B CVPR’20 [1]|36.1|35.4|4.7|40.5\n|PTTR CVPR’22 [2]|47.5|49.6|11.5|70.2\n|Ours|54.9|65.8|21.4|85.9\n\n**Q2**. *Ablative studies: The reviewer found the contribution of the two target likelihood predictors unclear. Specifically, based on the implementation of the proposed architecture, the authors could have removed one of the predictions and tested the corresponding model. However, in Table 3, the performance of removing semantic-aware target likelihood prediction is missing.*\n\n**Response**: In the ablative study (Table 3 of the manuscript), we always reserve the semantic-aware target likelihood module under different cases because we think it is a basic and necessary module in the framework to achieve object tracking. 
Following your suggestion, we also conducted such an ablation study on the FE108 dataset, i.e., removing this module with all the other modules unchanged. As listed in the following table, it can be seen that the tracking performance decreases significantly.\n\n| | $RSR$ | $OP_{0.50}$| $OP_{0.75}$| $RPR$\n|----|----|-----|-----|-----\n|W/O semantic-aware target likelihood|45.2 | 43.5\t | 10.2 | 65.3\n|Full model \t\t\t |54.9 | 65.8 | 21.4 | 85.9\n\n\n", " The paper tackles the problem of event data-based object tracking. Instead of using event frame/image representation as in the existing methods, the authors chose to process raw event clouds for event data-based object tracking. The proposed framework includes a downsampling strategy to extract key events from a raw event cloud, a graph-based network to embed spatio-temporal information of key-events, semantic and Motion-aware Target Likelihood Prediction, and a confidence-based object proposal. The authors conduct experiments on FE108 [49] and a new synthetic dataset Event LaSOT for tracking the evaluation of non-rigid objects. All the baselines and the proposed method are trained on FE108 and evaluated on the testing set of FE108 and Evt-LaSOT. The experiments prove the possibility of consuming raw event clouds for object tracking and achieving state-of-the-art performance on challenging datasets. Strengths:\n1. The authors focus on event-based object tracking, and they make the first attempt to construct a new end-to-end learning-based paradigm that directly consumes event clouds. The reviewer found that the direction will interest those working on event-based cameras. Moreover, those who work on point cloud modeling might find a new problem to test the applicability and generalization of their methods.\n2. The work contributes a network architecture that can consume a raw event cloud and demonstrates that the network can outperform other event cloud representations for object tracking on two challenging datasets.\n3. Convincing tracking results are demonstrated in the supplementary material.\n4. The implementation of this framework has been released.\n\nWeaknesses:\n1. 3D point cloud tracking: The proposed framework shares similarities with existing 3D point cloud tracking literature, e.g., Qi et al., 2020 and Zhou et al., 2022. It will be essential to summarize the difference between the literature pools. Moreover, the authors could analyze how the proposed architecture harvests characteristics of the event cloud (e.g., inherent motion trajectories) where methods built for 3D point cloud tracking do not consider these aspects. Moreover, the experimental section could be further improved by comparing the proposed framework with a state-of-the-art 3D point cloud tracker, which aims to show the importance of considering data characteristics.\n- Qi et al., P2B: Point-to-Box Network for 3D Object Tracking in Point Clouds, CVPR 2020\n- Zhou et al., PTTR: Relational 3D Point Cloud Object Tracking with Transformer, CVPR 2022\n\n2. Ablative studies: The reviewer found the contribution of the two target likelihood predictors unclear. Specifically, based on the implementation of the proposed architecture, the authors could have removed one of the predictions and tested the corresponding model. However, in Table 3, the performance of removing semantic-aware target likelihood prediction is missing.\n\n3. SOTA comparison: In Table 1, the proposed method demonstrates inferior performance compared to E-PrDiMP on the HDR setting. The discussion in Sec. 
4.2 does not explicitly discuss it. It would be essential for the authors to provide the corresponding insights as they mention that event-based methods generally surpass RGB-based methods under LL and HDR scenarios. The current recommendation is Borderline accept (5). Overall, the authors indeed prove the claim that they design an architecture that consumes raw event clouds and achieves promising object tracking performance on two challenging datasets compared with SOTA. The paper could be improved by discussing the difference between the proposed method and existing 3D point cloud tracking literature. Moreover, it would be interesting to demonstrate that existing 3D point cloud trackers could not outperform the proposed method as it considers the characteristics of the event cloud. Yes, the authors discuss the limitations of the work.", " This paper makes the first attempt to construct a new end-to-end learning-based paradigm that directly consumes event clouds. Considering the high temporal resolution and sparse characteristics of event clouds, it proposes a key-event and graph-based network to extract irregular spatio-temporal information from raw data. Then this paper proposes a dual-path architecture to predict the possibility of a key-event of the search event cloud. The dual-path architecture captures the similarity of each search key-event to the template key-events and utilizes the information of motion trajectories, respectively. This paper demonstrates the superiority of the proposed framework over state-of-the-art methods in terms of both tracking accuracy and speed. (1) The paper has a clear structure, and it is easy to understand the motivation and innovation.\n(2) It proposes a key-event and graph-based network to extract irregular spatio-temporal information from raw data, making better use of the characteristics of event data.\n(3) It develops a new end-to-end learning-based paradigm that directly consumes raw event data, in which new modules consider the unique features of event clouds.\n\nWeaknesses:\n(1) The specific technical details are not clear enough.\n(2) This paper mentioned the high temporal resolution and sparse characteristics of event data. The biggest motivation of this paper is to design the data feature extraction module and algorithm model according to the unique attributes of the data. But I didn't see a targeted design for the unique attributes of the event data.\n (1) Motivation mentioned the high temporal resolution and sparse characteristics of event data. Where is the specific design for these two characteristics in the data processing and model design?\n(2) The ablation experiment did not discuss how much the use of raw data contributed to the final results. Does removing the uniquely designed data processing part of your method make a big difference to the final result?\n(3) When comparing with SOTA, why do you choose some RGB-based methods to train yourself instead of comparing with other event-based methods?\n(4) \"Considering that trajectories are usually continuous, the former event points should have relatively smaller moving offsets\". The offset is derived from the projection of the key-event in the last step, and the key-events in the last step do not share the same timestamp either. And the time difference between the same key-event before and after is not fixed. Is it unreasonable to have a relationship between the displacement and the timestamp?\n The motivation is very convincing, but targeted design and description with respect to motivation are needed. 
The comparison experiment should also focus on the points emphasized in the motivation.", " Instead of converting events into 2D event frames/images, this work first samples a subset of key events and uses a GNN to transform them into high-dimensional feature embeddings. The resulting feature embeddings are used to compute the target likelihood in a Siamese fashion. In addition, a motion-aware target likelihood is also computed to strengthen the matching process. Experiments on both synthetic and real event datasets demonstrate its effectiveness. 1. A novel event processing method which first samples a large amount of events into a subset of key events and then uses a GNN to transform them into feature embeddings. The embedding representations could preserve more task-specific information than others.\n2. Different from previous event-based trackers, the proposed method explores how to utilize the motion information contained within the event cloud, which is very interesting. 1. The description of key-event sampling is not very clear. For grids with empty events, do they also try to find spatially closest events as key-events?\nIn addition, what if there are two events that both are closest events to the grid? \nDoes this operation heavily compress temporal information within event streams?\n2. Too many details are moved to the supplementary material, which causes difficulty in understanding the whole method. For example, GNN-based spatio-temporal embedding, target likelihood prediction via motion trajectory back-tracking, and confidence-based object proposals.\n3. In the experiment on synthetic data, would the low fps of videos make the simulation less realistic and bring a larger gap between simulation and real-world data?\n4. What about comparison with other event representations such as voxel grid and time surface?\n5. It is not clear how to effectively supervise the motion-aware branch to get the correct motion. Is there any intermediate supervision or regularization?\n6. In Fig. 3, what is the relationship between (a) (b) (c) (d)? In addition, it seems the motion vectors are noisy. Can the motion vectors really reflect the movement triggering the corresponding key events?\n7. The performance on Evt-LaSOT (PRP 30+) is very low, while the top RGB trackers (e.g. OSTrack) have achieved performances higher than 70+ with real-time speed. It implies the experiment on Evt-LaSOT tends to fail. SiamBAN is mentioned but is not compared in the experiments.\n8. The result of FENet in Table 1 seems to be worse than the one reported in the original paper. Yes, authors have discussed the limitation of this work. 
Experiments illustrate that the proposed framework achieves competitive performance and efficiency compared to SOTA methods. The article is well-structured and easy to read. However, due to space limitations, some experimental details are not fully elaborated, and the analysis of experimental results can also be further strengthened. Please refer to Questions for details. 1.\tWhat is the correspondence between subfigures in the upper and lower lines of Figure 3? Can you further explain how to understand the green arrows in the second row? (Number of arrows, consistency of direction, the thickness of arrows, etc.)\n2.\tIn Section 4.1, what is the sequence length of the datasets used in this experiment (mean frame, min frame, max frame)? Does the sequence length have a greater impact on the experimental results?\n3.\tAs far as I know, LaSOT contains 1400 sequences, and the sequence length varies from 1K to 10K frames, so how does the Evt-LaSOT dataset constructed in this article filter data from the original LaSOT? Are there specific standards? Does the filtered sequence have the same length as the original sequence?\n4.\tTable 2 shows that the proposed algorithm has a higher FPS on Evt-LaSOT. The authors point out that the main reason is that this method directly consumes raw event data without pre-processing. Is there any other reason besides this? Can you design some ablation experiments to analyze the processing speed?\n5.\tThe video demo's main screen intuitively shows this method's effect, but what do the three sub-screens on the right refer to? Please explain the specific meaning of each sub-screen.\n The authors discuss the public privacy issues in the additional material and argue that event-based vision poses a lower threat to public privacy than RGB images. However, I want to know how secure and robust event-based vision is. If we deploy the event-based scheme on self-driving systems, will it be safer than the current RGB image-based plan?
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 5, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "YNkQdJ4Y9Is", "XZnEKfIhyIP", "I4ycaaUPqfN", "bzpXEMFL8aB", "bLsXVZ7bRf", "MOQWgaR8kjG", "Rd863boTA9", "nips_2022_hTxYJAKY85", "YNkQdJ4Y9Is", "YNkQdJ4Y9Is", "XZnEKfIhyIP", "XZnEKfIhyIP", "XZnEKfIhyIP", "I4ycaaUPqfN", "I4ycaaUPqfN", "3-P75O81wmb", "3-P75O81wmb", "nips_2022_hTxYJAKY85", "nips_2022_hTxYJAKY85", "nips_2022_hTxYJAKY85", "nips_2022_hTxYJAKY85" ]
nips_2022_JpxsSAecqq
OrdinalCLIP: Learning Rank Prompts for Language-Guided Ordinal Regression
This paper presents a language-powered paradigm for ordinal regression. Existing methods usually treat each rank as a category and employ a set of weights to learn these concepts. These methods are easy to overfit and usually attain unsatisfactory performance as the learned concepts are mainly derived from the training set. Recent large pre-trained vision-language models like CLIP have shown impressive performance on various visual tasks. In this paper, we propose to learn the rank concepts from the rich semantic CLIP latent space. Specifically, we reformulate this task as an image-language matching problem with a contrastive objective, which regards labels as text and obtains a language prototype from a text encoder for each rank. While prompt engineering for CLIP is extremely time-consuming, we propose OrdinalCLIP, a differentiable prompting method for adapting CLIP for ordinal regression. OrdinalCLIP consists of learnable context tokens and learnable rank embeddings. The learnable rank embeddings are constructed by explicitly modeling numerical continuity, resulting in well-ordered, compact language prototypes in the CLIP space. Once learned, we can only save the language prototypes and discard the huge language model, resulting in zero additional computational overhead compared with the linear head counterpart. Experimental results show that our paradigm achieves competitive performance in general ordinal regression tasks, and gains improvements in few-shot and distribution shift settings for age estimation. The code is available at https://github.com/xk-huang/OrdinalCLIP.
Accept
The paper proposes a language-powered model for ordinal regression tasks, based on CLIP. Language prototypes are constructed from sentences with rank categories via the CLIP text encoder, and the CLIP model is then optimized by language prototype and image feature matching. To further boost the ordinality, this paper introduces learnable rank prompts obtained by interpolation from the base rank embeddings. While the proposed approach builds on CoOp, reviewers agree the contribution is significant enough and original enough for NeurIPS. Regarding the experimental section, the paper shows good performance on three regression tasks (age estimation, historical image dating, and image aesthetics assessment) compared to baseline models. Concerns regarding the writing of the manuscript were raised [PfAX, RX3e], but seem to have been addressed during the rebuttal phase.
train
[ "CInkYtdhlTU", "Sk7N0LpmQNG", "R6T-oxvBoH", "y7rOjysufUg", "lg9mLdJNRZ", "DfG-v5xK-02", "WrvWzrrSj27", "BSxZLrYS5yX", "7eK6HUt4TmJ", "LyRY83YjaE3", "nCKqkKU7M-w", "qGm3KgJzWgY", "lH2ESQDEaI8", "G6tcqzkAfh6" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer RX3e,\n\nThanks again for your valuable advice and supportive comments! We have responded to your initial comments. We are looking forward to your feedback and will be happy to answer any further questions you may have.", " Dear Reviewer PfAX,\n\nThanks again for your valuable advice and supportive comments! We have responded to your initial comments. We are looking forward to your feedback and will be happy to answer any further questions you may have.", " Thank you for your reply and your positive feedback. We appreciate it very much!", " Hello,\n\nThe additional experiments show that the empirical performance of the method is very good. I will raise my rating.", " Dear reviewer wMGt,\n\nDoes our response address all of your concerns? As we have clarified some crucial misunderstandings about our paper, we sincerely hope that you can reconsider the rating. Please feel free to let us know if you have any further questions.\n", " We would like to thank the reviewer for the valuable comments. However, we feel there is some misunderstanding. We clarify the issues and address the questions accordingly as described below.\n\n**Q1: Linear probe baseline**\n\n> The main weakness to me is that an obvious baseline is missing from an engineering perspective. The paper uses a VGG-16 network pretrained on ImageNet as a trainable vision encoder for several datasets. The VGG-16 network has 138M parameters, similar to the ViT in the official CLIP release. The natural baseline would then have been to train a linear probe atop the CLIP ViT-B to simply predict the rank as a classification task. \n\n> An important baseline (linear probing) is missing.\n\n**[Reply]** Thanks for the advice. Following the suggestion, we conducted experiments with the Linear probe solution on all tasks. The results are presented below. \n\n\n**Table R1-1. The MAE results on four benchmarks. The lower, the better.**\nDataset | MORPH II | Adience | Image Aesthetics | Historical Image Dating\n---|:---:|:---:|:---:|---\nLinear probe | 4.70 | 0.64 | 0.487 | 0.86\nOdrinalCLIP | **2.32** | **0.47** | **0.280** | **0.67**\n\n**Table R1-2. The Accuracy results on three benchmarks. The higher, the better.**\nDataset | Adience | Image Aesthetics | Historical Image Dating\n---|:---:|:---:|:---:\nLinear probe | 51.8% | 61.60% | 41.07%\nOdrinalCLIP | **61.2%** | **72.85%**| **56.44%**\n\nWe see that our method consistently outperforms the Linear probe method on all datasets, which demonstrates the effectiveness of our method. We have also included these results in our revision. It is worth pointing out that since most SOTA methods use VGG-16 as the vision encoder, we simply follow this setting for a fair comparison. Moreover, the specific choice of vision encoder does not affect our method and conclusion.\n\n**Q2: Definition and Effectiveness of Ordinality**\n\n> Second, the results in Fig 3. are very surprising. If I am reading it correctly, about 35% of the rank prototypes violate the ordinal property. This is a substantial portion... \n\n> Finally, the result showing that 35% of the rank embeddings do not obey the ordinal property is troubling...\n\n**[Reply]** First, we clarify the misunderstanding of the ordinality definition. To evaluate the order of the learned language prototypes, we proposed a simple metric (line 235, footnote) that compares the normalized cosine distances of **every pair** of language prototypes. The overall comparison is $(N - 1) \\times N / 2$ given $N$ language prototypes. 
In Figure 3, there are around 35% of prototype **pairs** that violate the ordinal property, instead of 35% of prototypes. \n\nThe second thing we want to clarify is that the ordinality score metric is only used to describe **relative ordinality** rather than **absolute ordinality**. The ordinality score is not a perfect metric as it assumes that the normalized cosine distances can be used to measure the order relations of language prototypes. Since the language prototypes lie in a nonlinear manifold, this assumption is difficult to uphold for language prototypes that are far apart. A better metric is to compute the language prototype pairs within a sliding window, where the assumption of a locally linear manifold can hold. We denote this metric as \"Ordinality@s\", where s is the sliding window size. The results on the MORPH II dataset show that our OrdinalCLIP can well preserve the order of language prototypes (100% for Ordinality@8).\n\nThus, our approach **does not have a substantial portion** of language prototypes that violate the ordinal property.\n\n**Table R1-3. The Ordinality@s results on the MORPH II database.**\nOrdinality | @2 | @4 | @8 | @16\n---|:---:|:---:|:---:|:---:\nCNN Baseline | 77.31% | 61.97% | 56.42% | 52.02%\nCoOp [48] |91.60% | 85.04% | 78.54% | 68.33% \nOrdinalCLIP | **100.00%** | **100.00%** | **100.00%** | **96.19%** ", " **Q3: Long tail issue**\n\n>the proposed method cannot grasp anything on the long tail of the distribution. \n\n>rank embeddings are clustered in the long tail ... suggests the error distribution of the model is highly biased and the model does not work well at all for the tail.\n\n**[Reply]** First, as analyzed above, our approach **does not have a substantial portion** of language prototypes that violate the ordinal property. Second, learning from long tail data is difficult for all methods, but **our method still outperforms other methods on long tail data**.\n\nWe define the data in the last 25 ranks on the MORPH II dataset as long tail data (the data distribution is shown in the supplementary material). We first present the ordinality score for the last 25 ranks on the MORPH II database. We see our method achieves much higher ordinality scores.\n\n**Table R1-4. The Ordinality@s results for the last 25 ranks on the MORPH II database. Ordinality@$\infty$ denotes the old metric.**\nOrdinality | @2 | @4 | @8 | @16 | @$\infty$\n---|:---:|:---:|:---:|:---:|:---:\nCNN Baseline | 70.21% | 58.89% | 52.44% | 51.89% | 51.00%\nCoOp [48] | 89.36% | 74.44% | 65.24% | 62.50% | 60.33%\nOrdinalCLIP | **100.00%** | **100.00%** | **100.00%** | **97.64%** | **97.14%** \n\nWe further show the MAE results on the long tail data, which also demonstrates that our method can better handle the long tail data. \n\n**Table R1-5. The MAE results on the long tail data of the MORPH II database.**\nMethod | CNN Baseline | CoOp [48] | OrdinalCLIP\n---|:---:|:---:|:---:\nMAE | 4.32 | 4.21 | **4.06**\n\n**Q4: Why and how does language information help with this task**\n\n> why do language priors help with this task at all? It's difficult to conceive that CLIP has learned a meaningful representation of say, the number 72. Even if CLIP did have a very meaningful representation of some arbitrary number, the rank embeds are entirely learned. How is the language information being used? 
\n\n> Furthermore, it is unclear to me why language information can help with this kind of task at all, I do not see any experiments to explain why language information can help with this kind of task, or provide insight into what the language model is adding here.\n\n\n**[Reply]** Why does language information help with this task? Existing methods are easy to overfit and usually attain unsatisfactory performance as the learned rank concepts are mainly derived from the vision training set. Since learning the rank concept from the image domain alone is prone to overfitting, we can leverage multimodal information to alleviate this issue. The human language contains rich semantic information and prior knowledge. We consider simultaneously borrowing the rank concept from the language domain. Specifically, each rank label is not only regarded as a class category but also linked to a sentence describing the corresponding rank, such as \"this person is 23 years old\". In this way, our model not only learns the concept of ranks defined on the vision dataset but also exploits the common knowledge of rank in the language domain.\n\nHow does language information help with this task? In practice, we employ the pre-trained giant text encoder in CLIP to extract language prototypes for all ranks. Since the prototypes are obtained from a ***fixed language model***, we are somehow distilling the language knowledge from the CLIP model. Moreover, the prototypes are constrained in the well-learned language latent space, which is also a kind of regularization leading to stronger generalization.\n\nAny experiments? To see the benefits of language priors, we first consider the zero-shot setting. We conducted two experiments: 1) without Language Priors (w/o LP), the classifier is a randomly initialized FC layer; 2) with Language Priors (w/ LP), the classifier consists of language-initialized prototypes from the CLIP text encoder. Neither experiment involves model training. The results in Table R1-6 show that the w/ LP solution significantly outperforms the w/o LP solution across four datasets, which indicates that the CLIP model does contain a meaningful representation of rank numbers to some extent, and language information can help with this task. \n\nWe agree that CLIP may not be able to give a perfect representation of some arbitrary number **simply using raw text input**. Therefore, we propose to learn rank prompts. Here we consider the full-training setting, where the full model is trained. w/ LP refers to our OrdinalCLIP and w/o LP means that the language prototypes are replaced with an FC layer. The results show the effectiveness of language priors again.\n\n**Table R1-6. The MAE results on four benchmarks. The lower, the better.**\nDataset | MORPH II | Adience | Image Aesthetics | Historical Image Dating\n---|:---:|:---:|:---:|---\nzero-shot, w/o LP | 32.51 | 2.73 | 1.425 | 1.95\nzero-shot, w/ LP | 14.45 | 1.50 | 0.730 | 1.48 \nfull training, w/o LP | 2.63 | 0.56 | 0.365 | 0.80\nfull training, w/ LP | 2.32 | 0.47 | 0.280 | 0.67", " **Q5: Variance and Statistical Tests:**\n> T1 is missing variance numbers for the comparison with CoOp. As evident in T2, CoOp and OrdinalClip are very close, with large variances. The authors should include a statistical test to confirm that the proposed method is indeed superior to CoOp. There are no variance numbers again in Table 7.\n\n> The results are in general very close to that of CoOp, and some tables are missing variance information, while others have variance information. 
Given how close the results of the method are to CoOp and how high the included variances are, variance results should be included for all tables, and statistical significance tests conducted. \n\n**[Reply]** We believe there is a huge misunderstanding here. The variances in Tables 2 and 8 are the variances of the five-fold cross-validation results, which means that they **do not represent** the performance stability of the model in **multiple runs**, but represent the performance stability of the model **among the five-fold sub-datasets**. The high variance simply means the difference among the five-fold sub-datasets.\n \n\nWe did not report the variance results in Tables 1 and 7 simply following the practice of previous SOTA methods [6,20,22,41,45]. Following the advice, we provide the variance results and statistical significance tests for all tasks.\n\n\nFor MORPH II, the standard deviation results over five random runs are shown below. We see they are quite small. A one-way ANOVA test revealed a significant performance boost of OrdinalCLIP [$\alpha=0.05, F(2, 12)=240.7$, $p=2e{-10}$]. *Post-hoc* analyses (Tukey HSD) showed that the results of OrdinalCLIP are significantly better than those from CoOp and those from the CNN baseline.\n\n\n**Table R1-7. The standard deviation results over five random runs on the MORPH II database.**\nMethod | CNN Baseline | CoOp | OrdinalCLIP\n---|:---:|:---:|:---:|\nStandard Deviation | 0.04 | 0.02 | 0.01\n\n\nFor the Adience dataset, the paired t-test [$\alpha=0.05, P(T<=t)=0.020<0.05$ for Acc; $\alpha=0.05, P(T<=t)=0.002<0.05$ for MAE] reveals the significant performance advantage of OrdinalCLIP over CoOp.\n\nFor the Image Aesthetics task, we show the standard deviation of the five-fold cross-validation results below. Paired t-tests on both MAE and Accuracy metrics [$\alpha=0.05, P(T<=t)=0.026<0.05$ for Acc; $\alpha=0.05, P(T<=t)=0.018<0.05$ for MAE] reveal the significant performance improvement made by OrdinalCLIP, compared with CoOp.\n\n**Table R1-8. The standard deviation results for the Accuracy metric on the Image Aesthetics dataset.**\nMethod | Nature | Animal | Urban | People | Overall\n---|:---:|:---:|:---:|:---:|:---:\nCNN Baseline | 1.84| 2.60 |1.83 |2.60 |2.22\nZero-shot CLIP | 1.12| 2.33 | 2.20 | 1.57 | 1.80\nCoOp | 1.97 | 1.97 | 3.27 | 1.80 | 2.25\nOrdinalCLIP | 2.45 | 1.31 | 1.80 | 1.96 | 1.88\n\n**Table R1-9. The standard deviation results for the MAE metric on the Image Aesthetics dataset.**\nMethod | Nature | Animal | Urban | People | Overall\n---|:---:|:---:|:---:|:---:|:---:\nCNN Baseline | 0.017| 0.020 | 0.017 | 0.017 | 0.018\nZero-shot CLIP | 0.007| 0.019 | 0.034 | 0.022 | 0.020\nCoOp | 0.020 | 0.023 | 0.033 | 0.004 | 0.020\nOrdinalCLIP | 0.027 | 0.016 | 0.018 | 0.021 | 0.020\n\nWe also conducted the paired t-test on the Historical Dating dataset. The test [$\alpha=0.05, P(T<=t)=0.002<0.05$ for Acc; $\alpha=0.05, P(T<=t)=0.009<0.05$ for MAE] shows OrdinalCLIP is significantly better than CoOp.\n\nWe see that our method significantly outperforms CoOp and passes statistical significance tests on all tasks.\n\n\n**Q6: Novelty**\n\n>The technique presented is a somewhat straightforward extension of CoOp, with the main novelty being that the literal rank names are replaced with soft class names.\n\n**[Reply]** We do not agree. Simply adapting CoOp with learnable rank embeddings leads to no performance boost and degradation in ordinality (Table [10] and Figure [8] in Appendix). Our method is the first language-powered paradigm for ordinal regression. 
We first reformulate the ordinal regression task as an image-text matching problem to utilize the well-structured, rich-semantic CLIP latent space. To improve both performance and ordinality, we propose to construct the ordinal rank embeddings via interpolation between a set of base rank embeddings that are learnable during training. These are specifically designed for ordinal regression tasks and are non-trivial. Moreover, both reviewers PfAX and RX3e appreciated our work and agreed that our method was novel (reviewer RX3e also agreed that our method was ''*insightful extensions of the CLIP model for the ordinal regression task*'').\n", " We thank Reviewer RX3e for the time, and also for the positive and constructive feedback! We are glad that the reviewer found our method to be **novel**, and the proposed language prototypes and learnable rank prompts to be **insightful extensions** of the CLIP model. We have revised the manuscript as suggested by the reviewer, and we address the reviewer's concerns below.\n\n**Q1: Improve Writing**\n\n> The writing of this paper should be improved, including motivation, related work and task description, etc.\n\n**[Reply]** Many thanks for your suggestion. We have revised the related work section in our manuscript to be more organized. We also did our best to revise the introduction section of the manuscript to include a more detailed description of the tasks and challenges, as well as a clearer explanation of the motivation. We further elaborate on our modifications regarding the task description and motivation in the response to the next question.\n\n\n**Q2: Explain the motivation and the description of the tasks**\n\n> I would suggest that the authors better explain the motivation of the paper and the description of the tasks presented. While the ordinal regression approach proposed by the authors is novel and makes sense, I did not see a description of the task or the challenges.\n\n**[Reply]** Thanks for the advice. For a given image, the task of ordinal regression in computer vision is dedicated to predicting a rank number or continuous value. For example, age estimation aims to estimate the age of a given face image while image aesthetic assessment predicts the aesthetic score for an image.\n\nAs many popular methods adopt a classification framework, there are two main challenges. First, treating ranks as independent class categories fails to grasp the ordinal property. Second, as the learned concepts are mainly derived from the training set, these approaches are prone to overfit and usually attain unsatisfactory performance. \n\nSince learning the rank concept from the image domain alone is prone to overfitting, we can leverage multimodal information to alleviate this issue. The human language contains rich semantic information and prior knowledge. We consider simultaneously borrowing the rank concept from the language domain. Specifically, each rank label is not only regarded as a class category but also linked to a sentence describing the corresponding rank, such as \"this person is 23 years old\". In this way, our model not only learns the concept of ranks defined on the vision dataset but also exploits the common knowledge of rank in the language domain. Therefore, we propose a language-powered paradigm for ordinal regression to alleviate the overfitting issue by associating each rank category with its language concept. 
Moreover, we propose to learn rank prompts to model the ordinal property.\n\nWe have included these explanations in the revised version.\n\n\n\n**Q3: Clarification of two proposed interpolations**\n\n> The motivation for the two proposed interpolations, linear interpolation, and inverse-proportion, is unclear and lacks visualization for comparison other than numerical comparison. \n\n**[Reply]** We have included an additional Figure [13] in Appendix, to visualize a toy example of the interpolation weight matrix. We use interpolation to incorporate the ordinal property from the language end. Specifically, the input word embeddings differ only one word embedding from each other. Then for each rank embedding, we need to incorporate a certain level of the ordinal property. In our implementation, we use linear interpolation and inverse property interpolation to impose the ordinality to the rank embeddings. Our experiments show that via interpolation between several base ranks, the language prototypes can better preserve the ordinality, resulting in a compact and ordinal latent space. In other words, the ordinality of the rank embeddings can be implicitly propagated toward the language prototypes. In practice, we consider two different interpolation strategies: linear interpolation and inverse property interpolation, where linear interpolation gives smoother weights and inverse property interpolation gives sharper weights. The experiments show that smoother weights usually give better results when the number of base ranks is small.\n", " **Q4: Description of Adapting CoOp into Ordinal Regression**\n\n> Since this paper uses CoOP as a competitive method for comparison, the paper lacks a description of how to implement CoOP into ordinal regression. Could the authors discuss this further?\n\n**[Reply]** Here we detail the implementation of CoOp [48] in the ordinal regression task. We borrow the CoOp model only with the modifications of language inputs. The prompt context (context embeddings) could be initialized by either task-related prompt templates (e.g, \"The age of the person is {}.\" for age estimation) or random vectors. We change the input class label in CoOp to the rank labels of the task (e.g, \"0\", \"1\", ..., \"99\", \"100\", 100 ranks for age estimation). CoOp only finetunes the shared context embeddings ($m$ word embeddings). To fairly compare with OrdinalCLIP, we experiment with all three settings: only finetune the context embeddings, only finetune the rank embeddings, and finetune both context and rank embeddings.", " We thank the reviewer for the constructive feedback and a positive assessment of our work. We are happy the reviewer finds the paper **well-organised and clear-motivated**, our method **interesting, valuable, and innovative with good performance**. Below we detail our answer to the review concerns.\n\n**Q1: Narrow down the writing to CV tasks**\n\n> The statement of introduction, related works, and problem statements should narrow down the ordinal regression to the vision-language or CV ordinal regression task cuz there are some pure language ordinal regression tasks.\n\n**[Reply]** Thanks for this nice suggestion! 
We have revised the introduction, related works, and problem statements of the manuscript to ensure narrowing down to ordinal regression tasks in computer vision.\n\n**Q2: Explanation of KL-Loss**\n\n> The two loss (image-to-text loss and a text-to-image loss ) should be introduced in detailed, and the reason for using KL.\n\n**[Reply]** Following CLIP, we use an image-to-text loss and a text-to-image loss to supervise the model. For image-to-text loss, using KL loss is equivalent to using cross-entropy loss, as the labels for each image are all one-hot encoded. However, for the text-to-image loss, there might be several image hits for a certain label in a mini-batch. We follow ActionCLIP [44] to use a KL loss to supervise the text-to-image logits. Specifically, the ground-truth matrix is constructed by taking the normalized probability of each multi-hot label for the corresponding rank. \n\n**Q3: Explanation of order preservation of rank embeddings and language prototypes**\n\n> We choose to maintain the order of rank embeddings to preserve the order of the language prototypes. This statement is unclear. How to maintain the order of the language?\n\n**[Reply]** Sorry for the confusion. We hope that the language prototypes will lie on the manifold in good order. Since the language prototypes are extracted from the CLIP model using prompt inputs, we instead consider constraining the prompt inputs. The inputs of text encoders are context embeddings ($m$ words) along with a rank embedding. The context embeddings are shared among all ranks. The input word embeddings differ only one word embedding from each other. Then for each rank embedding, we need to incorporate a certain level of the ordinal property. In our implementation, we use linear interpolation and inverse property interpolation to impose the ordinality to the rank embeddings. Our experiments show that via interpolation between ranks, the language prototypes can better preserve the ordinality, resulting in a compact and ordinal latent space. In other words, the ordinality of the rank embeddings can be implicitly propagated toward the language prototypes.\n\n**Q4: Ablation of different prompt templates:**\n\n> To leverage the language priors with the text encoder, we treat the rank categories as words. How to choose a suitable sentence? The sentence of “a person at the age of [rj] is the best? Are there some ablation studies?\n\n**[Reply]** The prompt templates for ablation are shown in the tables below. \n\n| Ctx. Ind. | Template Ctx. |\n|:---------:|---------------------------------------------------------------|\n| 0-0 | Age estimation: the age of the person is {} . |\n| 1-0 | Age estimation: the age of the person in the portrait is {} . |\n| 2-0 | Age estimation: the age is {} . |\n| 3-0 | Age estimation: the age of the face is {} . |\n| 0-1 | The age of the person is {} . |\n| 1-1 | The age of the person in the portrait is {} . |\n| 2-1 | The age is {} . |\n| 3-1 | The age of the face is {} . |\n\nThe table below shows that different optimization start points all lead to similar convergence and performance, which suggests that the most meaningful templates work fine for this task.\n\n| Ctx. Ind. | 0-0 | 1-0 | 2-0 | 3-0 | 0-1 | 1-1 | 2-1 | 3-1 |\n|-------------|:----:|:----:|:----:|:----:|:----:|:----:|:----:|:----:|\n| OrdinalCLIP | 2.30 | 2.31 | 2.30 | 2.32 | 2.31 | 2.32 | 2.32 | 2.31 |\n\n**Q5: Explanation of \"$m$ word embeddings\"**\n\n> We directly learn m word embeddings. what's the meaning of m? 
It's the ordinal number word?\n\n**[Reply]** $m$ word embeddings refer to the shared context embeddings for all ranks, where $m$ is the number of the context embeddings. For example, given a prompt context \"This person is {}.\", it has $m=3$ word embeddings (in fact, it is $m=4$, as the dot \".\" is also a word embedding in CLIP tokenizer).", " The paper proposes a method to use CLIP for ordinal regression using a combination of soft labels and prefix tuning, with an interpolation scheme added to enforce order between the learnt prompts. # Strengths\nThe idea of using context from natural language for ordinal regression tasks is interesting and worth exploring. The proposed interpolation method to allow regressing in between ordinal ranks is also clever. \n\n# Weaknesses\nThe main weakness to me is that an obvious baseline is missing from an engineering perspective. The paper uses a VGG-16 network pretrained on ImageNet as a trainable vision encoder for several datasets. The VGG-16 network has 138M parameters, similar to the ViT in the official CLIP release. The natural baseline would then have been to train a linear probe atop the CLIP ViT-B to simply predict the rank as a classification task. Second, the results in Fig 3. are very surprising. If I am reading it correctly, about 35% of the rank prototypes violate the ordinal property. This is a substantial portion, and suggests the proposed method cannot grasp anything on the long tail of the distribution. Third, _why_ do language priors help with this task at all? It's difficult to conceive that CLIP has learned a meaningful representation of say, the number 72. Even if CLIP did have a very meaningful representation of some arbitrary number, the rank embeds are entirely learned. How is the language information being used? T1 is missing variance numbers for the comparison with CoOp. As evident in T2, CoOp and OrdinalClip are very close, with large variances. The authors should include a statistical test to confirm that the proposed method is indeed superior to CoOp. There are no variance numbers again in Table 7.\n\n# Summary\nThe technique presented is a somewhat straightforward extension of CoOp, with the main novelty being that the literal rank names are replaced with soft class names. The results are in general very close to that of CoOp, and some tables are missing variance information, while others have variance information. Given how close the results of the method are to CoOp and how high the included variances are, variance results should be included for all tables, and statistical significance tests conducted. An important baseline (linear probing) is missing. Furthermore, it is unclear to me why language information can help with this kind of task at all, I do not see any experiments to explain why language information can help with this kind of task, or provide insight into what the language model is adding here. Finally, the result showing that 35% of the rank embeddings do not obey the ordinal property is troubling, especially since the \"broken\" rank embeddings are clustered in the long tail. This suggests the error distribution of the model is highly biased and the model does not work well at all for the tail. Please see the strengths and weaknesses. Limitations are not discussed. ", " The authors propose a language-powered paradigm for ordinal regression tasks by learning rank prompts, named OrdinalCLIP. 
The OrdinalCLIP can leverage rank categories of language to explicitly learn ordinal rank embeddings, which will preserve the order of the language prototypes in the language latent space. In the three regression tasks, including age estimation, historical image dating, and image aesthetics assessment, the experimental results show better performance than other baseline models. In addition, for few-shot learning, the method also gains improvement. The overall structure is well-organised. The paper has a clear motivation and is innovative for the regression field. Strengths:\n\n1. The innovative language-powered paradigm for ordinal regression uses language prototypes and learned rank prompts, which are interesting and valuable.\n\n2. The good performance shows the effectiveness of OrdinalCLIP.\n\n3. The improvements and experiments in the appendix are detailed.\n\nWeaknesses:\n\n1. The statement of the introduction, related works, and problem statements should narrow down the ordinal regression to the vision-language or CV ordinal regression task because there are some pure language ordinal regression tasks.\n\n2. The two losses (the image-to-text loss and the text-to-image loss) should be introduced in detail, along with the reason for using KL.\n\n3. We choose to maintain the order of rank embeddings to preserve the order of the language prototypes. This statement is unclear. How to maintain the order of the language? Questions:\n\n1. To leverage the language priors with the text encoder, we treat the rank categories as words. How to choose a suitable sentence? Is the sentence “a person at the age of [rj]” the best? Are there some ablation studies?\n\n2. We directly learn m word embeddings. What's the meaning of m? Is it the ordinal number word?\n\n3. See the weakness.\n Yes", " The authors propose a language-powered model for ordinal regression based on CLIP. The language prototypes are constructed from sentences with rank categories via the CLIP text encoder, and the CLIP model is then optimized by language prototype and image feature matching. To further boost the ordinality, this paper introduces the learnable rank prompts by interpolation from the base rank embeddings. Multiple experiments on age estimation, image aesthetics assessment and historical image dating show that the proposed paradigm surpasses other related methods. Strengths: \n1. Introducing the contrastive language-image pretrained (CLIP) model as a paradigm for ordinal regression is novel to me.\n2. The proposed language prototypes and learnable rank prompts are insightful extensions of the CLIP model for the ordinal regression task.\n3. The proposed interpolation-learned rank prompts contribute to smooth language prototype similarity trends, which represent well-learned ordinality.\n\nWeaknesses:\n1. The writing of this paper should be improved, including motivation, related work and task description, etc.\n2. The motivation for the two proposed interpolations, linear interpolation and inverse-proportion, is unclear and lacks visualization for comparison other than numerical comparison. \n 1. I would suggest that the authors better explain the motivation of the paper and the description of the tasks presented. While the ordinal regression approach proposed by the authors is novel and makes sense, I did not see a description of the task or the challenges.\n2. Since this paper uses CoOP as a competitive method for comparison, the paper lacks a description of how to implement CoOP into ordinal regression. 
Could the authors discuss this further?\n The authors have discussed the limitations and potential negative effects of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "G6tcqzkAfh6", "lH2ESQDEaI8", "y7rOjysufUg", "lg9mLdJNRZ", "qGm3KgJzWgY", "qGm3KgJzWgY", "qGm3KgJzWgY", "qGm3KgJzWgY", "G6tcqzkAfh6", "G6tcqzkAfh6", "lH2ESQDEaI8", "nips_2022_JpxsSAecqq", "nips_2022_JpxsSAecqq", "nips_2022_JpxsSAecqq" ]
nips_2022_c7sI8S-YIS_
Unsupervised learning of features and object boundaries from local prediction
A visual system has to learn both which features to extract from images and how to group locations into (proto-)objects. Those two aspects are usually dealt with separately, although predictability is discussed as a cue for both. To incorporate features and boundaries into the same model, we model a layer of feature maps with a pairwise Markov random field model in which each factor is paired with an additional binary variable, which switches the factor on or off. Using one of two contrastive learning objectives, we can learn both the features and the parameters of the Markov random field factors from images without further supervision signals. The features learned by shallow neural networks based on this loss are local averages, opponent colors, and Gabor-like stripe patterns. Furthermore, we can infer connectivity between locations by inferring the switch variables. Contours inferred from this connectivity perform quite well on the Berkeley segmentation database (BSDS500) without any training on contours. Thus, computing predictions across space aids both segmentation and feature learning and models trained to optimize these predictions show similarities to the human visual system. We speculate that retinotopic visual cortex might implement such predictions over space through lateral connections.
Reject
There was some disagreement on the value of this work. The paper received 1 strong accept, 1 accept and 1 reject. The positive reviewers recommended the paper to be accepted because it proposes a novel unsupervised approach to semantic segmentation and contour detection and because of connections to neuroscience. Some of the main criticisms from the more negative reviewer included a lack of discussion of related work and a lack of sufficient experimental evaluation (in particular comparisons to related work). The AC found the response of the authors in the rebuttal unconvincing. There is quite a bit of prior work using MRFs for segmentation (yes, prior supervised segmentation work needs to be properly cited and not just in passing in the results section (lines 216-217)). Wrt the lack of baseline comparisons, I also agree with the reviewer, and in that respect the statement in the paper (lines 236-237) is not convincing ("There are also a few deep neural network models that attempt unsupervised segmentation [e.g. 10, 47, 82], but we were unable to find any that were evaluated on the contour task of BSD500"). It seems that the authors should at least consider running these models on BSD500 or run their models on other datasets. And as acknowledged by one of the more positive reviewers, the results are not SOTA. As stated in the discussion, the authors plan to continue tweaking the architecture to improve results. The AC thinks this is needed for this work to make a sufficient contribution to the conference, since neither the use of MRFs nor the contrastive loss is novel on its own -- the burden is on the authors to demonstrate that they can engineer a system from these two ideas with at least competitive results. As for the neuroscience contribution, I am quoting the discussion ("the features learned by our models resemble receptive fields in the retina and primary visual cortex and the contours we extract from connectivity information match contours drawn by human subject fairly well, both without any training towards making them more human-like."). This seems like an unsubstantiated statement since no quantitative analysis is provided. Almost any CNN trained for any reasonable task will return some sort of center-surround, orientation-selective and color-opponency filters (take AlexNet trained on ImageNet for instance) -- so it is unclear what is new here. For this statement to be meaningful, the authors should formulate null models and demonstrate empirically that their proposed models are more cortex-like than other reasonable alternative models. Overall, because of a lack of sufficient technical or neuroscience contribution, the AC recommends this paper to be rejected.
train
[ "6tVsqHnNz_C", "_Ffma96J2C", "ds73YZwr42d", "_6yMEaMQXAS", "BsEWyeyawX", "jczdLNkZp-C", "1ly81u-Z5U", "qsG84cF9yTi", "R6M35nvMgte", "7XDycZlHogd", "XJPkU7xsNZ2", "tb63_P9OTlL", "x2E-Q2XCQlc", "dE2aQadmvJ" ]
[ "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you! We are glad that we could clarify things and hope to see this paper accepted, too, of course.\n\nAlso, If our submission is accepted, we will be allowed an additional content page for the camera-ready version. Thus, we will most certainly have space to add a paragraph on texture vs. contour grouping.", " Thank you for your answer this is much clearer now !\nI really hope that you'll find the space to include the discussion about texture vs contour grouping (I could definitely find something to remove but I am not author, this is your decision). \nI am happy to increase my score and I hope to see this paper accepted !", " Thanks for these clarifications and the modifications to the manuscript. In particular I appreciate the discussion around feature learning, I'd be curious to see how this approach fares as a pre-training step for image encoders for downstream tasks.", " \nThank you again for your positive evaluation!\n\nSorry, that we did not properly explain the new figures. The *w_maps* we display are the log-posterior ratios for all the $w$s being 1 vs being 0 for the given image. To display these, we put all values that correspond to the same spatial relationship between the neighbors into one image. This is what is displayed in the `number_w_map.pdf` files. For example, the first map shows this log-posterior ratio for the connection from each pixel to their right neighbor. The second one shows the connection to the pixel below. The third one is for 1 pixel to the right and one pixel down, etc. There are 10 maps here for the 20 neighborhood relations, because our factors are symmetric. Thus, each map is exactly equal to the map for the opposite shift between pixels, e.g. the map is the same for one pixel up as for one pixel down. We display only one of these each, which halves the number of maps. \n\nThe colormap is the other way around than you assume (black is active, white is inactive). The w_maps are active everywhere but at the contours, i.e. most pixels should be grouped with their close neighbors, only a few along the contours should not be.\n\nIn the files named `number.pdf`, the second panel is a simple sum of the w_maps for the different neighborhood relationships (all 20 in this case). This also creates an approximation of the contours without any graph spectral methods. We hope the comparison to the contours returned by the globalization algorithm in the third panel is convincing that globalization is not the main driver of algorithm performance.\n\n---\n\nLast but not least on the texture vs. contour grouping distinction: We think of our method mostly as texture grouping: We extract (texture or surface) features at every location and closeby locations with similar features are grouped together, while locations with different features are not. In particular, none of our filters are directly used as contour detectors. The globalization algorithm is just a convenient way to turn this similarity information into contours in a way that is strongly related to graph spectral clustering.\n\nThis classification is not perfectly clear though. Someone else might argue that our algorithm instead uses the feature maps only as an intermediate step to calculating a complicated edge detector that returns the w_maps as the contours. These contours are then grouped by the globalization algorithm. 
This highlights to us that these two interpretations are truly aspects of the same problem.\n\nWe will try to find space to say something about this distinction in the manuscript.\n", " Thanks to the authors for their answers.\nAfter reading the other reviews and responses, I am still very positive about this manuscript. In particular, I hope that the response to reviewer NeiE will convince him to raise his score!\n\nI thank the authors for including the w_ij maps in the supplementary. However, I am a bit unsure how to read those figures. What are the two images in the files number.pdf? What are the 10 b&w images in the files number_w_map.pdf? I guess those are the w_ij maps, but I would have expected that such a map is given for a fixed index, i.e., set i_0 and show w_i0j. Here I don't see any reference pixel for each map... \n\nIf I understood correctly, the weight w_ij tells us whether the pair of features f_i and f_j is active (i.e., whether the pixels i and j should be grouped together, right?). Then, in the results, the w_map you are showing seems to be only active along contours. Does it mean that your algorithm is only learning to group contours? I would have expected to see some texture grouping. \n\nThis was a bit crowded among other questions in 4., but I would have appreciated to see a discussion about the type of grouping that the algorithm has performed, i.e., contour vs. texture grouping. Could you elaborate on that and discuss it in the paper?", " \n> I would like to see some maps of p_ij or w_ij associated with some features to understand how the algorithm has grouped those.\n\nWe have looked at all those maps extensively during the development of our method. We usually favored the maps across all features, as our model assumes a single w which applies to all features. We computed these maps for the example images and the winning models and added them to the supplementary material. They take up quite a lot of space to show at a reasonable resolution, which is why we cannot add them to the manuscript. \n\nThe individual maps generally look like oriented edge detector responses, where the orientation corresponds to the displacement between the neighbors. The raw probabilities for the w are somewhat noisier than the final contours we extract, but we hope that the reviewer agrees that the information is already present quite completely in the maps.\n\nSome informal observations on the relation between features and the MRF parameters reveal the expected relationships: Factors for closer neighbors have higher prior probability for connection $p$ and higher precision parameters in $C$. Local averages are connected in all directions, and the stripe-like patterns have stronger coupling along the stripes than orthogonal to them. \n\n> A related question is that the post-processing step to turn p_ij into contours makes it hard to understand what was learned. Is the use of spectral clustering transparent, i.e. has the algorithm learned to group contours over any other type of grouping (e.g. textures)?\n\nFor evaluation, we required an algorithm that turns local connectivity into contours, i.e. probable locations for object boundaries. This is nontrivial, because the local connections can be inconsistent, which makes this question a valid concern.\n\nHowever, we chose the most canonical algorithm for this transformation we could find. 
The globalization algorithm we use is provided with the BSDS500 dataset and has been applied to other edge features and segmentation algorithms previously, as discussed in detail in the paper by the authors of the BSDS500 dataset (Arbeláez et al., 2011). Thus, we believe this algorithm is among the most transparent ones available for the transformation we require and that we can trust that the contours found by the algorithm match the connectivity information we input well.\n\nConversely, this algorithm does not automatically lead to good performance. Arbeláez et al. (2011) applied the algorithm to other local edge detection signals, too. Typically, this led to some improvement, but the simple edge detectors still performed badly. Similarly, different feature extraction networks yield substantially different performance in our evaluations, indicating that the features and connectivities are important.\n\nFinally, we now provide the raw inferred probabilities for the example images in the supplementary material. We found the direct comparison of a simple sum of connectivities and the inferred contours quite convincing that the main contribution comes from the inferred features and connectivities, not from the globalization algorithm.\n\n**References:** We included the given references in our manuscript. Thank you for pointing them out! \n", " We thank the reviewer for their positive comments! For this reviewer, we think that the questions contain the weaknesses almost exactly. Thus, we answer only the questions here:\n\n### Questions\n\n> The issue with the position loss and the trick to circumvent it are obscure to me. Can you elaborate more on the issue and its solution? I do not really understand the issue and how the proposed trick solves it.\n\nThe issue for the position loss is that the denominator for the contrastive loss (Eq. 8) is different for each position. Each position has a different neighborhood and thus requires a different normalization constant for the conditional distribution $p(f | f_{neigh})$. As we process all positions in parallel, this incurs a very high memory load. As an intermediate step, the computation of the denominator generates a tensor of dimensions $N_{features} \times N_{neighbors} \times N_{negative samples} \times N_x \times N_y$. This severely limits the number of negative samples that we can use, which is not great, because contrastive methods require a decent number of negative samples to work well.\n\nOur trick to solve this problem is nothing complicated. We simply compute the loss multiple times with a small number of negative samples (usually 10), which is similar to using many negative samples, but avoids the memory explosion.\nWhen implementing this trick, we save a little computation by splitting the computation of the gradients at the feature map level. We split the computation into two steps: Image $\rightarrow$ feature map and feature map $\rightarrow$ loss. To compute multiple losses, we run only the second part multiple times, summing the gradients we get for the feature map. We then need only a single backpropagation through the computation of the feature map from the image.\n\nWe hope this made this technical step clearer. We added some explanation of this step to our manuscript as well, so that we confuse future readers less. In the manuscript we now write:\n\n“To enable a sufficiently large set of negative points $i'$ with the available memory, we compute this loss multiple times with few negative samples and sum the gradients. 
This trick saves memory, because we can free the memory for the loss computation after each repetition. As the initial computation of the feature maps is the same for all negative samples, we can save some computation for this procedure by computing the feature maps only once. To propagate the gradients through this single computation, we add up the gradients of the loss repetitions with regard to the feature maps and then propagate this summed gradient through the feature map computation. This procedure does not save computation time compared to the loss with many negative samples, as we still need to calculate the evaluation for each position and each sample in the normalization set.”\n\n> I disagree with the fact that the linear models learn Gabor-like features. In particular for the position loss (Figure 3.A), the features are more like non-local wave-like features (Fourier).\n\nWe can certainly understand where the reviewer comes from. The features do not look like they smoothly taper down towards the edges of their filters. We would argue that they are nonetheless local features as their convolutional filters or receptive fields do end. Perhaps some of them are more akin to a boxcar filter times a wave than to a Gaussian times a wave, but they are spatial frequency and orientation specific filters with a limited spatial extent. We hope that the reviewer can agree with us on that.\n", " ### Questions\n\n> It was stated that models using the factor loss + neighborhood of size 20 performed best. Was there a sense of whether further increases in the neighborhood would yield further improvement in the scores? Were any other losses considered?\n\nWe think that larger neighborhoods would work even better given that performance increases monotonically with the neighborhood size and very large neighborhoods or even fully connected MRFs work better in similar semantic segmentation tasks. We added this point to the manuscript as follows:\n\n“Performance increases monotonically with neighborhood size and Markov random field based approaches to semantic segmentation also increased their performance with larger neighborhoods up to fully connected Markov random fields \\cite{krahenbuhl2012, chen2014, chen2017}. We thus expect that larger neighborhoods could work even better.”\n\nWe did not consider any other losses beyond the ones in the paper. \n\n> In section 3.2 it's stated: \"... the best performing models seem to be mostly the local averaging models ...\" Do you have any intuition or explanation as to why the best performing models were those that do local averaging?\n\nThis observation holds primarily in contrast to other, more Gabor-like features. Our intuition is that the models that learn more gabor-like features tend to classify some contours as separate objects with boundaries on either side of them (Some effects in this direction can be seen by zooming into the example images for the linear model and layer 1 of predseg1). This leads to lower performance on the benchmark. However, this argument contains a lot of intuition and we would like to find some kind of formal evidence for this being the difference before making strong claims about this in our paper.\n\n> Could this approach induce good representations for image or video data for downstream tasks or other applications? 
Was this explored at all?\n\nWe hope that this would be the case, and both the success in finding good low-level features and the success of contrastive learning as a pretraining scheme for other tasks lend support to this idea. We did not explore this with our model so far, because our architectures were quite shallow and we thus do not expect that the features we find are sufficient for object classification or similarly complex tasks. We now reference this with our future directions in our rewritten paragraph on training deeper architectures:\n\n“One possible next step for our model would be to train deeper architectures, such that the features could be used for complex tasks like object detection and classification. Contrastive losses like the one we use here are successfully applied for such pretraining purposes even for large scale tasks such as ImageNet \cite{russakovsky2015} or MS Coco \cite{lin2015}. These large scale applications often require modifications for better learning though \cite{chen2020, feichtenhofer2021, grill2020, he2020, henaff2020, oord2019}. For example: Image augmentations to explicitly train networks to be invariant to some image changes, prediction heads that allow more complex shapes for the predictions, and memory banks or other methods to decrease the reliance on many negative samples. Similar modifications might be necessary to apply our formulation to deeper architectures for pretraining purposes. For understanding human vision, this line of reasoning opens the exciting possibility that higher visual cortex could be explained based on similar principles, as representations from contrastive learning also yield high predictive power for these cortices \cite{zhuang2021}.”\n\n> Would prediction across features in the manner presented be applicable to other input modalities (e.g. language/text) or sensory data (e.g. speech or audio)?\n\nWe do think that the ideas we present here could generalize to other modalities. For example, one could mix the predictions of a text or speech prediction system with a broader uninformed distribution, as we do here for predictions across space in an image. One could then use the relative prediction of these two cases for the segmentation of the stream into syllables, words, etc., analogous to our image segmentation. There was enough to say about the visual modality, such that we did not find space to discuss this in the paper, but it would be an interesting application to look at in the future.\n", " We thank the reviewer for their positive comments and answer their questions and mentioned weaknesses below, starting with the weaknesses:\n\n### Weaknesses\n\n> May need more analysis/explanation on the reasons behind why the factor loss performed well vs. the position loss.\n\nWe were not surprised by the factor loss performing better than the position loss. We believe the main reason for the factor loss performing well is the higher efficiency due to reusing the negative samples across positions. This approach allows us to use a much larger negative sample size for the factor loss ([batch size] * [image size] \approx 100000 for the factor loss vs. 50 or 100 for the position loss). This leads to much faster convergence of the training, and the resulting features look much smoother for the factor loss. Thus, we believe that the advantage for the factor loss is simply due to the higher technical efficiency, as sketched below. 
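For concreteness, the repeated-loss trick we described in our reply above (computing the contrastive loss several times with few negatives and summing the gradients at the feature-map level) could be sketched roughly as follows. This is PyTorch pseudocode, not our exact implementation; `contrastive_loss` and its `n_negatives` argument are placeholder names we introduce here:

```python
import torch

def position_loss_backward(encoder, images, contrastive_loss, n_reps=10, n_neg=10):
    # Step 1: compute the feature maps once (image -> feature map).
    feats = encoder(images)
    # Detached copy that collects the gradients of the repeated losses.
    feats_buffer = feats.detach().requires_grad_(True)
    for _ in range(n_reps):
        # Each repetition uses only a few negative samples, so the large
        # per-position normalization tensor is never materialized at once.
        loss = contrastive_loss(feats_buffer, n_negatives=n_neg)
        loss.backward()  # accumulates into feats_buffer.grad
    # Step 2: a single backpropagation through the encoder,
    # driven by the summed feature-map gradient.
    feats.backward(feats_buffer.grad)
```

As noted above, summing losses with small normalization sets is similar to, though not identical to, one loss with `n_reps * n_neg` negatives, while the peak memory only scales with `n_neg`.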
We now state this belief in the manuscript when we note the higher performance.\n\n> Sub-section 2.3 may be more appropriately placed under section 3. Are these meant to be baselines? Finally, Fig. 2 appears fairly early in the text relative to this subsection.\n\nWe did not have strong opinions on these placements. We moved the start of section 3 earlier and Fig. 2 is now gone. We hope this makes our manuscript better.\n\n> The authors stated a number of future directions for this work in section 5 including processing video input. It would be helpful to know if we could reasonably expect these models to advance state of the art and how this approach fits into the broader family of models tackling image/video segmentation and representation.\n\nWe do think that our approach could lead to state-of-the-art representation learning in the future and that processing over time is indeed one of the most likely areas where this could happen. We state this now in our discussion using the space gained by removing Fig. 2. We do not want to phrase this too strongly, as we neither implemented processing across time nor evaluated our models on these tasks.\n\n> Figure 2 D & E were a bit tricky to understand and might benefit from some clarification in the description or main text.\n\nWe thought that we do not have the space to explain this figure sufficiently and thus removed it. We hope that our added explanations made the paper clearer than the figure did.\n", " Unfortunately, it seems that we did not get some of our central points across. To prevent such misunderstandings for future readers, we removed Figure 2, which was not explained sufficiently, and added some clarifying sentences to the explanation of our model to make the mathematics more intuitive. Furthermore, we added a few comparison points from the deep neural networks for segmentation, although there are very few truly unsupervised models, as most models require some form of supervision like human-reported contours or segments.\n\nAdditionally, we want to clarify two points here:\n\nFirst, we believe that there was some misunderstanding regarding the structure of our model. The features we compute are not “connectivity of pixels in a local image patch”, but filters of a shallow convolutional neural network which computes the maps, which we model with the Markov random field model. In other words: We first compute these feature maps with a DNN architecture and model the output feature maps using an MRF.\nThis is important, because it means that our models are deep neural networks trained with contrastive learning, just as the reviewer asks for as a potentially better alternative. Also, this means that our approach is applicable to much deeper architectures for computing the feature maps than we used here and definitely different from the classical computer vision approaches.\n\nSecond, we did reference alternative approaches in our introduction and discussion already, just not in a separate section on related work. We happily included a few references to unsupervised learning for segmentation in deep neural networks there, together with literature the other reviewers mentioned. We want to emphasize that approaches that are truly unsupervised, i.e. do not use any segmented data for training, are rare. 
Most references we found that follow the approach of “unsupervised segmentation models that can be adjusted to the BSDS500 contour detection task” as suggested by the reviewer require human contours to train the adjustment and read-out mechanisms. \n\nWe hope that these comments improve clarity for both the reviewer and future readers and that the reviewer might reconsider their recommendation for rejection.", " We thank the reviewers for their insightful comments and the overall positive evaluation of our manuscript! It seems that the one reviewer's negative evaluation is largely due to some misunderstandings, which we hope our responses will rectify.\n\nWe were particularly happy that the reviewers highlighted that we “do a good job of demonstrating links to the human visual system” and that we have “a real concern about visual perception”, as understanding human vision is our final goal. Additionally, we appreciate that the reviewers evaluate the performance of our model and paper positively despite not being SOTA, because achieving SOTA in our case would most likely involve a level of tuning that is incompatible with the claim that the model learns without supervision.\n\nMost of the concerns were about some aspects of our model presentation being unclear and additional reference points we should relate our model to. To address these points, we decided to remove Figure 2 from our manuscript, as two reviewers felt it was insufficiently explained to be useful. We had included it because it was a helpful illustration in presentations we gave on this work, but for the written article it seems that we cannot give sufficient context to make it useful.\n\nRemoving the figure allowed us to discuss most points the reviewers raised in the manuscript as well. In particular, we are now able to give some additional clarification on the differences we observe between the position and factor losses, add some discussion on previous unsupervised deep neural network models and previous work on texture segmentation, and some more hints to prevent misunderstandings of our architecture as we believe happened for one of the reviewers.\n\nWe believe these changes improve the clarity of our submission and answer the reviewers' comments, as we discuss in direct replies to each of them.\n", " The paper proposes an unsupervised model for segmentation and contour detection. The unsupervised learning is based on training a Markov Random Field model to find new feature maps. The MRF is constructed from the connectivity of pixels in a local image patch and Gaussian responses (called factors). The feature maps are inserted into shallow neural networks, and ideal features are learned via a contrastive loss based on noisy and correct locations of the pixels and Gaussian factors, as well as targets which can then be inserted into feedforward DNNs. Strengths: \n•\tI agree with the authors that more research on grouping mechanisms for neural network models is valuable and can contribute to improved CNNs for downstream visual recognition tasks. \n\nWeaknesses: \n•\tThe paper is hard to follow, and more intuitive explanations on the mathematical derivations are needed. Figure captions are lacking, and require additional explanations and legends (e.g., explain the colors in Fig. 2). Fig. 1 and 2 did not contribute much to my understanding, and I had to read the text a few times instead. \n•\tAt the end of the day the model proposes a method to learn features for detecting boundaries, which is an old computer vision task. 
Indeed, it uses a new MRF framework and contrastive loss, but it is not clear why not use DNNs with a contrastive loss for doing that, besides that maybe the learned features are more like human vision features. \n•\tThe results of the models are not compared against unsupervised DNN models. I think it is interesting to see such a comparison, e.g., to unsupervised segmentation models that can be adjusted to the BSDS500 contour detection task. \n•\tThere is no review of previous related work, and there is no section for “related work”. Can unsupervised segmentation models be relevant to your work?\n See above. \n\nMinor: \nLine 3: .”..dealt with separately,” -> “ dealt separately” ?\n yes", " * In this work the authors propose a model that can learn visual input features and segmentation without supervision which combines contrastive learning + Markov Random Fields (MRFs) to implement spatial feature prediction for feature maps + image segmentation.\n* Self-supervised learning approach: prediction between locations yields a self-supervised loss to learn feature maps, i.e., how to infer which locations should be grouped.\n* Gaussian Markov Random Fields are used to model pairwise connections among features given feature maps and binary variables that indicate activity between connections. \n* When considering pairwise connections, the four adjacent pixels form the neighbours of position i. The factors of features (point or pairwise) model the strength and influence of a point on its neighbours w.r.t. a feature. \n* There are two contrastive learning objectives defined on the marginal likelihood p(f): 1. Position Loss: optimizes probability of randomly chosen positions from other features + images; 2. Factor Loss: maximizes factor for correct feature vectors relative to random feature vectors sampled from different locations and images. Optimization is done on all parameters of the MRF in parallel via SGD with momentum. \n* Segmentation Inference: p(w_i,j = 1 | f) is the probability that two points are connected. This depends on the two feature vectors f_i & f_j. This is computed by forming a sparse connectivity matrix from: p(w_i,j = 1 | f) / p(w_i,j = 0 | f) (claimed to be an attention-dependent serial process)\n* Features are trained over three models: Identity, Linear, and ResNet. Models are trained on the MS COCO image dataset. The authors provide an analysis of the learned features for the linear and ResNet model. Contours are extracted from the model and compared to the Berkeley (human) Segmentation Database (BSDS500). The authors present recall and precision scores along with other segmentation metrics against a number of other baselines that may have been trained with additional supervision signals (human contour / segmentation data + pretraining on other tasks). \n\nMain claims of the work:\n\n1. Predictions across space aid in segmentation and feature learning without further supervision signals (human contour data or pre-training on other tasks/data).\n2. Models optimized under this approach have similarities to the human visual system (retinotopic visual cortex)\n **Strengths**\n\n* Overall I believe this is a pretty interesting paper, although I'm not deeply familiar with all of the ties to cognitive science and the human visual system. 
However, I think that the authors do a good job of demonstrating links to the human visual system via the contour & segmentation analysis and the feature visualization without any specific supervision toward human visual cognition.\n* The results are clearly demonstrated and show that, even with simple feature modeling, spatial feature prediction modeled by the MRF + contrastive losses achieves reasonable segmentation and contour prediction accuracy, approaching the level of deep neural networks making use of pretraining on other tasks and training on human-reported contours.\n* In the discussion there are some highlights regarding how this method performs object selection and how temporal data might be used to further improve model capacity to detect object structure (the binding problem). Also, the discussion of biological plausibility helps to underpin the relevance of this work. \n* The scores achieved look fairly good despite relatively little tuning and relatively small models, even though optimizing over these metrics wasn't the aim of this research. \n\n**Weaknesses**\n\n* May need more analysis/explanation on the reasons behind why the factor loss performed well vs. the position loss.\n* Sub-section 2.3 may be more appropriately placed under section 3. Are these meant to be baselines? Finally, Fig. 2 appears fairly early in the text relative to this subsection. \n* The authors stated a number of future directions for this work in section 5 including processing video input. It would be helpful to know if we could reasonably expect these models to advance state of the art and how this approach fits into the broader family of models tackling image/video segmentation and representation. \n* Figure 2 D & E were a bit tricky to understand and might benefit from some clarification in the description or main text.\n \nIt was stated that models using the factor loss + neighbourhood of size 20 performed best. Was there a sense of whether further increases in the neighbourhood would yield further improvement in the scores? Were any other losses considered?\n\nIn section 3.2 it's stated: \"... the best performing models seem to be mostly the local averaging models ...\" Do you have any intuition or explanation as to why the best performing models were those that do local averaging?\n\nCould this approach induce good representations for image or video data for downstream tasks or other applications? Was this explored at all?\n\nWould prediction across features in the manner presented be applicable to other input modalities (e.g. language/text) or sensory data (e.g. speech or audio)?\n The authors discuss the limits of their model in the context of low scale and minimal tuning - in effect this is a proof of concept. They also note that the contrastive losses + learning are not optimal and could be modified to achieve better optimization. Further, the model does not include any temporal signal which may also aid in helping to better model probabilistic features. 
", " The authors propose an unsupervised learning method which learns local features from image pixels both with weights that allow to group those features together.\n\nThe contributions are the following :\n- a description of their original model based on Markov Random Fields,\n- explanations about how to trained the proposed models following principle from contrastive learning,\n- a comparison of the learned features under variation of their algorithm (loss, linear/seg),\n- a comparison of the performance of their algorithm on a contour detection task (BSD500). Strengths :\n- modelisation of grouping using a binary weight which is on or off whether the feature are grouped or not,\n- a method to train a seemingly untrainable model (without supervision),\n- multiple variations of their model are compared,\n- model comparison with SOTA algorithms,\n- a real concern about visual perception\n\nWeaknesses :\n- results are not SOTA,\n- non-negligible post-processing to extract image contours\n- learned features are shown but not the local weights given a feature and a location\n\n 1. The issue with the position loss and the trick to circumvent it are obscure to me. Can you elaborate more on the issue and its solution ? I do not really understand the issue and how the proposed trick solves it.\n\n2. I disagree with the fact that the linear models learn Gabor-like features. In particular for the position loss (Figure 3.A), the features are more like *non-local* wave-like features (Fourier).\n\n3. I would like to see some maps of p_ij or w_ij associated to some features to understand how the algorithm has grouped those.\n\n4. A related question is that the post-processing step to turn p_ij into contours makes hard to understand what was learned ? Is the use of spectral clustering transparent ie has the algorithm learned to group contours over any other type of grouping (eg textures) ?\n\nFew additional references about texture-based segmentation :\n- Beck, J., Sutter, A., & Ivry, R. (1987). Spatial frequency channels and perceptual grouping in texture segregation. Computer Vision, Graphics, and Image Processing, 37(2), 299-325.\n- Landy, M. S., & Bergen, J. R. (1991). Texture segregation and orientation gradient. Vision research, 31(4), 679-691.\n- Wolfson, S. S., & Landy, M. S. (1995). Discrimination of orientation-defined texture edges. Vision research, 35(20), 2863-2877.\n- Vacher, J., Launay, C., & Coen-Cagli, R. (2022). Flexibly regularized mixture models and application to image segmentation. Neural Networks, 149, 107-123.\n\nTypos :\nl 63 -> which we can us to\n Limitations are sufficiently assessed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4 ]
[ "_Ffma96J2C", "_6yMEaMQXAS", "R6M35nvMgte", "BsEWyeyawX", "jczdLNkZp-C", "dE2aQadmvJ", "dE2aQadmvJ", "x2E-Q2XCQlc", "x2E-Q2XCQlc", "tb63_P9OTlL", "nips_2022_c7sI8S-YIS_", "nips_2022_c7sI8S-YIS_", "nips_2022_c7sI8S-YIS_", "nips_2022_c7sI8S-YIS_" ]
nips_2022_2dxsDFaESK
Amortized Projection Optimization for Sliced Wasserstein Generative Models
Seeking informative projecting directions has been an important task in utilizing sliced Wasserstein distance in applications. However, finding these directions usually requires an iterative optimization procedure over the space of projecting directions, which is computationally expensive. Moreover, the computational issue is even more severe in deep learning applications, where computing the distance between two mini-batch probability measures is repeated several times. This nested loop has been one of the main challenges that prevent the usage of sliced Wasserstein distances based on good projections in practice. To address this challenge, we propose to utilize the \textit{learning-to-optimize} technique or \textit{amortized optimization} to predict the informative direction of any given two mini-batch probability measures. To the best of our knowledge, this is the first work that bridges amortized optimization and sliced Wasserstein generative models. In particular, we derive linear amortized models, generalized linear amortized models, and non-linear amortized models, which correspond to three types of novel mini-batch losses, named \emph{amortized sliced Wasserstein}. We demonstrate the favorable performance of the proposed sliced losses in deep generative modeling on standard benchmark datasets.
Accept
During the author-reviewer discussions, the authors have addressed most of the concerns raised by the reviewers, leading to original scores being raised. During the reviewer discussions, the disagreement among reviewers about the demonstration of computational benefits was discussed. At this point, the merits of the paper, including the originality of its contribution and the sufficient experimental validation, outweigh the doubts remaining with one of the reviewers. Therefore, the recommendation is to accept this submission. I would like to thank the authors and reviewers for engaging in discussions.
train
[ "U6-y_e6ztzC", "dn4tUqgzYOB", "Nmn9jKpRtf", "3kFABmEVUE", "O1wG5C7FaZz", "BDisx06EQoQY", "N4GMkZVkoBR", "1wH3JElO2-X", "6uaI_p3qP10", "ZZlqA_HR0NZ", "aVawyKWwJZN", "5k8BKoH0RIN", "u0XY8zBl-pm", "HOdqtxHKrWc", "oYxj3bWpmHC", "sZ187ct_GZ_", "fcOM4NJ9mQY", "GGLMUIQ9N96", "ynS7fFW-l1s", "cspeNTOajs" ]
[ "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer hTkP,\n\nWe have addressed your concerns in our responses. Given that the discussion deadline is only a few hours from now and you are the only one that gives a negative score on our paper, we would like to hear your feedback. Please feel free to raise questions if you have other concerns.\n\nBest regards,\n\nAuthors", " Dear Reviewer 3bjP,\n\nWe have addressed your concerns in our responses. Given that the discussion deadline is approaching, we would like to hear your feedback. Please feel free to raise questions if you have other concerns.\n\nBest regards,\n\nAuthors", " Dear Reviewer mdhx,\n\nThank you for your time and feedback. We will revise the size of Fig. 1 and Tabl 1 in the revision.\n\nBest regards,\n\nAuthors", " Thank you for your clear response. As it is already a strong accept, I will keep my score.\n\nI wanted to shortly note that Fig. 1 and Table 1 are slightly to large (which obviously does not impact my decision).", " Dear Reviewer 3bjP,\n\nwe have run experiments on CIFAR10 with the batch size $m=32$, $m=64$ in addition to $m=128$ in the paper. In more detail, we run SW ($L=1000$), Max-SW ($\\eta_2=0.01$, $T_2 =10$), LA-SW, GA-SW, and NA-SW three times. We observe that increasing batch size leads $m$ to better FID scores and IS scores for all methods. This is consistent with the theoretical sample complexity of sliced Wasserstein. For all choices of $m$, A-SW variants are better than SW and Max-SW. When the mini-batch size is small e.g, 32 and 64, GA-SW and NA-SW are better than LA-SW. The reason might be the non-linearity in GA-SW and NA-SW is better than the linearity in LA-SW when having limited data in mini-batches. The FID scores and the IS scores are given below.\n\n|Method |FID | IS|\n--- | --- | ---|\n|SW m=32 |19.56 $\\pm$ 0.41|7.80 $\\pm$ 0.03|\n|Max-SW m=32 |36.11 $\\pm$ 1.45|6.45 $\\pm$ 0.37|\n|LA-SW m=32|19.45 $\\pm$ 0.05|7.91$\\pm$ 0.04|\n|GA-SW m=32|**17.53 $\\pm$ 0.81**|**7.95 $\\pm$ 0.08**|\n|NA-SW m=32|18.84 $\\pm$ 1.02|7.89 $\\pm$ 0.11 |\n--- | --- | ---|\n|SW m=64 |16.02 $\\pm$ 0.87|8.01 $\\pm$ 0.11|\n|Max-SW m=64 |34.33 $\\pm$ 1.22|6.57 $\\pm$ 0.25|\n|LA-SW m=64|15.99 $\\pm$ 0.12|7.98$\\pm$ 0.08|\n|GA-SW m=64|15.22 $\\pm$ 0.31|**8.09 $\\pm$ 0.02**|\n|NA-SW m=64|**15.15$\\pm$ 0.58**|8.08 $\\pm$ 0.10 |\n--- | --- | ---|\n|SW m=128 |14.25 $\\pm$ 0.80 |8.12 $\\pm$ 0.07|\n|Max-SW m=128 |31.33 $\\pm$ 3.02|6.67 $\\pm$ 0.37|\n|LA-SW m=128|**13.21 $\\pm$ 0.69** |8.19 $\\pm$ 0.03|\n|GA-SW m=128|13.64 $\\pm$ 0.11|8.22 $\\pm$ 0.11|\n|NA-SW m=128|14.22 $\\pm$ 0.51|**8.29 $\\pm$ 0.08** |", " Dear Reviewer nJDk,\n\nThank you for your feedback. We will revise our paper based on your comments. Please feel free to raise questions if you want us to clarify anything about our work. \n\nBest regards,", " Dear Reviewer mdhx,\n\nWe have addressed your concerns in our responses. We would like to hear your feedback. Please feel free to raise questions if you have other concerns.\n\nBest regards,\n\nAuthors", " Thank you very much for your response. I keep my (positive) rating unchanged. Let me refer to some points.\n\n**A9**. If independence is not guaranteed, these sets should not be formally called \"bases\" of linear spaces (recall that a basis of a linear space is a set of linearly independent vectors that spans the space). You can simply use the names \"sets\" or \"collections\". Also, thank you for the note on $w_0$, but this is not what I asked for. 
I wanted to know why $w_0$ was not an element of the spanning sets in Definitions 4 and 5 (which, as I supposed, was wrong). However, I see that in the revised version $w_0$ appears in each spanning set, so now it's ok. \n\n**A10**. I wanted to suggest not using the same notation for various objects. I find a discrepancy between distributions a more general notion than a distance/metric between distributions (e.g., OT distance).\n\n**A11**. Ok, but I suggest that, for the reader's convenience, it should be written before the first use (in l. 38).\n\n**A12**. By definition, the support of a probability measure is the largest (closed) set consisting of points for which every open neighborhood has a positive measure.\n", " Dear Reviewers and Chairs,\n\nWe would like to thank the reviewers for their time and feedback. We have answered all questions of the reviewers in the corresponding discussions. Moreover, we have also included the following results (written in blue color) in our revision:\n\n1. We apply amortized optimization to mini-batch projected robust Wasserstein (PRW) to obtain Amortized projected robust Wasserstein ($\mathcal{A}$-PRW). With a slight modification of amortized models, we introduce linear Amortized projected robust Wasserstein $\mathcal{LA}$-PRW, generalized linear Amortized projected robust Wasserstein $\mathcal{GA}$-PRW, and non-linear Amortized projected robust Wasserstein $\mathcal{NA}$-PRW. We refer reviewers to Appendix C for detailed definitions. We conduct experiments on generative models on CIFAR10 to compare $\mathcal{A}$-PRW to mini-batch PRW. Overall, we observe that $\mathcal{A}$-PRW gives a better FID score and IS score than PRW.\n\n2. We run mini-batch SW (L=100,1000,10000), mini-batch Max-SW ($\eta_2 \in$ \{0.001,0.01\} and $T_2 \in$ \{1,10,100\}), $\mathcal{LA}$-SW, $\mathcal{GA}$-SW, and $\mathcal{NA}$-SW five different times on CIFAR10. We have updated the result in Table 1 in the main text. We still observe that $\mathcal{LA}$-SW and $\mathcal{NA}$-SW give the best FID score and IS score, respectively.\n\n3. We have added a paragraph in Appendix G for discussing the limitations of our proposed methods.\n\n4. We have also submitted the code for the new experiments.\n\n5. We have fixed typos and revised the writing based on the suggestions of reviewers in blue color.\n\nWe are looking forward to your feedback.\n\nBest regards,\n\nAuthors\n", " **Q20**: Clarify the motivation and take-home message of the paper: A-SW does not seem to provide computational advantages over mini-batch max-SW but instead provides more accurate results; this might be due to the design of the amortized models, which constrains the shape of the optimal projections, thus regularizes the underlying optimization problem.\n\n**A20**: As answered in $A11$, the main benefit of amortized optimization is avoiding starting each local optimization from scratch. Based on Table 2, when Max-SW uses a lot of update steps, e.g., 100, it is slower than $\mathcal{A}$-variants. The reason that we focus on the computational benefit instead of the performance is that the computational benefit of amortized optimization is well-known. 
The better performance of $\mathcal{A}$-SW in generative modeling is a good effect; however, it might not be guaranteed in other applications.\n\n**Q21**: Add a more extensive discussion on the amortization gap to better motivate the proposed amortized models and the training procedure.\n\n**A21**: We have added a paragraph in Appendix G for discussing the limitations of our work, including the lack of understanding of the amortization gap of the proposed amortized models. \n\n\n**Q22**: Typos…\n\n**A22**: We have fixed all the mentioned typos in blue color for the revision. We really appreciate your feedback.\n", " **Q17**: the amortization gap seems to be an important aspect of amortized optimization and is usually taken into account in the final methodology [1,2], but the authors chose not to study it (l.162-163). Are there any guarantees on whether the optimal projection directions lie in the space parameterized by Definitions 3, 4 or 5?\n\n**A17**: We would like to remark that the nature of the optimization problem in mini-batch Max-SW is different from the conventional ELBO in the VAE literature; hence, it is non-trivial to understand the amortization gap at the moment. Moreover, since the optimal solution of Max-SW has not been investigated rigorously in the literature yet, it is non-trivial to derive guarantees on whether the optimal projection directions lie in the space parameterized by Definitions 3, 4 or 5.\n\n\n**Q18**: A-SW outperforms mini-batch max-SW in terms of FID/IS scores in the image generation problem, and the authors conjecture that mini-batch max-SW performs poorly because it is stuck in a local optimum. What prevents A-SW from suffering from the same issue? This aspect could have also been explored empirically, which relates to my last point (\"No uncertainty measure in the experiments\").\n\n**A18**: Due to limitations of time and hardware, we can only run mini-batch SW (L=100,1000,10000), mini-batch Max-SW ($\eta_2 \in$ \{0.001,0.01\} and $T_2 \in$ \{1,10,100\}), $\mathcal{LA}$-SW, $\mathcal{GA}$-SW, and $\mathcal{NA}$-SW five different times on CIFAR10. We evaluate the FID scores and IS scores on only the last epoch to speed up the experiments; hence, we cannot plot the FID curve and IS curve. In the revision, we have updated the results with error bars in Table 1. According to the table, the relative comparisons of FID scores and IS scores are unchanged. Namely, $\mathcal{LA}$-SW and $\mathcal{NA}$-SW give the best FID score and IS score, respectively, on CIFAR10. Moreover, we observe that Max-SW still gives a poor result, which suggests that the projected gradient ascent of Max-SW leads to non-optimal slicing projections.\n\nAs answered in **A11**, one reason that A-SW can avoid the local optimum better than Max-SW is that A-SW uses an amortized model to predict the optimum. The amortized model is trained on multiple local optimization problems, which are finding the max slice on all pairs of mini-batch distributions. Therefore, the shared information between mini-batches might be one reason that A-SW can yield better results. 
Moreover, we design the amortized model to predict the global optima as a (generalized) linear combination of supports of the two distributions, which might lead to a better landscape for the optimization problem.\n\nWe also would like to refer the reviewer to the new experiments where we compare mini-batch projected robust Wasserstein (PRW), which is the generalization of mini-batch Max-SW, to $\mathcal{A}$-PRW, which is the generalization of $\mathcal{A}$-SW. In more detail, they project the two original distributions to a subspace with dimension $k>1$. The details of PRW and $\mathcal{A}$-PRW are given in Appendix C. By modifying our proposed amortized model slightly, we can derive three variants of $\mathcal{A}$-PRW including $\mathcal{LA}$-PRW, $\mathcal{GA}$-PRW, and $\mathcal{NA}$-PRW. We again conduct experiments on generative models to compare $\mathcal{A}$-PRW variants with PRW with $k\in $\{2,4,16\} on the CIFAR10 dataset. The result is given in Table 4 in Appendix E and the detailed settings are also given in Appendix E. From the table, we observe that $\mathcal{A}$-PRW variants give both better FID scores and IS scores than PRW for all choices of $k$. This further strengthens the performance of amortized optimization to find the best projecting subspace and the best projecting directions.\n\n\n**Q19**: Finding an experiment where the training of mini-batch-max-SW-generator does not get stuck in a bad local optimum, and compare its performance against A-SW. It would also be interesting to see how mini-batch max-SW performs when the number of mini-batches k is equal to 1.\n\n**A19**: Due to the limitation of time and hardware, we have not been able to design such experiments. We will report the experiment as soon as it is available. Regarding the number of mini-batches $k$, it has already been set to 1 for all methods in our experiments.\n", " **Q14**: Unclear conclusion: In the end, the message of this paper is confusing to me: in the introduction, the authors motivate the development of A-SW as a strategy to reduce the computational complexity of mini-batch max-SW, but this aspect is not supported by the computational complexity analysis (l.237-247) nor the experimental section (l.324-326: A-SW is \"comparable to Max-SW ... about the computational memory and the computational time\", or even slower and more memory-intensive when the amortized model is not linear (Table 2)). According to Table 1, the main advantage of using A-SW instead of mini-batch-max-SW in generative models is not the speeding-up of the training, but the improved quality of the results (Table 1). However, it is not clear why this happens, which leads me to the next point (\"Lack of theoretical justification\").\n\n**A14**: Thanks for your comment. $\mathcal{A}$-SW gives the computational complexity of $\mathcal{O}(n\log n)$, which is the same as those of SW and Max-SW, while the memory complexity is comparable to Max-SW as discussed in the paper. From Table 2, when Max-SW uses 10 update steps for finding the max projecting direction, the $\mathcal{LA}$-SW variant is faster than it. When Max-SW uses 100 update steps, it is slower than all $\mathcal{A}$-SW variants. We would like to recall that the computational benefit of amortized optimization is that it does not start local optimization problems from scratch. In particular, optima are predicted from an amortized model, and those models are trained on all local problems. 
That is the reason why amortized optimization can save computational time.\n\nThe reason why $\mathcal{A}$-SW gives better performance than Max-SW is that Max-SW might get stuck at a local optimum in the optimization of finding the max projecting direction. Since Max-SW uses the projected gradient ascent to solve its optimization problem, it is not guaranteed to converge to the global optimum. $\mathcal{A}$-SW introduces an assumption to predict the global optima, which is the (generalized) linear combination of supports of the two distributions. Hence, that prior might lead to a better landscape for the optimization problem. Moreover, the amortized model is shared across all mini-batch distributions, which avoids starting the optimization from the beginning as in Max-SW. As a result, A-SW can find better projections in each mini-batch than Max-SW. \n\n\n**Q15**: Proof of Proposition 2 (Appendix A.2): The conditions guaranteeing the existence of ϕ∗ should be specified.\n\n**A15**: In the revised version of Proposition 2, we already include the additional conditions that the space $\Psi$ is compact and the map $f_{\psi}$ is continuous in terms of $\psi$ to guarantee the existence of $\psi^{*}$. Please refer to Appendix A.2 for the new proof.\n\n\n**Q16**: The parameterization of the amortized models is not clearly motivated. Are the proposed parameterizations standard in amortized optimization (if so, please add some references)? \n\n**A16**: Thanks for your question. Since this is the first work that connects amortized optimization to sliced Wasserstein literature, we believe that our parameterizations of amortized models are original. Our proposed amortized models are motivated by the classical literature on generalized linear models [R.5], and they are among the most natural choices in practice.\n\nFor the linear model, the assumption is that the optimal projecting direction lies on the subspace that is spanned by the basis \{$x_1,\ldots,x_m,y_1,\ldots,y_m,w_0$\} where $X=(x_1,\ldots,x_m)$ and $Y=(y_1,\ldots,y_m) $ are supports of two mini-batch measures, and $w_0$ is the vector of biases of the linear model. Similarly, the generalized linear model assumes that the optimal projecting direction lies in the subspace \{$x_1',\ldots,x_m',y_1',\ldots,y_m',w_0$\} where $X'=(x_1',\ldots,x_m') = g_{\psi_1}(X)$ and $Y'=(y_1',\ldots,y_m')= g_{\psi_1}(Y)$ for some non-linear link function $ g_{\psi_1}(.)$. Moreover, we would like to recall that we can use several architectures of neural networks for the amortized model. In the paper, our non-linear amortized model can be seen as a two-layer MLP. However, a more powerful amortized model leads to higher computational complexity as noticed by the reviewer. In the paper, we recommend the linear amortized model, which is the most efficient model. \n\n[R.5] Plane Answers to Complex Questions, Christensen et al.\n", " **Q8**: I think the presentation (including the English of the paper) needs some improvements. For example, the authors quite often use simplified phrases such as (amortized, sliced, max) Wasserstein, to name the models, distances, or losses (without adding these words, which I find inappropriate), \n\n**A8**: Thank you for your comments. We have revised the paper based on your suggestion. The modifications are written in blue color in the revision.\n\n**Q9**: Do the sets of vectors listed in l. 
192, 202, and 217 indeed form bases of linear subspaces in $\mathbb{R}^d$, or do they only span these spaces (without independence), and why does $w_0$ appear only one time? Moreover, what do you exactly mean by writing \"one-one \"reshape\" mapping\"?\n\n**A9**: The sets of vectors in l. 192, 202, and 217 are not guaranteed to be independent since they are subsampled from the data; hence, the rank of the linear subspace is less than or equal to $2m+1$, where $m$ is the number of samples in mini-batches. The reason why $w_0$ appears only once is that it is the vector of biases of the linear model. The one-one reshape mapping is the mapping that maps a vector of size $dn$ to a matrix of size $d\times n$. This mapping is for a rigorous definition of changing a vector into a matrix.\n\n\n**Q10**: $D$ denotes an arbitrary discrepancy or an OT distance? \n\n**A10**: Yes, we use $D(.,.)$ to denote an arbitrary discrepancy or an OT distance. The estimator in Equation 2 is normally known as the minimum distance estimator [R.3, R.4], while the estimator based on Equation 3 is known as the minimum expected distance estimator [R.3, R.4].\n\n[R.3] On parameter estimation with the Wasserstein distance, Bernton et al.\n\n[R.4] Asymptotic Guarantees for Learning Generative Models with the Sliced-Wasserstein Distance, Nadjahi et al.\n\n**Q11**: what is d?\n\n**A11**: $d$ is the dimension of supports of probability measures, as mentioned in the Notation paragraph at the beginning of page 2.\n\n**Q12**: needs precising: I hope m means the size of the sum of supports of two mini-batch measures\n\n**A12**: It is correct, $m$ is the number of supports of two mini-batch measures.\n\n**Q13**: I would recommend including a separate paragraph to describe the limitations\n\n**A13**: We have added a paragraph for discussing the limitations of our proposed methods in Appendix G. In summary, we have not been able to investigate the amortization gaps of the proposed amortized models since the connection of the optima of Max-SW to the supports of two probability measures has not been well understood yet. Moreover, the design of amortized models requires more engineering to achieve better performance since there is no inductive bias for designing them at the moment. The hardness in designing amortized models is that we need to trade off performance and computational efficiency. We will leave these questions to future work. \n", " **Q5**: Typos, fonts, and sizes…\n\n**A5**: Thank you for your comments. We have fixed all these typos and errors in blue color in the revision based on your suggestions.\n\n\n**Q6**: Could the method be extended to “top-k SW” instead of max-SW? Orthogonality could be enforced by an orthogonality loss via a Lagrangian.\n\n**A6**: In the revision, we introduce Amortized Projected Robust Wasserstein ($\mathcal{A}$-PRW) in Appendix C, which is the application of amortized models to Projected Robust Wasserstein (PRW) [2]. PRW is the generalization of Max-SW that finds the best orthonormal matrix $U \in \mathbb{R}^{d\times k}$ with $k>1$ that can maximize the projected Wasserstein distance between projected $k$-dimensional measures. Similar to Max-SW, PRW can be solved by using projected gradient ascent with the projection operator being the QR decomposition. We refer the reviewer to Appendix C in the revision for more detail. By modifying our proposed amortized model slightly, we can derive three variants of $\mathcal{A}$-PRW including $\mathcal{LA}$-PRW, $\mathcal{GA}$-PRW, and $\mathcal{NA}$-PRW. 
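For illustration, one iteration of this projected update could be sketched as follows (PyTorch-style pseudocode, not our exact implementation; the step size and the routine `projected_wasserstein`, which evaluates the Wasserstein distance between the $k$-dimensional projections, are placeholders we introduce here):

```python
import torch

def qr_projection(U):
    # Map an arbitrary d x k matrix back to an orthonormal matrix:
    # the projection operator of the projected gradient update.
    Q, R = torch.linalg.qr(U)
    return Q * torch.sign(torch.diagonal(R))  # sign fix for uniqueness

def prw_step(U, X, Y, projected_wasserstein, step_size=0.01):
    # One ascent step on the projected Wasserstein objective in U,
    # followed by the QR projection back to orthonormal matrices.
    U = U.clone().requires_grad_(True)
    objective = projected_wasserstein(X @ U, Y @ U)
    objective.backward()
    with torch.no_grad():
        U_new = U + step_size * U.grad  # ascent, since PRW maximizes
    return qr_projection(U_new)
```

The amortized variants replace this inner loop by a single forward pass of the amortized model that predicts $U$ directly from the two mini-batches.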
We again conduct experiments on generative models to compare $\mathcal{A}$-PRW variants with PRW with $k\in $ \{2,4,16\} on the CIFAR10 dataset. The result is given in Table 4 in Appendix E and the detailed settings are also given in Appendix E. From the table, we observe that $\mathcal{A}$-PRW variants give both better FID scores and IS scores than PRW for all choices of $k$. This strengthens the application of amortized optimization in finding the best subspace and projecting directions for comparing probability measures.\n\n**Q7**: What I would have liked to see would be using SW and Max-SW with even larger L / T. It would be interesting to see at which point the methods break even, even if the baselines take a very long time. It would strengthen the paper even further, and with such an evaluation, I would consider raising my score.\n\n**A7**: Due to the limitation of time and our hardware, we cannot run experiments with larger $L/T$ since they either exceed our GPUs’ memory or consume a lot of time. We will try to optimize the memory consumption and report the result in the discussion when it is available.\n", " **Q3**: Would the proposed method still guarantee convergence to the right distribution?\n\n**A3**: Thanks for your question. To our best knowledge, there is no deep generative model that can guarantee convergence in the current literature. The main reason for the lack of theoretical analysis is the usage of mini-batches in deep learning applications [1]. The mini-batch variants of Wasserstein are not a proper metric on probability spaces [R. 2], e.g., mini-batch with Wasserstein distance, Sinkhorn divergence, sliced Wasserstein, max-sliced Wasserstein, and amortized sliced Wasserstein for deep generative models. Despite lacking metricity, the empirical results indicate that using (amortized) sliced Wasserstein still produces a better result than the conventional GAN training. The improvement in terms of performance indicates that the proposed method may potentially guarantee a better estimation of the underlying distribution than GAN. We leave a rigorous investigation of that conjecture for future work.\n\n[R. 2] Learning with minibatch Wasserstein: asymptotic and gradient properties, Fatras et al.\n\n**Q4**: As mentioned in Line 233, the optimization forms a min-max problem. Does it cause training instabilities?\n\n**A4**: Due to limitations of time and hardware, we can only run SW ($L \in $\{100,1000,10000\}), Max-SW (slice learning rate $ \eta_2 \in $\{0.001,0.01\} and $ T_2 \in$ \{1,10,100\}), $\mathcal{LA}$-SW, $\mathcal{GA}$-SW, and $\mathcal{NA}$-SW five different times on CIFAR10. We evaluate the FID scores and IS scores on only the last epoch to speed up the experiments. We have updated the results with error bars in Table 1 in the revision. From the table, we observe that $\mathcal{LA}$-SW and $\mathcal{NA}$-SW give the best FID score and IS score, respectively, on CIFAR10. Moreover, we observe that the results of $\mathcal{A}$-SW losses have relatively small error bars. Overall, the current setting of architectures of neural networks and training procedures is stable for $\mathcal{A}$-SW losses.\n\n**Q5**: Does the batch size affect the performance? In other words, does the proposed approach require significantly larger batch sizes to achieve good performance?\n\n**A5**: The batch size does not affect the relative comparison between $\mathcal{A}$-SW, mini-batch SW, and mini-batch Max-SW. 
However, a bigger batch size leads to a better estimation of the mini-batch distribution to the original measure. Namely, the sample complexity of sliced Wasserstein is $O(m^{-1/2})$ where $m$ is the size of mini-batches. However, a better estimation might not lead to a better FID score and a better IS score since they favor different aspects when measuring the discrepancy between distributions. Due to the limitation of time and hardware, we have not been able to run additional experiments on changing the batch size. We will add the result to the discussion when it is available.\n\n**Q6**: The authors have not adequately addressed the limitations and potential negative social impact of their work. It would be good to add more discussions.\n\n**A6**: Thank you for your feedback. We have added more discussions on the potential negative social impact and limitations of our proposed methods in Appendix G. In summary, amortized sliced Wasserstein losses can be applied to various applications such as generative models, domain adaptation, approximate inference, adversarial attacks, and so on. Due to its wide applicability, it can be used as a component in some applications that do not have a good purpose. Some examples are creating images of people without permission, attacking machine learning systems, and so on.\n", " **Q1**: The proposed approach seems like a direct application of amortized inference to sliced Wasserstein generative models. There are also no extra theoretical insights/analyses provided. The amortized model considered in the paper could be restrictive. It would be good to perform theoretical analyses on how powerful the model family is.\n\n**A1**: Thanks for your comment. We would like to recall that this is the first work that connects amortized optimization to the sliced Wasserstein literature; hence, the design of amortized models is original. To our best knowledge, the understanding of the optimality of max-sliced Wasserstein has not been established yet. Therefore, understanding the amortization gap and designing good amortized models remain open questions. We believe that it will require more effort in engineering to find the best design in the case of the mini-batch max-sliced Wasserstein loss. We leave this investigation to future work.\n\nRegarding our insight into designing amortized models: for the linear model, the assumption is that the optimal projecting direction lies on the subspace that is spanned by the basis \{$x_1,\ldots,x_m,y_1,\ldots,y_m,w_0$\} where $X=(x_1,\ldots,x_m)$ and $Y=(y_1,\ldots,y_m) $ are supports of two mini-batch measures, and $w_0$ is the vector of biases of the linear model. Similarly, the generalized linear model assumes that the optimal projecting direction lies in the subspace \{$x_1',\ldots,x_m',y_1',\ldots,y_m',w_0$\} where $X'=(x_1',\ldots,x_m') = g_{\psi_1}(X)$ and $Y'=(y_1',\ldots,y_m')= g_{\psi_1}(Y)$ for some non-linear link function $ g_{\psi_1}(.)$. The above two amortized models are motivated by the fundamental literature on generalized linear models. Moreover, we would like to recall that we can use several architectures of neural networks for the amortized model. In the paper, our non-linear amortized model can be seen as a two-layer MLP. However, a more powerful amortized model leads to higher computational complexity. As we focus on the efficiency of the model instead of the performance, we only design the amortized models to be as simple as possible. 
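As a rough illustration of the linear case, the amortized model can be sketched as below (PyTorch pseudocode; the exact parameterization in our Definition 3 may differ slightly, and the unit-norm step is just one simple way to keep the predicted direction on the sphere):

```python
import torch

class LinearAmortizedModel(torch.nn.Module):
    # Predicts a projecting direction as a linear combination of the
    # supports of the two mini-batch measures plus a bias vector w0,
    # i.e., a vector in span{x_1, ..., x_m, y_1, ..., y_m, w_0}.
    def __init__(self, m, d):
        super().__init__()
        self.w0 = torch.nn.Parameter(torch.randn(d))
        self.w1 = torch.nn.Parameter(torch.randn(m))  # weights on supports of mu
        self.w2 = torch.nn.Parameter(torch.randn(m))  # weights on supports of nu

    def forward(self, X, Y):
        # X, Y: (m, d) matrices whose rows are the mini-batch supports.
        a = X.t() @ self.w1 + Y.t() @ self.w2 + self.w0
        return a / torch.linalg.norm(a)  # normalize onto the unit sphere
```

The generalized linear and non-linear variants keep the same structure but first pass the supports (or the combined vector) through a small learned non-linearity.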
\n\nFrom the experimental result, we observe that our proposed models have a good computational speed and also provide good performance. We also observe the trade-off between training efficiency and performance of amortized models. In particular, the non-linear amortized model provides the best result on the CelebA dataset; however, its computational time and memory are worse than the linear amortized model (see Table 2 in the main text). Again, this is the trade-off that exists in almost all deep learning applications.\n\n**Q2**: Empirical results are good but not impressive. The improvements in Table 1 are marginal. In Table 2, the memory cost for the proposed approach is also larger than the baselines.\n\n**A2**: Thanks for your comment. We would like to remark that the main comparison that we want to make is between $\mathcal{A}$-SW, SW, and Max-SW. The linear version of $\mathcal{A}$-SW, $\mathcal{LA}$-SW, is better than both SW and Max-SW in terms of performance, memory, and speed. All sliced Wasserstein variants have higher memory than SNGAN; however, their performance is significantly better than that of SNGAN. To our best knowledge, we have achieved the best performance of the SNGAN architecture in generative modeling. As a reference, we would like to refer the reviewer to Table 3 in [R.1] for the current best FID scores of the SNGAN architecture. In our paper, we have achieved a lower (better) FID score. Therefore, we could say that our experimental result is enough to show that $\mathcal{A}$-SW variants help to learn better generative models than SW and prior approaches.\n\nMoreover, our main contribution is connecting the literature on sliced Wasserstein and amortized optimization. $\mathcal{A}$-SW can be used in several other deep learning applications that involve comparing probability measures, such as domain adaptation, approximate inference, adversarial attacks, and so on. Therefore, we believe that the impact of $\mathcal{A}$-SW is promising.\n\n[R.1] Exploiting Chain Rule and Bayes’ Theorem to Compare Probability Distributions, Huangjie Zheng et al., NeurIPS 2021.\n\n", " This work proposes to use amortized optimization to predict the optimal projection direction which can be used for learning Wasserstein generative models. The authors derived a family of amortized models which can be used for the framework. Empirical results show that the proposed approach is able to achieve good performance on benchmark datasets. **Strengths**\n1. The paper is relatively well-written.\n2. The experiments are detailed.\n3. The proposed approach could be interesting for the community. However, learning the projection direction also seems straightforward. \n\n**Weaknesses**\n1. The technical novelty could be limited. The proposed approach seems like a direct application of amortized inference to sliced Wasserstein generative models. There are also no extra theoretical insights/analyses provided.\n2. The amortized model considered in the paper could be restrictive. It would be good to perform theoretical analyses on how powerful the model family is.\n3. Empirical results are good but not impressive. The improvements in Table 1 are marginal. In Table 2, the memory cost for the proposed approach is also larger than the baselines. \n 1. Proposition 1 mentions that $A-SW(\mu, \nu)=0$ does not imply $\mu=\nu$. Would the proposed method still guarantee convergence to the right distribution?\n2. As mentioned in Line 233, the optimization forms a min-max problem. Does it cause training instabilities?\n3. 
Does the batch size affect the performance? In other words, does the proposed approach require significantly larger batch sizes to achieve good performance?\n The authors have not adequately addressed the limitations and potential negative societal impact of their work. It would be good to add more discussions.", " The presented draft proposes a method for estimating the Max-SW by learning the projection leading to the largest possible SW metric. ### Strengths \nGenerative modeling is an important subject in machine learning.\n\nThe paper is exceptionally well written and simple to understand.\nThe proposed method is simple yet very powerful, as it simplifies computation for Wasserstein generative models while improving model performance.\n\nThe background is very well explained. The method section is easy to understand.\n\nThe empirical evaluation is extensive, covering four data sets of different scales, and the results are impressive.\n\nI greatly appreciate the submission of the code, which is well structured.\n\n\n### Weaknesses\n\n#### Miscellaneous Issues\n* The presented paper uses a different font / font size or line distance. As the font / line distance is increased, I will not consider it for the review score, but it has to be fixed for the final version.\n\n#### Typos\n* 35: there have been remained certain problems\n* 85: we make some conclusion. Suggestions: we provide a conclusion.\n* 86: preposition: defer … in -> defer … to \n* 174: measure -> measures\n* 223: leads to three corresponding amortized sliced Wasserstein [a word seems to be missing here], …\n Could the method be extended to “top-k SW” instead of max-SW? \nOrthogonality could be enforced by an orthogonality loss via a Lagrangian.\n The method requires additional memory and model evaluations, but this is extensively discussed in the presented draft.\n\nIn practice with large models, i.e., in the demonstrated experiments, the computational cost of the Wasserstein loss is often small, which means that the utility of the method is limited in some applications. However, due to the favorable asymptotic complexity of the method, it allows learning in new settings which were previously impossible or hardly feasible. Further, even in the demonstrated cases the method achieves superior performance.\n\nWhat I would have liked to see would be using SW and Max-SW with even larger L / T. It would be interesting to see at which point the methods break even, even if the baselines take a very long time. \nIt would strengthen the paper even further, and with such an evaluation, I would consider raising my score.", " In the paper, the authors investigate the application of amortized optimization procedures to find the most distinguishing direction that can be used to calculate the sliced Wasserstein distance between two empirical latent distributions. This leads to the construction of three variants of the amortized sliced Wasserstein model: linear amortized ($\mathcal{LA}$-SW), generalized linear amortized ($\mathcal{GA}$-SW), and non-linear amortized ($\mathcal{NA}$-SW). The authors conduct experiments on the CIFAR10, CelebA(-HQ), and STL10 datasets, which demonstrate the performance of their methods compared to SNGAN and (max-)SW models. They also analyze the computational complexity of the proposed solutions. The proposed method seems to be novel, well-motivated, and supported by sufficient theoretical background. Also, the experimental results show significant improvement over the SOTA techniques. 
Hence, I think this paper may be interesting for the ML community. However, I found some presentation issues (see below for the details). My final opinion may depend on the response to issues (1) and (2).\n\n(1) I think the presentation (including the English of the paper) needs some improvements. For example, the authors quite often use simplified phrases such as (amortized, sliced, max) Wasserstein to name the models, distances, or losses (without adding these words, which I find inappropriate), e.g.:\n\nl. 43: Sliced Wasserstein is defined $\rightarrow$ Sliced Wasserstein distance is defined,\n\nl. 69: named amortized sliced Wasserstein $\rightarrow$ named amortized sliced Wasserstein losses,\n\nl. 141: mini-batch max-sliced Wasserstein $\rightarrow$ mini-batch max-sliced Wasserstein loss,\n\nl. 222-223: leads to three corresponding amortized sliced Wasserstein $\rightarrow$ leads to three corresponding amortized sliced Wasserstein models.\n\n(2) Do the sets of vectors listed in l. 192, 202, and 217 indeed form bases of linear subspaces in $\mathbb{R}^d$, or do they only span these spaces (without independence), and why does $w_0$ appear only once? Moreover, what exactly do you mean by writing \"one-one \"reshape\" mapping\"?\n\n(3) Minor issues:\n\nl. 27: $\phi$ is a parameter of NN (not NN),\n\nl. 23 and 30: $\mathcal{D}(\cdot,\cdot)$ denotes an arbitrary discrepancy or an OT distance?,\n\nl. 38: what is $d$?\n\nl. 38: needs clarification: I hope $m$ means the size of the sum of supports of two mini-batch measures,\n\nl. 91-92: a language issue in this sentence (\"as the product measure which has the support is $m$ random variables follows $\mu$\"?),\n\nl. 127: huge/large,e.g., $\rightarrow$ huge/large, e.g., (missing gap),\n\nl. 133: samples $\rightarrow$ sample,\n\nl. 172 and 184: equation (5)/5 $\rightarrow$ Equation (5),\n\nl. 174: measure $\rightarrow$ measures,\n\nl. 213: the one-one mapping $\rightarrow$ the one-one \"reshape\" mapping,\n\nl. 235: $\ldots$ $\rightarrow$ $, \ldots,$. I would recommend including a separate paragraph to describe limitations (as it was done for potential impact, see Appendix F).", " Modern generative models aim at fitting the probability distribution of the observed data. To this end, a common strategy consists in minimizing a chosen divergence between the observed empirical distribution and a parametric distribution, over the set of parameters. In recent years, models based on the minimization of optimal transport (OT) metrics have attracted significant interest. However, the standard OT metric, namely the Wasserstein distance, suffers from an expensive computational cost and sample complexity in large-scale settings (i.e., when the compared distributions are high-dimensional or supported on a large number of samples). For that reason, using practical alternatives to the traditional Wasserstein distance is more convenient in generative modeling. \n\nDue to its lower computational complexity and favorable theoretical properties, the sliced Wasserstein distance has been successfully integrated within the generative modeling framework. Nevertheless, the sliced Wasserstein distance (SW) is defined as an expectation which is intractable in general. In practice, this distance is thus traditionally approximated with a simple Monte Carlo algorithm: the expectation is replaced by a finite sum over L projection directions. 
Computing the sliced Wasserstein distance then scales linearly with L, meaning that improving the accuracy of the Monte Carlo estimator (by increasing L) leads to higher computational requirements. This is a limitation of SW-based generative algorithms, since they need to compute SW at each training iteration. One solution is to pick \"more informative\" samples in the Monte Carlo strategy, which motivated the formulation of maximum sliced Wasserstein (max-SW): the expectation is replaced by a maximum operator.\n\nIn this paper, the authors argue that the mini-batch formulation of max-SW (mini-batch max-SW, equation (5)) can be computationally expensive when used in generative modeling, and they develop alternative pseudo-metrics inspired by amortized optimization to overcome this problem. Their main contributions are summarized below.\n\n1) This paper introduces a new family of pseudo-metrics, coined \"amortized sliced Wasserstein\" (A-SW), by reformulating the mini-batch max-SW using amortized optimization (Definition 2). In other words, the optimal projection directions found in mini-batch max-SW are approximated with a family of parametric functions (the amortized model).\n\n2) The authors prove that amortized SW are positive, symmetric (Proposition 1) and lower than mini-batch max-SW (Proposition 1).\n\n3) Three instances of amortized SW are proposed by specifying the amortized model (Definitions 3, 4 and 5) and an analysis of their computational complexity as compared to mini-batch-max-SW is provided. \n\n4) Finally, GANs based on A-SW are proposed and compared against SNGAN and (max-)SW-based generators in image generation in terms of training speed, memory and quality of results, on 4 standard datasets (Section 5). According to the empirical results, A-SW is able to produce images of higher quality, at the price of a higher execution time and memory due to the optimization of the amortized model (depending on its parameterization). Strengths:\n- The problem addressed in this paper is adequate for NeurIPS: generative models based on optimal transport metrics, in particular variants of sliced Wasserstein distance, have gained popularity in the machine learning community, and it is important to develop strategies which further reduce their computational requirements.\n- The main idea behind the paper is quite original since, to the best of my knowledge, amortized optimization (which has been previously deployed in other areas such as variational inference and auto-encoders, for example) had not been applied in optimal transport. \n- Overall, I found the paper clearly written and easy to follow. Some typos are reported in \"Questions\".\n\nWeaknesses:\n\nUnclear conclusion: In the end, the message of this paper is confusing to me: in the introduction, the authors motivate the development of A-SW as a strategy to reduce the computational complexity of mini-batch max-SW, but this aspect is not supported by the computational complexity analysis (l.237-247) nor the experimental section (l.324-326: A-SW is \"comparable to Max-SW ... about the computational memory and the computational time\", or even slower and more memory-intensive when the amortized model is not linear (Table 2)). According to Table 1, the main advantage in using A-SW instead of mini-batch-max-SW in generative models is not the speeding-up of the training, but the improved quality of the results (Table 1). 
However, it is not clear why this happens, which leads me to the next point (\"Lack of theoretical justification\").\n\nLack of theoretical justification: the proposed methodology is not theoretically grounded in my opinion, and the empirical analysis is not comprehensive enough to compensate for this lack of insights, which makes me question the performance of the proposed generative algorithm. More specifically,\n1) Proof of Proposition 2 (Appendix A.2): The conditions guaranteeing the existence of $\phi^*$ should be specified.\n2) The parameterization of the amortized models is not clearly motivated. Are the proposed parameterizations standard in amortized optimization (if so, please add some references)? In particular, the amortization gap seems to be an important aspect in amortized optimization and is usually taken into account in the final methodology [1,2], but the authors chose not to study it (l.162-163). Are there any guarantees on whether the optimal projection directions lie in the space parameterized by Definitions 3, 4 or 5? \n3) A-SW outperforms mini-batch max-SW in terms of FID/IS scores in the image generation problem, and the authors conjecture that mini-batch max-SW performs poorly because it is stuck in a local optimum. What prevents A-SW from suffering from the same issue? This aspect could have also been explored empirically, which relates to my last point (\"No uncertainty measure in the experiments\").\n\nNo uncertainty measure in the experiments: In Section 5, if I am not mistaken, generative models have been trained only once on each dataset. In the checklist, the authors specified that they reported error bars, but I cannot find these anywhere. Having multiple runs is essential and can support the conjecture that A-SW-based generative models do not get stuck in a local optimum, or at least less often than mini-batch max-SW generators (I am not fully convinced by this aspect for now).\n\n[1] \"Amortized Inference Regularization\", Shu et al. (NeurIPS 2018)\n\n[2] \"Iterative Amortized Policy Optimization\", Marino et al. (NeurIPS 2021) To address my concerns, I encourage the authors to:\n- Clarify the motivation and take-home message of the paper: A-SW does not seem to provide computational advantages over mini-batch max-SW but instead provides more accurate results; this might be due to the design of the amortized models, which constrains the shape of the optimal projections and thus regularizes the underlying optimization problem.\n- Add a more extensive discussion on the amortization gap to better motivate the proposed amortized models and the training procedure.\n- Further illustrate the superior empirical performance of A-SW by: 1) running the experiments multiple times and reporting the associated error bars (Table 1, Figure 1); 2) finding an experiment where the training of the mini-batch max-SW generator does not get stuck in a bad local optimum, and comparing its performance against A-SW. It would also be interesting to see how mini-batch max-SW performs when the number of mini-batches k is equal to 1. \n\nTypos:\nMain document: l.92, l.263 (unclear sentences); l.102 (definition of $W_p$: the norm should be Euclidean so indexed by 2); equation (2) ($\nu_\phi$ instead of $\nu$); l.131 ($Y_{\phi,i}$ instead of $Y_{\phi_i}$); l.174 (\"two probability measureS\")\n\nAppendix: derivations below l.523 (parenthesis issue, equation repeated twice); l.553; legend of Table 7. 
The limitations of A-SW are discussed in Section 5 and seem to be the computational complexity (which, again, contradicts the original motivation). The potential negative societal impact is discussed briefly in Appendix F." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 5 ]
[ "cspeNTOajs", "fcOM4NJ9mQY", "3kFABmEVUE", "N4GMkZVkoBR", "oYxj3bWpmHC", "1wH3JElO2-X", "GGLMUIQ9N96", "u0XY8zBl-pm", "nips_2022_2dxsDFaESK", "cspeNTOajs", "cspeNTOajs", "cspeNTOajs", "ynS7fFW-l1s", "GGLMUIQ9N96", "fcOM4NJ9mQY", "fcOM4NJ9mQY", "nips_2022_2dxsDFaESK", "nips_2022_2dxsDFaESK", "nips_2022_2dxsDFaESK", "nips_2022_2dxsDFaESK" ]
nips_2022_O4Q39aQFz0Y
Revisiting Sliced Wasserstein on Images: From Vectorization to Convolution
The conventional sliced Wasserstein distance is defined between two probability measures whose realizations are \textit{vectors}. When comparing two probability measures over images, practitioners first need to vectorize the images and then project them to one-dimensional space by matrix multiplication between the sample matrix and the projection matrix. After that, the sliced Wasserstein distance is evaluated by averaging the Wasserstein distances between the corresponding pairs of one-dimensional projected probability measures. However, this approach has two limitations. The first limitation is that the spatial structure of images is not captured efficiently by the vectorization step; therefore, it becomes harder for the subsequent slicing process to gather discrepancy information. The second limitation is memory inefficiency, since each slicing direction is a vector of the same dimension as the images. To address these limitations, we propose novel slicing methods for the sliced Wasserstein distance between probability measures over images that are based on convolution operators. We derive the \emph{convolution sliced Wasserstein} (CSW) distance and its variants by incorporating stride, dilation, and non-linear activation functions into the convolution operators. We investigate the metricity of CSW as well as its sample complexity, its computational complexity, and its connection to conventional sliced Wasserstein distances. Finally, we demonstrate the favorable performance of CSW over the conventional sliced Wasserstein distance in comparing probability measures over images and in training deep generative models on images.
Accept
The paper presents new slicing methods for the Wasserstein distance between probability measures over images, based on convolution operators. In this way, memory requirements can be reduced and locality can be better preserved. Experiments are conducted on generative modeling problems. Reviewers noted that the idea of applying convolution operators to probability measures over images is natural and simple, yet novel, and acknowledged the theoretical and practical results. The rebuttals were in-depth and provided additional clarifications. On the other hand, reviewers noted that CSW only defines a pseudo-metric. Overall, this paper is an interesting contribution to the NeurIPS community and should be accepted.
test
[ "M4J8IJ-1Vxe", "emLl_afR8N3", "8JQjK40-YY0", "YJNnI1PF33J", "tgAbAGLdxN", "E2biCTnkNhm", "Tpc0jojym58", "0RDgSuVH0f6", "BLUJJs_eIr", "IKa2WLX5Zf", "QnHMW9YcBiF", "qGFDo9XS-5", "rev8ApFa_bx", "PHcVTqX0f3B", "AEVbxvl_-tR", "36RtXFup99I", "IbjkUWK8HCU", "TOr2f2X6pa0", "4iXAPbjJZ65", "OpFvwfaPC3", "w1F8zItXlMM", "r1DmC08zhxG", "GBxaSbZ1aIP", "P943yvMEJoF", "iulOICfoZBY", "L0TtYX0PnpD1", "YCOlzlrV9D9", "zg2eKJ0t76R", "pyB9oNb056G", "rmiKRrYxFth", "v6LLf0ykspjA", "A5Tg9lN98yk", "rSB8thXaofs", "1udSDhM9VYM", "at4FObFbYmS", "DKLz2q7Txwt", "B6zcRMxZ8TI", "ms2h1f7_Zp_" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer GwEg,\n\nGiven that we already addressed your concerns, the discussion deadline is only a few hours from now, and you give a negative score on the paper, we would like to hear your feedback on whether our response is sufficient to change your opinion about the paper. Please feel free to raise questions if you have other concerns.\n\nBest regards,\n\nAuthors", " Thank you for your detailed explanations. I have no more questions and would tend to keep the score.", " Dear Reviewer nLU5,\n\nThank you for your quick response,\n\nOur point is that the composition of the **convolution** neural network (non-linearly) that maps the tensor into a vector and then uses SW is a generalized version of convolution sliced Wasserstein. Here, the projection space is over-parametrized and only one instance of that space is used (found by an optimization problem). The thing is that using the **convolution** neural network accepts the input as a distribution over **tensor** implicitly.\n\nBest regards,\n\nAuthors,", " Thanks for the prompt response.\n\nI understood the authors' point is to emphasize that this paper is the first to define SW on the tensor space using convolution.\n\nThe reason why I proposed the previous hypotheses is: if in practice, using a deep network to (non-linearly) map the tensor into a vector then applying conventional SW is good enough, defining SW on tensor space might be a fake problem. \n\nAs a researcher with an application-oriented mindset, it really took me some time to understand the theoretical merit of this paper, though my understanding is still very limited.\n\nAnyway I really appreciate all the discussions.", " Dear Reviewer nLU5,\n\nWe appreciate your time and feedback. We would like to thank you for giving us a chance to clarify the contribution of our work, implementation detail, related works, and discussion. We are looking for work your feedback.\n\nBest regards,\n\nAuthors,\n\n", " Dear Reviewer nLU5,\n\nThank you for the detailed comments about the practical implementation of convolution. The point we want to make is that defining sliced Wasserstein variants on tensors can allow us to have more efficient ways to project distributions into one-dimensional space. Convolution is one example where doing vectorization leads to a worse computational complexity and memory complexity with Toeplitz multiplication. We would like to answer your questions as follow:\n\n**Q1** What is the difference between these convolution steps and the convolution in CSW?\n\n**A1** First, we would like to recall that we follow the setup in [R.2] which gives the most realistic generative images for sliced Wasserstein generative models. The difference between the convolution steps and the convolution in CSW is that the weights of convolution steps are learned by a GAN loss while the weights of the convolution in CSW are sampled randomly. Moreover, the weights of convolution steps do not have any constraints while the weights of the convolution in CSW have the sum of square equals 1. The meaning of the convolution steps is to map original distributions into a feature space (high-dimension) where pushforward distributions can be linearly separated by sliced Wasserstein. 
In contrast, the purpose of the convolution in CSW is to map feature distributions to one dimension, where the Wasserstein distance between two distributions has a closed form.\n\n[R.2] Generative Modeling using the Sliced Wasserstein Distance; Deshpande et al\n\n**Q2** Will the result be better if you use more of these learnable convolution steps (i.e. deeper networks)?\n\n**A2** To the best of our knowledge, the answer to this question has not been investigated yet. We follow the standard setup in [R.2] to compare CSW to SW, despite the fact that the setting is not fair to CSW, since using feature-extraction convolution steps can be considered a more complicated version of CSW. We agree that the practical algorithm has not yet been formulated in a rigorous mathematical framework. However, CSW is the first building block toward explaining it.\n\n\n**Q3** ... using a deep enough network to directly map the images into 1d vectors and apply SW and hopefully get better results.\n\n**A3** Using a deep neural network naively could lead to a high projection complexity (the space of all weights of the neural network); hence, it is impossible to sample projections and apply Monte Carlo estimation. Moreover, without constraints on the weights of the neural network, the statistical estimation rate (sample complexity) cannot be derived. In CSW, we can derive it in Proposition 2 by imposing a norm constraint on the weights of the convolution kernel. To the best of our knowledge, there is no work that has done this before. \n\n**Q4** why not directly apply CSW on the generated images (in this case, the hypothesis is that the randomly sampled convolution kernel weights are better than the learned ones)?\n\n**A4** Sliced Wasserstein variants require a ground metric in the projected space, which is normally the $\mathbb{L}_2$ distance. However, this choice is not good for images, since the geodesic distance between images is much more complicated and is normally unknown [R.1*]. Therefore, in deep generative modeling, the geodesic distance is implicitly learned by using a discriminator. The non-linearity inside the discriminator and its deep architecture can form a complex type of metric learning. Without a discriminator, trained generative models will favor the $\mathbb{L}_2$ distance between images, which produces unrealistic images [R.1*].\n\n[R.1*] Wasserstein GANs Work Because They Fail (to Approximate the Wasserstein Distance), Stanczuk et al.\n\n**Q5** Is there a \"sweet point\" where a proper combination of the learned convolution and the randomly sampled convolution in CSW applied to the images yields the best results?\n\n**A5** This is an interesting question. However, we do not have the answer to this at the moment. We follow the standard setup in [R.2] to compare CSW with SW. To the best of our knowledge, this is the only way to train generative models that produce realistic images. A lot of works also follow this setup [R.2*,R.3*,R.4*]. \n\n[R.2*] Distributional sliced Wasserstein and Applications to Generative modeling; Nguyen et al\n[R.3*] Augmented Sliced Wasserstein Distances, Chen et al\n[R.4*] Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections, Nadjahi et al\n\nOverall, we would like to make the point that **the sliced Wasserstein distance has never been defined on tensor space**; hence, practical training approaches cannot be explained yet. We take the first step toward explaining why the practical training approach works. 
Again, we would like to mention that using convolution steps for feature extraction is not a fair setup for CSW; however, CSW is still better than SW. This reinforces the need to define sliced Wasserstein variants on tensors without doing vectorization.\n\nBest regards,\n\nAuthors\n\n\n", " Dear authors,\n\nI really appreciate the detailed feedback, and it addressed some of my concerns. I apologize: I thought the discussion deadline was the 19th, as shown on my task panel, so I prioritized other tasks. \n\n1. Efficiency\nHow to implement convolution is beyond the scope of this paper. For example, TensorFlow implements conv2d as GEMM (though not Toeplitz):\nhttps://github.com/tensorflow/tensorflow/blob/v2.9.1/tensorflow/python/ops/nn_ops.py#L2238-L2249\n(more specifically, this is a correlation, not a convolution)\n\nIn certain cases, an FFT/IFFT implementation might be more efficient (I think this might be more widely used in communication systems like MIMO), for example:\nhttps://ieeexplore.ieee.org/document/8653873\n\nFor [R.1] Accelerating Deep Convolutional Neural Networks Using Specialized Hardware, Ovtcharov et al., if you look at Table 1, the FPGA only won on power consumption. (You might argue that cuDNN contributed most to the efficiency, and I would agree, as I thought cuDNN implements convolution as four nested loops. Sorry, I could not find its code to back this up.)\n\nAnyway, my point in my initial review is that you can use a single convolution to replace your consecutive linear convolutions. If I understood correctly from the rebuttal, you argued that consecutive convolution with the proposed kernel sampling scheme has some beneficial distributional properties. This seems a valid argument, but I cannot judge the correctness of this claim either, due to my lack of expertise (I don't think I can gain all the necessary knowledge in one or two weeks either). \n\n2. CSW in discriminators.\nI may share a common concern with reviewer GwEg: \n\nbased on the loss function in Appendix F.2, you still need a couple of learnable convolution steps ($T_{\beta_1}$) to obtain the image feature in the **tensor** format (thanks to the authors for pointing out this loss function, as I missed it in my initial review) to compute CSW. What is the difference between these convolution steps and the convolution in CSW? \n\nWill the result be better if you use more of these learnable convolution steps (i.e. deeper networks)? \n\nIf so, we can use a deep enough network to directly map the images into 1d **vectors** and apply SW and hopefully get better results (in this case, the hypothesis is that the learned convolution kernel weights are better than those randomly sampled in CSW). \n\nIf not, then why not directly apply CSW on the generated images (in this case, the hypothesis is that the randomly sampled convolution kernel weights are better than the learned ones)? \n\nOr is there a \"sweet point\" where a proper combination of the learned convolution and the randomly sampled convolution in CSW applied to the images yields the best results?", "Reviewer nLU5,\n \nWe have addressed your concerns in our responses. Given that the discussion deadline is approaching, we would like to hear your feedback. Please feel free to raise questions if you have other concerns.\n\nTo summarize, we have addressed the connection between CSW and DSW. Moreover, we carefully discuss the benefit of using convolution directly on **tensors** compared to using Toeplitz matrix multiplication on **vectors**. 
Namely, the direct computation of convolution has lower memory complexity and time complexity than using Toeplitz matrix multiplication. Also, convolution can be implemented directly in hardware to speed up the computation [R.1,R.2,R.3]. In addition, we have discussed in detail the projection sampling step and the training process of generative modeling.\n\n[R.1] Accelerating Deep Convolutional Neural Networks Using Specialized Hardware, Ovtcharov et al.\n\n[R.2] Hardware Accelerated Convolutional Neural Networks for Synthetic Vision Systems, Farabet et al.\n\n[R.3] A Unified Hardware Architecture for Convolutions and Deconvolutions in CNN, Bai et al\n\nBest regards,\n\nAuthors", " As in the discussion with Reviewer GwEg, we would like to compare the benefit of using convolution directly on the **tensor** space with the equivalent Toeplitz matrix multiplication on the **vector** space.\n\nTheoretically, a 2D convolution can be converted to a matrix multiplication. However, that will create a lot of redundant multiplications by zero, which worsen the computational complexity and the memory complexity. This is also an issue of the vectorization step caused by defining the sliced Wasserstein variant on **vectors**. For example, a tensor of size $c \times d \times d$ ($d$ even) can be mapped to a tensor of size $1 \times d/2 \times d/2$ with a kernel of size $c\times 2 \times 2$ and stride $2$ (independent of the dimension $d$) with time complexity $\mathcal{O}\left(cd^2 \right)$. When using Toeplitz matrix multiplication, the size of the Toeplitz matrix is $cd^2 \times d^2/4$, which leads to a time complexity of $\mathcal{O}\left(\frac{1}{4}cd^4 \right)$. We would like to mention that convolution can be implemented directly in hardware [R.1,R.2,R.3].\n\n[R.1] Accelerating Deep Convolutional Neural Networks Using Specialized Hardware, Ovtcharov et al.\n\n[R.2] Hardware Accelerated Convolutional Neural Networks for Synthetic Vision Systems, Farabet et al.\n\n[R.3] A Unified Hardware Architecture for Convolutions and Deconvolutions in CNN, Bai et al\n\nBest regards,\n\nAuthors,", "Dear Reviewer GwEg, we would like to give an example demonstrating the benefit of using convolution directly without doing vectorization.\n\nFor example, a tensor of size $c \times d \times d$ ($d$ even) can be mapped to a tensor of size $1 \times d/2 \times d/2$ with a kernel of size $c\times 2 \times 2$ and stride $2$ (independent of the dimension $d$) with time complexity $\mathcal{O}\left(cd^2 \right)$. When using Toeplitz matrix multiplication, the size of the Toeplitz matrix is $cd^2 \times d^2/4$, which leads to a time complexity of $\mathcal{O}\left(\frac{1}{4}c d^4 \right)$.\n\nBest regards,", "Thank you for your response.\n\nTheoretically, a 2D convolution can be converted to a matrix multiplication. However, that will create a lot of redundant multiplications by zero, which worsen the computational complexity and the memory complexity. This is also an issue of the vectorization step caused by defining the sliced Wasserstein variant on **vectors**. We would like to recall that convolution can be implemented directly in hardware [R.1,R.2,R.3]. \n\nAgain, we would like to recall that the work [23] did not define the space of projections rigorously. Hence, the neural network variant in [23] cannot be analyzed in terms of sample complexity, which controls the statistical estimation rate. 
Therefore, it is impossible to verify whether the neural network variant in [23] can avoid the curse of dimensionality or not. In contrast, CSW can be proved to escape the curse of dimensionality, as in Proposition 2 in the paper. Overall, we believe that the view of moving the conventional definition from **vector** space to **tensor** space is new and subtle. Please feel free to ask more questions. We appreciate the time and effort of the reviewer in clarifying our paper.\n\n[R.1] Accelerating Deep Convolutional Neural Networks Using Specialized Hardware, Ovtcharov et al.\n\n[R.2] Hardware Accelerated Convolutional Neural Networks for Synthetic Vision Systems, Farabet et al.\n\n[R.3] A Unified Hardware Architecture for Convolutions and Deconvolutions in CNN, Bai et al\n\nBest regards,\n\nAuthors,", "Thanks for the reminder about the multi-layer convolutions of the proposed method. However, my original question has not been resolved yet. \nAs a simple example, a matrix multiplied by a vector can easily implement a 2D convolution via \"conventional lexicographic ordering\" of the vector (so the matrix will be sparse). In that sense, there is no theoretical issue in performing 2D convolutions on vectors (see the following example: https://stackoverflow.com/questions/16798888/2-d-convolution-as-a-matrix-matrix-multiplication). Thus, unlike the authors' argument above, I cannot see any problem for the work of [23] to be applicable to 2D convolutions as claimed in that work. Could you respond to this?", " As in the discussions with Reviewers GwEg and nLU5, we would like to clarify the contribution of CSW compared to the neural network version of generalized sliced Wasserstein [23] and distributional sliced Wasserstein (DSW) [R.8].\n\n[23] Generalized sliced Wasserstein distances; Kolouri et al\n\n[R.8] Distributional sliced Wasserstein and Applications to Generative modeling; Nguyen et al\n\n### **Comparison to the neural network version of generalized sliced Wasserstein**\n\n1. The authors in [23] did not define the space of projections rigorously for the neural network, which makes the study of sample complexity impossible. In contrast, we rigorously define the projection space of CSW, which is the set of all convolution kernels whose squared $\ell_2$ norm equals 1. This rigorous definition helps us derive the sample complexity of CSW in Proposition 2.\n\n2. The projection complexity of a general neural-network-based projection is high. For example, the authors in [23] propose to use a multi-layer perceptron (MLP) with Leaky-ReLU activations, which leads to a very large number of parameters for the projection space, namely the space of all possible values of the MLP weights. Therefore, the computational complexity and projection complexity are huge compared to the conventional sliced Wasserstein. In contrast, our CSW has lower computational complexity and projection complexity than the conventional SW due to the lightweight convolution kernel. In [23], the authors must use the max-slice variant (Max-GSW-NN) to save memory; however, they conducted experiments only on a gradient flow application, which is not very large-scale. Moreover, the authors in [23] did not publish the code for that application (https://github.com/kimiandj/gsw). In the deep generative modeling setting, the performance of Max-GSW-NN in terms of generative quality is only slightly better than SW, while its computational time is much slower [R.8]. 
In contrast, CSW is better than SW while being faster.\n\n[R.8] Distributional sliced Wasserstein and Applications to Generative modeling; Nguyen et al\n\n3. Again, the most important contribution is that GSW-NN is still defined on **vector** space. This is the reason why the authors in [23] propose to use MLPs instead of convolutional neural networks. In contrast, we are the first to define a sliced Wasserstein variant between distributions over **tensors** or images. We would like to recall that one of our contributions is changing the way of defining and interpreting sliced Wasserstein variants on different types of data. It is not just about choosing the architecture of the neural network. The most crucial message that we want to convey is incorporating geometry into the design of slicing operators for different types of data.\n\n4. The generalized sliced Wasserstein distance is defined with a general slicing function $f_\theta$. Hence, every slicing function can be considered a special case of $f_\theta$. However, designing an efficient parameterization of $f_\theta$ is a challenge that has not been solved, and that challenge has prevented the application of GSW-NN in practice. In this paper, we take the first step toward shedding light on the \"black-box\" $f_\theta$ and making it usable in large-scale applications.\n\n### **Comparison to distributional sliced Wasserstein [R.8] (DSW)**\n\nAs in the discussion with Reviewer nLU5, consecutive convolution operations can be replaced with a single convolution; however, the distribution of that \"single convolution\" can be complicated. Therefore, it is impossible to sample from that distribution directly. To ease the ensuing discussion, we refer to the \"single convolution\" as the aggregated projecting direction variable. CSW can be seen as a special case of distributional sliced Wasserstein (DSW) [R.8] where the distribution over the aggregated projecting direction is not uniform and is not supported on the unit hypersphere. Also, the distribution over the aggregated projecting direction is regularized implicitly by the architecture, while DSW uses a loss regularizer. We would like to recall that another benefit of using convolution is saving computation and memory.\n\n[R.8] Distributional sliced Wasserstein and Applications to Generative modeling; Nguyen et al\n\n\nBest regards,\n\nAuthors,", "\nAs in the discussion with Reviewer nLU5, consecutive convolution operations can be replaced with a single convolution; however, the distribution of that \"single convolution\" can be complicated. Therefore, it is impossible to sample from that distribution directly. To ease the ensuing discussion, we refer to the \"single convolution\" as the aggregated projecting direction variable. CSW can be seen as a special case of distributional sliced Wasserstein (DSW) [R.8] where the distribution over the aggregated projecting direction is not uniform and is not supported on the unit hypersphere. Also, the distribution over the aggregated projecting direction is regularized implicitly by the architecture, while DSW uses a loss regularizer. We would like to recall that another benefit of using convolution is saving computation and memory.\n\n[R.8] Distributional sliced Wasserstein and Applications to Generative modeling; Nguyen et al", "Thank you for your quick response.\n\nConcretely, the work of [23] cannot use the 2D convolution operator since it is defined on vectors. Moreover, we would like to recall that our work uses multiple layers of convolution instead of only one layer. \n\nWe will add this discussion to the revision of our paper. 
\n\nBest regards,\n\nAuthors,", "Thanks for your quick replies! Could you add some comments/responses on the following logic: \"the proposed method in this manuscript can be seen as a special case of the work of [23] with a single-layer convolutional neural network and linear activations (which is a case of leaky ReLU)\"? Your response will be helpful for me to evaluate the theoretical novelty of the proposed method.", "Thank you for your response.\n\nWe have discussed the differences between our work and the work of [23]. We refer the reviewer to the corresponding discussion for detailed comparisons. \n\nBest regards,\n\nAuthors,", "Thank you for your response.\n\nWe will move the discussion into the main text in the revision.\n\nBest regards,", "Thank you for your response. We would like to clarify the contribution of CSW compared to the neural network version of generalized sliced Wasserstein [23].\n\n[23] \"Generalized sliced Wasserstein distances\"\n\n1. The authors in [23] did not define the space of projections rigorously for the neural network, which makes the study of sample complexity impossible. In contrast, we rigorously define the projection space of CSW, which is the set of all convolution kernels whose squared $\ell_2$ norm equals 1. This rigorous definition helps us derive the sample complexity of CSW in Proposition 2.\n\n2. The projection complexity of a general neural-network-based projection is high. For example, the authors in [23] propose to use a multi-layer perceptron (MLP) with Leaky-ReLU activations, which leads to a very large number of parameters for the projection space, namely the space of all possible values of the MLP weights. Therefore, the computational complexity and projection complexity are huge compared to the conventional sliced Wasserstein. In contrast, our CSW has lower computational complexity and projection complexity than the conventional SW due to the lightweight convolution kernel. In [23], the authors must use the max-slice variant (Max-GSW-NN) to save memory; however, they conducted experiments only on a gradient flow application, which is not very large-scale. Moreover, the authors in [23] did not publish the code for that application (https://github.com/kimiandj/gsw). In the deep generative modeling setting, the performance of Max-GSW-NN in terms of generative quality is only slightly better than SW, while its computational time is much slower [R.8]. In contrast, CSW is better than SW while being faster.\n\n[R.8] Distributional sliced Wasserstein and Applications to Generative modeling; Nguyen et al\n\n3. Again, the most important contribution is that GSW-NN is still defined on **vector** space. This is the reason why the authors in [23] propose to use MLPs instead of convolutional neural networks. In contrast, we are the first to define a sliced Wasserstein variant between distributions over **tensors** or images. We would like to recall that one of our contributions is changing the way of defining and interpreting sliced Wasserstein variants on different types of data. It is not just about choosing the architecture of the neural network. The most crucial message that we want to convey is incorporating geometry into the design of slicing operators for different types of data.\n\n4. The generalized sliced Wasserstein distance is defined with a general slicing function $f_\theta$. Hence, every slicing function can be considered a special case of $f_\theta$. 
However, designing an efficient parameterization of $f_\theta$ is a challenge that has not been solved, and that challenge has prevented the application of GSW-NN in practice. In this paper, we take the first step toward shedding light on the \"black-box\" $f_\theta$ and making it usable in large-scale applications.\n\n\nOverall, our contribution is novel in the sliced Wasserstein literature. We thank the reviewer for giving us the opportunity to clarify our work. Please feel free to raise questions if you still have other concerns.\n\nBest regards,\n\nAuthors,", "I would like to thank the authors for their responses. For Q3, please see my comments on \"Response to Reviewer GwEg - Part 1\" and feel free to discuss further if possible to show clear differences between this work and the work of [23].", "I would like to thank the authors for their responses on the issue of \"pseudo-metric\". I think this discussion is worth mentioning in the main text, not just in the Appendix. ", "I would like to thank the authors for their long, detailed responses. Sorry for the late reply, but it took me a while to read and contemplate the manuscript and the responses. While this response clearly revealed that the vectorization-projection step was replaced with convolutions, unfortunately, I found great similarities between this proposed work and the work of [23], \"Generalized sliced Wasserstein distances\". The work of [23] proposed generalized sliced Wasserstein distances with a neural-network-based projection scheme, including convolutional networks with leaky ReLU activations. In other words, the proposed method in this manuscript can be seen as a special case of the work of [23] with a single-layer convolutional neural network and linear activations (which is a case of leaky ReLU). The work of [23] also showed that this neural-network-based projection is a pseudo-metric, which is consistent with Theorem 1 in this manuscript. Thus, I still cannot be convinced that this work contains novel contributions. Could you respond to this comment if possible?", "Dear Reviewer s9sz,\n\nThank you for your positive feedback. We will revise the paper based on your comments. We would like to hear from you if you have any other concerns that you want us to clarify. \n\nBest regards,", "Dear Reviewer V4XA,\n\nWe have addressed your concerns in our responses. We would like to hear your feedback. Please feel free to raise questions if you have other concerns.\n\nBest regards,\n\nAuthors", "Thank you for your responses and explanations. I find the discussion about the pseudo-distance very interesting, and I believe it deserves to be in the paper.", "Dear Reviewer nLU5,\n\nWe would be grateful if you could give us your feedback on our response. We believe that we have addressed all the concerns raised in your review. Please feel free to raise questions if you have other concerns.\n\nBest regards,\n\nAuthors,", "Dear Reviewers and Chairs,\n\nWe would like to thank the reviewers for their time and feedback. We have answered all the questions of the reviewers in the corresponding discussions. Moreover, we have also included the following results (written in blue) in our revision:\n\n1. We introduce max convolution sliced Wasserstein (Max-CSW) variants, which are extensions of max sliced Wasserstein with convolution slicers. Moreover, we also define convolution projected robust Wasserstein (CPRW), which is an extension of projected robust Wasserstein (PRW) that projects measures onto a subspace of dimension $k>1$. 
The definitions of Max-CSW and CPRW are given in Definition 9 and Definition 11 in Appendix E. We compare the Max-CSW variants to Max-SW on the CIFAR10, CelebA, and CelebA-HQ datasets. The results are given in Table 6 in Appendix F.2. From the table, we observe that the Max-CSW-s variant gives the best result on CIFAR10 and CelebA, while Max-CSW-d performs the best on CelebA-HQ. Moreover, we also compare the CPRW-stride variant to PRW in our revision. In particular, we run experiments on CIFAR10 with the subspace dimension $k \in \{2,4,16\}$. For all choices of $k$, CPRW-stride has lower FID scores than PRW. The above results reinforce the claim that using convolution for projecting measures over images leads to more meaningful ways of comparing measures.\n\n2. We have added the results of SW, CSW-b, CSW-s, and CSW-d with $L=100$ and $L=1000$ on CIFAR10, averaged over 5 runs, to Table 1 in the main text. From the table, we still observe the same phenomenon: CSW variants are better than SW in terms of FID scores and IS scores.\n\n3. We have also submitted the code for the new experiments. \n\n4. We have fixed typos and revised the writing based on the suggestions of the reviewers (changes in blue). We have also added a paragraph discussing the limitations of CSW and the questions of the reviewers in Appendix A. \n\nWe are looking forward to your feedback.\n\nBest regards,\n\nAuthors", "We appreciate the reviewer's time and feedback. We would like to answer the questions of the reviewer as follows:\n\n**Q16**: Pseudo distance?\n\n**A16**: Thank you for your comment. Being a pseudo-distance means that CSW does not satisfy the identity of indiscernibles. In more detail, $CSW(\mu,\nu)=0$ does not imply $\mu=\nu$. However, $\mu=\nu$ still leads to $CSW(\mu,\nu)=0$. \n\nCSW is a pseudo-metric on the space of all distributions over tensors, which means we do not assume any structure on the distributions over images. In practice, many empirical investigations show that image datasets exhibit certain geometric group structures (symmetry, rotation invariance, translation invariance, and so on). Therefore, the set of distributions over images might be a strict subset of the set of distributions over tensors. If the convolutional transform is injective on the set of distributions over images, CSW can be a metric on the space of distributions over images. In our applications, we compare the values of sliced Wasserstein and convolution sliced Wasserstein on MNIST digits in Table 5 in Appendix F.1; we find that the values of CSW are close to the values of SW, which can be considered a test of our hypothesis on the metricity of CSW. To the best of our knowledge, there is no formal definition of the space of distributions over images and its properties. Therefore, we will leave this for future work. Moreover, in deep learning applications where mini-batches are used, all optimal transport metrics turn into losses due to subsampling [R.1]. Therefore, the pseudo-metricity of CSW does not matter much in deep learning applications. That partially explains why CSW is still better than SW in our experiments.\n\n[R.1] Learning with minibatch Wasserstein: asymptotic and gradient properties; Fatras et al.\n\n**Q17**: The second question is on the experiments. While the results seem quite convincing, it seems like the experiments (to get, e.g., the FID and the IS) were run only once. I believe it would be more robust and convincing to take a mean over several trainings, for example.\n\n**A17**: Thank you for your comment. 
We have added the results of SW, CSW-b, CSW-s, and CSW-d with $L=100$ and $L=1000$ on CIFAR10, averaged over 5 runs, to Table 1. From the table, we still observe the same phenomenon: CSW variants are better than SW. This strengthens the claim that convolution slicers are better than the conventional slicer in both performance and efficiency. To further verify this, we introduce max convolution sliced Wasserstein (Max-CSW) variants, which are extensions of max sliced Wasserstein with convolution slicers. We also define convolution projected robust Wasserstein (CPRW), which is an extension of projected robust Wasserstein (projecting measures onto a subspace of dimension $k>1$). We refer the reviewer to the definitions of Max-CSW and CPRW in Definition 9 and Definition 11 in Appendix E, and the experimental results in Table 6 in Appendix F.2. We observe that both the Max-CSW variants and a CPRW variant (CPRW-stride) give better performance than the conventional approach with the vectorization step.", " We appreciate the reviewer's time and feedback. We would like to answer the questions of the reviewer as follows:\n\n**Q13**: I am a little confused about the definition of the convolution-base slicer and its variants… what is the definition of these kernels when d=18?\n\n**A13**: Thank you for your comment. A slicer is actually a CNN that maps an image to a one-dimensional scalar; hence, there is flexibility in choosing the kernel size of each layer. In our three proposed slicers, we aim to reduce the dimension by half after each layer. When the dimension of the feature map is not even, we use a kernel size equal to the size of the feature map to reduce the dimension to 1. For example, $d=18=2 \times 9$; therefore, our proposed convolution slicers will have 2 layers. \n\nFor the convolution-base slicer, the first kernel size is $10\times 10$ and the second kernel size is $9\times9$.\nFor the convolution-stride slicer, the first kernel size is $2\times 2$ and the second kernel size is $9\times 9$.\nFor the convolution-dilation slicer, the first kernel size is $2\times 2$ and the second kernel size is $9 \times 9$.\n\nWe have revised the definitions of the number of layers $N$ and of $a$ in the revision as follows. Given the dimension $d$, $N$ is the biggest integer that satisfies $d=2^{N-1}\times a$, where $a$ is also an integer. Hence, $N$ is the number of layers and $a$ is the size of the last kernel. The sizes of the intermediate kernels are unchanged. We have added the updated definition to the revision in blue; please check it for a more formal statement. \n\n**Q14**: The authors fix the stride size s=2 in the definition of the convolution-stride slicer. There may be some trade-off between time consumption and estimation efficiency for different stride sizes s. Some simple analysis may help us recognize it intuitively. I am wondering how the stride size affects the performance of the proposed method? Some discussion and empirical results are needed.\n\n**A14**: Thank you for your comment. As mentioned in A13, the definition of a slicer is as flexible as designing a CNN. In practice, the architecture depends on a lot of factors, including the task, the dataset, and so on. In the paper, we propose three simple variants, which might not be the best choices. We only aim to show that using convolution can easily beat conventional vectorization slicing. Using stride can remove the dependency of the convolution kernel size on the dimension. 
A large stride can help to reduce the dimension faster; however, it might lose some local information. To the best of our knowledge, the effect of the stride size has not been established in the literature yet. Therefore, we leave this question to future work. Due to time and hardware limitations, we have not been able to run experiments on generative modeling with different stride configurations. We will try to add the result to the discussion when it is available.\n\n**Q15**: This paper mainly considers probability measures over images. It would be convincing if the authors could provide other tasks in real data applications.\n\n**A15**: Similar to SW, CSW can be applied to other applications such as autoencoders [R.3], domain adaptation [R.4], and so on. When dealing with images, CSW is a better choice than SW. When dealing with a general type of data, it is not guaranteed that the performance of CSW is better than that of SW. However, CSW will always be more efficient in computation and memory. We would like to recall that CSW is the first step toward incorporating geometry and inductive bias into the sliced Wasserstein distance. Designing variants of SW with geometry-oriented projections for other types of data, such as text, time series, and graphs, will be a very interesting direction. In these scenarios, special architectures can be used, such as Transformers, recurrent neural networks, graph neural networks, and so on. These distances can be used in various applications that deal with probability measures, such as generative modeling, clustering, domain adaptation, adversarial attacks, point cloud reconstruction, and so on. However, there will also be challenges, such as designing the slicing distribution and controlling the projection complexity (the number of parameters of the slicers). We leave these investigations to future work.\n\n[R.3] Sliced Wasserstein Auto-Encoders; Kolouri et al \n\n[R.4] Sliced Wasserstein Discrepancy for Unsupervised Domain Adaptation; Lee et al\n\n\n", "**Q10**: Some training details are missing. What is the loss function? Based on the main text, it seems the network does not need a discriminator, as CSW directly computes the distance between the generated images and real images, and the network is trained to minimize the distance. However, in the supplementary, discriminators are still used (L732-739). If discriminators are used, I am a little confused, as SW GAN [13] used discriminators to map images into 1D vectors and compute SW, which is similar to the nonlinear CSW introduced in this submission, except that the kernels in [13] are learnable while the kernels in this submission are uniformly sampled.\n\n**A10**: In the experiments, we also use a discriminator, which is trained by the loss given in Appendix F.2. The generator is trained by using SW and CSW on the feature space of the discriminator. We agree that this is similar to the idea of nonlinear CSW; however, in this case, the space of projections is over-parametrized and only one projection (slice) is selected by maximizing a training loss. The minimax training can be seen as a max-sliced variant of non-linear CSW with a richer architecture for the slicer. We would like to mention that almost all deep learning applications use sliced Wasserstein on vectorized features of a non-linear convolutional neural network, which can be considered a more complex form of non-linear CSW (e.g., deep generative models [R.2, R.3], deep domain adaptation [R.4]). 
\n\nThe reason we need non-linearity is that using a linear projection with the $\mathbb{L}_2$ ground metric cannot produce good images, since the geodesic metric between images is often non-trivial [R.9]. Therefore, in practice, practitioners need to use the discriminator as a non-linear projection and a form of ground metric learning [R.2, R.10]. As mentioned above, the architecture of the discriminator is very complicated; hence, it is hard to describe practical training in a rigorous formulation based on the current literature. The definitions of CSW and non-linear CSW are the first building blocks toward understanding the training process. Moreover, since the discriminator also uses convolution, it reinforces the claim that we cannot directly use SW with vectorization in applications on images. In the paper, we choose the setting from [R.2], which can produce good generated images despite not being fair to CSW. However, CSW still performs better than SW, which indicates the importance of convolution slicing.\n\n[R.2] Generative Modeling using the Sliced Wasserstein Distance; Deshpande et al \n\n[R.3] Sliced Wasserstein Auto-Encoders; Kolouri et al \n\n[R.4] Sliced Wasserstein Discrepancy for Unsupervised Domain Adaptation; Lee et al\n\n[R.9] Wasserstein GANs Work Because They Fail (to Approximate the Wasserstein Distance); Stanczuk et al\n\n[R.10] Learning Generative Models with Sinkhorn Divergences; Genevay et al\n\n**Q11**: The authors might miss one reference about introducing convolution in Wasserstein to reduce the computational cost: Justin Solomon, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, Leonidas Guibas, Convolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric Domains, Proc. SIGGRAPH 2015.\n\n**A11**: Thank you for the reference; we have cited it in the related work in Appendix A in the revision. In the mentioned paper, convolution is used to find the ground metric of optimal transport based on solving the heat equation. The resulting distance has a complexity of $\mathcal{O}(n^2)$, where $n$ is the number of supports. Our paper uses the convolution operator in a different way, namely as a slicing method (or a dimension reduction method) that maps the original measures to multiple one-dimensional measures. Since the Wasserstein distance in one dimension has a closed form computable in $\mathcal{O}(n\log n)$, our resulting distance costs only $\mathcal{O}(Ln\log n)$, where $L$ is the number of projections.\n\n**Q12**: limitations and potential negative societal impact?\n\n**A12**: We provide the discussion on negative societal impact in Appendix A. Overall, our proposed CSW is a general tool; hence, it could be used in applications that do not have a good purpose. For the limitations of CSW, we mention that CSW is not a metric on the space of all distributions over tensors and it might only be meaningful when used on images. The reason is that the set of distributions over images might be a subset of the set of distributions over tensors that are invariant to rotation, translation, and so on. For general distributions, the superior performance of CSW compared to SW is not guaranteed, and CSW will only be better than SW in terms of computation and memory. We refer the reviewer to **Q2** of Reviewer GwEg for a detailed discussion. We have also added a paragraph to discuss the limitations in Appendix A in the revision.\n
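As an aside on the $\mathcal{O}(n\log n)$ closed form mentioned in **A11**, here is a minimal sketch of the one-dimensional Wasserstein computation that each slice reduces to (PyTorch is assumed, with equal numbers of uniformly weighted support points):\n\n```python
import torch

def wasserstein_1d(u, v, p=2):
    # Closed-form W_p between two 1-D empirical measures with n uniformly
    # weighted supports each: sort both samples and average the p-th power
    # of the coordinate-wise gaps. Sorting dominates, hence O(n log n).
    u_sorted, _ = torch.sort(u)
    v_sorted, _ = torch.sort(v)
    return (u_sorted - v_sorted).abs().pow(p).mean().pow(1.0 / p)
```\n\nAveraging this quantity over $L$ random slices gives the $\mathcal{O}(Ln\log n)$ cost stated above.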
We also would like to mention that we have applied convolution slicing to define new discrepancies: Max convolution sliced Wasserstein (Max-CSW) and Convolution projected robust Wasserstein (CPRW). They are based on max sliced Wasserstein (Max-SW) and projected robust Wasserstein (PRW), which use the conventional vectorization approach. The detailed definitions are given in Definition 9 and Definition 11 in Appendix E. We conducted experiments to compare them with Max-SW and PRW in Table 6 in Appendix F.2 and observed better results for our proposed discrepancies.\n\n**Q7**: In L201, the authors did notice that convolution is a linear operation. Thus consecutive convolution operations can be replaced with a single convolution. Except for the non-linear convolution slicer, what is the purpose of using multiple linear convolution operations? \n\n**A7**: Thank you for a detailed and interesting question. Consecutive convolution operations can be replaced with a single convolution; however, the distribution of that “single convolution” can be complicated. To ease the ensuing discussion, we refer to the “single convolution” as the aggregated projecting direction variable. In contrast to SW, which uses the uniform distribution on the aggregated projecting direction variable and is shown to be redundant and inefficient in high dimensions [R.5, R.7], CSW uses a more complicated and geometry-oriented distribution. Therefore, CSW can be seen as a special case of distributional sliced Wasserstein (DSW) [R.8] where the distribution over the aggregated projecting direction is not uniform and is not supported on the unit hypersphere. Also, the distribution over the aggregated projecting direction is regularized implicitly by the architecture, while DSW uses a loss regularizer. We would like to recall that the other benefit of using convolution is saving computation and memory.\n\n[R.5] Generalized Sliced Wasserstein Distances; Kolouri et al \n\n[R.7] Max-Sliced Wasserstein Distance and its use for GANs; Deshpande et al\n\n[R.8] Distributional sliced Wasserstein and Applications to Generative modeling; Nguyen et al\n\n**Q8**: Moreover, convolution is often implemented as matrix multiplication, which also flattens the image and converts the kernel into a Toeplitz matrix. So the proposed convolutional SW is similar to SW with a special R in Eq. 1?\n\n**A8**: Thanks for your question. As in **A7**, CSW can be seen as a special case of DSW, which is SW with a special R in Eq. 1. Convolution is often implemented as matrix multiplication; however, the Toeplitz matrix of convolution is very sparse, which improves the computational time and memory of the matrix multiplication. Moreover, on special hardware that supports computing convolution directly without converting to the Toeplitz matrix, the computational time of CSW can be improved further.\n\n**Q9**: I wonder how the kernel is determined? From L228-230, it seems the kernel is uniformly sampled from the set $K^{(l)}$. Do we sample once for all the images (that is, using the same set of kernels for all the images), or do we repetitively sample kernels for each image?\n\n**A9**: For each estimation of CSW, the kernels are sampled once. However, in applications where the computation of CSW is repeated over multiple mini-batches, a fresh set of kernels is sampled for each mini-batch. 
We would like to recall that we use group convolution to make sure that $(K_i^{(1)},...,K_i^{(N)})_{i=1}^L$ are independent for the Monte Carlo estimation.\n", " **Q3**: Compare to recent max SW or generalized sliced Wasserstein [23]... Comparing with recent Wasserstein variants and investigating proper (or optimal) convolution operators could improve this manuscript.\n\n**A3**: Thank you for your suggestions. We agree the max-convolutional sliced Wasserstein (Max-CSW) is a great extension. We define Max-SW and its variants in Definition 9 in Appendix E, and conduct experiments to compare it with max sliced Wasserstein on CIFAR10, CelebA, and CelebAHQ. The result is shown in Table 6 in Appendix F.2 in the revision. From the result, we observe that there is always a variant of Max-CSW that is better than Max-SW on CIFAR10, CelebA, and CelebAHQ. Moreover, we would like to mention that, similar to CSW, Max-CSW variants have fewer parameters and are faster than Max-SW. We can also adjust the configuration of the convolution slicer to get better performance. Moreover, we also generalize Max-CSW to convolution projected robust Wasserstein (CPRW) and its variants in Definition 11 in Appendix E. We observe that CPRW-stride is better than the projected robust Wasserstein on CIFAR10 based on the experimental result in Table 6 in Appendix F.2.\n\nAccording to the generalized sliced Wasserstein (GSW), using a circular defining function does not work in practice while the polynomial defining function cannot be computed in high-dimension. Therefore, in Table 6 (before revision) and Table 7 (after revision) in Appendix F.2, we have already compared non-linear CSW to GSW with the Sigmoid defining function. The table indicates that non-linear CSW is better than GSW. We would like to recall that all GSW variants [R.5] use projecting directions that are in the same dimension as the supports of distributions while Non-linear CSW can save memory and computation with convolution operators.\n\n[R.5] Generalized Sliced Wasserstein Distances; Kolouri et al \n\n**Q4**: Moreover, there were many different convolutions defined in the main text and compared in the experiments, the results seem to suggest no clear winner among these convolutions. Thus, the experiments in their current form seem weak. \n\n**A4**: Overall, we observe that the convolution stride is better on almost all datasets such CelebA, CIFAR10, and CelebAHQ. Therefore, we recommend this variant. Moreover, we believe that the distributions on images of different datasets have different geometry structures. Therefore, a specific type of convolution might be preferred by a specific dataset that partially explains why some CSW-variants are better in some datasets while being worse in other datasets. Investigating geometric structures of data that are preferred by each variant of CSW is an interesting research direction and we will leave this direction to future work.\n\n**Q5**: Conventional image processing works have widely used vector representations in their models and tensors (i.e., reshapes) in their implementations. While this manuscript mentioned that \"a probability measure over images should be defined over the space of tensors instead of images\" and \"this extra step does not keep the spatial structures of the supports, which are crucial information of images\", I need more explanation on why these sentences are justified. Even though images are represented as vectors, spatial structures can be considered in the operations on them. 
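To make this per-mini-batch sampling loop concrete, below is a minimal NumPy sketch (our illustration, not the released code) of a Monte Carlo CSW estimate between two mini-batches of images. For readability it collapses the kernel stack $(K^{(1)},...,K^{(N)})$ into a single full-size kernel per slice, which is a simplifying assumption relative to Definition 3:

```python
import numpy as np

def csw_estimate(x, y, num_slices=50, p=2, seed=0):
    """Monte Carlo sketch of a convolution-sliced Wasserstein estimate.

    x, y: two mini-batches of images, arrays of shape [n, c, d, d].
    Simplification: each slicer is one random kernel of shape [c, d, d]
    that maps an image to a single scalar.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(num_slices):
        # Fresh kernel per slice, normalized to unit norm so that it acts
        # like a projection direction.
        k = rng.standard_normal(x.shape[1:])
        k /= np.linalg.norm(k)
        # A full-size kernel with "valid" padding is just a weighted sum,
        # so each image is convolved down to one scalar.
        px = np.tensordot(x, k, axes=([1, 2, 3], [0, 1, 2]))  # shape [n]
        py = np.tensordot(y, k, axes=([1, 2, 3], [0, 1, 2]))
        # Closed-form 1D Wasserstein-p between empirical measures: sort both
        # projections and compare order statistics, an O(n log n) step.
        total += np.mean(np.abs(np.sort(px) - np.sort(py)) ** p)
    return (total / num_slices) ** (1.0 / p)
```

A fresh set of kernels would be drawn every time the estimator is called on a new mini-batch, matching the sampling scheme described above.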
\n\n**A5**: Thanks for your comments. Indeed, there is a typo. "A probability measure over images should be defined over the space of tensors instead of images" should be "a probability measure over images should be defined over the space of tensors instead of vectors". Thank you for pointing out this typo. \n\nAs we clarified in A1, vectorization has two limitations: it destroys the geometry of images, and it requires more memory and computation for the projection. Also, reshaping/vectorization is a one-to-many mapping, since there are different ways to arrange the entries of tensors into vectors. Therefore, we can consider vectorization a redundant operator. Moreover, preserving geometry and spatial structures when doing vectorization is hard. For example, the Vision Transformer [R.6] needs to divide the images into 16x16 patches and then do vectorization. That is one example of the importance of spatial information in images. To the best of our knowledge, there is no simpler choice than using convolution directly on images to preserve the geometry. \n\n[R.6] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale; Dosovitskiy et al\n\n**Q6**: potential negative societal impact?\n\n**A6**: We leave the discussion on negative societal impact in Appendix A. Overall, our proposed CSW is a general tool; hence, it could be used in applications that do not have a good purpose, such as creating people's images without permission or attacking machine learning systems. We have also added a paragraph to discuss the limitations in Appendix A in the revision.\n
When using mini-batches, the Wasserstein distance, the sliced Wasserstein distance, and the convolutional sliced Wasserstein distance all lose their metricity and become losses [R.1]. Therefore, metricity is not the only deciding factor in some applications of sliced Wasserstein, such as deep generative modeling, deep domain adaptation, and so on. This partially explains the better performance of CSW in our deep generative model experiments in Table 1.\n\n3. Regarding the non-linear extension of CSW, we would like to mention that almost all deep learning applications use sliced Wasserstein on vectorized features of a non-linear convolutional neural network, which can be considered a more complex form of non-linear CSW (e.g., deep generative models [R.2], [R.3], deep domain adaptation [R.4]). Therefore, the definitions of CSW and non-linear CSW are the first building blocks to understanding practical approaches and are also the first attempt to incorporate geometry into the sliced Wasserstein distance.\n\n--- Finally, we have added these discussions to the paper in Appendix A in the revision (in blue color).\n\n[R.1] Learning with minibatch Wasserstein: asymptotic and gradient properties; Fatras et al.\n\n[R.2] Generative Modeling using the Sliced Wasserstein Distance; Deshpande et al\n\n[R.3] Sliced Wasserstein Auto-Encoders; Kolouri et al \n\n[R.4] Sliced Wasserstein Discrepancy for Unsupervised Domain Adaptation; Lee et al\n
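For contrast with the mini-batch usage just discussed, here is a rough NumPy sketch (ours, not from the paper or [R.1]) of the conventional vectorization-based mini-batch sliced Wasserstein loss; note that every projection direction must have the full $c \cdot d \cdot d$ dimension, which is exactly the projection memory cost the convolution slicers reduce:

```python
import numpy as np

def minibatch_sw_loss(gen_batch, real_batch, num_dirs=64, p=2, seed=0):
    """Mini-batch sliced Wasserstein "loss" with the usual vectorization step.

    gen_batch, real_batch: [n, c, d, d] image batches; on mini-batches this
    quantity is a loss rather than a metric, as discussed above.
    """
    rng = np.random.default_rng(seed)
    xf = gen_batch.reshape(len(gen_batch), -1)   # the reshape step CSW avoids
    yf = real_batch.reshape(len(real_batch), -1)
    dirs = rng.standard_normal((num_dirs, xf.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    px, py = xf @ dirs.T, yf @ dirs.T            # [n, num_dirs] projections
    # Per-direction closed-form 1D Wasserstein-p via sorting.
    return np.mean(np.abs(np.sort(px, axis=0) - np.sort(py, axis=0)) ** p)
```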
This design helps to preserve the geometry of images and also helps to save computation and memory, as shown in Proposition 3 in Appendix B.\n\n", " This work attempted to extend the sliced Wasserstein distance to the convolution sliced Wasserstein distance by replacing vectors with tensors and Radon transforms with convolution operators, respectively. Then, this work compared its proposed methods with different convolutions (usual convolution, convolution with stride, convolution with dilation) against the original sliced Wasserstein on image generation tasks with various datasets (CIFAR10, CelebA, STL10, CelebA-HQ), claiming to show favorable results. Its theorem showed that this new convolution sliced Wasserstein distance is a pseudo-metric, it is always less than or equal to the max sliced Wasserstein distance, and its expectation is upper bounded by a specific value. - Even though this manuscript claimed that "We derive convolution sliced Wasserstein (CSW) and its variants" in the abstract, CSW was not derived, but was defined in this work (see Definition 5). While this work seems to assume that the distance between "images" is compared with the Wasserstein distance, there are a number of cases where "vectorized feature tensors" of two images are compared with the (sliced) Wasserstein distance (e.g., see [35]) and in that case, this work can be seen as a simplified version of these works. In that sense, it can be seen that there is no novelty in the definitions of this manuscript.\n- While this defined CSW was shown to be a pseudo-metric in Theorem 1, it may be a serious weakness as a loss to be used in the optimization. Will different images with distance 0 be good representations? Unfortunately, there is no discussion on this issue. The non-linear extension of CSW was mentioned in lines 328-332; it does not seem to be meaningful as a metric considering that the linear version of it is a pseudo-metric.\n- While its experiments were showing results on image generation tasks for various datasets, it was only compared with a conventional SW method, not a recent max SW or generalized sliced Wasserstein [23]. Moreover, there were many different convolutions defined in the main text and compared in the experiments, and the results seem to suggest no clear winner among these convolutions. Thus, the experiments in their current form seem weak. Comparing with recent Wasserstein variants and investigating proper (or optimal) convolution operators could improve this manuscript. - Feel free to argue back to the weakness points above.\n- This work argued the need for using tensors instead of vectors in the sliced Wasserstein distance. While it might make sense, it was not easy for me to be convinced by these arguments. Conventional image processing works have widely used vector representations in their models and tensors (i.e., reshapes) in their implementations. While this manuscript mentioned "a probability measure over images should be defined over the space of tensors instead of images" and "this extra step does not keep the spatial structures of the supports, which are crucial information of images", I need more explanation on why these sentences are justified. Even though images are represented as vectors, spatial structures can be considered in the operations on them. In that sense, these arguments seem relatively weak. 
Even though the authors indicated that they discussed the limitations and potential negative societal impact of their work, I was not able to find them.", " This paper proposed the convolutional sliced Wasserstein distance and compared it with the conventional sliced Wasserstein distance. In addition, the authors also introduced a convolution-base slicer, a convolution-stride slicer, and its variant with dilation. The paper provided the details about how the convolutional sliced Wasserstein is calculated, and analyzed several properties. The experiment results on the multiple datasets demonstrate that for the generative tasks, the proposed CSW achieved better performance than SW while using similar computational resources. The paper is well written and provides a lot of details about SW, which is very helpful for readers like me with little SW background.\nThe idea is well motivated and the math seems convincing to me.\n\nI must apologize that I don't have enough expertise to judge the merit of this paper. Below are some of my questions and concerns.\n1. In L201, the authors did notice that convolution is a linear operation. Thus consecutive convolution operations can be replaced with a single convolution. Except for the non-linear convolution slicer, what is the purpose of using multiple linear convolution operations? Moreover, convolution is often implemented as matrix multiplication, which also flattens the image and converts the kernel into a Toeplitz matrix. So the proposed convolutional SW is similar to SW with a special R in Eq. 1?\n\n2. I wonder how the kernel is determined? From L228-230, it seems the kernel is uniformly sampled from the set $K^{(l)}$. Do we sample once for all the images (that is, using the same set of kernels for all the images), or do we repetitively sample kernels for each image?\n\n3. Some training details are missing. What is the loss function? Based on the main script, it seems the network does not need a discriminator, as CSW directly computes the distance between the generated images and real images and the network is trained to minimize the distance. However, in the supplementary, discriminators are still used (L732-739). If discriminators are used, I am a little confused, as SW GAN [13] used discriminators to map images into 1D vectors and compute SW, which is similar to the nonlinear CSW introduced in this submission, except that the kernels in [13] are learnable while the kernels in this submission are uniformly sampled.\n\n4. The authors might miss one reference about introducing convolution in Wasserstein to reduce the computational cost:\nJustin Solomon, Fernando de Goes, Gabriel Peyré, Marco Cuturi, Adrian Butscher, Andy Nguyen, Tao Du, Leonidas Guibas, Convolutional Wasserstein Distances: Efficient Optimal Transportation on Geometric Domains, Proc. SIGGRAPH 2015.\n\nMinor: L126 $R^{c\times \times d}$ should be $R^{c\times d\times d}$.\n\nSee the weakness section.\n\nOverall this paper proposed an interesting idea to better compute the Wasserstein distance between two image sets. However, since I lack the related background knowledge and some experiment details are missing, I cannot judge the merit of this paper. The authors did not provide the discussions or analysis about the limitations and potential negative societal impact (if I did not miss anything). Overall I think it is fine as the paper focuses more on the fundamental SW distance computation. 
See the weakness section for potential technical limitations, for example, the choice of kernels.", " This paper presents a new methodology for comparing two probability measures over images.\nThe key idea of the paper is to apply convolution operators on probability measures over images.\nThe proposed method, named convolution sliced Wasserstein (CSW), makes use of the spatial structure of images and needs less slicing memory. \nThe authors provide the metricity of CSW as well as its sample complexity, its computational complexity, and its connection to the sliced Wasserstein distance.\nBeyond this theoretical contribution, the authors discuss numerical considerations and perform a thorough real data analysis, showing the favorable performance of their method. Applying convolution operators to probability measures over images is a very natural idea. \nThe proposed method is novel and simple to describe.\nAlso, the authors show the efficiency numerically. **Major Comments**\n\n* I am a little confused about the definition of the convolution-base slicer and its variants.\nIn definition 3, when $d$ is even, the last kernel $K^{(N)}$ has dimension $1\times a \times a$.\nHowever, $a=\frac{d}{2^{N-1}}$ and by definition $a$ is an integer.\nIs there any guarantee about this? \nFor example, what is the definition of these kernels when $d=18$?\nIn real data, the dimension $d=32,64,96,128$,\nwhose prime factor decomposition has at most one factor which is not 2.\nBut $a=\frac{d}{2^{N-1}}$ won't be an integer when $d=18=2\times 3\times 3$.\nThis question also applies to the definitions of the other variants of convolution sliced-Wasserstein distances.\n\n* The authors fix the stride size $s=2$ in the definition of the convolution-stride slicer.\nThere may be some trade-off between time consumption and estimation efficiency for different stride sizes $s$.\nSome simple analysis may help us recognize it intuitively.\nI am wondering how the stride size affects the performance of the proposed method. Some discussion and empirical results are needed.\n\n* This paper mainly considers probability measures over images. It would be convincing if the authors could provide other tasks in real data applications.\n\n ", " While the Sliced-Wasserstein distance is widely used to compare distributions on images (which can be seen as tensors), it requires performing a vectorization step using a reshape operation, which is not suited to images and loses spatial structure, while being memory inefficient. In this work, the authors propose to define a Convolutional Sliced-Wasserstein on the set of probabilities over tensors using convolution operations, which are better suited to images. Some properties are derived, such as the sample complexity and the pseudo-distance property. Many experiments are performed on classical image datasets. This paper proposes to use convolutions to project images on the real line in order to use a SW distance between distributions over images. While the convolution idea is not novel since it has been widely used in neural networks for some time, the idea of using it in the context of optimal transport is new and very interesting. Moreover, the results seem quite convincing.\n\nStrengths:\n- Well written and very clear\n- Using convolution to define a new SW distance.\n- Application on different datasets which gives good results\n- Theoretical results\n\nWeaknesses:\n- Only a pseudo-distance? I first have a question on the theoretical results. 
In Theorem 1, it is stated that it is a pseudo-distance and that $CSW_p(\mu,\nu)=0 \nLeftrightarrow \mu=\nu$. I do not find the last statement very clear. I believe that it is meant that $CSW_p(\mu,\nu)=0$ does not imply that $\mu=\nu$. I did not find in the proof of the theorem any argument supporting this statement. Is there any counterexample of $\mu\neq \nu$ such that $CSW_p(\mu,\nu)=0$?\n\nA second question is on the experiments. While the results seem quite convincing, it seems like the experiments (to get, e.g., the FID and the IS) were run only once. I believe it would be more robust and convincing to take a mean over several trainings, for example.\n\nTypos:\n- Line 126: $\mu\mathbb{R}^{c\times \times d}$ Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 2, 4, 4 ]
[ "at4FObFbYmS", "pyB9oNb056G", "YJNnI1PF33J", "E2biCTnkNhm", "Tpc0jojym58", "Tpc0jojym58", "0RDgSuVH0f6", "DKLz2q7Txwt", "nips_2022_O4Q39aQFz0Y", "AEVbxvl_-tR", "qGFDo9XS-5", "AEVbxvl_-tR", "nips_2022_O4Q39aQFz0Y", "36RtXFup99I", "36RtXFup99I", "4iXAPbjJZ65", "OpFvwfaPC3", "w1F8zItXlMM", "r1DmC08zhxG", "A5Tg9lN98yk", "rSB8thXaofs", "1udSDhM9VYM", "iulOICfoZBY", "B6zcRMxZ8TI", "zg2eKJ0t76R", "DKLz2q7Txwt", "nips_2022_O4Q39aQFz0Y", "ms2h1f7_Zp_", "B6zcRMxZ8TI", "DKLz2q7Txwt", "DKLz2q7Txwt", "at4FObFbYmS", "at4FObFbYmS", "at4FObFbYmS", "nips_2022_O4Q39aQFz0Y", "nips_2022_O4Q39aQFz0Y", "nips_2022_O4Q39aQFz0Y", "nips_2022_O4Q39aQFz0Y" ]
nips_2022_r-6Z1SJbCpv
Towards Learning Universal Hyperparameter Optimizers with Transformers
Meta-learning hyperparameter optimization (HPO) algorithms from prior experiments is a promising approach to improve optimization efficiency over objective functions from a similar distribution. However, existing methods are restricted to learning from experiments sharing the same set of hyperparameters. In this paper, we introduce the OptFormer, the first text-based Transformer HPO framework that provides a universal end-to-end interface for jointly learning policy and function prediction when trained on vast tuning data from the wild, such as Google’s Vizier database, one of the world’s largest HPO datasets. Our extensive experiments demonstrate that the OptFormer can simultaneously imitate at least 7 different HPO algorithms, which can be further improved via its function uncertainty estimates. Compared to a Gaussian Process, the OptFormer also learns a robust prior distribution for hyperparameter response functions, and can thereby provide more accurate and better calibrated predictions. This work paves the path to future extensions for training a Transformer-based model as a general HPO optimizer.
Accept
In this work, the authors investigate whether Transformers can be used for hyperparameter optimization. The work is interesting and the authors outline how they frame the problem and solve practical difficulties. The resulting method is shown to be able to learn HPO from historical HPO runs and text-based metadata. Some implementation details are missing and I would encourage the authors to add details when possible (taking into account the reviewers' feedback). The empirical evaluation of the paper contains a sensible set of baselines and ablation studies. It is also appreciated that the authors put extra effort into open sourcing parts of the code.
train
[ "arGI5wuwzdv", "hBFl6_-vJ6", "vuj4zRulnLkz", "mllX7rCmqTT", "7NXnu3TTJI1", "ey56R17-vCN", "wdSze6tiSim", "1L3PCyqqjv9", "c-9IWZ1700p", "AqzpBG-8MR", "wQVTpqnC1Rr", "YvpCk0M4lId", "sB1LLeGtRb" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer,\n\nOnce again, thank you very much for your valuable time spent on our submission and your thoughtful reviews!\nAs the Author-Reviewer discussion period is coming to an end soon, we wanted to check in with you if we have addressed your questions and concerns, and if we provided all information required for making your final evaluation. We would appreciate interacting with you.\n\nThank you again for your efforts!", " Very nice! I upgraded my rating, since these are significant improvements. This is under the assumption that these links will work indefinitely and are included in the paper such that follow up work is much easier. Thank You!", " Thank you for the detailed response. \n\nOpen-Sourcing\n* We have also added the checkpoints for OptFormer-H and OptFormer-B at (https://drive.google.com/drive/folders/1iNtdCj66TbzNeQzFMFgbzKyqZIZiP_ex?usp=sharing), and will continue to improve on open-sourcing the code by the camera-ready date for making the OptFormer more easily runnable. However, we hope that our current best efforts are satisfactory, given the time limit and technical challenges of open-sourcing proprietary code with large internal dependencies.\n\n* For the scoring found in the main paper, we refer the reviewer to the normalization ranges and scheme from the HPO-B paper [1] which we used for the HPO-B results. For transparency, we uploaded all of the results used for normalization and plotting as JSON files in the anonymous drive (https://drive.google.com/drive/folders/1A-B1IW7ZxmGbjNn6tHUknkMH5bP2va82?usp=sharing), and will add all of the extensive numbers including BBOB scores to the Appendix paper.\n\nFor Question 3:\n* You are right that our TS acquisition is different from Thompson sampling in the literature because here, we ignore the correlation of sampled functions across inputs. The effect of our acquisition function is to exploit the posterior mean and use the posterior variance for exploration (combined with input sampling from the prior policy). We will clarify this in the revision.\n\n* Conditioning yi on all sampled yj with j<i will fix the discrepancy and it would be a good idea to test. Nonetheless, now that our model has to predict yi on multiple fascinated yi’s rather than observed real values, it is different from the training setting and may introduce additional modeling errors.\n\n* Given the overall better performance EI over other acquisition functions in our ablation study (Figure 6 in Appendix E.1, all variants are close), we plan to use the more standard EI in place of TS in the main text and include TS together with PI, UCB in ablations. We’ve repeated the main body’s TS ablation studies in Figure 5 using EI and the same conclusion remains.\n\n**References** \\\n[1] HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO based on OpenML (https://arxiv.org/abs/2106.06257) \n", " Thanks for the detailed reply.\n\nOpen Sourcing\nThanks for open-sourcing more of your code. I think this is still not enough, though, and not all you can do. Especially for follow-up work. Still easily possible without uploading large datasets or compromising use data:\n1. Upload a trained model on public data only, e.g. OptFormer-H (TS).\n2. Add numbers on public data to the paper in a table in the appendix for with a public normalization function.\n\nMeta-data ablation\nGood point. I overlooked that at the time.\n\nThanks for all the answers to my questions. Here are my answers to some, numbered by the question order.\n\n2. 
Thank you for clarifying this. I misunderstood this.\n\n4. This is very interesting. I think this is a different (but interesting) acquisition function; I do not think it is Thompson Sampling. At least not what is meant by this in the BO literature. In Thompson Sampling we sample from the distribution over maxima of functions. Let us for example consider a prior that has some fixed noise on all positions eps \in Uniform([0,1]) and consists of simple piece-wise linear functions of the form `[(0,eps),(0.5,0.1+eps),(1,eps)]`. Thompson sampling in the first step would always return 0.5 as this always is the function maximum, but your acquisition function (if I understood it right) does with high probability return something other than 0.5. This is because there is a large likelihood that eps > 0.1 + eps' (for eps and eps' from Uniform([0,1])) but the probability of eps > 0.1 + eps is 0. Do you know what acquisition function it is that you have? I think this would need clarification and motivation in the paper. A good reference for Thompson sampling is the BO book, chapter 7.9. (https://bayesoptbook.com) I think you could theoretically fix this in your setup by conditioning $y_i$ on all $y_j$ with $j<i$.
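A tiny simulation of this example makes the gap concrete (an illustrative sketch; the grid resolution and sample counts are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 101)                      # candidate inputs
g = np.interp(grid, [0.0, 0.5, 1.0], [0.0, 0.1, 0.0])  # tent of height 0.1 at x=0.5

# True Thompson sampling: ONE shared eps per sampled function, then take the
# argmax of that single realization -> x=0.5 always wins.
ts_picks = np.array([grid[np.argmax(g + rng.uniform())] for _ in range(10_000)])

# Decoupled pointwise sampling: an independent eps per candidate input.
pw_picks = np.array([grid[np.argmax(g + rng.uniform(size=grid.size))]
                     for _ in range(10_000)])

print(np.isclose(ts_picks, 0.5).mean())  # 1.0
print(np.isclose(pw_picks, 0.5).mean())  # about 0.11 -- far from always picking 0.5
```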
", " We are very happy that the reviewer considers our experimental results strong and the ablations insightful, and that our work proposes a very new direction! We respond below to the reviewer's concerns and questions. If our responses have satisfied your concerns and you believe our paper is appropriate for publication, we kindly and respectfully ask for our rating to be reconsidered. Thank you very much!\n* Open-sourcing.\n * Please also see our comment in the general response section. Due to privacy and proprietary concerns, it is prohibited (both ethically and legally) to release our internal RealWorldData dataset, and the privacy of our user data can be compromised by releasing the corresponding trained model. However, **we have released the core code at** (https://github.com/neurips2022optformer/optformer_neurips2022) for review and will add any additional implementation details upon request.\n\n * In the paper, we show figures instead of tables of numbers because it is visually easier to tell the difference. We will provide all numbers for future comparison in the final revision.\n\n* Comparison with SOTA methods in RealWorldData and HPO-B\n * We have provided comparisons to multiple competitive standalone (Vizier) and transfer-learning HPO solvers (ABLR, FSBO, HyperBO) in our experiments. We will work on more baseline comparisons, but we think the current set of experiments should suffice to support our claims in the introduction.\n\n* Meta-data ablation\n * As described in Appendix D.3, the data augmentation includes randomly removing all metadata in a training example. Thus the model does indeed see examples with and without metadata during training. Therefore, it is not sufficient to explain the worse performance only from a train/test distribution shift.\n\n## Questions\n* Discretization of scalars\n * Discretization allows DOUBLE and INTEGER parameters to be treated in the same way as metadata and other types of parameters in the unified serialization scheme. A reasonable alternative is to feed the parameter as a continuous input variable, but it is important to use a discretized value for the output target to learn a flexible discrete distribution. Using an L2 loss to regress continuous parameters would suffer when learning multi-modal parameter distributions, which can arise in studies generated by, e.g., BO algorithms.\n\n* Strong performance of using RealWorldData for training (Fig. 5a) when comparing on HPO-B? Transfer from a smaller dataset being better than in-domain training?\n * We believe this might be a misunderstanding. In Fig. 5a, the model trained on all data (OptFormer (TS)) is slightly better than the model trained on in-domain HPO-B (OptFormer-H (TS)), which is again better than the model trained on out-of-domain HPO data RealWorldData (OptFormer-R (TS)) and on out-of-domain black-box data BBOB (OptFormer-B (TS)). This is consistent with the conclusion that both (1) more diverse training data and (2) more relevant data help the model. We will clarify this point.\n\n* "temporal train/test splits" in line 269?\n * All of the training tuning studies were generated before Feb 2020, and the test studies were generated afterwards, up to March 2022. We have included the details in section 6 and appendix C.1 in the revision.\n\n* Thompson Sampling utility function\n * Inspired by the Thompson sampling method in bandit problems, we define our utility function as a sampled function value $y_i$ at a given location $x_i$ from the predictive distribution. It approximates the search for the maximum location in one function realization on a small sampled index subset $\{x_i\}$. More formally, $x^{*} = \arg \max_{x_i} y_i$, with $y_i \sim p(y|x_i, …)$ and $x_i \sim \pi_{prior}(x|m,h)$ for all $i$.\n * We have included the formulation of all three acquisition functions in Section 4.3 of the revision.\n\n* Limitation on the sequence length\n * Thanks for the suggestion. We have included the discussion in section 7 of the revision. More scalable architectures such as Performers [1] would help address that limitation.\n\n**References** \\\n[1] Rethinking Attention with Performers. ICLR 2021.\n\n
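To illustrate the utility function just described, here is a rough Python sketch (ours; `sample_x` and `sample_y` are hypothetical handles to the learned prior policy $\pi_{prior}(x|m,h)$ and the predictive distribution $p(y|x,m,h)$, not the released API):

```python
import numpy as np

def ts_acquisition(model, metadata, history, num_candidates=100, seed=0):
    """Sketch of the Thompson-sampling-style utility described above."""
    rng = np.random.default_rng(seed)
    best_x, best_y = None, -np.inf
    for _ in range(num_candidates):
        x = model.sample_x(metadata, history, rng)     # x_i ~ pi_prior(x|m,h)
        y = model.sample_y(x, metadata, history, rng)  # y_i ~ p(y|x_i,m,h)
        if y > best_y:                                 # keep x* = argmax_i y_i
            best_x, best_y = x, y
    return best_x
```

Swapping `model.sample_x` for uniform sampling over the search space would give the Random Search (TS) variant mentioned elsewhere in this discussion.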
", " Thank you for your encouraging comments! We are glad you find our work interesting and novel, and that it provides an orthogonal approach to existing GP-based hyperparameter transfer methods. Please see our answers below to your concerns and questions.\n\n* Unclear meta knowledge\n * We agree that it is hard to diagnose the internal representation of the learned meta knowledge of the Transformer model. However, we can understand the model's behavior by assessing its output policy. We suspect the model does more than memorize the choices given by training algorithms and instead learns the internal algorithm logic, because we test on holdout functions in Section 6.1 that have different characteristics from the training functions (the full list of functions is given in Appendix C.1). With sufficient training data and a large enough architecture, it should be possible to learn more complex algorithms on larger search dimensions.\n* Biased on closed-set model architectures and tasks (datasets)\n * That's a very good question. The transfer learning approach allows one to perform better over test tasks similar to training tasks, but potentially in turn worse on other tasks. However, we provided a broad range of experiments to evaluate the model's generalization performance in this paper, including the new experiments (Appendix E.5-6) over unseen BBOB functions, NASBENCH-201, and live CIFAR10 learning rate tuning.\n\n * We provided the details of the RealWorldData dataset's collection in Appendix C.1. We split the dataset in temporal order to avoid information leakage. In addition, the users who generated the test tuning experiments only started to use the tuning service *after* all the training studies were generated. We believe those test functions consist of a representative set of functions for machine learning hyperparameter tuning tasks and are non-overlapping with the training functions. \n\n* Ablation study regarding this architecture choice\n * This is a good suggestion. The main motivation of this paper is to provide a universal interface for HPO that can be modeled by a sequence model. The Transformer is a natural choice over RNN architectures such as LSTMs due to the enormous success of transformers in all areas of ML in recent years. We already hypothesize that an LSTM will struggle to learn due to the long sequence length of our problem (e.g., 1000 tokens from 100 steps * 10 parameter dimensions); e.g., [1] applied an LSTM to HPO with fixed input dimensions but found it hard to scale even to a sequence length of 100 without using curriculum learning. That being said, it would still be useful to include an LSTM baseline in our experiments.\n\n## Questions\n* More calibrated results of OptFormer than the GP. ECE comparison between OptFormer and OptFormer (TS)\n * We conjecture OptFormer learns better calibrated results because of the transfer learning setup. As shown in the relationship below, the set of HPO tasks is a subset of all possible blackbox optimization functions. We expect OptFormer learns a better and more specific prior from HPO data than a GP model, which possesses a very general prior assumption on local smoothness that could work broadly for blackbox optimization but maybe not as well over a specific subset of tasks.\n * $\text{Training HPO tasks} \approx \text{Test HPO tasks} \subset \text{HPO tasks} \subset \text{Black-box optimization tasks}$\n\n * Furthermore, OptFormer (TS) is a policy to propose $x$. It uses the learned function prediction ability to rank samples from the prior policy and does not modify its predictive distribution of $y$ given $x$. Thus we cannot compare ECE between OptFormer and OptFormer (TS).\n* Vizier performs slightly better than OptFormer (TS) on the RealWorldData dataset. OptFormer performs much better than GP-UCB.\n * As shown in an ablation study in Appendix E.1, Figure 6, OptFormer (EI) performs even better than Vizier on RealWorldData. Vizier combines GP-UCB with a trust-region method that works more robustly in practice than a vanilla GP-UCB. However, tuning the hyperparameters of GP-UCB to work broadly is tricky in practice. We suspect this is another reason GP-UCB underperforms against Vizier and OptFormer (TS) / (EI). The Vizier and GP-UCB hyperparameters are explained in Appendix B.1.\n\n**References** \\\n[1] Learning to learn without gradient descent by gradient descent. (ICML 2017)\n
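As a side note on the ECE comparison above, one common recipe for a regression-style calibration error looks like the following sketch (ours; the paper's exact evaluation protocol may differ):

```python
import numpy as np

def expected_calibration_error(pred_cdf_at_y, num_bins=20):
    """Toy ECE for predictive distributions over scalar outcomes.

    pred_cdf_at_y: for each test point, the model's CDF evaluated at the
    observed y; for a perfectly calibrated model these values are U[0, 1].
    """
    q = np.asarray(pred_cdf_at_y, dtype=float)
    levels = np.linspace(0.0, 1.0, num_bins + 1)[1:]
    # Compare the empirical coverage at each level with the level itself.
    empirical = np.array([(q <= lv).mean() for lv in levels])
    return float(np.mean(np.abs(empirical - levels)))
```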
", " We really appreciate your positive comments and constructive feedback. Our intention in this work is to take the first step towards universal HPO methods. Below are our responses to your questions.\n\n* Generalization beyond the training data\n * The RealWorldData dataset contains very diverse HPO tasks that, in our opinion, lead to good generalization with comparable or better performance than the Vizier algorithm. Please note that test functions from the RealWorldData dataset were collected after the training subset in time and do not intersect in terms of the users who generated those studies. Those should represent a good set of out-of-domain test functions, as different users tune very different objectives.\n\n * Upon your request, we have conducted additional experiments for out-of-domain functions, including unseen BBOB functions (Appendix E.5), along with NASBENCH-201 and tuning over a CIFAR10 training pipeline (Appendix E.6). Please refer to our general response for details. Extensions to more flexible search spaces with conditional dependencies for general combinatorial spaces such as NAS will be part of our future work.\n \n* Dataset generation\n * In our additional experiments, we find that it is sufficient to train OptFormer from trajectories generated by Vizier on the BBOB and HPO-B datasets (RealWorldData is fixed, without our control over the algorithm). In our paper, we show that a single model can be trained on all the data and perform multiple tasks, but it is not necessary to generate the data using many optimizers. A single optimizer that generates data well should lead to better performance.\n\n* Pre-training models\n * We find the default T5 model architecture works well for our problem. We only modify the loss function with weighting so that the model does not need to predict the separating tokens. The only hyperparameter we tuned on a validation set is the dropout rate. The model is trained from scratch.\n\n## Questions\n* Compute the predictive distribution $p(y|...)$\n * As explained in the "Function prediction" paragraph in Section 4.3, we construct a piecewise uniform distribution on the continuous interval $[0, Q)$ and then map it linearly back to the original function range.\n\n* Random search + Thompson sampling\n * By "Thompson sampling" we mean an acquisition function inspired by the Thompson sampling method in bandit problems. We define it as a sampled function value $y_i$ at a given location $x_i$ from the predictive distribution. It approximates the search for the maximum location in one function realization on a small sampled index subset $\{x_i\}$. In mathematical notation, this corresponds to: \n $x^{*} = \arg \max_{x_i} y_i$, with $y_i \sim p(y|x_i, …)$ and $x_i \sim \pi_{prior}(x|m,h)$ for all $i$.\n\n * The difference between OptFormer (TS) and Random Search (TS) is that for the latter we sample locations $x_i$ from a uniform distribution instead of from the prior policy learned by OptFormer. We have included the formulation of all three acquisition functions in Section 4.3 of the revision.\n\n* Computation budget for model training\n * As explained in Appendix D.2, we trained the model on a 4x4 TPU-v3 slice (16 devices in total) for up to 1M steps. We performed early stopping once the model started to overfit after about 200K steps. It takes about 1.4 days to train for 200K steps.\n\n* Open-source\n * We recently submitted our code at https://github.com/neurips2022optformer/optformer_neurips2022/. We will also work on open-sourcing the terabyte-sized datasets created from public benchmarks. Please also see our comment in the general response section, thank you!", " We thank the reviewer for their comments. Please see our answers to the concerns and questions below.\n\n* Limited novelty\n * We definitely appreciate that applying Transformers to HPO is considered a novel point. However, please note that this is also the first ever method that makes it possible to meta-learn a single HPO model from scratch on a large hyperparameter tuning dataset. 
In particular, it is very nontrivial to develop a serialization scheme for unifying tuning tasks over different search spaces and contents.\nThis is also pointed out by other reviewers:\n * Reviewer **MJPS**: "This paper presents, to the best of my knowledge, the first approach that enables meta-learning across these different dimensions. While I don't think the method is ready for practice yet, the paper marks a first important step towards more universal HPO methods."\n * Reviewer **yjnm**: "It is interesting and novel to learn prior knowledge from text-based metadata."\n* Readability\n * Thanks for pointing this out. Could you please point us to specific examples of our use of "sophisticated terms" to help us improve the paper?\n\n## Questions\n* Normalization in the evaluation metric.\n * It is necessary to normalize function values in order to provide an aggregated metric across test functions (500 in BBOB, 16 in RealWorldData and 84 in HPO-B) of different orders of magnitude in scale. Otherwise, the performance on the test functions with the largest range will dominate the metric. It is a very common practice in the literature; see e.g. [1, 2, 3, 4, 5, 6]. We have clarified it in section 6 of the revision.\n* OptFormer comparison with GP-UCB\n * Please note that all models require priors, learned or hand-tuned, to perform inference. Often, users manually tune the prior of GP models to inject prior knowledge on the local smoothness of the objective landscape by tuning the associated hyperparameters (choice of kernel, length scale, variance, etc.). In contrast, OptFormer takes the meta/transfer-learning approach and learns implicit priors from HPO datasets. We do not claim OptFormer will be better than GP-UCB over all possible black-box optimization tasks, but do claim that OptFormer can learn a better prior for HPO tasks than GPs. Please take the following relationship as a reference: \n * $\text{Training HPO tasks} \approx \text{Test HPO tasks} \subset \text{HPO tasks} \subset \text{Black-box optimization tasks}$\n\n* Open-Sourcing\n * Please also see our general comment in a separate reply. We have released the core code anonymously at https://github.com/neurips2022optformer/optformer_neurips2022. Model training is based on the already open-sourced T5X codebase (https://github.com/google-research/t5x) as explained in Appendix D.2. \n* Full landscape of black box optimization solvers\n * Thanks for the references. In the updated draft, we have appended a comparison between Vizier and BoTorch in Appendix E.7 to demonstrate that Vizier is a competitive algorithm already. Also, in the original draft, we provided comparisons to multiple competitive standalone (Vizier) and transfer-learning HPO solvers (ABLR [7], FSBO [8], HyperBO [9]) in the experiments. We are willing to work on more baseline comparisons, but we believe the current set of experiments should suffice to support our claims in the introduction.\n\n**References** \\\n[1] HPO-B: A Large-Scale Reproducible Benchmark for Black-Box HPO based on OpenML. 
(https://arxiv.org/abs/2106.06257) \\\n[2] Bayesian Optimization is Superior to Random Search for Machine Learning Hyperparameter Tuning: Analysis of the Black-Box Optimization Challenge 2020 (JMLR 2021) \\\n[3] Google Vizier: A Service for Black-Box Optimization (KDD, 2017) \\\n[4] HEBO: Pushing The Limits of Sample-Efficient Hyperparameter Optimisation (JAIR 2021) \\\n[5] Taskset: A Dataset of Optimization Tasks (arXiv 2021) \\\n[6] Frugal Machine Learning (https://arxiv.org/abs/2111.03731) \\\n[7] Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start (NeurIPS 2017) \\\n[8] Few-Shot Bayesian Optimization with Deep Kernel Surrogates (ICLR 2021) \\\n[9] Pre-trained Gaussian processes for Bayesian optimization (https://arxiv.org/abs/2207.03084, code: https://github.com/google-research/hyperbo)
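As a concrete reading of the normalized evaluation metric discussed in this response, here is a short sketch of the per-function rescaling (ours; the exact reference values follow the cited normalization schemes, so treat them as assumptions):

```python
import numpy as np

def normalized_best_so_far(ys, y_rand, y_max):
    """Per-function rescaling behind the aggregate metric.

    ys: objective values of the first t trials on one test function.
    y_rand, y_max: reference values (e.g., random-search performance and the
    best known value for that function), so 0 ~ random and 1 ~ optimum.
    """
    best = np.maximum.accumulate(np.asarray(ys, dtype=float))
    return (best - y_rand) / (y_max - y_rand)

# Curves from test functions with very different scales can then be averaged,
# which is why raw y_i values are not plotted directly.
```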
", " We graciously thank all of the reviewers for their time and work in providing feedback on our paper, which will be very useful in improving its quality. We are also very happy that several reviewers consider our work novel and interesting, supported by **strong experimental results**, with some comments even considering our work **a potential and important path forward in HPO!** Below, we address common questions and concerns:\n\n## Better results from using an Expected Improvement (EI) acquisition function, and additional out-of-domain experiments\n\nWe would like to emphasize that both the TS variant (main paper) and other variants, especially EI (Appendix E.1), of OptFormer obtained comparable or better performance on challenging benchmarks over strong baselines. \n\nUpon the request of reviewer MJPS, we now provide additional experiments (Appendix E.5-6) to compare OptFormer (TS) and (EI) with Vizier on two sets of test functions: (1) hold-out **BBOB** test functions from general black-box optimization (we only compared the imitated policies in Sec 6.1 in the initial submission), and (2) out-of-domain HPO tasks: **NASBench-201** [1] for neural architecture search and tuning a **live pipeline for training ResNet-50 on CIFAR-10** [2].\n\nThis provides evidence that OptFormer can learn robust underlying representations of Bayesian optimization data acquisition strategies.\n\n## Key Message of our paper\nWe would like to respectfully emphasize that the key message of this paper is not to obtain a SOTA HPO method, but rather to provide a new avenue of research. Our OptFormer is the first work that demonstrates the promise of applying large sequence models to take advantage of offline HPO data, and thus opens the door to many more possibilities beyond Bayesian Optimization methods.\n\n## Open Sourcing our Code\nFor this submission we have now anonymously included the code for: \n* Tokenization and preprocessing (https://github.com/neurips2022optformer/optformer_neurips2022/tree/main/converters)\n* Model + policy inference (https://github.com/neurips2022optformer/optformer_neurips2022/tree/main/t5x)\n* Data generation (https://github.com/neurips2022optformer/optformer_neurips2022/tree/main/augmentations)\n \nThe rest of the model training is based on the open-sourced T5X codebase [3] as explained in Appendix D.2. Checkpoints for models trained on public datasets can be found at (https://drive.google.com/drive/folders/1iNtdCj66TbzNeQzFMFgbzKyqZIZiP_ex?usp=sharing).\n\nWe will continue to work on open-sourcing the code and find ways to practically release the terabyte-sized dataset generated from public benchmarks. However, due to privacy and proprietary concerns, we may not release our internal RealWorldData dataset, as the privacy of our user data could be compromised by releasing the corresponding trained Transformer model [4].\n\n**References** \\\n[1] NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search (ICLR 2020) \\\n[2] https://github.com/google/init2winit \\\n[3] https://github.com/google-research/t5x \\\n[4] Extracting Training Data from Large Language Models (https://arxiv.org/abs/2012.07805)\n", " The authors propose using a Transformer to imitate hyper-parameter optimization. The authors claim this is the first work in the area, and claim the proposed method exceeds several classic HPO methods. Strengths:\n1. very comprehensive evaluations.\n2. seemingly reasonable idea.\n\nWeaknesses:\n1. limited novelty: perhaps applying a transformer to HPO could be counted as a novel point, but I feel this is not enough.\n2. The paper is difficult to read; please try to improve the readability. Many places in the paper try to impress the readers with sophisticated terms even on very simple concepts.\n 1. how do you justify that $\max_{i\in\{1:t\}}(y_i-y_{rand})/(y_{max}-y_{rand})$ is a good evaluation metric? Why don't you use the $y_i$ and plot them as a range? \n\n3. Fig. 4 made a few strong claims, especially that OptFormer performs better than GP-UCB. I'm trying to understand the underlying causes. Here is my guess: OptFormer is trained on datasets that potentially contain tasks with distributions similar to those tested in Fig. 4. The advantage of GP-UCB is to start without any priors and gradually approximate the underlying function contours by sampling. Without any prior data, I'm surprised that OptFormer can perform better than GP-UCB. I'd be happy to see if I'm wrong, and it will be compelling if the authors can provide an anonymous link to the code for a quick comparison. (key factor for me to improve the score)\n\n4. The methods used in this paper do not fully capture the landscape of black box optimization solvers today. The authors may find the following repos to be useful. (feel free to use at your discretion, it is just a suggestion)\n\na. https://botorch.org/\n\nb. https://github.com/facebookresearch/nevergrad\n\nc. https://github.com/facebookresearch/LaMCTS\n\nd. https://github.com/uber-research/TuRBO\nThese repos encapsulate several exciting BBO algorithms.\n no limitation found", " The paper describes a new meta-learning approach for hyperparameter optimization (HPO) based on a transformer model. The model is trained on offline generated data that includes the metadata that characterizes the optimization problem, for example the search space and the history of observed trials, i.e., function values and input configurations. At inference time, the model can be combined with HPO policies, such as Thompson sampling or upper confidence bounds, to suggest new hyperparameter configurations. \n### Reason for overall rating\n\nCurrent transfer learning approaches for HPO are limited to a fixed search space and the same underlying machine learning model, and only transfer knowledge across different datasets. This paper presents, to the best of my knowledge, the first approach that enables meta-learning across these different dimensions. 
While I don't think the method is ready for practice yet, the paper marks a first important step towards more universal HPO methods.\n\n### Strengths\n\nThe paper aims to learn a more general meta-learning approach for HPO that generalizes not only across datasets, but also across machine learning methods and search spaces. This, in theory, allows access to a much larger amount of offline data and enables generalization across different domains.\n\nOverall, I found the different parts of the paper, e.g., tokenization, inference and decoding of the model, well motivated and clearly explained.\n\nThe empirical evaluation of the paper contains a sensible set of baselines. Also, the ablation study provides convincing insights into the proposed approach.\n\n### Weaknesses\n\nIt remains a bit unclear how well this model generalizes beyond the training data. For example, what would happen if the method is applied to other problem domains, such as neural architecture search or general gradient-free optimization problems? Similarly, how does the method scale with the dimensionality of the search space?\n\nThe dataset generation seems a bit ad-hoc. Is it really necessary to include trajectories of such a large variety of optimizers, or would it be sufficient to limit to a few state-of-the-art optimizers? This could potentially reduce the dataset size and would allow us to use a smaller architecture.\n\nThe paper could elaborate on the pre-training of the model. For example, how did different design decisions of the network architecture affect downstream performance? How difficult was the pre-training, e.g., did you have to restart from previous checkpoints, etc.? \n\n - Section 6.2: How do you compute the predictive distribution p(y|...)? My understanding is that the transformer only predicts discrete outputs within [0, Q)\n\n- Section 6.4 prior policy: How is Random Search combined with Thompson sampling (Random Search-TS)?\n\n- What was the computational budget to train the transformer model and how long did it train?\n\n- Do you plan to open-source the dataset and the code to reproduce the results? While the method improves across a set of baselines, it does not improve yet over more sophisticated algorithms such as Vizier on real world datasets. I assume that it would also not outperform current state-of-the-art methods that early stop poorly performing configurations, such as Hyperband or BOHB. However, I think this is fine for a research paper, but it would not be sufficient for production.", " The paper studies hyperparameter transfer with metadata in an "open-set" setting -- allowing different configuration spaces across tasks. A transformer-based hyperparameter tuner, namely OptFormer, is proposed to predict policy and response function values (e.g., validation performance) in a sequence-to-sequence training style, where the learned policy maps text-based metadata to pre-discretized hyperparameter configurations. To the best of my knowledge, the proposed method is the first HPO method that learns prior knowledge from collected text-based configurations. Experimental results on one collected real-world dataset and two public benchmarks were provided in terms of policy behavior imitation, response function prediction, and HPO. 
Pros:\n- The proposed hyperparameter prior learning method is orthogonal to existing GP-based hyperparameter transfer methods in two respects: 1) it relaxes the limitation of sharing the same configuration space across different tasks, and 2) the data-driven approach given by training a transformer model significantly improves the efficiency. \n- It is interesting and novel to learn prior knowledge from text-based metadata. The seq-2-seq supervised learning framework is technically sound with proper practical treatments. Moreover, the transformer structure is well motivated to capture both symbolic and numerical manipulation.\n- While some necessary implementation details are missing in the manuscript, the augmented HPO policy with Thompson Sampling provides a good implementation similar to offline RL. \n- Extensive experimental results demonstrate the effectiveness of the learned HPO policy in terms of its well-calibrated predictions and utility performance. \n\nCons:\n- The main concern with this work is the unclear **meta knowledge** learned from text-based metadata. How does the proposed OptFormer actually imitate the other HPO algorithms? Does the transformer simply memorize the choices given by different HPO algorithms? Can the proposed method adapt to more complex algorithms (e.g., hypergradient-based) and large-scale hyperparameters? \n- The learned prior seems at risk of being biased toward closed-set model architectures and tasks (datasets). It remains unclear whether the HPO policy can be generalized to **unseen** tasks. I may have missed something in the appendix; still, it would be helpful to give more details about the split of the training/test set in RealWorldData. A non-overlapping split over the tasks or algorithms would be more convincing. \n- One major technical contribution of this work is to introduce transformers for learning HPO priors. Hence, it is expected to give an ablation study regarding this architecture choice. One baseline based on RNNs (e.g., LSTMs or GRUs) would be useful to validate this point empirically. \n - Table 4 shows more calibrated results for OptFormer than for the GP-based methods. It is well known that GPs can provide well-calibrated uncertainty estimates. Yet, in accordance with Eq. (3-5), it seems like the proposed method just follows the standard supervised training strategy. Any insights into why OptFormer could show a better calibration result? Is the reason the use of the transformer architecture [40]? I'm also curious about the ECE comparison between OptFormer and OptFormer (TS)\n- As shown in Fig. 4, Vizier performs slightly better than OptFormer (TS) on the RealWorldData dataset. While the paper implies the reason is the GP-surrogate-based test functions, it is unclear why OptFormer performs much better than GP-UCB. Also, the comparison results between RealWorldData (*mixed algorithms*) and HPO-B (*controlled algorithms*) cast a shadow on the bias issue of the proposed OptFormer. \n\n**Post after rebuttal**\nThanks for providing a detailed response to the questions. Due to my travel schedule, I haven't had a chance to discuss further with the authors. Yet, most of my previous concerns were well addressed. Particularly, it would be interesting to add an LSTM baseline in future work and to explore further the calibration of HPO from a pre-training perspective. I would like to champion this work by upgrading my score. N/A", " The authors train a single Transformer language model (LM) on a variety of tokenized HPO trajectories from a variety of tasks and HPO algorithms.
For each HPO trajectory, the LM is primed on the task/algorithm (name, search space, metric, algorithm) and is then trained to predict hyper-parameters as well as the optimized metric step by step for each trial in order. Strengths\n- The results are strong: reproducing the performance of most algorithms up to 100 trials as well as improving upon them. This could be the path forward for the HPO community.\n- The authors provide insightful ablations.\n- The method is clearly described and simple.\n\nWeaknesses\n- This paper neither open-sources its codebase (built upon an open-source codebase), nor the trained model, nor even the training data (built upon open datasets). Actually, even a performance number for optimization for follow-up work to compare with is missing, as scores are calculated using an undisclosed metric and the main results are only reported in the form of plots and not tables. \n- For RealWorldData (and HPO-B actually, even though less interesting there), a SOTA BO method would be an interesting baseline to add, like HEBO.\n- Is the conclusion of the meta-data ablation (line 287) based on a model trained with meta-data? In that case, I would guess the worse performance stems from a train/test distribution shift, rather than from missing metadata as such.\n\n\nSummary:\nThe results are strong, with some evaluation problems. It is not reproducible, though, which in this case is a particularly big problem, as this paper proposes a very new direction for a field of research in which both the expertise to reproduce the results based only on descriptions (previous methods require a very different background) and the resources to reproduce the results without data (running HPO with different optimizers on millions of problems) are missing. - Is the discretization of scalars for the input important? How would a model perform where these numbers are normalized in some way and fed to the network directly?\n- Do you have an explanation for the strong performance of using RealWorldData for training (Fig. 5a) compared to the larger HPO-B dataset, when comparing on HPO-B? (Transfer from a smaller dataset is better than in-domain training. This is unusual.)\n- What do you mean by \"temporal train/test splits\" in line 269?\n- How do you calculate the Thompson Sampling utility function? The listed limitations are fair, even though one limitation I would expect to be there is missing: handling much longer sequences, as the Transformer is trained with a maximum sequence length." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "1L3PCyqqjv9", "vuj4zRulnLkz", "mllX7rCmqTT", "7NXnu3TTJI1", "sB1LLeGtRb", "YvpCk0M4lId", "wQVTpqnC1Rr", "AqzpBG-8MR", "nips_2022_r-6Z1SJbCpv", "nips_2022_r-6Z1SJbCpv", "nips_2022_r-6Z1SJbCpv", "nips_2022_r-6Z1SJbCpv", "nips_2022_r-6Z1SJbCpv" ]
nips_2022_H3o9a6l0wz
Optimal Transport-based Identity Matching for Identity-invariant Facial Expression Recognition
Identity-invariant facial expression recognition (FER) has long been a challenging computer vision task. Since conventional FER schemes do not explicitly address the inter-identity variation of facial expressions, their neural network models still operate depending on facial identity. This paper proposes to quantify the inter-identity variation by utilizing pairs of similar expressions explored through a specific matching process. We formulate the identity matching process as an Optimal Transport (OT) problem. Specifically, to find pairs of similar expressions from different identities, we define the inter-feature similarity as a transportation cost. Then, optimal identity matching to find the optimal flow with minimum transportation cost is performed by Sinkhorn-Knopp iteration. The proposed matching method is not only easy to plug into other models, but also requires only acceptable computational overhead. Extensive simulations prove that the proposed FER method improves the PCC/CCC performance by up to 10% or more compared to the runner-up on wild datasets. The source code and software demo are available at https://github.com/kdhht2334/ELIM_FER.
Accept
Authors propose a new strategy for a hard problem that reviewers found compelling and novel. The experimental details are complex and we encourage the authors to address the many issues the reviewers raise.
train
[ "rwZNEzH8b-L", "Rd25pMj1E3e", "lqE7BRkdnSC", "dbYGO683V9g", "7ArDRopBFUE", "lHckhRYyew", "9UfQWDxQ_v", "MbAXqAfQpnS", "A2itZp8qOaM", "6XSC0PiI8Jv", "8Cb8xqZjxXG", "jt1H1KhpCkl" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Many thanks for the good comments from the reviewer. Although feature normalization is not applied in the inference phase, we can expect positive performance. The reason is as follows:\n- The mechanism of allowing the model to learn domain-invariant features through training domains (i.e., training IDs in our case) and then performing naïve prediction with **the model trained** by the inference phase is the general setting of domain generalization studies such as [1]. The reason why this setting is valid is based on the assumption that a model that has completed training for sufficient domains will be robust even to unseen domains.\n- Also, since the domains in the inference phase have not been experienced during training phase, we can expect them to be already “in the center” without loss of generality.\n\nIf this paper is accepted, this discussion will be added to the camera-ready version.\n\nReference\n\n[1] K. Zhou et al., Domain Generalization with MixStyle, In ICLR 2021.", " Thank you for the reviewer's positive response. In the future, we will be able to design a robust parametric model for inter-ID matching derived from attention method(s), and this direction must be a valuable research topic. Also, thanks for the nice question. For example, $L_{ccc}$ has been often used in FER studies (e.g., workshop challenge) aimed at improving performance [1]. In other words, $L_{ccc}$ (or $L_{pcc}$) already plays an important role as a regularization loss function for performance improvement in the FER field. If this paper is accepted, a software link including additional experiments and demos will be added to the camera-ready version.\n\nReference\n\n[1] V. Karas et al., Time-Continuous Audiovisual Fusion with Recurrence vs Attention for In-The-Wild Affect Recognition, CVPRW 2022.", " I have carefully read the comments to my review and concerns, and I am overall happy with the authors' response. I acknowledge and appreciate the effort towards carrying over the mentioned experiments which as the results indicate show some extra insight into the proposed method.\n\nI can observe that attention methods would be a valid alternative method to that proposed despite the increased cost, and as such this option opens an interesting research topic.\n\nThe dependency of the proposed method with the pcc loss is quite impressive as the results seem to be crucially affected by removing it. Does this also apply to the competing methods beyond that of [7]?\n\nProvided the response I still believe the paper is solid enough to be considered for publication and hence I see no reason to change my score.", " In training phase, $f_{(\\varphi)}$ and $f_{(\\theta)}$ are expected to better learn the identity and expression joint features, and the identity-invariant expression features from the normalized features, respectively. As far as I'm concerned, it is impossible that the model has the ability to directly obtain the identity-invariant expression features from the identify and expression joint features during inference.", " Thanks for the reviewer's positive response.\n\nIf this paper is accepted, the software link including this experiment and demo will be added to the camera-ready version.", " Updated experiments for age look good. This is an interesting paper, that I think can be accepted.", " 1. Many thanks for the helpful comments from the reviewer. 
A comparative analysis between Sinkhorn, which requires no trainable parameters, and cross-attention, which uses parameters, may highlight the value of the proposed method. Thus, we implemented a cross-attention network that learns the similarity between ID samples using part of the transformer encoder [1]. In detail, different ID feature sets with sizes of $d\\times n$ and $d\\times m$, respectively, are each passed through a multi-head attention (MHA) layer (self-attention). Then, the outputs are used together as inputs of another MHA layer (cross-attention). Here, the notation is the same as that of Line 141 of the main body. Next, the feature set of size $d\\times m$ obtained through cross-attention is passed through an FC layer with a one-dimensional output to generate an attention weight of size $m$. As a result, the ID shift in Eq. 3 is replaced by this weight (a minimal code sketch of this module is given after point 5 below). The experimental result for AFEW-VA is shown in the table below.\n\n| Methods | RMSE-V | CCC-V | RMSE-A | CCC-A |\n| :--- | :---: | :---: | :---: | :---: |\n| ELIM (AL) | 0.186 | 0.602 | 0.198 | 0.581 |\n| ELIM (AL)-Att. | 0.182 | 0.656 | 0.217 | 0.552 |\n\n- This transformer encoder showed meaningful improvement in terms of RMSE-V and CCC-V, despite having fewer than 100K parameters. This supports the fact that the weights generated by the cross-attention mechanism are compatible with the inter-ID matching process. Even though somewhat weak performance was observed on the arousal axis, which reflects the degree of emotional activation, this attention method will be very useful for matching samples showing different emotional expressions between IDs. This experimental result has been included in the revised Appendix.\n\n2. In the last line of the caption of Figure 2, the contents of ELIM inference are already mentioned. For easier understanding, we added further details of inference on Line 216 of the revised manuscript.\n\n3. Unlike AffWild(/2), which consists of several samples per ID, all facial images of AffectNet have different IDs. So, we newly designed "ELIM-Age", which groups samples by age group as the domain instead of ID. Except for this preprocessing, ELIM-Age and ELIM are equivalent. The annotation and details of the age labels were used as in [2].\n\n- The experimental result below implies the following two facts: 1) The learning mechanism of ELIM is valid even on AffectNet. 2) Dealing with shifts due to demographics such as age and ID should also be considered important in the FER field.\n\n| Methods | RMSE-V | CCC-V | RMSE-A | CCC-A |\n| :--- | :---: | :---: | :---: | :---: |\n| Baseline [3] | 0.37 | 0.60 | 0.41 | 0.34 |\n| Kossaifi et al. [4] | 0.35 | 0.71 | 0.32 | 0.63 |\n| Hasani et al. [5] | 0.267 | 0.74 | 0.248 | 0.85 |\n| CAF (AL) [6] | 0.222 | 0.80 | 0.192 | 0.85 |\n| CAF (R18) [6] | 0.219 | 0.83 | 0.187 | 0.84 |\n| ELIM-Age (AL) | 0.221 | 0.821 | 0.187 | 0.856 |\n| ELIM-Age (R18) | 0.209 | 0.835 | 0.174 | 0.866 |\n\n- As for SEWA, we will test it as soon as it is approved for use and report the final result.\n\n4. The number of backbone parameters is the same in both CAF and ELIM. However, CAF utilizes a discriminator network, whereas ELIM utilizes a projector ($f_\\theta$ in Fig. 2), which is slightly larger than that of CAF. This may cause a difference in the total number of parameters of ELIM and CAF.\n\n5. The cause of the performance variation is randomness due to the statistically selected mini-batch samples and reference IDs. w/o $L_{va}$ means that $L_{va}$ of Line 169 is excluded from model training. Both experiments in Fig. 5 include $L_{pcc}$ and $L_{ccc}$.
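As referenced in point 1, here is a minimal sketch of the described cross-attention matching module. This is only an illustration of the architecture from the response: the class name, hidden width, and head count are assumptions for the sketch, not the exact code behind the table above.

```python
import torch
import torch.nn as nn

class CrossAttentionMatcher(nn.Module):
    """Self-attention on each ID's feature set, cross-attention between them,
    then an FC layer producing one attention weight per target sample."""

    def __init__(self, d: int, n_heads: int = 4):
        super().__init__()
        self.self_attn_ref = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.self_attn_tgt = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d, n_heads, batch_first=True)
        self.fc = nn.Linear(d, 1)  # one-dimensional output per target feature

    def forward(self, ref: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
        # ref: (1, n, d) reference-ID features; tgt: (1, m, d) target-ID features
        ref, _ = self.self_attn_ref(ref, ref, ref)
        tgt, _ = self.self_attn_tgt(tgt, tgt, tgt)
        out, _ = self.cross_attn(tgt, ref, ref)  # targets attend to references
        return self.fc(out).squeeze(-1)          # (1, m) weights replacing the ID shift

matcher = CrossAttentionMatcher(d=64)
weights = matcher(torch.randn(1, 5, 64), torch.randn(1, 7, 64))
print(weights.shape)  # torch.Size([1, 7])
```

In this sketch, the per-target weights play the role of the ID shift in Eq. 3.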
The role of $L_{va}$ and this loss configuration are partly described in the caption of Figure 5 of the revised manuscript.\n\n6. Two previous works, i.e., HO-Conv and CAF, which compete with the proposed method for SOTA performance, use $L_{pcc/ccc}$. However, other techniques such as [7] do not employ $L_{pcc/ccc}$. So, for a fair comparison, we performed an ablation study on $L_{pcc/ccc}$. The experimental result for AFEW-VA is as follows. In the absence of $L_{pcc/ccc}$, the CCC performance decreased by about 0.15 on both the valence and arousal axes. Note that despite this performance degradation, the proposed method still outperforms [7] in all indicators.\n\n| Methods | PCC-V | PCC-A | CCC-V | CCC-A |\n| :--- |:---: |:---: |:---: |:---: |\n| Mitenkova et al. [7] | 0.33 | 0.42 | 0.33 | 0.40 |\n| ELIM w/ $\\mathcal{L}_{pcc/ccc}$ | 0.680 | 0.645 | 0.602 | 0.581 |\n| ELIM w/o $\\mathcal{L}_{pcc/ccc}$ | 0.620 | 0.532 | 0.453 | 0.429 |\n\n7. According to the reviewer's comment, we modified the caption of Figure 4 of the revised manuscript and some parts of Lines 236-238.\n\nReferences\n\n[1] A. Vaswani et al., NeurIPS 2017.\n\n[2] R. Poyiadzi et al., ArXiv 2021.\n\n[3] A. Mollahosseini et al., TAC 2017.\n\n[4] J. Kossaifi et al., CVPR 2020.\n\n[5] B. Hasani et al., TAC 2020.\n\n[6] D. Kim and BC Song, AAAI 2021.\n\n[7] A. Mitenkova et al., FG 2019.", " The proposed method assumes that the main cause of facial expression change is ID shift. On the other hand, depending on changes in age, gender, race, etc., the expression tendency may also shift (cf. Lines 299-307). In order to analyze the effect of shifts caused by demographics on FER performance, we newly designed ELIM-Age. ELIM-Age classifies samples by age group as the domain instead of ID. Except for this process, ELIM-Age is the same as ELIM. The annotation and details of the age labels were used as in [1]. The experimental result of ELIM-Age for AffectNet is as follows. Surprisingly, ELIM-Age showed better RMSE and CCC performance than all techniques including CAF. This result suggests the following two facts: 1) It is meaningful to deal with the shift in expression tendency due to demographics such as age group as well as ID. 2) The optimal matching of ELIM is also effective in matching between age groups.\n\n| Methods | RMSE-V | CCC-V | RMSE-A | CCC-A |\n| :---------------------: | :---------: | :-------: | :---------: | :------: |\n| Baseline [2] | 0.37 | 0.60 | 0.41 | 0.34 |\n| Kossaifi et al. [3] | 0.35 | 0.71 | 0.32 | 0.63 |\n| Hasani et al. [4] | 0.267 | 0.74 | 0.248 | 0.85 |\n| CAF (AL) [5] | 0.222 | 0.80 | 0.192 | 0.85 |\n| CAF (R18) [5] | 0.219 | 0.83 | 0.187 | 0.84 |\n| ELIM-Age (AL) | 0.221 | 0.821 | 0.187 | 0.856 |\n| ELIM-Age (R18) | 0.209 | 0.835 | 0.174 | 0.866 |\n\nThanks to the reviewer's good comments, we were able to design experiments that could further enhance the value of this paper. If this paper is accepted, the source code as well as demos and experimental results will be uploaded immediately.\n\nReferences\n\n[1] R. Poyiadzi et al., Domain generalization for apparent emotional facial expression recognition across age-groups, ArXiv 2021.\n\n[2] A. Mollahosseini et al., Affectnet: A database for facial expression, valence, and arousal computing in the wild, TAC 2017.\n\n[3] J. Kossaifi et al., Factorized higher-order cnns with an application to spatio-temporal emotion estimation, CVPR 2020.\n\n[4] B. Hasani et al., Breg-next: Facial affect computing using adaptive residual networks with bounded gradient. TAC 2020.\n\n[5] D.
Kim and BC Song, Contrastive adversarial learning for person independent facial emotion recognition. AAAI 2021.\n", " 1. Many thanks for the good comment from the reviewer. Considering such a situation, we adopted a solution in terms of configuration. That is, when training our model, we allocated a sufficiently large mini-batch size (cf. Lines 207-208). As a result, we did not find an extreme case where the facial expressions of all samples between IDs are totally different from each other. If the valence signs of facial expressions between ID groups are all different, ID-dependent features can be obtained by Eq. 4 due to the matching between samples of different facial expressions. However, as mentioned above, this phenomenon will happen extremely rarely. In addition, since ELIM finds the model solution through iterative training, such a case hardly affects the overall performance. When training our model, we examined the validity of the matched samples between IDs in detail. The additional analysis and actual matching results have been added to the revised Appendix.\n\n2. In the limitations section (Section 6), we mentioned that the model training may be partially dependent on the selection of reference IDs (cf. Lines 315-318). In order to explicitly analyze the effect of reference-target pair variation on overall performance, we changed the existing configuration of one-to-$N$ (ON) pairs to one-to-one (OO) pairs. The test results for the AFEW-VA dataset are as follows. We observed an overall improvement in RMSE and CCC performance. In particular, there was an improvement of about 4% in terms of CCC-A. This proves that OO pairs composed of independent reference-target pairs are more advantageous for learning ID-invariant features than ON pairs.\n\n| Methods | RMSE-V | CCC-V | RMSE-A | CCC-A |\n| :-------------------- | :--------: | :-------: | :--------: | :-------: |\n| ELIM (AL)-ON | 0.186 | 0.602 | 0.198 | 0.581 |\n| ELIM (AL)-OO | 0.190 | 0.610 | 0.192 | 0.622 |\n\n3. In the last line of the caption of Figure 2, the details of inference are already mentioned. In detail, the prediction values are output through the backbone ($f_\\phi$) and the affine projector ($f_\\theta$). Note that at this time, feature normalization (Eq. 4) through the ID shift is not used. For easier understanding, we added inference details on Line 216 of the revised manuscript.\n\n4. The domain gap between VA FER and AU prediction is much larger than that between VA FER and category (emotion) label-based FER. Thus, it is practically impossible to apply the proposed method to AU prediction within this rebuttal. Instead, we designed an additional experiment on AffectNet, in which both category and VA labels are annotated. Unfortunately, however, since the IDs of all facial images of AffectNet are different, AffectNet cannot be directly applied to ELIM, where ID grouping is performed as preprocessing. On the other hand, based on the analysis that the expression tendency can shift according to changes in age, gender, and race (cf. Lines 299-307), we newly implemented the so-called ELIM-Age, which groups samples by age group as the domain. Referring to [1], age-annotated AffectNet was applied to ELIM-Age.\n\n- We compared the proposed method with some existing methods that used both category and VA labels for evaluation. The proposed method showed better performance in all aspects, such as RMSE, CCC, and class accuracy (%), than the other methods.
Although the proposed method showed somewhat lower accuracy than Face2Exp [5], which is the latest method targeting only category labels, this difference will be sufficiently overcome if further tuning for category labels is performed with sufficient time. Also, please note that this paper is an FER study focusing on dimensional models of emotion, i.e., VA space.\n\n| Methods | Acc. | RMSE-V | CCC-V | RMSE-A | CCC-A |\n| :--- | :---: | :---: | :---: | :---: | :---: |\n| Baseline [2] | 0.58 | 0.37 | 0.60 | 0.41 | 0.34 |\n| VGG-Face [3] | 0.60 | 0.37 | 0.62 | 0.39 | 0.54 |\n| HO-Conv [4] | 0.59 | 0.35 | 0.71 | 0.32 | 0.63 |\n| ELIM-Age (R18) | 0.611 | 0.209 | 0.835 | 0.174 | 0.866 |\n| Face2Exp [5] | 0.64 | - | - | - | - |\n\n5. The average of the MSE values between the VA labels and the predictions from the features that have not been normalized in Eq. 4 corresponds to $\\mathcal{L}_{va}$. We described the details in Lines 169-170 of the revised manuscript.\n\nReferences\n\n[1] R. Poyiadzi et al., Domain generalization for apparent emotional facial expression recognition across age-groups, ArXiv 2021.\n\n[2] A. Mollahosseini et al., Affectnet: A database for facial expression, valence, and arousal computing in the wild, TAC 2017.\n\n[3] D. Kollias et al., Generating faces for affect analysis, ArXiv 2018.\n\n[4] J. Kossaifi et al., Factorized higher-order cnns with an application to spatio-temporal emotion estimation, CVPR 2020.\n\n[5] D. Zeng et al., Face2Exp: combating data biases for facial expression recognition, CVPR 2022.\n\n", " The paper proposes a method to quantify the inter-identity variation by utilizing pairs of similar expressions for identity-invariant facial expression recognition. To find pairs of similar expressions from different identities, the authors define the inter-feature similarity as a transportation cost. Then, they perform optimal identity matching to find the optimal flow with minimum transportation cost by Sinkhorn-Knopp iteration. Strengths:\n1. The authors propose a method to quantify the inter-identity variation, which differs from previous works that address it in a non-quantitative way.\n2. The experimental results exceed most existing works.\nWeaknesses:\n1. As mentioned in the paper, inter-identity variation is quantified by pairs of similar expressions. However, when the expressions of individuals with different IDs differ greatly in a batch, that is, when this precondition fails, the method may not work.\n2. The variations between references and targets are calculated within a batch, but different references will be chosen in different batches. The variations among references are not considered.\n3. Although the authors describe the training process in detail, they do not mention the testing process, which I believe is a non-trivial process.\n4. The authors only verify that the proposed method performs better in VA prediction, but it lacks experiments on expression and AU prediction. 1. Please give reasonable explanations for the above weaknesses.\n2. In line 169, I do not find the definition of $L_{va}$ in the cited paper [50]. Maybe the authors made a mistake. Please see the Questions.", " The paper proposes a method for facial expression recognition (by means of valence and arousal) which aims to remove the factors that relate to identity from the backbone features, so as to make the classifier robust against identity-specific features.
In particular, a method is proposed to remove the identity shift from the feature representation, by computing the optimal transport between the batch-specific features in a way that represents the image-specific domain shift from the other images in the batch. The features are then normalized according to the computed shifting mean and scale and passed through the final classifier, which regresses the values of valence and arousal. The use of optimal transport to assign the image-specific mean vector is grounded on the idea that the batch-specific features can be rearranged, while preserving the total mass distribution, so as to remove the identity-based similarities encountered in the batch. The method is validated on a set of challenging datasets, achieving competitive results. The paper is technically novel and achieves competitive results and hence has the merit to be accepted at NeurIPS. The use of OT to disentangle the factors of identity is novel, the motivation is clear and the experiments are compelling. The paper is well accompanied by algorithms and descriptions that help in understanding the derivation, with further pseudo-code for the crucial parts in the Supplementary Material. The paper could benefit from better clarity of presentation and elaboration on the significance and meaning of the learned shift, and thus I would like to particularly ask the authors to improve the manuscript's presentation and writing substantially for better clarity.\n\nMy main comments and concerns are listed below:\n\n- Digging under the hood of Equations 3 and 4, as well as how the cost matrix is computed based on similarity, I can spot that there are some substantial similarities and differences between using the OT assignment and using cross-attention to attenuate the influence of the global features. Plugging the cost matrix into Eqn. 3 and observing the "residual" behaviour of Eqn. 4 leads to expressing the new features as a linear combination of the other images' features, given by a weighted similarity between them. This resembles to me the use of cross-attention, with X being the attention weights. Obviously, the authors propose a completely different way to assign these attention weights, but after reading the full derivation I would like to ask the authors whether a small transformer encoder used instead to compute the mean features in Equation (3) would also enforce the learned features to be identity-invariant. I think that such exploration is important and the paper would gain significant strength if such a comparison were given. \n\n- It is not mentioned in the paper (or I missed it), but there is no reference to the process of inference. My understanding is that at test time no shift is obtained because the features are already expected to be "centered", but some clarification in this regard would be appreciated.\n\n- I think the paper would gain strength by including the performance of the proposed approach on AffectNet and SEWA; while AffWild2 and AffWild are challenging, they are a subset of each other, and AFEW-VA has recently given way to the aforementioned datasets.\n\n- What makes the number of parameters in Table 1 differ between the proposed method and CAF for the AL-tuned and ResNet-18 backbones? \n\n- I am a bit confused about the results shown in Figure 5. First, the behaviour of the curves is a bit counterintuitive, as e.g. there is a drop of two points of CCC between using 18 and 19 subjects, which is recovered after using 20 subjects. What is this behaviour due to?
In addition, it is not clear to me what w/o L_va means, considering that this is the main training loss. The blue curve, does it correspond to training using the PCC + CCC losses? Please do clarify. \n\n- It is also important to understand the contribution of the training losses in the method. How does this compare against competing methods? If they had been trained without L_ccc or L_pcc, then I believe the authors should also consider a fair evaluation using their method trained with the same losses. This will indicate the contribution of the losses to the performance gain.\n\n- I didn't quite get what Figure 4 attempts to represent, so I would like to request the authors to elaborate on it.\n\n- Finally, as mentioned before, I believe the paper would greatly benefit from better writing and presentation. The readability of the paper is somewhat poor, affecting its reach. \n\n As stated above, my main concerns or questions can be summarized as:\n1) Exploring or comparing against attention-based methods for residual learning. \n\n2) Inference\n\n3) Use of SEWA or AffectNet to compare against CAF\n\n4) Different number of parameters for the same networks in Table 1.\n\n5) Different losses and comparison against existing methods.\n\n6) Explanation of Figure 4\n The limitations are properly addressed in the paper and hence I have no further questions in this regard.", " The paper proposes an approach to identity-invariant facial expression recognition. Identity matching is formulated as an optimal transport problem. The optimal flow for this problem is found using Sinkhorn-Knopp. Evaluations are conducted on three public FER datasets. Strengths\n\nThe paper proposes a solution to a challenging problem in FER. The problem is well formulated in the paper, including the background information needed to set up the proposed solution. State-of-the-art results are detailed on multiple public datasets. Results show the proposed approach can easily be integrated into different models. The ablation study on the relevance weights and the impact of the number of IDs is well conducted. \n\nWeaknesses\n\nAs the paper focuses on identity-invariant FER, other factors such as age, gender, and race could have a large impact on the ID matching and FER learning. While this is briefly discussed in Section 5.4 (A5), the discussion is largely superficial, with a tech report being cited that talks about differences in facial expression across age. It is not clear how the proposed approach can handle these changes across different demographics. Handling this type of shift is important for generalized FER.\n\n How can the proposed approach handle shifts due to demographics such as age, gender, or race? Negative impact has been addressed. The limitation regarding the reference ID is detailed; however, limitations on the scope of demographics and the proposed approach are not detailed. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4 ]
[ "dbYGO683V9g", "lqE7BRkdnSC", "9UfQWDxQ_v", "A2itZp8qOaM", "lHckhRYyew", "MbAXqAfQpnS", "8Cb8xqZjxXG", "jt1H1KhpCkl", "6XSC0PiI8Jv", "nips_2022_H3o9a6l0wz", "nips_2022_H3o9a6l0wz", "nips_2022_H3o9a6l0wz" ]
nips_2022_xdZs1kf-va
I2Q: A Fully Decentralized Q-Learning Algorithm
Fully decentralized multi-agent reinforcement learning has shown great potential for many real-world cooperative tasks, where the global information, \textit{e.g.}, the actions of other agents, is not accessible. Although independent Q-learning is widely used for decentralized training, the transition probabilities are non-stationary since other agents are updating policies simultaneously, which leads to non-guaranteed convergence of independent Q-learning. To deal with non-stationarity, we first introduce stationary ideal transition probabilities, on which independent Q-learning could converge to the global optimum. Further, we propose a fully decentralized method, I2Q, which performs independent Q-learning on the modeled ideal transition function to reach the global optimum. The modeling of the ideal transition function in I2Q is fully decentralized and independent from the learned policies of other agents, helping I2Q be free from non-stationarity and learn the optimal policy. Empirically, we show that I2Q can achieve remarkable improvement in a variety of cooperative multi-agent tasks.
Accept
The paper presents a novel method for dealing with nonstationarity in decentralized multi-agent reinforcement learning (MARL). While there are some concerns about the level of novelty, the approach is interesting and presented well. There are also concerns about the discussion and comparison with the state-of-the-art in decentralized MARL methods. We suggest the authors include comparisons to other decentralized MARL methods (such as the ones below) or state why such comparisons are not reasonable. Omidshafiei, Shayegan, et al. "Deep decentralized multi-task multi-agent reinforcement learning under partial observability." International Conference on Machine Learning. PMLR, 2017. Palmer, Gregory, et al. "Lenient Multi-Agent Deep Reinforcement Learning." Proceedings of the International Conference on Autonomous Agents and MultiAgent Systems. 2018. Lyu, Xueguang, and Christopher Amato. "Likelihood Quantile Networks for Coordinating Multi-Agent Reinforcement Learning." Proceedings of the 19th International Conference on Autonomous Agents and MultiAgent Systems. 2020.
train
[ "yIBJZ3pRWer", "aAaGEOVeuLZ", "q6ZLhxQBjH", "k9F9eRE5G2L", "8lb_AwMVDv", "4Y-HNlGbBVq", "1K2t9p-ytC", "Mrh9EHqMDu3", "jmJUJkgbSxX", "TsixHy2Vct9", "zdaFcnUy6Fc" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for their detailed rebuttal. While the authors responded to all my points, they remain points of weakness in the paper. Particularly as learning a stochastic model is hard and it is unclear how learning a stochastic model would perform in practice in this case. Also, I still find the theoretical aspects of this paper weak; in particular many of the results are not very informative and are better off moved to the appendix.\n\nNevertheless, I find this paper to have other strong qualities, which make it a great candidate for Neurips. Therfore, I am in favor of accepting this paper.", " Thank you! I am satisfied with the response provided to my review, and after reading the reviews by other reviewers and their respective responses, I'm increasing my score. This is a solid paper.", " > Stochastic environments \n\nStochastic environment is really an important topic. To respond to the concerns, we will summarize the analysis in the original version, discuss the tighter bound of Theorem 3, provide a new extension of I2Q for stochastic environments, and perform experiments on other stochastic games.\n\nFirst, in the original version, we do many stochastic experiments, including stochastic matrix games in Appendix B.6, noisy differential games in Fig. 8, and SMAC. In stochastic matrix games and noisy differential games, we impose strong stochasticity, and I2Q still outperforms IQL.\n\nSecond, in the original version, we provide **a tighter bound** in Appendix A, and now we have updated the tighter bound to the main pages in the revision. The bound is meaningful for explaining **why I2Q can be successfully applied in stochastic environments** (colored line in Appendix A). As discussed in Appendix A, since we update $f_i$ by Eq.12, the second term of Eq.12 makes the predicted next states $f_i(s, a_i)$ close to the **high-frequency** next states in the replay buffer, which means that the transition probability of $f_i(s, a_i)$ would not be too small. So I2Q value will be close to the true value and the worst cases where $f_i(s, a_i)$ has very small transition probabilities can be avoided.\n\nThird, **to *explicitly* model ideal transition probabilities in stochastic environments**, we propose a new extension of I2Q. **The key idea is to predict the transition probabilities, instead of the next state**. We extend $Q_i^{ss}(s,s')$ to $Q_i^{s\\rho}(s,\\rho(\\cdot|s))$, the value of state and the probabilities of **all next states** given the state. Similar to Eq. 11, the ideal transition probability is\n$$\\rho^*(s,a_i)= \\arg \\max_{\\rho(s,a_i) \\in P(s,a_i)} Q_i^{s\\rho}(s,\\rho(s,a_i))$$\n$P(s,a_i)$ is the set of possible transition probabilities given $s$ and $a_i$. We extend $f_i(s,a_i)$ to predict $\\rho^*(s,a_i)$, which could be a softmax or a Gaussian. Then we do not maintain a whole buffer $D_i$ but a sequence of small buffers $\\{D_i^m\\}$. In the collection of each buffer $D_i^m$, the agents act deterministic policies, and between different buffers, the agents update the policies. So, each $D_i^m$ contains a transition probability $\\rho^m(\\cdot|s)$ under deterministic policies. 
Then, at each update, we\n\n1. randomly select a buffer $D_i^m$;\n\n2. update $Q_i^{s\\rho}$ by minimizing $E_{s, s', r \\sim D^m_i}[(Q_i^{s \\rho}(s, \\rho^m(\\cdot|s))-r-\\gamma \\bar{Q}_i^{s \\rho }(s', f_i(s', a'^*_i)))^2]$;\n\n3. update $f_i$ by maximizing $E_{s, a_i \\sim D_i^m}[\\lambda Q_i^{s \\rho}(s, f_i(s, a_i))-(f_i(s, a_i)-\\rho^m(\\cdot|s))^2]$;\n\n4. update $Q_i$ by minimizing $E_{s, a_i \\sim D_i^m}[(Q_i(s, a_i)-\\bar{Q}_i^{s \\rho}(s, f_i(s, a_i))^2]$.\n\nThe whole training process is similar to deterministic I2Q, just replacing $s'$ with $\\rho$ (a minimal code sketch of this loop is given further below). For the objective of $f_i$, the first term guarantees the optimality and the second term enforces the predicted transition probabilities to be in the set $P(s,a_i)$. Following the proof of deterministic I2Q, we could similarly prove that $f_i$ learns the ideal transition probabilities in stochastic tasks, and according to Theorem 1, the agents converge to the optimum.\n\nHowever, although stochastic I2Q theoretically finds ideal transition probabilities, predicting the transition probabilities in complex environments is much more difficult than predicting the next state. Since we have analyzed in Theorem 3 the reason why deterministic I2Q can be successfully applied in stochastic environments, we still choose the deterministic version in our paper due to its **practicality**.\n\nFinally, we test the stochastic I2Q proposed above in stochastic games with 3 agents, 30 states, and an infinite horizon. The action space of each agent is 5. Each state will transition to any state given a joint action, according to the transition probabilities. The transition probabilities and reward function are randomly generated and fixed. We generate 20 games, and **normalized rewards are shown in Figure 18**. I2Q significantly outperforms IQL and is very close to the optimum, empirically verifying that stochastic I2Q could learn ideal transition probabilities in stochastic tasks.", " > Removing forward model\n\nVery insightful suggestion! First, as you have mentioned, a latent space can help model learning in high-dimensional tasks, and we have stated that in a discrete state space or a large state space, we can map the state space to a **continuous embedding space** (line 170) and apply I2Q on the embedding space; in SMAC, $Q_i^{ss}$ and $f_i$ are built on the **hidden state** of $Q_i$ (the output of the RNN layer) (line 296). We also addressed the question "How does approximation of $f_i$ affect convergence?" in Theorem 4. \n\nSecond, the forward model is also exactly what we want to remove. We **had designed a kind of I2Q implementation without a forward model**, inspired by implicit Q-learning [1], but did not include it in the original version because it is just an approximation. Implicit Q-learning learns the optimal value without predicting next actions; similarly, we can obtain the QSS value without predicting next states. Following Implicit Q-learning, we introduce $V_i(s)$ and utilize expectile regression.
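Before detailing that variant, and as referenced further above, the stochastic I2Q update loop can be made concrete with a minimal sketch. Everything here (network sizes, the one-hot state encoding, the fake buffer data) is an illustrative assumption rather than the paper's implementation; target networks would be synced periodically, which is omitted for brevity.

```python
import random
import torch
import torch.nn.functional as F

N_S, N_A, H, GAMMA, LAM = 8, 5, 64, 0.99, 0.1   # small discrete state space, states one-hot

def mlp(n_in, n_out):
    return torch.nn.Sequential(torch.nn.Linear(n_in, H), torch.nn.ReLU(), torch.nn.Linear(H, n_out))

q_srho = mlp(2 * N_S, 1)                  # Q_i^{s rho}(s, rho(.|s)); rho is a length-N_S vector
q_srho_tgt = mlp(2 * N_S, 1); q_srho_tgt.load_state_dict(q_srho.state_dict())
f_net = mlp(N_S + N_A, N_S)               # f_i(s, a_i): logits of the predicted rho*(.|s, a_i)
q_net = mlp(N_S + N_A, 1)                 # ordinary Q_i(s, a_i)
opt = torch.optim.Adam([*q_srho.parameters(), *f_net.parameters(), *q_net.parameters()], lr=3e-4)

def update(buffers):
    s, a, r, s2, rho_m = random.choice(buffers)        # step 1: sample one small buffer D_i^m
    batch = s.shape[0]
    with torch.no_grad():                              # step 2: r + gamma * Qbar(s', f(s', a'*))
        q_all = torch.cat([q_net(torch.cat([s2, F.one_hot(torch.full((batch,), k), N_A).float()], -1))
                           for k in range(N_A)], -1)   # (batch, N_A) values, one column per action
        a2_star = F.one_hot(q_all.argmax(-1), N_A).float()
        rho2 = F.softmax(f_net(torch.cat([s2, a2_star], -1)), -1)
        target = r + GAMMA * q_srho_tgt(torch.cat([s2, rho2], -1)).squeeze(-1)
    loss2 = F.mse_loss(q_srho(torch.cat([s, rho_m], -1)).squeeze(-1), target)
    # step 3: push f toward high-value transitions while staying close to rho^m
    # (the value term uses the target net so only f_net receives applied gradients)
    rho_pred = F.softmax(f_net(torch.cat([s, a], -1)), -1)
    loss3 = -LAM * q_srho_tgt(torch.cat([s, rho_pred], -1)).mean() + F.mse_loss(rho_pred, rho_m)
    with torch.no_grad():                              # step 4: distill Qbar(s, f(s, a)) into Q_i
        distill = q_srho_tgt(torch.cat([s, rho_pred], -1)).squeeze(-1)
    loss4 = F.mse_loss(q_net(torch.cat([s, a], -1)).squeeze(-1), distill)
    opt.zero_grad(); (loss2 + loss3 + loss4).backward(); opt.step()

batch = 32                                             # fake data just to exercise the loop
buffers = [(F.one_hot(torch.randint(0, N_S, (batch,)), N_S).float(),
            F.one_hot(torch.randint(0, N_A, (batch,)), N_A).float(),
            torch.randn(batch),
            F.one_hot(torch.randint(0, N_S, (batch,)), N_S).float(),
            F.softmax(torch.randn(batch, N_S), -1)) for _ in range(4)]
update(buffers)
```

Returning to the forward-model-free variant: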
Specifically, we \n\nupdate $Q_i^{ss}$ by minimizing $E_{s, s', r \\sim D_i}[(Q_i^{ss}(s, s')-r-\\gamma V_i(s'))^2]$,\n\nupdate $V_i(s)$ by minimizing the expectile loss $E_{s, s' \\sim D_i}[L_2^{\\tau}(Q_i^{ss}(s, s')-V_i(s))]$, where $L_2^{\\tau}(u)=|\\tau-1(u<0)| u^2$.\n\nUsing the approximate QSS value, we have\n\n$$Q_i^{ss}(s, s'^*) = \\max_{s' \\in N(s,a_i)} Q_i^{ss}(s, s')$$\n\nAccording to Eq. 15 and the mathematical derivation between line 161 and line 162, we update $Q_i(s, a_i)$ to be $Q_i^{ss}(s, s'^*)$ by minimizing $$E_{s, a_i,s' \\sim D_i}[(Q_i(s, a_i)-\\max(\\bar{Q}_i(s, a_i), \\bar{Q}_i^{ss}(s, s')))^2].$$\n\nAlthough the QSS value is a biased estimate in this implementation, the implementation without a forward model is practical (a minimal code sketch of these updates is given further below). We test I2Q w/o forward model in SMAC. **The results are shown in Figure 19**. I2Q w/o f shows similar performance to I2Q and does not need to learn the forward model; thus it would be more practical in complex environments.\n\n[1] Kostrikov, Ilya, Ashvin Nair, and Sergey Levine. \"Offline Reinforcement Learning with Implicit Q-Learning.\" *International Conference on Learning Representations*. 2021.", " > \"Informal\" proof\n\nThe reason why the proofs seem informal is that they are not too difficult and do not involve much complex mathematical derivation. In Theorem 1, we prove that by learning on the ideal transition probabilities the agents could converge to the optimum, and in Theorem 2, we prove that Eq. 11 is an ideal transition function. The overall logic is clear and rigorous.\n\n> Comparison to other baselines\n\nThe existing studies of decentralized MARL are limited, and we do not find other recent papers on fully decentralized MARL *without* communication, which we think should be an advantage rather than a weakness of I2Q. Following the comments of Reviewer 8HEx, we also compare I2Q with independent SAC and TD3 in Appendix B10. It is **unfair** to compare I2Q with CTDE methods, which use the information of other agents. Although we do not run CTDE methods, since we use standard benchmarks, the results can be compared with those in published papers. In SMAC, to be honest, I2Q cannot outperform CTDE methods, e.g., VDN, QMIX, and QPLEX, where the winning rate can reach 100% in many tasks [1]. However, this does not weaken the contribution of I2Q, because I2Q only uses local information. Due to the time limit, we will run the CTDE methods on SMAC and report the results in the final version. \n\nOn the other hand, in matrix games and differential games (N=2,3), I2Q converges to the global optimum and thus will not be inferior to any other baselines, including CTDE methods. In fact, **many CTDE methods cannot converge to the optimum on the two matrix games in Figure 4**, as shown in [1] (Figure 2 and Figure 6 of [1]), but I2Q can converge to the optimum easily. We will also run CTDE methods on matrix games in the final version. \n\n[1] Wang, Jianhao, et al. \"QPLEX: Duplex Dueling Multi-Agent Q-Learning.\" *International Conference on Learning Representations*. 2021.", " > Novelty and difference between I2Q and D3G (QSS)\n\nWe think your main concerns are the novelty and the difference between I2Q and D3G (QSS). The main novelty and contribution are not just introducing QSS-learning into MARL but proposing **a new paradigm** where agents independently learn on ideal transition probabilities. QSS is merely a technique for building the ideal transition probabilities, which can also be learned by other techniques.
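(A brief aside: the forward-model-free updates referenced above can likewise be sketched minimally. Network sizes, names, and the fake one-hot data below are assumptions for illustration following the IQL-style recipe, not the official implementation.)

```python
import torch
import torch.nn.functional as F

N_S, N_A, H, TAU, GAMMA = 8, 5, 64, 0.9, 0.99    # TAU is the expectile; all values illustrative

def mlp(n_in, n_out):
    return torch.nn.Sequential(torch.nn.Linear(n_in, H), torch.nn.ReLU(), torch.nn.Linear(H, n_out))

q_ss, v_net, q_net = mlp(2 * N_S, 1), mlp(N_S, 1), mlp(N_S + N_A, 1)
q_ss_tgt = mlp(2 * N_S, 1); q_ss_tgt.load_state_dict(q_ss.state_dict())
q_net_tgt = mlp(N_S + N_A, 1); q_net_tgt.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam([*q_ss.parameters(), *v_net.parameters(), *q_net.parameters()], lr=3e-4)

def expectile_loss(u, tau):
    return (torch.abs(tau - (u < 0).float()) * u ** 2).mean()   # L_2^tau(u) = |tau - 1(u<0)| u^2

def update(s, a, r, s2):
    with torch.no_grad():                                       # Q^{ss}(s, s') <- r + gamma V(s')
        bellman = r + GAMMA * v_net(s2).squeeze(-1)
    loss_qss = F.mse_loss(q_ss(torch.cat([s, s2], -1)).squeeze(-1), bellman)
    u = q_ss(torch.cat([s, s2], -1)).detach().squeeze(-1) - v_net(s).squeeze(-1)
    loss_v = expectile_loss(u, TAU)                             # pushes V(s) toward max_{s'} Q^{ss}(s, s')
    with torch.no_grad():                                       # Q_i(s, a) <- max(Qbar_i, Qbar^{ss})
        tgt = torch.maximum(q_net_tgt(torch.cat([s, a], -1)),
                            q_ss_tgt(torch.cat([s, s2], -1))).squeeze(-1)
    loss_q = F.mse_loss(q_net(torch.cat([s, a], -1)).squeeze(-1), tgt)
    opt.zero_grad(); (loss_qss + loss_v + loss_q).backward(); opt.step()

batch = 32                                                      # fake data to exercise the updates
update(F.one_hot(torch.randint(0, N_S, (batch,)), N_S).float(),
       F.one_hot(torch.randint(0, N_A, (batch,)), N_A).float(),
       torch.randn(batch),
       F.one_hot(torch.randint(0, N_S, (batch,)), N_S).float())
```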
For example, in the response to Reviewer zsxJ (or Appendix B8), we extend $Q_i^{ss}(s,s')$ to $Q_i^{s\\rho}(s,\\rho(\\cdot |s))$ (i.e., the value of a state and the probabilities of all next states given that state) to build the ideal transition probabilities in stochastic environments. Researchers could follow this paradigm to design other methods to build ideal transition probabilities. Thus, we believe our work is novel. \n\nThe difference between I2Q and D3G (QSS) has been discussed in Appendix B.7 of the original version. The motivation and implementation of I2Q are different from those of D3G, and **the main difference, which makes D3G not suitable for decentralized MARL**, is that the Cycle loss in D3G requires the transition probabilities in the replay buffer $D_i$ to be deterministic. However, this is impossible even if the environment is deterministic, because other agents are also updating policies. Therefore, D3G still suffers from non-stationarity and cannot achieve competitive performance in the experiments.\n\nBesides adopting QSS-learning, we also analyze the reason why I2Q can be successfully applied in **stochastic environments**, and we extend I2Q to a version **without a forward model** in the response to Reviewer zsxJ (or Appendix B9), so we believe I2Q is a novel work instead of just a simple application of QSS-learning. \n\n> If we add a fixed randomly initialized reward function and the performance tolerance to the other algorithms, is there a performance improvement?\n\nThe randomly initialized reward function is introduced only to remedy the assumption of only one optimal policy for our theoretical results. Empirically, this is not required. **In all experiments, we do not use a randomly initialized reward function for I2Q or the other baselines, thus the comparison is fair.**\n\n\n> More base RL algorithms\n\nThanks for your advice on generality! Since I2Q is a variant of Q-learning, it could be instantiated on Q-learning methods, e.g., DDPG, SAC, and TD3. We test I2Q on independent SAC and TD3, and **the results are shown in Figure 20**. I2Q also obtains performance gains on the two base algorithms. PPO is not a Q-learning method, so it cannot be the base algorithm of I2Q.", " > About the two references\n\nThanks for pointing out the missing references, but the two studies do not follow our fully decentralized setting in Section 3.1, where communication is fully disabled and agents cannot share any information. Both [1] and [2] **allow communication** with neighboring agents according to a time-varying communication network (Definition 2.1 in [1] and Problem Formulation in [2]). In [1], neural network parameters are shared between neighboring agents, and in [2], actions produced by communicative policies are shared between neighboring agents. So we think the two methods should be classified as *decentralized learning with communication*. We have summarized the difference in the Related Work of the revision.\n\n> SOTA fully decentralized MARL work\n\nFully decentralized MARL is a new field, especially when combined with neural networks, and we do not find other recent papers on fully decentralized MARL without communication. So we believe our I2Q is a novel work in this field. In fact, IQL is a naive but widely adopted decentralized method, and Hysteretic IQL is a classic decentralized baseline. We compare the two methods and also IPPO in the Appendix. Moreover, following the comments of Reviewer 8HEx, we also compare I2Q with independent SAC and TD3 in Appendix B10.\n\n[1] Zhang, Kaiqing, et al.
\"Fully decentralized multi-agent reinforcement learning with networked agents.\" International Conference on Machine Learning. PMLR, 2018.\n\n[2] Konan, Sachin G., Esmaeil Seraj, and Matthew Gombolay. \"Iterated Reasoning with Mutual Information in Cooperative and Byzantine Decentralized Teaming.\" International Conference on Learning Representations. 2021.", " We thank all the reviewers for the efforts on reviewing our paper and the insightful suggestions! Following the comments, we extend I2Q, including I2Q for stochastic environments, I2Q w/o forward model, and instantiations on more base algorithms. We include these extensions and experimental results in Appendix B8, B9, and B10 in the revision (with colored section title), and answer all questions in detail. We hope our feedback could address the concerns and look forward to further discussions.", " This paper presents an important and interesting approach on fully decentralized MARL. Fully decentralized Q-learning is highly applicable to realistic and real-world applications. The method is evaluated extensively, showing a great potential. ## Strengths\n- Theoretical analysis\n- Extensive evaluations\n- Interesting perspective to the MARL problem with potential real-world applicability\n\n## Weaknesses\n- missing out on some related work on fully decentralized MARL\n- Lack of SOTA baselines - I like the idea behind this approach and I believe that this is a strong work with potential for real-world applicability. \n\n- Some important related work on fully decentralized MARL are not covered [1-2]. These works and similar prior work must be discussed to cover similarities and differences so that readers can select the proper method based on their applications. \n\n[1] Zhang, Kaiqing, et al. \"Fully decentralized multi-agent reinforcement learning with networked agents.\" International Conference on Machine Learning. PMLR, 2018.\n\n[2] Konan, Sachin G., Esmaeil Seraj, and Matthew Gombolay. \"Iterated Reasoning with Mutual Information in Cooperative and Byzantine Decentralized Teaming.\" International Conference on Learning Representations. 2021.\n\n- The approach potentially needs to be compared against SOTA fully decentralized MARL work to evaluate its utility and feasibility. Yes", " This paper proposes a new MARL algorithm under the DTDE paradigm. Specifically, the proposed algorithm (I2Q) is introduced based on the ideal transition probability (where each agent assumes that the others adopt the optimal actions for each decision) and a previous idea named QSS-Learning. Theoretical guarantee on the convergence of the proposed algorithm is provided under certain conditions. For experimental studies, the significant superiority of I2Q is demonstrated in matrix games, MPE, MA MuJoCo and SMAC. Pros:\n+ This paper is clearly written.\n+ The experimental part is relatively diverse and adequate.\n\n&nbsp;\n\nCons:\n- The novelty of the proposed algorithm is limited. \n\n&nbsp;\n\n\nMinor issues and typos:\n- l.299 \"SAMC\" → \"SMAC\" Questions:\n+ If we add a fixed randomly initialized reward function and the performance tolerance to the other algorithms, is there a performance improvement? \n\n\nI appreciate the authors’ reasonable introduction of QSS for DTDE.\nHowever, my major concern is the lack of novelty in the proposed algorithm. The current algorithm seems to be a simple application of the QSS-Learning in the MARL scenario, the QSS and the prediction of next state in this paper are consistent with the original QSS paper. 
\n\n&nbsp;\n\nSuggestions:\n\n+ I recommend the authors to combine the proposed approach with more base RL algorithms, such as PPO, SAC and TD3 to better evaluate of the generality of the method.\n\n+ The authorss should add a discussion of QSS and the differences between I2Q and QSS in the related work or background.\n The rationality and limitations of the main assumptions adopted in this paper are discussed in Sec.3.4.", " This paper presents I2Q, an algorithmic approach for decentralized MARL. The authors present the non-stationarity problem in this setting and propose to use \"ideal transition probabilities\" to solve it. Particularly, these are transition probabilities for which all agents are ensured to converge to an optimal solution when trained in a decentralized manner. The authors then propose to use the next state (in deterministic environments) as a representation of an action, and show that it induces an ideal transition probability, which ensures convergence to an optimal solution. They experiment on many baselines in various domains, showing the benefit of their approach. The paper proposes an elegant solution to the non-stationarity problem of decentralized MARL. I'm not able to say if it is the first method to solve this problem, and I hope one of the other reviewers will address this. The paper is clearly written, and the presentation is great. Also, I found everything to be easy to read and follow. Finally, the experiments section seems to have chosen a wide variety of tasks, and I'm glad the authors also chose to show results on the high dimensional problem of SCII.\n\n----------------------------------------------------------------------------------------------------------\n\nThe paper doesn't have strong flaws, but there are some issues that make it a borderline paper for Neurips.\n\nFirst, the theory is not very deep. There are many questions that remain open that the authors don't address theoretically, and I think are important for a better understanding of the problem. One of these, is convergence proof of I2Q, which the authors don't really prove, but only discuss informally.\n\nSecond, I feel that the deterministic assumption in the paper is a strong one, unless carefully addressed. In favor of the authors, they do discuss this in the paper, showing a result of the value gap, and also experiments on a wide variety of tasks. Still, I believe this is not adequately addressed. A stronger result for stochastic environments should be provided. I assume there exist some \"ideal transition probabilities\" for this setting. If it is the case that such are impossible to theoretically find, then this is an important point to address in the paper. Overall, I find Theorem 3 to be a trivial result. I wish to see an approach that tackles stochasticity explicitly, and provides a tighter bound for approximation errors.\n\nThird, the fact that I2Q must learn a forward model is troubling, as model-based methods usually fail againt state of the art model-free methods on high dimensional tasks (unless latent spaces are used, such as in MuZero). The authors don't address the problem of estimating $f$ in their work. Moreover, I feel that this is not addressed fully in the experiments either.\n\nFinally, while the experiments show results on different types of environments, I find that I2Q was not compared against enough baselines. 
There are a lot of new baselines on MARL, and particularly I would expect the authors to compare I2Q to at least three more baselines which are considered SOTA, and not only IQL - even if they are not decentralized.\n\n----------------------------------------------------------------------------\nStrengths:\n1. A new solution for decentralized MARL\n2. Proofs to formal statements seem correct\n3. Paper is clearly written and presentation is great\n4. Experiments show a variety of interesting tasks\n\nWeaknesses:\n1. Theory is weak\n2. Stochastic environments should be addressed\n3. Forward model should be addressed theoretically and in experiments\n4. Experiments are lacking comparison to other algorithms Most of my questions relate to things I already mentioned above.\n1. Can stochasticity be addressed with \"ideal transition probabilities\" more explicitly? How can we learn such probabilities? How would this affect training? If not possible, what are the limitations?\n2. How does approximation of $f$ affect convergence? The authors discuss limitations of their work. Some of these limitations coincide with points I've already raised. As mentioned above, I believe some of these points should be addressed more thoroughly in the paper." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "8lb_AwMVDv", "1K2t9p-ytC", "zdaFcnUy6Fc", "zdaFcnUy6Fc", "zdaFcnUy6Fc", "TsixHy2Vct9", "jmJUJkgbSxX", "nips_2022_xdZs1kf-va", "nips_2022_xdZs1kf-va", "nips_2022_xdZs1kf-va", "nips_2022_xdZs1kf-va" ]
nips_2022_awdyRVnfQKX
HierSpeech: Bridging the Gap between Text and Speech by Hierarchical Variational Inference using Self-supervised Representations for Speech Synthesis
This paper presents HierSpeech, a high-quality end-to-end text-to-speech (TTS) system based on a hierarchical conditional variational autoencoder (VAE) utilizing self-supervised speech representations. Recently, single-stage TTS systems, which directly generate raw speech waveform from text, have been attracting interest thanks to their ability to generate high-quality audio within a fully end-to-end training pipeline. However, there is still room for improvement in conventional TTS systems. Since it is challenging to infer both the linguistic and acoustic attributes from the text directly, missing the details of attributes, specifically linguistic information, is inevitable, which results in mispronunciation and over-smoothing problems in their synthetic speech. To address the aforementioned problem, we leverage self-supervised speech representations as additional linguistic representations to bridge an information gap between text and speech. Then, the hierarchical conditional VAE is adopted to connect these representations and to learn each attribute hierarchically by improving the linguistic capability in latent representations. Compared with the state-of-the-art TTS system, HierSpeech achieves +0.303 comparative mean opinion score, and reduces the phoneme error rate of synthesized speech from 9.16% to 5.78% on the VCTK dataset. Furthermore, we extend our model to HierSpeech-U, an untranscribed text-to-speech system. Specifically, HierSpeech-U can adapt to a novel speaker by utilizing self-supervised speech representations without text transcripts. The experimental results reveal that our method outperforms publicly available TTS models, and show the effectiveness of speaker adaptation with untranscribed speech.
Accept
All reviewers agree that the paper is interesting and novel, that the proposed method has solid experiments and good results, and that the paper is well written. This paper should be accepted to the conference.
train
[ "pJ_VBqH82j", "TTapwyv7Jw8", "Z4mYN8VBs_6", "4kMMs09D2vd", "NeCDaJDx83n", "NIFbXTtAwKz", "qbD_Sqw1y8h" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I have read the authors' responses which addressed my questions. I will keep the original rating.", " We appreciate for your helpful comments and suggestions. We have provided responses to your questions below to address your concerns.\n\n>Q1. Adding PP to the VITS posterior encoder is helpful. The posterior encoder in VITS is analogous to the acoustic encoder in HierSpeech. Have the authors tried adding PP loss to z_a also?\n\nAs you mentioned, adding the phoneme predictor to the acoustic encoder is helpful because it improves the capability of linguistic information in latent representations. Table 7 also shows that VITS trained with phoneme predictor has better performance in phoneme error rate (PER) and word error rate (WER). However, in our experimental settings, we found that adding the phoneme predictor to the acoustic encoder directly decreases the audio quality with more noise. To analyze this phenomenon, we evaluate the reconstruction quality of each model in terms of Mel-spectrogram reconstruction error (Mel distance) and perceptual evaluation of speech quality (PESQ). To compare the quality of reconstructed audio, we used the same models which we used for ablation studies (trained during 300k steps). For PESQ, we compared each model in both wide and narrow bands. For Mel distance, we calculated the l1 distance between ground-truth Mel-spectrogram from target audio and reconstructed Mel-spectrogram from the reconstructed audio.\n \n(Reconstruction flow: target audio --> linear spectrogram --> Acoustic encoder --> z_a --> Decoder --> reconstructed audio)\n\n|Model|Wide Band PESQ (↑)|Narrow Band PESQ (↑)|Mel distance (↓)|\n|------|:---:|:---:|:---:|\n|VITS|2.03|2.57|0.46|\n|VITS+Phoneme Predictor|2.00|2.55|0.49|\n|HierSpeech|2.12|2.67|0.44|\n\nThe results show that although adding a phoneme predictor to the acoustic encoder improves the linguistic capability in acoustic representation, it also degrades the audio quality. Hence, we did not add the phoneme predictor to the acoustic encoder. Compared with ground-truth audio, our model has almost similar phoneme error rate and word error rate, so we think that it is not necessary to use an additional phoneme predictor in our model.\n\n> Q2, What is “sliced” z_a?\n\nDue to the computational complexity, most end-to-end models [1, 2, 3] use the windowed generator training by upsampling the partial sequences in the decoder. In our case, we sample 32 sequences from the whole z_a, and upsample it by 256x which is the same size as hop size for STFT. We will add the details about the sliced z_a in the revised paper. \n\n> Q3. What is the “global conditioning” in Section 2.3 and how is it used? Is it different from speaker embedding? If it is different, does training on transcribed data require it?\nQ4.What is the speaker encoder and how is it trained?\n\nAs you mentioned, it is the same as speaker embedding. For the multi-speaker settings, we add the global speaker embedding to the residual block of the acoustic/linguistic encoder, residual block in normalizing flow, stochastic duration predictor, and the input of the decoder. For zero-shot or few-shot scenarios, we use the speaker (style) encoder to extract the speaker embedding from ground-truth speech. The speaker encoder is trained with the entire model jointly. We use the linear spectrogram as an input of the speaker encoder, and the speaker embedding with 256 dimensions is extracted. 
We will add the details of the speaker encoder and we will replace “style encoder” with “speaker encoder”. Specifically, the speaker encoder consists of two fully-connected layers with 256 hidden units, two one-dimensional convolutional networks with a residual connection (filter size of 256 and kernel size of 5), a multi-head self-attention module, and a projection layer followed by temporal average pooling. \n\n>Reference\n\n[1] Jeff Donahue, Sander Dieleman, Mikołaj Bińkowski, Erich Elsen, and Karen Simonyan, “End-to-End Adversarial Text-to-Speech,” International Conference on Learning Representations (ICLR), 2021. \n\n[2] Yi Ren, Chenxu Hu, Xu Tan, Tao Qin, Sheng Zhao, Zhou Zhao, and Tie-Yan Liu, “FastSpeech 2: Fast and High-Quality End-to-End Text to Speech,” International Conference on Learning Representations (ICLR), 2021. \n\n[3] Jaehyeon Kim, Jungil Kong, and Juhee Son, “Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech,” International Conference on Machine Learning (ICML), 2021.\n", " We appreciate your helpful comments and suggestions. We have provided responses to your questions below to address your concerns.\n\n>Q1. I think the authors can discuss the difference between the proposed method and the method in [1], which also makes use of self-supervised linguistic representation. Is the only difference in the pipeline that the method in [1] leverages a spectrum synthesizer and HiFi-GAN, while the proposed method incorporates HiFi-GAN into the HierSpeech pipeline (since HierSpeech also utilizes a GAN loss for waveform synthesis)?\n\nThanks for your suggestion. We will add the VQTTS paper [1] to related works, which was accepted at Interspeech 2022 after our submission. We agree that this paper also utilizes self-supervised speech representation for speech synthesis; however, we discuss the differences from our method below.\n\n1. As you mentioned, while VQTTS is a two-stage text-to-speech system, we successfully integrate self-supervised speech representation into the single-stage end-to-end text-to-speech pipeline.\n \n2. VQTTS replaces the Mel-spectrogram with a self-supervised vector-quantized representation for the acoustic feature. However, as shown in our experiments, self-supervised speech representation lacks acoustic capability. Hence, when using these representations as the acoustic representation, it results in mispronunciation and over-smoothing problems in the synthesized speech, and this occurs more frequently in multi-speaker scenarios. In our ablation study, Table 7 shows that using self-supervised representation directly causes performance degradation. This also means that the spectrogram contains more acoustic information than the self-supervised representation. In the paper [1], they also mentioned the limitation in terms of reconstruction performance due to the information loss brought by quantization, which results in a much lower PESQ score than that of using the Mel-spectrogram. Because we had the same problem in early experiments, we designed our model with a hierarchical structure using both the linguistic representation from self-supervised representation and the acoustic representation from the spectrogram, which is important for expressive speech synthesis. In our opinion, to alleviate the problems mentioned above, VQTTS has additional networks using additional CNNs and Conformer blocks, and they also used additional prosody features from text sequences. 
So, we are not entirely sure that the performance of VQTTS is improved by solely using self-supervised representation as the acoustic representation. We believe our hierarchical structure could be applied to VQTTS to improve the expressiveness of their synthesized speech. \n\n3. Unlike VQTTS, we utilize the self-supervised speech representation to disentangle linguistic information and acoustic information, and to learn each attribute hierarchically. With the hierarchical structure, we achieve better performance in both the linguistic and acoustic metrics. Also, our model can extract the linguistic representation from audio-only data without text transcripts, as demonstrated by the untranscribed speaker adaptation, whereas VQTTS still needs a paired text-audio dataset.\n\nWe will add this discussion in the revised paper.\n\n>Reference\n\n[1] Chenpeng Du, Yiwei Guo, Xie Chen, and Kai Yu, \"VQTTS: High-Fidelity Text-to-Speech Synthesis with Self-Supervised VQ Acoustic Feature,\" Interspeech, 2022.\n\n", " We appreciate your helpful comments and suggestions. We have provided responses to your questions below to address your concerns.\n\n>Q1. The formulation of the acoustic VAE is misleading, since the \"weights sharing\" between the acoustic prior and the linguistic posterior is critical, but is not part of Eq. 2 (the ELBO)\n\nThanks for your suggestion about Equation 2. We acknowledge that this part may be misleading since we do not sufficiently emphasize the weight-sharing between the acoustic prior and the linguistic posterior. As shown in Figure 2, the linguistic encoder is used to extract the acoustic prior, which is also used as the linguistic posterior. Although we also stated this in Lines 128 and 152-153, we agree it is easy to miss, as you mentioned. Following your suggestion, we will clarify this equation in the final manuscript. To avoid confusion, we will also emphasize the weight-sharing part in the revised paper.\n \n>Q2. The system overview lacks clarity, and adding all the loss terms (instead of a single KL term) could help.\n\nThanks for your suggestion. We will redraw the system overview in Figure 2 by adding all the loss terms. \n\n>Q3. Lines 85-92: This paragraph repeats the previous paragraph?\n\nThank you for your advice. We will remove the content that repeats the previous paragraph, and revise it in the final manuscript.\n \n>Q4. Line 121: NVAE is not a general hierarchical VAE but rather a particular hierarchical VAE for images\n\nThanks for your advice. We acknowledge that NVAE is a particular hierarchical VAE for images, not a general hierarchical VAE. In the recent speech domain, BVAE-TTS [1] and PVAE-TTS [2] successfully adopted NVAE to the text-to-speech system by learning hierarchical latent representations to increase expressiveness, so we overstated it as a general hierarchical VAE. To avoid confusion, we will clarify this term by removing “general”. \n\n>Q5. Line 128: It could be useful to define A here\n\nThank you for the suggestion. We will add the definition of A here in the final manuscript.\n \n>Q6. Line 283: extra \"The\" \n\nThanks for your comments. We will remove the extra “The” in the revised version.\n\n>Q7. Line 306: Which KL divergence problem did you encounter? Posterior collapse? Other problems?\n\nWhen we trained the model with label smoothing and data augmentation, we experienced that the KL divergence increases more than in the model trained without them. 
It was not a case of posterior collapse, but it resulted in audio quality degradation of the synthetic speech. We will clarify this in the revised paper.\n\nTo avoid another KL divergence problem, we found that it is important to remove the silences in the audio data for each particular dataset. Specifically, the audio of the VCTK dataset has leading and trailing silences. Without removing silences, we had trouble training the model because the KL divergence between the linguistic posterior and prior increases, and therefore the model cannot synthesize speech from text. After we trim the audio under a threshold that determines silences for the VCTK dataset, the model is trained successfully. Although these details are included in our attached source code, we will add them to the appendix for better reproducibility. We recommend using 20 dB as the threshold for the VCTK and LibriTTS datasets, and not removing the silence for the LJSpeech dataset. \n\n>Reference\n\n[1] Yoonhyung Lee, Joongbo Shin, and Kyomin Jung, \"Bidirectional variational inference for non-autoregressive text-to-speech,\" International Conference on Learning Representations (ICLR), 2020.\n\n[2] Ji-Hyun Lee, Sang-Hoon Lee, Ji-Hoon Kim, and Seong-Whan Lee, \"PVAE-TTS: Adaptive Text-to-Speech via Progressive Style Adaptation,\" ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022.\n", " The authors propose a novel speech synthesis model which utilizes untranscribed speech data to improve the representation of the linguistic information. The authors use the representation of wav2vec (self-supervision) as an input to a linguistic encoder which is used as a prior for an acoustic VAE. The authors augment the usual VAE ELBO with multiple terms to improve the audio quality, and also add a KL divergence term between the acoustic prior and the linguistic prior (which is conditioned on text) to support speech synthesis. \nThe authors perform multiple experiments to demonstrate the effectiveness of their method, and ablation studies to justify their design choices. The authors also demonstrate how the proposed method allows untranscribed text-to-speech by fine-tuning the acoustic encoder over audio-only data. Strengths:\n* The audio quality of the provided samples is very good compared to competing methods.\n* The proposed model demonstrates consistent improvement over existing methods.\n* The proposed model supports untranscribed text-to-speech.\n\nWeaknesses:\n* The formulation of the acoustic VAE is misleading, since the \"weights sharing\" between the acoustic prior and the linguistic posterior is critical, but is not part of Eq. 2 (the ELBO)\n* The system overview lacks clarity, and adding all the loss terms (instead of a single KL term) could help.\n * Lines 85-92: This paragraph repeats the previous paragraph?\n\n* Line 121: NVAE is not a general hierarchical VAE but rather a particular hierarchical VAE for images\n\n* Line 127: Eq. 2 does not make sense. As it currently stands, the rightmost term has no effect on the generative process or the acoustic posterior.\nIn practice, this effect is established via the weight sharing of p_theta_a(z_a|x_w2v) = q_phi_l(z_l|x_w2v) but this critical information is rather easy to miss. Please fix it by adapting equation 2 to explicitly show the effect of the weight sharing.\n\nI add here an example of formulating Eq. 2 with explicit weight-sharing: replace q_phi_l(z_l|x_w2v) with p_theta_a(z_a|x_w2v) and separate Eq. 
(2) into two terms: the ELBO plus KL( p(z_a|x_w2v) || p(z_a|c) )\nThis formulation explicitly shows that the conditional prior p(z_a|x_w2v) is regularized to match the linguistic conditional prior p(z_l|c)\n\n* Line 128: It could be useful to define A here\n\n* Line 283: extra \"The\"\n\n* Line 306: Which KL divergence problem did you encounter? Posterior collapse? Other problems? The authors discussed the limitations of the proposed method.", " In order to address the challenge in TTS that it is hard to infer both the linguistic and acoustic attributes from the text directly, the paper proposes HierSpeech, which is a high-quality end-to-end TTS system based on a hierarchical conditional variational autoencoder (VAE) utilizing self-supervised speech representations. The pipeline first extracts self-supervised linguistic representation from text, then converts it to acoustic representations, and finally generates the waveform. Strengths:\n1. The paper introduces self-supervised linguistic representation from wav2vec to help the TTS.\n2. The experiments verify the effectiveness of the proposed pipeline. The paper also discusses the potential of HierSpeech for few-shot TTS and voice conversion.\n3. The paper is well-written and easy to understand.\n\nWeaknesses:\nThere is no obvious weakness.\n I think the authors can discuss the difference between the proposed method and the method in [1], which also makes use of self-supervised linguistic representation. Is the only difference in the pipeline that the method in [1] leverages a spectrum synthesizer and HiFi-GAN, while the proposed method incorporates HiFi-GAN into the HierSpeech pipeline (since HierSpeech also utilizes a GAN loss for waveform synthesis)?\n\n\n[1] VQTTS: High-Fidelity Text-to-Speech Synthesis with Self-Supervised VQ Acoustic Feature\n N/A", " This paper studies end-to-end text-to-speech synthesis and few-shot speaker adaptation using untranscribed speech. In particular, the authors extend from the VITS model and leverage self-supervised speech representations as the intermediate latent variable to bridge the gap between text and acoustic features. An additional phoneme prediction loss is used to refine the self-supervised representation, making it closer to the flow-transformed acoustic latent z_a. In addition, the injection of the intermediate latent inferred from the self-supervised model enables speaker adaptation using untranscribed data. Strengths\n1. An interesting approach to integrate linguistic representations learned by SSL models into the end-to-end TTS pipeline\n2. Provides a neat way of speaker adaptation with untranscribed data\n3. Phoneme prediction loss seems to help not only HierSpeech but also VITS\n4. Good ablation studies comparing different layers of wav2vec representations for HierSpeech and favorable results compared to the baselines (VITS).\n\nWeaknesses\n1. Some details are missing. See the question section.\n 1. Adding PP to the VITS posterior encoder is helpful. The posterior encoder in VITS is analogous to the acoustic encoder in HierSpeech. Have the authors tried adding PP loss to z_a also?\n2. What is “sliced” z_a?\n3. What is the “global conditioning” in Section 2.3 and how is it used? Is it different from speaker embedding? If it is different, does training on transcribed data require it?\n4. What is the speaker encoder and how is it trained?\n Yes, the authors discussed it. It is nicely addressed and mitigation strategies are presented." ]
[ -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, 3, 3, 4 ]
[ "TTapwyv7Jw8", "qbD_Sqw1y8h", "NIFbXTtAwKz", "NeCDaJDx83n", "nips_2022_awdyRVnfQKX", "nips_2022_awdyRVnfQKX", "nips_2022_awdyRVnfQKX" ]
nips_2022_XA4ru9mfxTP
Unifying Voxel-based Representation with Transformer for 3D Object Detection
In this work, we present a unified framework for multi-modality 3D object detection, named UVTR. The proposed method aims to unify multi-modality representations in the voxel space for accurate and robust single- or cross-modality 3D detection. To this end, the modality-specific space is first designed to represent different inputs in the voxel feature space. Different from previous work, our approach preserves the voxel space without height compression to alleviate semantic ambiguity and enable spatial connections. To make full use of the inputs from different sensors, the cross-modality interaction is then proposed, including knowledge transfer and modality fusion. In this way, geometry-aware expressions in point clouds and context-rich features in images are well utilized for better performance and robustness. The transformer decoder is applied to efficiently sample features from the unified space with learnable positions, which facilitates object-level interactions. In general, UVTR presents an early attempt to represent different modalities in a unified framework. It surpasses previous work in single- or multi-modality entries. The proposed method achieves leading performance in the nuScenes test set for both object detection and the following object tracking task. Code is made publicly available at https://github.com/dvlab-research/UVTR.
Accept
The paper proposes a multimodal system for 3d object detection and 3 expert reviewers vote for its acceptance, after rebuttal, based on their appreciation of the good improvements brought by multimodality, and due various interesting details of the system. I agree with reviewer Bb8v that the writing should be polished, starting from the abstract, e.g. "Benefit from the unified manner, cross-modality interaction is then proposed to make full use of inherent properties from different sensors" -- this sentence reads very poorly.
train
[ "EBOzXUvZtw7", "QtD_CnMjGLD", "sDIp2eYg7a", "hA1nrdY3rbl", "4i8199g5Oi", "Ie7f6IIvjg2", "5TbNmbWhlD", "q-mbK1IORWd", "3XfmiG9QTh", "Mu0NI3SoEQI", "14UrmhefQzI", "M0fBxfEC_lE", "yiQBcypy7wO", "T_1IOSu7Qy3", "HbpSkyoYtaQ", "PapyNMDTWHx" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer Bb8v,\n\nWe follow your suggestions and keep polishing the paper. Because the revision cannot be uploaded now, we attach the revision of Section 3.2 below. Hope it can address your remaining concern.\n\n**3.2 Cross-modality Interaction**\n\nWith the unified representation in space $\\mathbf{V}_I$ and $\\mathbf{V}_P$, interactions across modalities can be easily conducted. Given the prior that LiDAR is advanced in localization and cameras provide context for classification, the cross-modality interaction is proposed from two separate aspects, *i.e.*, transferring geometry-aware knowledge to images in a single-modality setting and fusing context-aware features with point clouds in a multi-modality setting. In particular, *knowledge transfer* aims to optimize the features of the student with guidance from the teacher in the single-modality setting. Meanwhile, *modality fusion* is designed to better utilize all modalities in both training and inference stages.\n\n**Knowledge Transfer.** Considering *single modality* input in the inference stage, knowledge transfer is first designed to optimize features of the student with guidance from the teacher during training, which is important in an environment that lacks multi-modality data. Due to inherent properties, the geometry structure contained in images can be further exploited with the aid of point clouds, while the rich context in images can hardly be transferred to sparse point clouds. Therefore, we mainly focus on transferring knowledge from the geometry-rich modality to the poor one in this work. Benefiting from unified feature spaces, the cross-modality transfer can be easily supported, as illustrated in Figure 4. In particular, we take features before the last ReLU layer in the voxel encoder of $\\mathbf{V}_P$ as the geometry-rich teacher, marked as $\\mathbf{T}_P$. Meanwhile, the feature in the same position of $\\mathbf{V}_I$ is taken as the geometry-poor student, denoted as $\\mathbf{S}_I$. If we take one object query position $(x,y,z)$ from Section 3.3, the feature distance for knowledge transfer is formulated as\n\n$$\nd_ {KT}={PL_2}(\\mathbf{T}_ P(x,y,z), \\mathbf{S}_ I(x,y,z)), \\tag{3}\n$$\nwhere ${PL_2}$ represents the partial $L_2$ distance [40]. Without bells-and-whistles, the optimization objective for knowledge transfer is averaged from $N$ object queries of transformer decoder in Section 3.3, namely ${\\mathcal L}_ {KT}=\\frac1N\\sum_i $ $(d_ {KT})$.\nIt should be noted that the whole network is optimized in an end-to-end manner, with no need for extra procedures. Given the object position in each query, we can directly minimize the object-level distance with no need to exclude background features like [37]. In a similar pipeline, the knowledge transfer is further extended to support more input streams, like multi-frame images. The proposed cross-modality knowledge transfer is flexible with input modalities and brings consistent gains over various baselines in Tables 5 and 7.\n\n**Modality Fusion.** Different from the knowledge transfer, modality fusion aims to better utilize *all modalities* in both training and inference stages, which utilizes the complementary knowledge of point cloud and images to improve the performance and robustness. Thanks to the unified representation of each modality, feature fusion can be naturally applied. 
To be specific, given the processed feature space $\\mathbf{V}'_I$ and $\\mathbf{V}'_P$, we first select candidate modality for final prediction via modality switch, as depicted in Figure 2. That means we support single- or multi-modality input for prediction according to different settings. If both modalities are taken, $\\mathbf{V}'_I$ and $\\mathbf{V}'_P$ are added together to formulate the unified voxel space $\\mathbf{V}_U\\in{\\mathbb{R}^{X\\times Y\\times Z\\times C}}$. In this way, both modalities are well expressed in a unified manner, which can be further fused with a single convolution. The space $\\mathbf{V}_U$ unifies modalities with the explicit representation, which provides an expressive space for object interactions in Section 3.3.", " The authors have addressed all the comments I raised in my previous review and revised the manuscript accordingly. I believe this is a solid paper that deserves to be communicated.", " Dear Reviewer Bb8v,\n\nWe sincerely thank your feedback and support. Definitely, we will polish the whole framework and also rewrite the confusing part following your suggestions. Moreover, to provide more details and make our solution clear, we will release the code and models for both detection and the following tracking part to the public.", " First, I would like to thank the authors for conducting experiments in various environments (distance, weather conditions, light situation) and basic convolution head. Also, the additional explanation of the attention part helped me understand this work better. In particular, concerns were resolved through additional experiments.\n\nBut I still think the writing needs to be polished overall. So, assuming they rewrite what they mentioned, including section 3.2, I'll score it up as a \"borderline accept\".", " Dear Reviewer Fn2u,\n\nWe sincerely thank your feedback and support. Your constructive suggestions give us guidance to greatly improve this work.\n", " Thanks the authors for providing such a solid rebuttal with a lot of numbers and discussions! Most of my questions have been answered. Therefore, I increased my rating from 5 to 6.", " Table A-7: Comparisons between different light conditions in the nuScenes *val* set. We separate the dataset into two splits according to the light description.\n\n| Method | Modality | Backbone | NDS(%) [Day] | NDS(%) [Night] |\n| ---------------- | -------- | ----------- | --------------- | --------------- |\n| CenterPoint [24] | LiDAR | V0.075 | 65.1 | 40.1 |\n| **UVTR-L** | LiDAR | V0.075 | 67.8 | 41.4 |\n| **UVTR-C** | Camera | R101 | 44.5 | 23.5 |\n| **UVTR-M** | Both | V0.075-R101 | **70.3** (+2.5) | **42.6** (+1.2) |\n\n**Q3: Why introduce deformable attention for cross attention?**\n\nA3: Sorry for the misunderstanding. The *deformable attention* in the decoder is introduced mainly for the model efficiency, regardless of scale variance (not the *deformable convolution*). As declared in A1 of the *response to all reviewers*, the deformable attention generates sampling point $(x,y,z)$ for each object query and performs object-level interaction in a sparse manner, regardless of the spatial size of feature maps. Moreover, the usage of deformable attention also brings faster convergence speed during training [11]. We will make this motivation clear in the revision.\n\n**Q4: Comparison with a network using local attention or basic conv.**\n\nA4: Thanks for this suggestion. We guess *\"the network using basic conv\"* means the convolution-based head. 
If we misunderstand it, please correct us and we will try to provide more detailed results. To this end, we provide a classic convolutional head (CenterPoint head) for comparisons in A2 of the *response to all reviewers*. It is clear that the proposed transformer decoder achieves significant gains over the convolution-based head. Please refer to Table A-1 for more details. We will add this comparison in the revision.", " Dear Reviewer *Bb8v*,\n\nThank you for the valuable suggestions. We address your questions below.\n\n**Q1: The relationship between knowledge transfer and modality fusion feels awkward.**\n\nA1: Sorry for the misunderstanding. Knowledge transfer and modality fusion are actually separate parts of the cross-modality interaction in Section 3.2. **(1)** Knowledge transfer: As declared in L148-L149 of the main paper, *knowledge transfer* aims to optimize the features of the student with guidance from the teacher in the *single-modality* setting. For example, in the camera-based setting, the point cloud is utilized *during training only* to provide a geometry prior for the optimization guidance of image features. In this way, the network is able to achieve better performance *with images only* during inference. This could be important in an environment that lacks LiDAR data. **(2)** Modality fusion: As illustrated in L165-L166 of the main paper, *modality fusion* is designed to better utilize all modalities in both training and inference stages. In modality fusion, we take the complementary knowledge of the point cloud and images to improve the performance, as you mentioned. We will revise the description in Section 3.2 to make their relationship clear.\n\n**Q2: Evaluate the model in various and more extreme environments.**\n\nA2: Thanks for this suggestion. We follow your suggestion and report the performance under *different distances* in Table A-5, *different weather conditions* in Table A-6, and *different light situations* in Table A-7. Because it is hard to validate such specific scenes (e.g., a tunnel entrance), we give analyses of similar environments with long distances, rainy weather, and dark nights. \n\n**(1)** Distance: In Table A-5, we report performance with different input modalities at various distances. For LiDAR-based approaches, the proposed UVTR-L achieves better performance in all situations compared with CenterPoint [24]. Equipped with both LiDAR and camera inputs in UVTR-M, the framework attains significant gains, especially at a relatively far distance (3.3% NDS gain in 20~30$m$). If the object is too far (>30$m$), the performance gain decreases to 1.6% NDS, but is still much better than CenterPoint and UVTR-L. \n\n**(2)** Weather condition: In Table A-6, we conduct experiments on different weather conditions, i.e., sunny and rainy. It is clear that the proposed UVTR-L achieves significant gains compared with CenterPoint in both conditions. And the additional camera input brings much better results, especially in rainy weather (4.1% NDS gain). \n\n**(3)** Light situation: We perform experiments on the mentioned night situation in Table A-7. Compared with the daylight situation, both LiDAR-based and camera-based approaches perform worse in the dark night, as presented in Table A-7. Compared with CenterPoint, the proposed UVTR-L still performs better. And the camera inputs still bring significant gains in both situations, especially in a daylight environment (2.5% NDS gain). 
These tables will be summarized in the final revision.\n\nTable A-5: Comparisons among different distances in the nuScenes *val* set. We separate the dataset into three splits according to the object distance.\n\n| Method | Modality | NDS(%) [<20$m$] | NDS(%) [20~30$m$] | NDS(%) [>30$m$] |\n| ---------------- | -------- | --------------- | ----------------- | --------------- |\n| CenterPoint [24] | LiDAR | 74.1 | 62.1 | 34.6 |\n| **UVTR-L** | LiDAR | 75.9 | 64.9 | 37.3 |\n| **UVTR-C** | Camera | 52.8 | 39.7 | 20.4 |\n| **UVTR-M** | Both | **77.2** (+1.3) | **68.2** (+3.3) | **38.9** (+1.6) |\n\nTable A-6: Comparisons between different weather conditions in the nuScenes *val* set. We separate the dataset into two splits according to the weather description.\n\n| Method | Modality | Backbone | NDS(%) [Sunny] | NDS(%) [Rainy] |\n| ---------------- | -------- | ----------- | --------------- | --------------- |\n| CenterPoint [24] | LiDAR | V0.075 | 64.6 | 64.4 |\n| **UVTR-L** | LiDAR | V0.075 | 67.4 | 67.9 |\n| **UVTR-C** | Camera | R101 | 43.1 | 48.3 |\n| **UVTR-M** | Both | V0.075-R101 | **69.7** (+2.3) | **72.0** (+4.1) |", " **Q8: Provide model inference speed.**\n\nA8: We follow your suggestion and provide model inference runtime in Table A-4. As presented in the table, for the LiDAR-based setting, UVTR-L consumes cost mainly from the sparse convolution backbone. And the transformer decoder with 3 layers costs about 18ms. For the camera-based setting, the consumption is mainly from the image backbone and view transform process. The voxel encoder and transformer decoder with 6 layers also bring a noticeable cost. Here, we try to reduce the cost in the voxel encoder with spatial separate convolution. For example, we use convolution with kernel sizes $(1,3,3)$ and $(3,1,1)$ to replace Conv3D with a kernel size $(3,3,3)$ for spatial context aggregation. This choice respectively brings 5.2 ms and over 10 ms for LiDAR-based and camera-based settings without too much performance drop. In this work, we focus more on the unified representation with good performance. And the framework can be further accelerated with engineering skills. For example, we directly adopt the naive grid sampling in the view transform. It can be optimized a lot using a CUDA version operator. We will try to make the framework more efficient.\n\nTable A-4: Model inference runtime in the nuScenes *val* set. *SpaSep* indicates spatial separate convolution in the voxel encoder. We test all the models and report results on a single NVIDIA Tesla V100 GPU.\n\n| Method | Backbone | NDS(%) | Backbone | ViewTrans | Encoder | Decoder | Total |\n| ------------- | -------- | ------ | -------- | --------- | ------- | ------- | ------- |\n| UVTR-L | V0.1 | 66.4 | 71.5ms | - | 17.1ms | 18.4ms | 107.0ms |\n| UVTR-L-SpaSep | V0.1 | 66.2 | 71.1ms | - | 11.9ms | 18.2ms | 101.2ms |\n| UVTR-C | R50 | 41.9 | 103.4ms | 64.1ms | 32.1ms | 36.5ms | 236.1ms |\n| UVTR-C-SpaSep | R50 | 40.8 | 102.1ms | 64.5ms | 21.2ms | 37.3ms | 225.1ms |\n| UVTR-C | R101 | 44.1 | 194.1ms | 64.7ms | 32.3ms | 36.1ms | 327.2ms |\n| UVTR-C-SpaSep | R101 | 43.2 | 192.1ms | 64.6ms | 21.9ms | 38.2ms | 316.8ms |\n\n**Q9: Provide results on Waymo and KITTI datasets.**\n\nA9: Thanks for this suggestion. We aim to provide a solution with the multi-view setting, and KITTI could not be so suitable for experiments. Meanwhile, compared with KITTI and Waymo datasets that use 64-beam LiDAR, the point cloud in the nuScenes is more sparse (with 32 beams) and more challenging. 
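For reference, such splits can be formed directly from the scene descriptions shipped with the nuScenes devkit. A minimal sketch is given below (illustrative only, not our exact evaluation code; it assumes the `nuscenes-devkit` package and a default data layout):

```python
from nuscenes.nuscenes import NuScenes  # pip install nuscenes-devkit

nusc = NuScenes(version='v1.0-trainval', dataroot='data/nuscenes', verbose=False)

# Each scene record carries a free-text description such as "Night, rain, ...",
# which we scan for weather and light keywords.
rainy, sunny, night, day = [], [], [], []
for scene in nusc.scene:
    desc = scene['description'].lower()
    (rainy if 'rain' in desc else sunny).append(scene['token'])
    (night if 'night' in desc else day).append(scene['token'])

# Detection metrics are then computed separately on each list of scene tokens.
```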
Thus, it could be better to perform cross-modality interaction in such a dataset. And the method that performs well in the nuScenes dataset usually achieves good results in the Waymo dataset, like CenterPoint [24]. Therefore, the detection and tracking results in the nuScenes dataset should be enough to validate the effectiveness of UVTR. Of course, following your suggestion, we try to perform experiments on the Waymo Open dataset. But such a large-scale dataset (over 1TB data) costs too many computational resources. And we cannot afford it in such a short period. We will report results in the Waymo dataset as you suggested in the final revision.", " Dear Reviewer *Fn2u*,\n\nThank you for appreciating our work with valuable suggestions. We address your questions below (Q1-Q6 for the weakness part, and Q7-Q9 for the limitation part).\n\n**Q1: How the model is trained to estimate the depth distribution $\\mathrm{D}_I$?**\n\nA1: We do not adopt supervision for the depth distribution $\\mathrm{D}_I$. It's interesting to discover that it performs well without depth supervision in our experiments. And we find that without depth supervision, the network trends to ensure high recall of predicted depth (high activation for a range of depth) rather than a particular depth. As you mentioned that it’s non-trivial to acquire dense depth annotation, actually we attempted to use point cloud to provide sparse depth annotation that only brings 0.5% NDS gain without special design. To keep the simplicity of the proposed framework, we do not adopt depth supervision in this work. This will be made clear in the revision.\n\n**Q2: How to lift camera features with $D$ set to 64?**\n\nA2: Yes, you are right, we adopt bilinear interpolation along the estimated occupancy rays. In particular, as declared in L108-L112 of the main paper, we first project the sampling point $(x,y,z)$ to the image plane $(u,v,d)$ with the given calibration matrix $\\mathbf{P}$, where $d$ denotes the reference depth along axis $D$. Then we capture the estimated probability in distribution $\\mathrm{D}_I$ using bilinear interpolation. In this manner, we can efficiently obtain the continuous depth within 64$m$ (satisfy most of the objects) from the ego vehicle. We will add more details to Section 3.1 to make this process clear.\n\n**Q3: How is 2D convolution used as encoders in the voxel space?**\n\nA3: For the Conv2D setting in Table 2 of the main paper, we process each layer of the voxel space along the axis $Z$ using 2D convolution. A simple solution is using Conv3D with kernel size $(1,3,3)$ that performs the same operation as Conv2D. \n\n**Q4: Where is the target features extracted from for multi-modality teacher?**\n\nA4: In the multi-modality knowledge transfer setting, the teacher features are extracted from the fused unified voxel space $\\mathrm{V}_U$, namely the mixture of both modalities.\n\n**Q5: How is knowledge transfer used in the multi-modality model?**\n\nA5: Sorry for the confusion. Knowledge transfer and modality fusion are separate parts in the cross-modality interaction of Section 3.2. In Table 5 of the main paper, we only perform knowledge transfer from knowledge-rich settings to knowledge-poor settings, like LiDAR-based to camera-based models or multi-modality to single-modality models. For multi-modality inputs, to keep the simplicity, we optimize the whole framework in an end-to-end manner without cascade training. That means in a multi-modality setting, we do not perform knowledge transfer in the training stage. 
Of course, applying it in a cascade training manner may bring extra improvements. We do not use it to avoid making the pipeline complex. We will add more training details in the supplementary material to make it clear.\n\n**Q6: Does the transformer decoder jointly detect different types of objects?**\n\nA6: Yes, the transformer decoder jointly detects different objects. Actually, we also set the loss weight of different types of objects to 1.0 without special design in the training stage.\n\n**Q7: Comparisons with widely-adopted CenterPoint head.**\n\nA7: Thanks for this suggestion. We add the comparisons with the CenterPoint head with different modalities in Table A-1 of the *response to all reviewers*. Compared with the CenterPoint head, the designed transformer head achieves significant gains with 1.0% NDS and 4.1% NDS for LiDAR-based and Camera-based settings, respectively. This proves the effectiveness of the designed transformer decoder in UVTR. We will add this table to the revision.", " Dear Reviewer *eNGY*,\n\nThank you for appreciating our work with valuable suggestions. We address your questions below.\n\n**Q1: Why sample with probability in view transform (Figure 3)?**\n\nA1: Thanks for this good question. Because we cannot get the real depth of each image in the camera-based setting (with the camera only). Therefore, we need to estimate the depth of each pixel when the view is transformed to the voxel space. There are actually three ways in the process **(1)** projecting each pixel like a ray with the same prob, **(2)** using estimated discrete depth, **(3)** using estimated depth distribution. For **(1)**, projecting pixels with the same prob cannot reflect the object structure in 3D space, which brings semantic ambiguity with much inferior performance in our experiments. For **(2)**, estimating discrete depth relies heavily on a pre-trained accurate depth estimator, which damages the end-to-end framework design in our UVTR. Thus, we adopt **(3)** to estimate the depth distribution $\\mathrm{D}_I$ for efficient view transform, which guarantees a high recall rate in depth and can be optimized in an end-to-end manner. We will make this clear in the revision.\n\n**Q2: Design choice of FPN, partial L2 distance, and deformable attention.**\n\nA2: In general, our design aims to keep the whole framework simple for good generality. We respond to each point as follows. \n\n**(1)** Choice of FPN: To alleviate the scale variance of objects in the image plane, we adopt the widely-used FPN in 2D object detection without special design. Without FPN, the scale variance of 2D objects cannot be well handled and thus harm the aggregated feature in the voxel space. \n\n**(2)** Choice of partial L2 distance: Actually, we choose the partial L2 distance by experimental results. We compare with naive L2 distance and partial L2 distance for knowledge transfer in Table A-3. The partial L2 distance brings a 0.4% NDS gain compared with the naive version. We will add this table to the supplemental material. \n\n**(3)** Choice of deformable attention: We choose deformable attention mainly for the model efficiency. Please refer to A1 of the *response to all reviewers* for more details.\n\nTable A-3: Comparisons between L2 and partial L2 distance in the nuScenes *val* set. 
Models are trained in 1/4 mini nuScenes *train* set.\n\n| Method | Modality | Backbone | NDS(%) | mAP(%) |\n| ---------------------- | -------- | -------- | --------------- | --------------- |\n| UVTR-L2C-L2 | Camera | R50 | 36.0 | 28.0 |\n| **UVTR-L2C-PartialL2** | Camera | R50 | **36.4** (+0.4) | **28.2** (+0.2) |\n\n**Q3: More details and motivation of the shallow voxel encoder.**\n\nA3: As declared in L136-L139 of the main paper, the voxel encoder is designed to facilitate local feature interaction in the generated voxel space. Experimentally, we found that the performance of the voxel encoder saturates with 3 convolutions, and more convolutions contribute little gain (about 0.1% NDS). To save the computational cost, we set the amount to 3 by default. We will provide this detail in the revision.\n\n**Q4: Is the network trained in an end-to-end manner?**\n\nA4: Sorry for the confusion. Yes, the models with different modalities are trained in an end-to-end manner. For the multi-modality optimization, we fine-tune the backbone (not fix) that pre-trained with every single modality, as declared in L212-L2126 of the main paper. Of course, we will make this part more clear.\n\n**Q5: More details on the network training.**\n\nA5: Thanks for this suggestion! For the knowledge transfer setting, all the losses are used together to optimize the network in an end-to-end manner. In this process, to provide high-quality features for knowledge transfer, we follow the classic distillation paradigm and fix the pre-trained teacher model. We provide extra descriptions of the network optimization in the supplementary material. And we will follow your suggestion to add more descriptions for better illustration.\n\n**Q6: Using learnable weight to combine different modalities.**\n\nA6: Thanks for this suggestion! It's a good idea to dynamically combine the feature from different modalities. To keep the simplicity of the whole framework, we directly summarize the features for the unified voxel space $\\mathrm{V}_U$ in the manuscript. We will try your suggestion later.", " **Other attached experimental results**\n\nTo address the concerns of each reviewer, we also conduct the following experiments. You can find them in response to each reviewer.\n\n- Comparisons between L2 and partial L2 distance. (Table A-3 in *Author response to Reviewer eNGY*)\n- Model inference runtime. (Table A-4 in *Author response to Reviewer Fn2u*)\n- Comparisons among different distances. (Table A-5 in *Author response to Reviewer Bb8v*)\n- Comparisons between different weather conditions. (Table A-6 in *Author response to Reviewer Bb8v*)\n- Comparisons between different light conditions. (Table A-7 in *Author response to Reviewer Bb8v*)\n\n**Additional Reference**\n\n[51] Shaoyu Chen, Xinggang Wang, Tianheng Cheng, Qian Zhang, Chang Huang, and Wenyu Liu. Polar Parametrization for Vision-based Surround-View 3D Detection. *arXiv:* 2206.10965, 2022.\n\n[52] Aleksandr Kim, Aljovsa Ovsep, and Laura Leal-Taixe. EagerMOT: 3D Multi-Object Tracking via Sensor Fusion. In *ICRA*, 2021. \n\n[53] Yihan Zeng, Chao Ma, Ming Zhu, Zhiming Fan, and Xiaokang Yang. Cross-Modal 3D Object Detection and Tracking for Auto-Driving. In *IROS*, 2021.", " Dear all reviewers,\n\nWe sincerely thank your effort in the review with valuable comments and suggestions. We first address the common concerns, followed by detailed responses to each reviewer separately. We hope our responses clarify existing concerns and make these points clear. 
We will really appreciate it if R3 can kindly reconsider the decision, provided that the main concerns are well addressed.\n\n**Q1: The reason for deformable attention in the decoder** (Reviewer eNGY/Bb8v)\n\nA1: We choose deformable attention mainly for the model efficiency. Unlike vanilla attention that looks over all possible spatial locations for each query *Q*, the deformable attention generates sampling point $(x,y,z)$ for each object query and performs object-level interaction in a sparse manner, regardless of the spatial size of the feature maps. This is quite important, especially in a 3D voxel space with a large spatial size. Moreover, the usage of deformable attention also brings faster convergence speed during training [11]. We will make these points clear in the revision.\n\n**Q2: Comparisons with convolution-based head** (Reviewer Fn2u/Bb8v)\n\nA2: The transformer decoder is designed for object-level interaction and efficient object feature capture from the voxel space $\\mathrm{V}_U$. Following the reviewers' suggestion, we compare it with the classic convolution-based CenterPoint head in Table A-1. In particular, to suit the CenterPoint head without bringing too much cost, we compress the axis $Z$ of the constructed unified voxel space $\\mathrm{V}_U$ (after voxel encoder) in Figure 2 with a summation. As presented in Table A-1, with the same head, the proposed UVTR still surpasses CenterPoint [24] with 0.5% NDS and 1.5% mAP. Compared with the convolution-based head, the designed transformer head achieves significant gains with 1.0% NDS and 4.1% NDS for LiDAR-based and camera-based settings, respectively. This proves the effectiveness of the designed transformer decoder in UVTR.\n\nTable A-1: Comparisons with convolution-based head in the nuScenes *val* set. *CPHead* indicates the adopted CenterPoint head.\n\n| Method | Modality | Backbone | Head | NDS(%) | mAP(%) |\n| ---------------- | -------- | -------- | ----------- | --------------- | --------------- |\n| CenterPoint [24] | LiDAR | V0.1 | Convolution | 64.9 | 56.6 |\n| UVTR-L-CPHead | LiDAR | V0.1 | Convolution | 65.4 | 58.1 |\n| **UVTR-L** | LiDAR | V0.1 | Transformer | **66.4** (+1.0) | **59.3** (+1.2) |\n| UVTR-C-CPHead | Camera | R101 | Convolution | 40.0 | 35.1 |\n| **UVTR-C** | Camera | R101 | Transformer | **44.1** (+4.1) | **36.2** (+1.1) |\n\n**Performance on downstream tracking**\n\nTo better illustrate the capability and generality of the proposed UVTR, we further conduct experiments on the downstream tracking task. In particular, we follow the classic tracking-by-detection paradigm and apply the simple greedy tracker in CenterPoint. As presented in Table A-2, the proposed UVTR achieves leading tracking performance with the greedy tracker in different settings. Specifically, in a camera-based setting, the proposed UVTR-L2CS3 surpasses previous SOTA at the leaderboard (BEVTrack) with **17.8%** AMOTA. It further proves the effectiveness and generality of the proposed cross-modality interaction in UVTR. We will add this table to the revision.\n\nTable A-2: Comparisons among leading methods with similar models in the nuScenes *test* set. 
* indicates the state-of-the-art method at the leaderboard with no publication.\n\n| Method | Modality | Tracker | AMOTA(%) | AMOTP | Recall |\n| ---------------- | -------- | ------------------ | ---------------- | ----- | ------ |\n| CenterPoint [24] | LiDAR | Greedy | 63.8 | 0.555 | 0.675 |\n| **UVTR-L** | LiDAR | Greedy | **67.0** (+3.2) | 0.656 | 0.703 |\n| PolarDETR [51] | Camera | Transformer | 27.3 | 1.185 | 0.404 |\n| BEVTrack* | Camera | Private | 34.1 | 1.107 | 0.463 |\n| **UVTR-L2CS3** | Camera | Greedy | **51.9** (+17.8) | 1.125 | 0.599 |\n| EagerMOT [52] | Both | Two-stage | 67.7 | 0.550 | 0.727 |\n| AlphaTrack [53] | Both | Position+Apperance | 69.3 | 0.585 | 0.723 |\n| **UVTR-M** | Both | Greedy | **70.1** (+0.8) | 0.686 | 0.750 |\n", " The paper proposes a new multimodal architecture that receives images and a three-dimensional point cloud and detects 3D objects. The new architecture is based on fusing feature vectors both extracted from voxel spaces and feeding a transformer decoder with an MLP classifier. Besides being able to combine image and point cloud, experiments showed that the proposed strategy can also work in the presence of noise in the point cloud or lack of image data. Furthermore, the experimental results indicate that the proposed architecture is superior to several baselines in terms of NDS and mAP metrics. - Strengths: \n - While every single contribution taken separately seem limited, the integration of all idea appears to be an interesting contribution to field object detection. As far as robust multimodality fusion is concerned, the paper has shown a coherent and wise approach to provide good performance in the presence of noise and lack of data.\n\n - The experiments are done thoroughly, and the discussion section clearly interprets the observations from the results.\n\n- Weakness:\n - Many important details for design choices and experiments are either missing or not properly addressed. These points are summarized and listed in the “Questions”.\n \n - Although the experimental results showed that the proposed approach is superior to the baselines, in several cases (multimodal approaches), the superiority is marginal, e.g., 71.1 versus 70.0, 67.1 versus 66.8. \n \n - Many technical components (particularly the simple ones) are adopted without strong motivation. 1. It is unclear how the “sample with prob” (Figure 3) is being done. According to the text, it seems that the features are being weighted by the occupancy grid store in the tensor D. Why is this a sample and with probability?\n2. Why is FPN being used? Why partial L2 distance? Why apply deformable attention and not a vanilla attention layer? Is there any property in the task that suggests those are the better choice?\n3. The voxel encoder is shallow. Please provide more details and motivation related to the design choices.\n4. The training is not clear. Is the network training in an end-to-end manner? Are the weights of the image and voxel backbone frozen? \n5. Regarding the knowledge transfer and the training, it is not clear whether there is stage-wise training being applied or all the losses are used to train the networks in an end-to-end manner. Please kindly consider improving the description of the network training.\n6. Why is not used a linear layer or an MLP to learn how the weigh the features from the modalities before combining them? Yes. 
The limitations were adequately addressed in the Discussion and Conclusion section.", " This paper focuses on 3D object detection in driving scenarios. It proposes to fuse image features with LiDAR point cloud features following a late-fusion pipeline. The fusion leverages a learned unified 3D voxel feature space. Moreover, it adopts a transformer decoder for object relationships\tmodeling. A feature mimicking based method is also proposed for knowledge transfer between two input modalities. This paper demonstrates 3D detection performance improvement over single-modality and other joint-modality competing methods on the nuScenes benchmark. S-1) Image and LiDAR point cloud fusion is a challenging research problem with high real-world application values.\n\nS-2) The cross-modality interaction is simple yet effective. It could be an easily applied method for knowledge transfer between different sensors. \n\nS-3) It is a good choice of using the transformer decoder as detection head because it can capture object-level relationships. Currently, most 3D object detection heads are based on center heat-map estimation, which lacks the ability to model interactions among different objects.\n\nS-4) For the multi-modality fusion model training, it is a good strategy to first separately pre-train the branch of each modality and then conduct fine-tuning on the fusion model. Such a staged training pipeline can improve the training robustness and this is enabled by the proposed unified voxel feature space.\n\nS-5) The ablation studies are extensive, which demonstrate the effectiveness of each proposed module and network architecture design. For example, Table 1 justifies the necessity of using 3D voxels instead of 2D pillars when encoding lifted image features.\n\nS-6) Figure 5 is inspiring. It clearly demonstrates the robustness advantages of using multi-modality inputs.\n\nW-1) In Eq. (1), how is the model trained to estimate the depth distribution $D_{I}$? For example, what losses are used? Moreover, it’s non-trivial to acquire dense depth annotation for each camera view.\n\nW-2) For camera branch, the voxel resolution is $128 \\times 128$ but $D$ is only set to 64. Thus, how do you lift camera features into the voxel space when determining the occupancy probability for voxel centers? Do you do some type of interpolation along the estimated occupancy rays?\n\nW-3) For Table 2, how is 2D convolution used as encoders in a 3D voxel space?\n\nW-4) In Table 5, for camera student and multi-modality teacher, where are the target/teacher features extracted from? Are they from the camera branch or the LiDAR branch, or a mixture of both?\n\nW-5) Also in Table 5, for the best performing multi-modality fusion model, how is knowledge transfer used in its training recipe? It’s not quite clear how does knowledge transfer benefit the fused model. For example do you first train the LiDAR model, and then distill and train the camera model, which followed by the joint fine-tuning?\n\nW-6) Does the transformer decoder jointly detect different types of objects (e.g. pedestrians, vehicles)? Please see the bullets mentioned in the weakness and limitation sections. L-1) I’m wondering if the authors have also done experiments on comparing deformable transformer decoder with the widely adopted CenterPoint [1] detection head. This can help justify the importance and advantages of modeling object-level interactions.\n\nL-2) Can the authors provide comparisons on the model inference speed and computation cost? 
We all know that latency control is critical for 3D object detection in autonomous driving scenarios.\n\nL-3) Can the authors also provide results on other more widely adopted benchmarks such as Waymo Open Dataset and KITTI Dataset?\n\n[1] Yin, Tianwei, Xingyi Zhou, and Philipp Krahenbuhl. \"Center-based 3d object detection and tracking.\" Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2021.", " In this paper, the authors proposed a 3D detection framework by projecting various sensor data, which are acquired at the same time, onto a voxel feature space. They first point out the limitation of conventional methods where input- or feature-level data are represented as 2D data with reduced dimensions (range image, predicted depth map, etc.). To overcome these types of issues, they introduce a unified voxel space without loss of spatial information. Additionally, the authors make the neural network more effective in extracting sensor fusion features by applying for knowledge transfer / Data augmentation without the complex aligning process. (+) They improved the performance of the 3D detector by devising a sensor fusion method on the voxel representation that is richer than the bird-eye's view representation. Through sections 3 and 4 and the supplementary, the authors provide sufficient information to reconstruct and train the proposed network.\n\n(-) The relationship between modality transfer learning techniques and cross-modality interaction feels awkward. As mentioned by the authors, Image and Lidar points are already complementary to each other, so just using the two features should improve performance. However, the knowledge transfer method seems to improve performance by making the features extracted from images similar to the features extracted from the lidar sensor data.\n\n(-) The authors need to conduct performance evaluation in various environments to reveal the advantages of sensor fusion well.\n I think the authors' cross-modality interaction module should be validated in a more extreme environment. Rather than simply comparing whether features extracted from individual sensors are used, it is necessary to test results in an environment where one of the two sensors does not work at all. (For example, experiments on whether detection is possible only with camera data at a long distance where lidar sensor data does not exist, and experiments on cases where the camera does not work well at night or when passing through a tunnel entrance)\n\nIntroduction of deformable attention for cross attention: Unlike the image space, which contains various scales, the size of the object is determined in the actual 3D space. Therefore, it is not necessary to obtain information of various scales using deformable convolution. Please explain the reason for using deformable attention in unified voxel space. Additionally, can the authors attach a comparison with a network using local attention (basic conv.)?\n The authors mentioned the shortcomings of their detector and the direction to be taken in the future. The disappointing point is that they restate known limitations, where voxel-based methods have a large increase in computation voxel-based methods have a large increase in the amount of computation compared to other representation methods (e.g., range view / bird’s eye view), in this field." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "hA1nrdY3rbl", "14UrmhefQzI", "hA1nrdY3rbl", "5TbNmbWhlD", "Ie7f6IIvjg2", "HbpSkyoYtaQ", "PapyNMDTWHx", "PapyNMDTWHx", "HbpSkyoYtaQ", "HbpSkyoYtaQ", "T_1IOSu7Qy3", "nips_2022_XA4ru9mfxTP", "nips_2022_XA4ru9mfxTP", "nips_2022_XA4ru9mfxTP", "nips_2022_XA4ru9mfxTP", "nips_2022_XA4ru9mfxTP" ]
nips_2022_FzdmrTUyZ4g
Monte Carlo Tree Descent for Black-Box Optimization
The key to Black-Box Optimization is to efficiently search through input regions with potentially widely-varying numerical properties, to achieve low-regret descent and fast progress toward the optima. Monte Carlo Tree Search (MCTS) methods have recently been introduced to improve Bayesian optimization by computing better partitioning of the search space that balances exploration and exploitation. Extending this promising framework, we study how to further integrate sample-based descent for faster optimization. We design novel ways of expanding Monte Carlo search trees, with new descent methods at vertices that incorporate stochastic search and Gaussian Processes. We propose the corresponding rules for balancing progress and uncertainty, branch selection, tree expansion, and backpropagation. The designed search process puts more emphasis on sampling for faster descent and uses localized Gaussian Processes as auxiliary metrics for both exploitation and exploration. We show empirically that the proposed algorithms can outperform state-of-the-art methods on many challenging benchmark problems.
Accept
This paper proposes a novel combination of Bayesian optimization and Monte Carlo Tree Search for more sample-efficient black-box optimization. The method adds complexity, but the empirical results are thorough and show a clear benefit. The reviewers on this paper did not come to a unanimous decision, but the clear majority advocated for acceptance, and I concur, especially because reviewer 6Fro did not spell out their concerns with sufficient concreteness.
train
[ "yIflUXxssCW", "ZczyHKspmTU", "PkZP1TxeQq", "ol-8hjV4MX8", "KcKoOpHhgK2", "BuCCRbEp9c", "bEjGmK4LDlp", "t4f9SIZLbXR", "laAYALK9Hiw" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " After reading the responses by the authors, several issues have been addressed.\nHowever, I still think the motivation of this paper is not convincing enough for me. \nDue to the sufficient experimental results, I'm increasing my score to weak accept. ", " Thank you for your feedback. We address the specific questions as follows:\n\n> Authors may need to highlight the motivation and make it reasonable, explain in detail why you combine the two approaches.\n\nIt is a common practice to run multiple optimization processes with different initial conditions in black-box optimization. A key part of our idea is to integrate all optimization processes into a tree structure, and instead of running each process to its end, we would prefer to select the most promising process at each stage. On the basis of this concept, we construct the MCTS structure incorporating all of these processes. \nFurthermore, the combination of both modeling and direct search approaches can be beneficial. \nIn most modeling approaches, a sample is selected randomly in a trusted region where the information is collected.\nSurrogate models may, however, utilize historical data from search models rather than randomly selecting samples from the neighborhood. Consequently, one sample that is used in the search model to determine the descent direction also provides information for constructing the surrogate model. In direct search optimization, since the descent direction of a local point is approximated from the learning model, the search is not completely random. As a backup strategy, the STO proposes an alternative search in the opposite direction in the event that the first search fails to yield a decent result. In this way, the sample efficiency can be improved by combining the two different approaches. \n\n> Algorithm 1 should be rewritten for clarity.\n\n I have rewritten the algorithm 1, and I would like to clarify three equations:\nThe equation of UCT in path selection (as in line 5 of Algorithm 1 and equation (2) in main text) is \n\n$$\nuct_i = -y^*_{i} + C_d \\cdot \\sum_{j=1}^{J} (dy_{i,-j}) + C_p \\cdot \\sqrt{\\log{n_{node}}/n_{i}}\n$$\n\nHere, $C_d$ is a weight factor controlling the importance of recent improvements, $C_p$ is a hyper-parameter for the extent of exploration, $n_{node}$ and $n_i$ are the number of visits to the parent node and child $i$, respectively. $y*$ i is the best value found at the child node $i$, $dy_{i, -j}$ is the most recent j’s improvement at node $i$, ($j=1,...,J$), and is set to 0 if no improvement is obtained at that step. 
\n\nThe equation on checking if a new branch should be added (as in line 6 of Algorithm 1 and line 195 in the main text) is:\n\n$$\nUCT_{explore} = -\\sum_{i=1}^{N} y^*_{i}/N + C'_p \\cdot \\sqrt{\\log{n_n}}\n$$\n\nwhere $C'_p$ is also a hyper-parameter for the extent of exploration, $N$ is the number of children of the parent node being checked, and $n_n$ is $n_{node}$ as in the equation above.\n \nThe equation on determining if a leaf node is worth exploiting:\n\n$$\n-y^*_{i} + C_d \\cdot \\sum_{j=1}^{J} (dy_{i,-j}) > C''_p \\cdot n_f\n$$\n\nwhere $C''_p$ is another parameter for the extent of exploration, and $n_f$ is the number of visits to the leaf node.\n\n> More experiments should be done to test the sensitivity of the hyper-parameters in the MCDescent.\n\nWe have included an ablation study in the supplementary material, covering $C_d$, $C_p$, $C'_p$, $C''_d$, $C''_p$, and the function value for switching to fine-grained STP. \n\n> The number of samples is too small for the RL tasks. \n\nWe are limited by the hardware resources available to us. Models based on BO are memory-intensive.\nWe ran several TuRBO/LaMCTS runs up to 10K samples on an HPC node with 256GB memory, but those resources were unavailable at this time.\nMCDescent runs on Google Colab, with only 16GB of memory (DRAM and GPU), and cannot accommodate more than 3.5K samples.\nMoreover, the average reward for LaMCTS at 10K samples is 392, and at 3K it is 297, which accounts for 75% of the reward from 10K samples. Therefore, we usually limit the comparison to 3K.\n\n\n> It is better to examine MCDescent on high-dimensional synthetic functions.\n\nIn the supplementary material we also added a performance plot for \"Ackley-500d\", which has many local minima. The budget ratio (descent : BO) is set to 1:1, the same value as for Ackley-50d and Ackley-100d. We found that the contribution from the STP model is greatly reduced on this function compared with Ackley-50d and Ackley-100d. In order to achieve better performance, we have to raise the computational power spent on BO.\n\n> How does MCDescent compare to other methods in terms of running time?\n\nThank you for pointing this out.\nDue to limited resources, we conducted our testing on different types of machines.\nFor the function Michalewicz-100d: \n* Nelder-Mead and CMA can process 3K samples in a matter of minutes. \n* TuRBO may take 2 hours and LaMCTS 6 hours for the first 3K samples on an HPC node (AMD EPYC 7742, 4608 GFlop/s, 256 GB of DDR4 RAM). \n* MCDescent collects 3K samples within two hours on the Google Colab platform (Intel(R) Xeon(R) 2.20GHz, 16GB DRAM, P100 GPU).", " Thank you for your feedback. We address the specific questions as follows:\n\n> The article is poorly written, and often it is difficult to figure out the intention of the authors. For instance, in Algorithm 1, the index i is undefined and probably is different from the index i in (2), which probably is j in Alg.1. dy is also undefined in the algorithm, although (2) gives sufficient hint about it. Other variables are undefined as well, but familiarity with MCTS is sufficient to have a reasonable guess. The authors often inform us what the algorithm is not, instead of focusing on a clear description of the algorithm, and explaining the differences subsequently.\n\nIt is a common practice to run multiple optimization processes with different initial conditions in black-box optimization.
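Putting the two tests just quoted into code, a hedged sketch (again with illustrative names only, and with the exact comparison against the children's scores left as a modeling choice) could read:

```python
import math

def expansion_score(children_best_y, n_parent, C_p_prime):
    """UCT_explore = -mean(y*_i) + C'_p * sqrt(log(n_node)): the score of a
    'virtual' new child. When it dominates the existing children's UCT
    scores, the node spawns a new branch (a fresh descent process)."""
    N = len(children_best_y)
    return -sum(children_best_y) / N + C_p_prime * math.sqrt(math.log(n_parent))

def leaf_worth_exploiting(best_y, recent_gains, n_leaf, C_d, C_pp):
    """Leaf test: -y*_i + C_d * sum(dy_{i,-j}) > C''_p * n_f. As visits n_f
    accumulate without matching improvement, the test eventually fails and
    the leaf is split instead of exploited further."""
    return -best_y + C_d * sum(recent_gains) > C_pp * n_leaf
```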
A key part of our idea is to integrate all optimization processes into a tree structure: instead of running each process to its end, we prefer to select the most promising process at each stage. On the basis of this concept, we construct the MCTS structure incorporating all of these processes. \n \nFurthermore, we believe that the combination of modeling and direct-search approaches can be beneficial. \nIn most modeling approaches, a sample is selected randomly in a trusted region where the information is collected.\nSurrogate models may, however, utilize historical data from search models rather than randomly selecting samples from the neighborhood. Consequently, one sample that is used in the search model to determine the descent direction also provides information for constructing the surrogate model. The sample efficiency can be improved by utilizing the same sample in the two different approaches. \n \nIn direct-search optimization, since the descent direction at a local point is approximated from the learned model, the search is not completely random. As a backup strategy, STP proposes an alternative search in the opposite direction in the event that the first search fails to yield a decent result.\n\n\n\nAdditionally, we would like to clarify three equations.\nThe equation of UCT in path selection (as in line 5 of Algorithm 1 and equation (2) in the main text) is \n\n$$\nuct_i = -y^*_{i} + C_d \\cdot \\sum_{j=1}^{J} (dy_{i,-j}) + C_p \\cdot \\sqrt{\\log{n_{node}}/n_{i}}\n$$\n\nHere, $C_d$ is a weight factor controlling the importance of recent improvements, $C_p$ is a hyper-parameter for the extent of exploration, and $n_{node}$ and $n_i$ are the numbers of visits to the parent node and child $i$, respectively. $y^*_{i}$ is the best value found at the child node $i$, and $dy_{i,-j}$ is the $j$-th most recent improvement at node $i$ ($j=1,\\ldots,J$), set to 0 if no improvement is obtained at that step. \n\nThe equation on checking if a new branch should be added (as in line 6 of Algorithm 1 and line 195 in the main text) is:\n\n$$\nUCT_{explore} = -\\sum_{i=1}^{N} y^*_{i}/N + C'_p \\cdot \\sqrt{\\log{n_n}}\n$$\n\nwhere $C'_p$ is also a hyper-parameter for the extent of exploration, $N$ is the number of children of the parent node being checked, and $n_n$ is $n_{node}$ as in the equation above.\n \nThe equation on determining if a leaf node is worth exploiting:\n\n$$\n-y^*_{i} + C_d \\cdot \\sum_{j=1}^{J} (dy_{i,-j}) > C''_p \\cdot n_f\n$$\n\nwhere $C''_p$ is another parameter for the extent of exploration, and $n_f$ is the number of visits to the leaf node.\n", " Thank you for your careful reading of our paper. We answer the specific questions as follows.\n\n> The fact that the proposed methods are focused on deterministic functions wasn't made explicit until Line 216 on page 6, and later in the conclusion. I think this would have been good to make clear from the very beginning.\n\nThank you for your suggestion. We will make this clear at the beginning of the text.\n\n> The uncertainty regions in some of the results are very large. Could more runs have been performed to bring down this uncertainty?\n\nWe suppose the result with “Walker-204d” is the one with large uncertainty; in all others our approach has uncertainty similar to TuRBO and LaMCTS.\nWe have added additional runs to it, and updated the plot in the main PDF as well as in the supplementary material.\n\n\n> I am not sure that Figure 4 conveyed much meaning to me.
It was hard to understand the point you were trying to get across.\n\nOur study in Fig. 4 examines the optimization trajectory on the objective-function landscape and how the trajectories differ across methods. \nIn BO-based TuRBO models, the optimization begins by sampling randomly within the trusted region and gradually locates the optimal point; in such a case, the trajectory swirls around to collect information about the state space. \nLaMCTS utilizes TuRBO and emphasizes the exploration of the entire state space. As we can see, such a trajectory will explore a larger area of the space. With our MCDescent, we utilize TuRBO, but use direct search to locate the optimal point in a shorter period of time.\n\n\n> Line 67-68 - This sentence doesn't make very much sense to me as written.\n\nThank you for your careful reading. We would rephrase it as: “Bayesian Optimization algorithms are a typical class of algorithms that use modeling approaches. These algorithms build their surrogate models based on the Gaussian Process.”\n\n\n> On line 267 you state that some high dimensional spaces are inhospitable to descent approaches? What are the characteristics of these inhospitable spaces? How can you tell if a space is inhospitable?\n\nDirect-search algorithms often suffer from highly non-smooth objectives with many local minima. Although the number of iterations required to reach an error $E[f(x) - f(x^*)] < \\epsilon$ is proportional to the number of dimensions $n$ (a complexity of $O(n)$, [Bergou et al.]), we wish to improve performance so that the STP method finds a significant descent direction and step within a limited number of steps. Typically, such work in high-dimensional spaces requires a small initial step size. However, small initial steps in descent lead to samples that are close to one another, which is not preferred by BO at the initial stage.\n\nIn the supplementary material we also added a performance plot for \"Ackley-500d\", which has many local minima. The budget ratio (descent : BO) is set to 1:1, the same value as for Ackley-50d and Ackley-100d. We found that the contribution from the STP model is greatly reduced on this function compared with Ackley-50d and Ackley-100d. In order to achieve better performance, we have to raise the computational power spent on BO.\n\n[Bergou, E. H., Gorbunov, E., and Richtárik, P. Stochastic three points method for smooth minimization.\nSIAM Journal on Optimization, 30(4):2726–2749, 2020]\n", " Thank you for your comments and questions. We have inlined our responses. \n\n> Weaknesses: The algorithm has eight hyperparameters, and their values vary for different benchmarks. Also, there is no explanation about how they are tuned. I would be happy to give the paper a better score if this weakness is solved.\n\nReply: In our approach, we emphasize the absolute value of the objective function at every step of the path-selection and tree-exploration stages. As a consequence, we have to tune these hyper-parameters to fit the scale of the function value and of the optimization improvement. Here is how they were tuned in our approach. \nThe budget ratios, initial step size $\\alpha$, and switching value for descent optimization are determined by running BO and descent optimization only on the root node. It is recommended to switch to the fine-grained model when the performance of STP becomes limited.
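For orientation, a single STP-style descent update of the kind referred to above can be sketched as follows; this is a minimal illustration assuming a deterministic objective `f` and a fixed step size, not the paper's exact procedure.

```python
import numpy as np

def stp_step(f, x, alpha, rng):
    """One stochastic-three-points update: evaluate x + alpha*s and x - alpha*s
    along a random unit direction s (the 'opposite direction' acting as the
    backup search) and keep the best of the three candidates."""
    s = rng.standard_normal(x.shape)
    s /= np.linalg.norm(s)
    candidates = [x, x + alpha * s, x - alpha * s]
    values = [f(c) for c in candidates]
    best = int(np.argmin(values))          # minimization
    return candidates[best], values[best]
```

In the hybrid scheme discussed in this thread, each evaluated candidate could also be added to the local GP's training data, which is the sample-reuse argument made earlier.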
If, however, the STP outperforms the BO significantly, the budget for descent optimization is reduced.\n$C_d$, $C''_d$ and $C_p$ should be set to reflect the acceptable improvement speed of the optimization on a single node and should be adapted to the same scale as the function value. \nThe weighting factors $C'_p$ and $C''_p$ for tree exploration are set in such a way as to ensure that the exploitation of a node is not \"optimization saturated\". \n\nWe also added an ablation study on the hyper-parameters to the supplementary material.\n$C_d$ emphasizes the recent improvement. A small $C_d$ will make the path focus on the node with the best value at the current step and may lead to local minima. However, a large $C_d$ will over-emphasize the recent improvement, so the path tends to focus on newly added nodes due to their large improvement during the first few steps. $C_p$ and $C'_p$ control the selection of branches. When they are high, sub-optimal nodes or new branching nodes are selected for exploration.\n\n$C''_p$ and $C''_d$ are responsible for splitting the leaf node. When these two parameters are set to large values, it is likely that a new sibling leaf will be created at the selected leaf, which will significantly reduce the performance of the algorithm. In contrast, a path with small values will tend to select the node with a locally optimal value. Although $C''_d$ and $C''_p$ have the same functionality as $C_d$, $C_p$, and $C'_p$, the criterion for tree exploration at a leaf node is different, since a leaf always has zero children, which means the normal UCT formula is no longer valid.\n\n> Where is the definition of $C_d$? I may have overlooked it, but I could not find it in the main material. (It is mentioned in the supplementary as \"weight of recent improvement.\")\n\nHere, we would like to clarify three equations.\nThe equation of UCT in path selection (as in line 5 of Algorithm 1 and equation (2) in the main text) is \n\n$$\nuct_i = -y^*_{i} + C_d \\cdot \\sum_{j=1}^{J} (dy_{i,-j}) + C_p \\cdot \\sqrt{\\log{n_{node}}/n_{i}}\n$$\n\nHere, $C_d$ is a weight factor controlling the importance of recent improvements, $C_p$ is a hyper-parameter for the extent of exploration, and $n_{node}$ and $n_i$ are the numbers of visits to the parent node and child $i$, respectively. $y^*_{i}$ is the best value found at the child node $i$, and $dy_{i,-j}$ is the $j$-th most recent improvement at node $i$ ($j=1,\\ldots,J$), set to 0 if no improvement is obtained at that step. \n\nThe equation on checking if a new branch should be added (as in line 6 of Algorithm 1 and line 195 in the main text) is:\n\n$$\nUCT_{explore} = -\\sum_{i=1}^{N} y^*_{i}/N + C'_p \\cdot \\sqrt{\\log{n_n}}\n$$\n\nwhere $C'_p$ is also a hyper-parameter for the extent of exploration, $N$ is the number of children of the parent node being checked, and $n_n$ is $n_{node}$ as in the equation above.\n \nThe equation on determining if a leaf node is worth exploiting:\n\n$$\n-y^*_{i} + C_d \\cdot \\sum_{j=1}^{J} (dy_{i,-j}) > C''_p \\cdot n_f\n$$\n\nwhere $C''_p$ is another parameter for the extent of exploration, and $n_f$ is the number of visits to the leaf node.\n\n\n> I think there are better citations for CMA-ES. Please explain the reason if it is meaningful to cite this specific implementation.\n\nThank you for pointing this out. There are two sources with the citation information: a GitHub repository at [Hansen et al. (2019)], and the other at [Hansen et al. (2022)].
Both of them have the latest implementation, version r3.2.2. As we checked the information on GitHub, we used the citation from there. \n\n> This is not a question, but I would like to see Figure 1 of the appendix in the main material if possible because it is very helpful for understanding the algorithm.\n\nThank you very much for the suggestion. This figure will be moved to the main text. \n\n[Hansen et al. (2022)] Nikolaus Hansen, et al. (2022). CMA-ES/pycma: r3.2.2 (r3.2.2). Zenodo. https://doi.org/10.5281/zenodo.6370326\n", " This paper proposes a method for Black-Box Optimization which incorporates the idea of UCT in sampling.\nThe proposed method, MCDescent, shows promising results on several benchmarks.\n Strengths:\nThe idea is simple, and the algorithm is easy to understand.\nProbably the implementation is also not complex.\n\nWeaknesses:\nThe algorithm has eight hyperparameters, and their values vary for different benchmarks.\nAlso, there is no explanation about how they are tuned.\nI would be happy to give the paper a better score if this weakness is solved.\n\n---\nPost rebuttal.\nThank you very much for the detailed comments on my concerns.\nI changed my score. There should be a discussion focusing on the hyperparameters.\nFor example, if possible, I would like to see the following explanation.\n1. How sensitive the performances are to the hyperparameters.\n1. How the authors chose the values in Table 2.\n1. Is there any guideline for tuning the hyperparameters?\n\nWhere is the definition of $C_d$?\nI may have overlooked it, but I could not find it in the main material.\n(It is mentioned in the supplementary as \"weight of recent improvement.\")\n\nI think there are better citations for CMA-ES.\nPlease explain the reason if it is meaningful to cite this specific implementation.\n\nThis is not a question, but I would like to see Figure 1 of the appendix\nin the main material if possible because it is very helpful for understanding the algorithm. I could not think of any direct possibility of potential negative societal impact.\n", " This paper focuses on the problem of Black-Box Optimization, optimizing a function while only being able to sample its value at different points. The authors propose a novel combination of building a tree to organize the different regions being searched and using modifications of STP and TuRBO to optimize from a given point. The overall method balances exploring new regions for more promising spots against exploiting and optimizing in a good spot. A modified UCT rule is used to guide traversal of the tree, which also includes an option to create a new child at each node. The method is experimentally evaluated in many settings and shown to be competitive with previous approaches, while surpassing their performance on many problems. \n Strengths: \n- I quite liked this paper and felt that its approach to combining MCTS with BO was unique and makes a very solid contribution. \n- The evaluation is thorough and complete and is very convincing as to the strengths of the proposed methods. \n- I appreciated the ablation studies that were done. Overall the evaluation feels solid. \n- The paper is well-written and clear. \n\nWeaknesses: \n- The fact that the proposed methods are focused on deterministic functions wasn't made explicit until Line 216 on page 6, and later in the conclusion. I think this would have been good to make clear from the very beginning. \n- The uncertainty regions in some of the results are very large.
Could more runs have been performed to bring down this uncertainty? \n- I am not sure that Figure 4 conveyed much meaning to me. It was hard to understand the point you were trying to get across. \n\nMinor feedback: \n- Line 67-68 - This sentence doesn't make very much sense to me as written. \n- Line 122 - \"... have similar values...\" might better be: \"... have more similar values...\"\n- Algorithm 1 - \"GP.correlation lengh\" seems like it should be \"GP.correlation length\" \n- Line 176 - \"In the scene the ...\" might sound better as \"In the event that the ...\"\n- Line 227 - \"...we create following ...\" perhaps should be \"...we create the following...\"\n\n - On line 267 you state that some high dimensional spaces are inhospitable to descent approaches? What are the characteristics of these inhospitable spaces? How can you tell if a space is inhospitable?\n This was addressed to my satisfaction. \n", " The paper considers optimization of deterministic black-box functions. The proposed algorithm combines Monte Carlo tree search with local descent/Bayesian optimization. Empirical work shows strong performance of the proposed method compared to various baselines on a diverse set of domains. The article is poorly written, and often it is difficult to figure out the intention of the authors. For instance, in Algorithm 1, the index i is undefined and probably is different from the index i in (2), which probably is j in Alg.1. dy is also undefined in the algorithm, although (2) gives sufficient hint about it. Other variables are undefined as well, but familiarity with MCTS is sufficient to have a reasonable guess. The authors often inform us what the algorithm is not, instead of focusing on a clear description of the algorithm, and explaining the differences subsequently. \n\nTherefore, while the algorithm could be interesting, it is fairly difficult to evaluate its strength.\n\nThe empirical results seem to indicate that the proposed algorithm performs well. Some of the domain choices seem a bit strange for black-box optimization (why would we want to optimize a linear policy on a MuJoCo domain this way, and not a more complex policy by reinforcement learning?), but ultimately it is possible to use these domains as well for testing. \n\n-------------------------\nAfter rebuttal:\n\nI appreciate the effort of the authors to improve the paper according to the suggestions of the reviewers. I hope it will become a better paper after a few more iterations. - -", " This paper proposes a new MCTS-based optimization method that combines descent methods and BO. Different from LA-MCTS, the proposed MCDescent does not use partitioning, due to its limitations in high-dimensional spaces, and uses a set of unique mechanisms in the MCTS framework. Experiments show that MCDescent can outperform baselines and state-of-the-art black-box optimization methods. Although the experimental results show the superior performance of the proposed method, I have the following concerns.\n\n1. The novelty of the proposed method is limited. Authors may need to highlight the motivation and make it reasonable, explain in detail why you combine the two approaches.\n\n2. The writing of this paper should be improved. For example, Algorithm 1 should be rewritten for clarity; the notations used in the current version are very confusing and inconsistent.\n\n3. More experiments should be done to test the sensitivity of the hyper-parameters in the MCDescent, e.g., $C_p$. \n\n4. The number of samples is too small for the RL tasks.
Although the authors want to examine the performance of different algorithms under a limited number of evaluations, the too-small budgets lead to inferior performance, and the policy does not converge on some tasks. For example, in the Walker environment, the number of samples used in the LA-MCTS paper is 40000. However, it is only 3000 in this paper. \n\n5. The authors said, \"the performance of MCDescent may be limited in some high dimensional spaces that are inhospitable to descent approaches\". It is better to examine MCDescent on high-dimensional synthetic functions.\n\nThere are also some minor problems.\n\n1. More references are needed, such as lines 34-35.\n\n2. The experiments on Tree expansion and Optimization route cannot be called an \"Ablation study.\" How does MCDescent compare to other methods in terms of running time? The authors should discuss this and show it in the experiments. Yes." ]
[ -1, -1, -1, -1, -1, 6, 7, 2, 6 ]
[ -1, -1, -1, -1, -1, 4, 2, 5, 4 ]
[ "ZczyHKspmTU", "laAYALK9Hiw", "t4f9SIZLbXR", "bEjGmK4LDlp", "BuCCRbEp9c", "nips_2022_FzdmrTUyZ4g", "nips_2022_FzdmrTUyZ4g", "nips_2022_FzdmrTUyZ4g", "nips_2022_FzdmrTUyZ4g" ]
nips_2022_E3LgJdPEkP
A Mean-Field Game Approach to Cloud Resource Management with Function Approximation
Reinforcement learning (RL) has gained increasing popularity for resource management in cloud services such as serverless computing. As self-interested users compete for shared resources in a cluster, the multi-tenancy nature of serverless platforms necessitates multi-agent reinforcement learning (MARL) solutions, which often suffer from severe scalability issues. In this paper, we propose a mean-field game (MFG) approach to cloud resource management that is scalable to a large number of users and applications and incorporates function approximation to deal with the large state-action spaces in real-world serverless platforms. Specifically, we present an online natural actor-critic algorithm for learning in MFGs compatible with various forms of function approximation. We theoretically establish its finite-time convergence to the regularized Nash equilibrium under linear function approximation and softmax parameterization. We further implement our algorithm using both linear and neural-network function approximations, and evaluate our solution on an open-source serverless platform, OpenWhisk, with real-world workloads from production traces. Experimental results demonstrate that our approach is scalable to a large number of users and significantly outperforms various baselines in terms of function latency and resource utilization efficiency.
Accept
I agree with the reviewers that this is a well-written paper on an interesting application of mean-field games. The paper is a nice blend of theoretical developments and experimental evaluations. I believe that it will be well-received by the NeurIPS community and recommend acceptance.
test
[ "aTsMvAVdXqo", "AKd5hb8bPg4", "Srpze0q2Fmm", "n5ThODp-KED", "uP1olfpnIbL", "tl512tHeJCH", "RXmjQBk6v9f", "nrJ8aC7-Hv", "yu1jcSF6t6s", "w0DRwt-4Kp", "rAoAVpOCKT_", "dgUbSxAFuKG", "9aQuAoHcu9R", "4cqXLWViweC", "wrxrVmFBlL" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks to the authors for their detailed reply. I expected the paper to discuss cloud resource management in more detail since the title is \"A Mean-Field Game Approach to Cloud Resource Management with ***\". However, putting a lot of scenario descriptions in the appendix does not help the reader to understand how to formulate the problem with RL and why the MFG framework and NAC can solve it well, especially for readers from the industry.\n\nAlternatively, the paper could claim to build MFG frameworks and NAC solutions for ALL RL-compatible application problems from a more general perspective. In this way, the contribution of this paper will be more valuable.\n\nAnyway, this is just a writing issue. I believe the proposed solution has good potential to be applied to other scenarios. Therefore, the current score is appropriate.\n", " Thanks for the additional comments and we appreciate your time for discussing in more detail!\n\n5. We do agree that resource allocation and scheduling are highly related. When we fix the function placement and scheduling algorithm, the resource allocation policy learned by the RL agent will be the optimal policy given the function placement/scheduling “state”. In another word, the objective of the resource allocator is mainly for guaranteeing QoS under the function placement/scheduling framework, which is the scope of the RL formulation.\nNote that we are not claiming that it is the optimal resource allocation policy at **any** time because as you also mentioned, the policy should naturally be a joint effort from both the function placement/scheduling control plane and the resource allocation control plane. In addition, how quickly resource requests issued from the framework can be guaranteed is determined by the underlying cluster autoscalers/schedulers on a container orchestration platform (like Kubernetes). And indeed, the coordination between serverless resource allocation and the underlying system scheduling would be an interesting question to study.\n\n6. Thanks for the comment! We do want to clarify two concepts: (1) serverless computing platform; and (2) cluster management or container orchestration platform (like Borg / Kubernetes / Twine / YAWN). To the best of our knowledge, the serverless computing platform is still using a master-worker architecture in any open-sourced or commercial serverless platform. However, the reason why “commercial platforms like AWS Lambda or Azure Functions can scale to thousands of concurrent functions” is because their underlying cluster management platform is hierarchical as we mentioned earlier. Their way of scaling is through a hierarchical design (but for cluster management like Kubernetes not for the serverless platform). In a hierarchical cluster management platform, to scale to thousands of concurrent functions, each small/medium-sized sub-cluster/system pool can be used to deploy one serverless platform (different sets of functions run on different clusters), on which our framework can be applied.\n\nWe also want to mention that the experiments we did (including both simulation-based experiments and the real experiments on a serverless computing system) serve as a validation of the theory. With these validation results, our MFG formulation and NAC framework (with algorithm proved theoretically) have the potential to be applied to a wide range of problems (e.g., RL for congestion control, video streaming, power management, etc.) 
and open the door to welcome more systems research based on the theory, namely the convergence results and the approximation errors would decrease as the number of agents increases.", " Thanks for answering my questions in such detail. I have adjusted the score accordingly.\nHere are some additional comments.\n\n5. I think resource allocation and scheduling are highly related. Because the function placement ultimately determines the resource sharing quality. For example, if you pack many containers on one worker, you won't be able to allocate more resources to satisfy SLO because the worker's total amount of resources is limited. In other words, a system needs both a good resource allocator and a good scheduler. It would be interesting to see some discussions in the paper.\n\n8. Just a minor comment regarding \"the serverless platform architecture is a centralized model where a central manager controls all workers\". It is probably true just for OpenWhisk, which does not have a scalable architecture. By contrast, commercial platforms like AWS Lambda or Azure Functions can scale to thousands of concurrent functions.", " 8. Note that given that the serverless platform architecture is a centralized model where a central manager controls all workers, it naturally cannot scale beyond a large number of servers. The central manager is usually the bottleneck for scalability. In addition, we did perform a larger-scale experiment with 120 functions and found that the percentage difference of the p99 latency for each function between the 20- and 120-agent settings is smaller than 1.9%. We have also found that as the number of agents increases, the OpenWhisk controller is not able to handle higher throughput, and thus we did not further evaluate against larger-scale settings (also due to limited time, budget, and compute resources). In reality, the large-scale clusters are managed hierarchically. The most common way in cloud datacenters is to use a two-tier model where a large cluster is divided into a couple of sub-clusters (see [f][g][h][i][59] below). In such a model, one serverless platform can be deployed in one system pool in a sub-cluster. NAC can then be applied to each system pool/sub-cluster and interact with the central manager of the system pool for action actuation. We argue that a two-tier model could potentially solve the scalability issue (as we mentioned earlier). On the other hand, we found that a Non-MFG MARL solution such as MADDPG [j] is already not able to scale beyond 7 or 8 agents in practice (the convergence time substantially exceeds the same training budget compared to our solution). We thank the reviewer for raising this question and we will include the explanations from this comment in the discussion of scaling beyond the serverless platform’s centralized model in the Appendix.\n\n As a final remark from a theoretical perspective, a well-known result from the MFG literature (see, e.g., [49]) says that the mean-field approximation error goes down (in the order of $1/\\sqrt{N}$ where $N$ is the number of agents) as the number of agents increases. In this sense, a larger-scale system will make the MFG approach even more likely to be applicable.\n\nReferences:\n\n[a] Real-world Video Adaptation with Reinforcement Learning. H. Mao, S. Chen, D. Dimmery, S. Singh, D. Blaisdell, Y. Tian, M. Alizadeh, E. Bakshy. ICML 2019.\n\n[b] Neural Adaptive Video Streaming with Pensieve. H. Mao, R. Netravali, M. Alizadeh. 
ACM SIGCOMM 2017.\n\n[c] Learning Scheduling Algorithms for Data Processing Clusters. H. Mao, M. Schwarzkopf, S. Venkatakrishnan, Z. Meng, M. Alizadeh. SIGCOMM 2019.\n\n[d] Park: An Open Platform for Learning Augmented Computer Systems. H. Mao, P. Negi, A. Narayan, et al. NeurIPS 2019.\n\n[e] The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines. Barroso, Luiz André, Jimmy Clidaras, and Urs Hölzle. Synthesis Lectures on Computer Architecture 8.3 (2013): 1-154.\n\n[f] Hydra: A Federated Resource Manager for Data-Center Scale Analytics. Curino, C., Krishnan, S., Karanasos, K., Rao, S., et al. NSDI 2019.\n\n[g] Omega: Flexible, Scalable Schedulers for Large Compute Clusters. Schwarzkopf, M., Konwinski, A., Abd-El-Malek, M. and Wilkes, J. EuroSys 2013.\n\n[h] Apache Hadoop YARN: Yet Another Resource Negotiator. Vavilapalli, V.K., Murthy, A.C., Douglas, C., Agarwal, S., Konar, M., Evans, R., et al. SoCC 2013.\n\n[i] Large-scale Cluster Management at Google with Borg. Verma, A., Pedrosa, L., Korupolu, M., Oppenheimer, D., Tune, E. and Wilkes, J. EuroSys 2015.\n\n[j] Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments. Ryan Lowe, Yi Wu, Aviv Tamar, Jean Harb, Pieter Abbeel, Igor Mordatch. NIPS 2017.", " 4. The reward function consists of two main parts (which are also the two main objectives): one for serverless function performance (meeting the QoS latency corresponds to higher rewards) and the other for function container resource utilization (higher utilization corresponds to higher rewards). For each function, we use the average utilization across all function containers. We use the average because the metric that cloud providers are most interested in is the average resource utilization (see Google’s Datacenter as a Computer [e]). In a nutshell, the design principle of the reward function is to align with the optimization problem’s objective (which is the QoS performance and the average resource utilization).\n5. The control plane of a serverless platform such as OpenWhisk consists of a resource allocator (responsible for allocating resources for each container and scaling the number of containers for a function) and a request scheduler (responsible for assigning requests to a worker/container, load balancing, and admission control). In our case, we did not change the original request scheduling algorithm of OpenWhisk (which tends to send requests to the same worker until it is full, because it tries to pack more load onto one single server). Our approach is orthogonal/agnostic to the request scheduling algorithm: the action from the RL agent goes directly to the resource allocator to adjust the vertical and horizontal scaling, and the request scheduler picks a worker and an allocated function instance to send the request to based on its own static algorithm.\n6. Resource allocation actions from each agent do need some time to be executed/actuated by the resource allocator. In our experiments, we observed that the average latency for horizontal scaling is on the order of 100ms, with the p99 around 280ms, and the average latency for vertical scaling is up to 30ms (by directly writing into the cgroup resource control configuration file). Each RL step, then, spans from the time the action is generated to the time the action is executed and the measurement is taken.
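To make the vertical-scaling actuation in point 6 concrete, a minimal sketch of the cgroup writes is shown below. The cgroup-v1 file names (`cpu.cfs_quota_us`, `memory.limit_in_bytes`) are standard, but the directory layout and the use of cgroup v1 rather than v2 are assumptions about the deployment, not a statement about the authors' exact code.

```python
def set_cpu_limit(cgroup_dir: str, cores: float, period_us: int = 100_000) -> None:
    """Cap a container at `cores` CPUs via the CFS quota (cgroup v1),
    e.g. cgroup_dir = '/sys/fs/cgroup/cpu/docker/<container-id>'."""
    with open(f"{cgroup_dir}/cpu.cfs_quota_us", "w") as fh:
        fh.write(str(int(cores * period_us)))

def set_memory_limit(cgroup_dir: str, limit_bytes: int) -> None:
    """Set the container's memory cap (cgroup v1)."""
    with open(f"{cgroup_dir}/memory.limit_in_bytes", "w") as fh:
        fh.write(str(limit_bytes))
```

Writes like these are synchronous file operations, which is consistent with the sub-30ms vertical-scaling latency reported above.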
That being said, the delay is still there because the action is not immediately applied to the environment (and this problem is a limitation for RL in general which is out of the scope of this paper). Thus, in that sense, we did not explicitly model the delay, and the experimental results illustrate how robust the learned policies are to the delay. In addition, we did not do proactive scaling (same as the single-agent RL approach FIRM) because that requires complicated time series forecasting about the client workload.\n7. In this work, we do not explicitly consider function dependencies. Since our approach is a reactive one that automatically scales based on the observed measurements, function dependency is instead indirectly addressed through the measured performance and utilization metrics. To explicitly model function dependencies, our framework can be potentially applied to FIRM (which does critical component localization first based on dependencies), and we consider FIRM’s critical component localization as an orthogonal and complementary work to ours. In addition, according to the paper from ATC 2020 [56], almost 60% of the workloads contain only a single function, and thus only focusing on single-function applications already accounts for a large portion of cloud workloads.\n", " We thank the reviewer for the valuable feedback, especially the suggestions on improving the scope of our work from multiple system design perspectives. Our detailed responses are as follows. We will also update the paper accordingly to make these points clear. \n\n1. Since multiple existing works have performed the extension to the heterogeneous setting (see references [46, 69, 22]), we have skipped these discussions given the space limitations. Intuitively, we will need to apply our existing proofs to each type of agent, and the main results will still hold as long as there are a finite number of agent types. In the actual implementation, the only difference is that we will select a representative agent for each type of agent in the heterogeneous setting, instead of only selecting a single representative agent as in the homogenous setting. The type classification in resource management is essentially based on the function’s sensitivity to each type of resource (e.g., compute-intensive, memory-intensive, and I/O-intensive). The rest of the implementation remains unchanged.\n\n2. In serverless computing, the cloud provider controls the resource allocation to user-registered functions. As mentioned in Section 1 paragraph 1, each function is associated with user-defined latency QoS which is the performance objective that the user wants to optimize; the provider aims to increase resource utilization by managing the resources (optimally) shared by each function without violating each QoS. As functions from self-interested users compete for shared resources in the cluster, the problem serves as a strong and natural motivating example for our MFG formulation and solution. However, we would like to mention that even though our work is motivated by the resource management problem in cloud serverless platforms, for the sake of generality, we have aimed to make the MFG/methodology part generic enough to potentially cover a wide range of applications that can be formulated with RL, including serverless computing in particular. For example, RL has been applied in systems and networking areas such as ABRL [a], Pensieve [b], Decima [c], and many others (Park [d]). 
Our methodology part is intended to provide a general framework that addresses the scalability challenges for multi-agent scenarios. MFG is a suitable model to circumvent the scalability challenge by approximating the finite-agent game with an infinite-population limit. To demonstrate the effectiveness of our framework, we used serverless FaaS as one specific example, implemented the system, and evaluated the function performance managed by MFG agents. In terms of the assumption from QoS-driven serverless resource management, we have tailored the RL action, state, and reward function to better learn an optimal resource allocation policy and facilitate RL training. We would like to refer the reviewer to Appendix E for a more detailed discussion of the problem formulation under the RL pipeline, which was unfortunately not included in the main text due to space limitations.\n\n Note that since our work is a general MFG framework independent of the underlying RL formulation so long as fitting into a sequential decision-making process, one only needs to change the RL formulation (i.e., state, action, and reward function) to apply to other RL for systems scenarios. We thank the reviewer for raising this question. We will add the explanation from this comment to the next version of our paper to establish a clearer connection between the background and our solution.\n\n3. For vertical scaling, we only consider adjusting the number of CPU cores and the memory capacity because major commercial serverless FaaS platforms (e.g., AWS Lambda, Azure Functions, IBM Cloud Functions) only have the two as possible vertical scaling options, to the best of our knowledge. In addition, according to [55], serverless functions are not sensitive to lower-level resources such as LLC and memory bandwidth, which is also consistent with our findings during experiments. Therefore, we have chosen to include only CPU cores and memory capacity as the vertical scaling target. However, our NAC framework is applicable to more fine-granular resource allocation knobs (as mentioned in #2 by tuning the action and state space as well as the reward function).", " 2. Regarding the validation in large-scale system cases, note that given the serverless platform architecture is a centralized model where a central manager controls all workers, it naturally cannot scale beyond a large number of servers. The central manager is usually the bottleneck for scalability. In addition, we did perform a larger-scale experiment with 120 functions and we found that the percentage difference of the p99 latency for each function between the 20- and 120-agent settings is smaller than 1.9%. We also found that as the number of agents increases, the OpenWhisk controller is not able to handle higher throughput and thus we did not further evaluate against larger-scale settings (also due to limited time, budget, and compute resources). The most common way to deal with scalability in cloud datacenters is to use a two-tier model where a large cluster is divided into a couple of sub-clusters (see [a][b][c][d][59]). In such a model, one serverless platform can be deployed in one system pool in a sub-cluster. NAC can then be applied to each system pool and interact with the central manager of the system pool for action actuation. We argue that a two-tier model could potentially solve the scalability issue (as we mentioned earlier). 
We thank the reviewer for raising this question, and we will include the explanations from this comment in the discussion of scaling beyond the serverless platform’s centralized model in the Appendix.\n\n\nReferences:\n\n[a] Hydra: a federated resource manager for data-center scale analytics. Curino, C., Krishnan, S., Karanasos, K., Rao, S., et al. NSDI 2019.\n\n[b] Omega: Flexible, Scalable Schedulers for Large Compute Clusters. Schwarzkopf, M., Konwinski, A., Abd-El-Malek, M. and Wilkes, J. EuroSys 2013.\n\n[c] Apache Hadoop YARN: Yet Another Resource Negotiator. Vavilapalli, V.K., Murthy, A.C., Douglas, C., Agarwal, S., Konar, M., Evans, R., et al. SoCC 2013.\n\n[d] Large-scale Cluster Management at Google with Borg. Verma, A., Pedrosa, L., Korupolu, M., Oppenheimer, D., Tune, E. and Wilkes, J. EuroSys 2015.\n", " We thank the reviewer for the insightful feedback. Our detailed responses are as follows. \n1. Regarding the potential disjointness between background and methodology, we would like to mention that the serverless resource management problem is a strong motivating application for our MFG framework and NAC solution but our framework could potentially be applied to other applications that can be formulated with RL. In serverless computing, the cloud provider controls the resource allocation for user-registered functions. As mentioned in Section 1 paragraph 1, each function is associated with user-defined latency QoS which is the performance objective that the user wants to optimize; the provider aims to increase resource utilization by managing the resources (optimally) shared by each function without violating each QoS. That’s why “self-interested users compete for shared resources”, to answer your specific question. To see a clearer connection between the specific serverless problem and the generic MFG formulation, we would like to refer the reviewer to Appendix E for a more detailed discussion of the problem formulation under the RL pipeline, which was unfortunately not included in the main text due to space limitations. We model the resource management in a serverless platform as a sequential decision-making problem that can be solved by the RL framework (illustrated in Fig. 6). At each step in the sequence, the RL agent (labeled as <5>) monitors the system and application conditions from both the OpenWhisk data store (labeled as <3>) and the Linux cgroups. Measurements include function-level performance statistics (i.e., tail latencies on execution time, waiting time, and cold-start time for serving function requests) and system-level resource utilization statistics (e.g., CPU utilization of function containers). These measured telemetry data are pre-processed and used to define a state, which is then mapped to a resource management decision by the RL agent. In this model, we consider both vertical and horizontal resource scaling actions (described in Sec. 5.2). The decision made by the RL agent is then passed by the horizontal and vertical scaler to the FaaS controller (<2>), and finally changes the system state and function performance, which finishes an RL state transition.\n\n\n On the other hand, even though our work is motivated by the resource management problem in cloud serverless platforms, we believe that it is also applicable to other RL frameworks as well. For instance, RL has been applied in the systems and networking area (such as video streaming, congestion control, power management, and so on). 
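To illustrate the formulation sketched above, a minimal (and deliberately simplified) example of packing the telemetry into a state and scoring it with the two-part reward follows; the feature list, normalization, and weights are illustrative assumptions rather than the exact design in Appendix E.

```python
import numpy as np

def build_state(exec_p99, wait_p99, coldstart_p99, qos, cpu_util, mem_util, n_containers):
    """Function-level tail latencies (normalized by the QoS target) plus
    container-level utilization, packed into a fixed-length observation."""
    return np.array([exec_p99 / qos, wait_p99 / qos, coldstart_p99 / qos,
                     cpu_util, mem_util, float(n_containers)], dtype=np.float32)

def reward(e2e_p99, qos, cpu_utils, w_perf=1.0, w_util=1.0):
    """Two-part reward: QoS compliance (saturating once the tail latency
    meets the QoS) plus the average utilization across the function's containers."""
    perf = min(qos / e2e_p99, 1.0)
    util = sum(cpu_utils) / len(cpu_utils)
    return w_perf * perf + w_util * util
```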
For the sake of generality, we aimed to make the MFG framework generic enough to potentially cover a wide range of applications that can be formulated with RL. Our aim was to provide a general framework that addresses the scalability challenges of multi-agent scenarios. MFG is a suitable model to circumvent the scalability challenge by approximating the finite-agent game with an infinite-population limit. To demonstrate the effectiveness of our framework, we used serverless FaaS as one specific example, implemented the system, and evaluated the function performance managed by MFG agents. In terms of the assumptions of QoS-driven serverless resource management, we have tailored the RL action, state, and reward function to better learn an optimal resource allocation policy and facilitate RL training. Note that since our work is a general MFG framework independent of the underlying RL formulation, so long as it fits into a sequential decision-making process, one only needs to change the RL formulation (i.e., state, action, and reward function) to apply it to other RL-for-systems scenarios. We thank the reviewer for raising this question and we will add the explanation from this comment to the next version of our paper to build a clearer connection from the background to our solution.\n", " 2. (Continued) Let’s also consider a concrete example where there are two agents in the environment (each trained independently to convergence with a single-agent RL algorithm), each controlling one function. Suppose that given the current state, agent-1 makes the (optimal) decision to scale up CPU shares by 256. When both agent-1 and agent-2 are present (at the same current state), agent-2 also wants to scale up CPU shares, by 512 (simultaneously), which affects the final CPU share ratio for both agents. Then, the previously generated decision by agent-1 (for the single-agent case) is no longer optimal in the two-agent case. In our preliminary experiments, we have observed that the suboptimal policy (from the single-agent RL algorithm) resulted in up to 14x performance degradation compared to our MARL solution in terms of the end-to-end p99 latency. We thank the reviewer for raising this question. In the next version of our paper, we will make it clearer to the reader that single-agent RL algorithms suffer from environment non-stationarity issues and performance degradation in multi-agent settings.\n\n3. We thank the reviewer for asking this clarifying question. We would like to clarify that $\\lambda$ will not appear in Equation (1). Equation (1) is used to show that the regularized NE is a good approximation of the unregularized one. Specifically, it shows that when applying the regularized NE $(\\pi^\\star, \\mu^\\star)$ back to the unregularized MFG, its corresponding value should be close to that of the unregularized NE in the same unregularized MFG. Since the values we compare are both defined based on the unregularized MFG, there is no superscript $\\lambda$ in the notation of the value function $V$. Instead, $\\lambda$ (implicitly) appears in $\\pi^\\star$ and $\\mu^\\star$, because the regularized NE $(\\pi^\\star, \\mu^\\star)$ will be different for different levels of the regularization $\\lambda$. \n", " We thank the reviewer for the valuable comments. Our detailed responses are as follows.\n1. We appreciate the reviewer’s concern regarding the comparison with (Cayci et al., 2021) in terms of theory, but we would like to emphasize that the online (“single-loop”) nature of our MFG approach introduces an additional non-stationarity challenge (which is inescapable in multi-agent scenarios) and makes our analysis very different from a single-agent one (Cayci et al., 2021). Specifically, the reviewer is correct that if one uses a standard “double-loop” mean-field approach, the setting will be very similar to that of (Cayci et al., 2021), because by fixing the mean-field state (population distribution), the learning agent effectively faces a single-agent problem. However, such double-loop solutions are hardly practical, since it is almost impossible to freeze the population distribution in a cloud computing system to let a representative agent learn an optimal policy. Such methods also suffer from undesirable “zigzag” fluctuations, as we demonstrated in our simulations. What differentiates our approach from single-agent policy optimization (Cayci et al., 2021) is the fact that we are considering a more practical online (“single-loop”) mean-field learning paradigm. In our setting, the environment (transition and reward functions) simultaneously evolves as the agents update their policies. The environmental non-stationarity hence adds another layer of complexity, because the agent needs to be aware of the impact it has on the environment when it updates its policy. To deal with this challenge (which does not exist in Cayci et al., 2021), our solution carefully balances the time scales of policy updates and mean-field estimates (in a non-asymptotic manner) to ensure that the policies are still updated in a consistent way even under environmental non-stationarities (primarily due to the actions of other agents). This also explains why our convergence rate is very different from that of single-agent optimization (Cayci et al., 2021). We thank the reviewer for raising this point, and we will make it clearer to the readers in the next version of the paper.\n\n2. We did not present in this paper the comparison results with single-agent RL algorithms proposed for resource management (e.g., FIRM [48], MIRAS [ICDCS 2019], and FaaSRank [74]) for two main reasons. (1) Firstly, those works that proposed single-agent RL solutions have different assumptions from ours, and it would not be an apples-to-apples comparison. In their single-tenant isolated training environment, the agent could train to convergence and achieve optimal performance. Single-agent RL solutions like FIRM assume that the agent is in an isolated environment where there is only the application that the agent manages. However, in our case, functions from all customers compete for shared resources in a cluster and all agents are concurrently interacting with the environment. From the point of view of each agent, the environment is no longer stationary in the multi-agent domain. The single-agent RL solution is not aware of the other agents in the same environment. At the training stage, since the state transitions and rewards depend on the joint actions of all agents, whose policies keep changing during the learning process, each agent enters an endless cycle of adapting to other agents. We observed that the policy learned by an agent could not converge during training.
We appreciate the reviewer’s concern regarding the comparison with (Cayci et al., 2021) in terms of theory, but we would like to emphasize that the online (“single-loop”) nature of our MFG approach introduces an additional non-stationarity challenge (which is inescapable in multi-agent scenarios) and makes our analysis very different from a single-agent one (Cayci et al., 2021). Specifically, the reviewer is correct in that if one uses a standard “double-loop” mean-field approach, the setting will be very similar to that of (Cayci et al., 2021) because by fixing the mean-field state (population distribution), the learning agent effectively faces a single-agent problem. However, such double-loop solutions are hardly practical, since it is almost impossible to freeze the population distribution in a cloud computing system to let a representative agent learn an optimal policy. Such methods also suffer undesirable “zigzag” fluctuations as we demonstrated in our simulations. What differentiates our approach from single-agent policy optimization (Cayci et al., 2021) is the fact that we are considering a more practical online (“single-loop”) mean-field learning paradigm. In our setting, the environment (transition and reward functions) simultaneously evolves as the agents update their policies. The environmental non-stationarity hence adds another layer of complexity, because the agent needs to be aware of the impact it has on the environment when it updates its policy. To deal with this challenge (which does not exist in Cayci et al., 2021), our solution carefully balances the time scales of policy updates and mean-field estimates (in a non-asymptotic manner) to ensure that the policies are still updated in a consistent way even under environmental non-stationarities (primarily due to actions of other agents). This also explains why our convergence rate is very different from that of single-agent optimization (Cayci et al., 2021). We thank the reviewer for raising this point, and we will make this point clearer to the readers in the next version of the paper.\n\n2. We did not present in this paper the comparison results with single-agent RL algorithms proposed for resource management (e.g., FIRM [48], MIRAS [ICDCS 2019], and FaaSRank [74]) for two main reasons. (1) Firstly, those works that proposed single-agent RL solutions have different assumptions from ours and it would not an apple-to-apple comparison. In their single-tenant isolated training environment, the agent could train to convergence and achieve optimal performance. Single-agent RL solutions like FIRM assume that the agent is in an isolated environment where there is only the application that the agent manages. However, in our case, functions from all customers compete for shared resources in a cluster and all agents are concurrently interacting with the environment. From the point of view of each agent, the environment is no longer stationary in the multi-agent domain. The single-agent RL solution is not aware of the other agents in the same environment. At the training stage, since the state transitions and rewards depend on the joint actions of all agents whose policies keep changing during the learning process, each agent enters an endless cycle of adapting to other agents. We observed that the policy learned by an agent could not converge during training. 
(2) Secondly, with the same training budget, we picked the last checkpoint from RL training (though not converged) and evaluated the online policy-serving performance (during the inference stage). We found that due to performance degradation when applying single-agent RL algorithms to the multi-agent domain, the single-agent RL solution is even worse than the heuristics-based approach (our baseline). This is because, at the policy-serving stage, the underlying environment could be updated (by other agents) during the time between when an action is generated (by an agent) and when it is executed by the FaaS controller. We performed a more comprehensive measurement study of applying single-agent RL solutions in multi-tenant serverless platforms in another paper but we did not include that reference to provide anonymity.\n", " We thank the reviewer for the insightful feedback. Regarding the (potential) disjointness between the FaaS problem and the MFG formulation, we would like to mention that serverless computing is a natural application scenario of the MFG formulation, and serves as a strong motivation to study the convergence properties of learning algorithms in MFGs. For the sake of generality, we aimed to make the MFG framework generic enough to potentially cover a wide range of applications that can be formulated with RL (e.g., congestion control, video streaming bitrate adjusting, and power management), including serverless computing in particular. To demonstrate the effectiveness of our framework, we used serverless FaaS as one specific example, implemented the system, and evaluated the function performance managed by MFG agents. To see a clearer connection between the specific serverless problem and the generic MFG formulation, we would like to refer the reviewer to Appendix E for a more detailed discussion of the problem formulation under the RL pipeline, which was unfortunately not included in the main text due to space limitations. Specifically, we have discussed the formulation of resource management in a serverless platform as a sequential decision-making problem that can be solved by the MFG/RL framework (illustrated in Fig. 6). By approximating the finite-agent game with an infinite-population limit, our solution largely resolves the scalability challenge. Our theoretical proof then establishes the finite-time convergence formally and provides support for our proposed solution. We thank the reviewer for pointing out this disconnect. We will make these points clear in the main text of the paper to ensure a smooth transition. We would also be happy to discuss further if the reviewer has other questions or comments about the paper.", " The paper proposes a scalable mean-field game (MFG) approach to the problem of cloud resource management. It introduces an online natural actor-critic (NAC) algorithm for MFGs, which uses function approximation and lets the mean-field state naturally evolve as the agents learn. The authors establish finite-time convergence of NAC with linear function approximation and softmax parameterization, but also implement neural-net-based function approximation. Experimental results on the open-source serverless platform OpenWhisk with real-world workloads from production traces demonstrate that NAC is scalable to a large number of agents and significantly outperforms other common baselines. Strengths\n1. The paper is well written. 
Even the theory parts that are more involved are not hard to follow, and the authors provide adequate references and clarifications to explain the different concepts. I found this very helpful, because the NAC algorithm contains a number of steps such as policy evaluation, gradient estimation, mean-field update and policy update.\n2. The problem that motivates this work is an important real-world problem. Furthermore, an MFG approach is a quite reasonable design choice for this problem, in particular for large numbers of users where collective user behavior can be summarized by a population distribution, and a common policy can be applied to all agents.\n3. I was unable to check the proofs in detail, but the proposed algorithm and the ingredients therein make sense, as they faithfully follow various prior works.\n4. The related work is concise but seems to cover a lot of important and relevant prior works.\n5. The experiments show that the proposed NAC does not suffer from the zigzag fluctuations of the double-loop version of NAC that uses a fixed point iteration. Furthermore, experiments with real-world workloads from production traces show that NAC significantly outperforms other common baselines such as ENSURE and OpenWhisk's original resource manager. \n\nWeaknesses\n1. The novelty is not strong. Convergence of entropy-regularized natural policy gradient with linear function approximation has already been studied by (Cayci, He, and Srikant, 2021). And indeed this work builds very heavily upon this specific work (which, incidentally, is a pre-print and has not been accepted to peer-reviewed venues yet). The main difference is that NAC is now applied to an MFG setting, where the main solution concept is that of a Nash Equilibrium. Still, due to the mean-field approximation, the agent essentially faces a single-agent policy optimization problem, and the game-theoretic setting roughly becomes equivalent to a single-agent Markov decision process. That said, I do not claim that the two works are identical, since they deal with a different setting; however, they do overlap significantly.\n2. In the experimental evaluation, NAC is only compared to two other heuristic-based baselines. It would have made much more sense to also compare against other RL-based schemes. As the authors point out in the related work, there are various RL-based frameworks for scheduling or resource management (e.g., FIRM, FaaSRank, etc.). I feel that the authors would have made a much stronger point if they were able to show that the NAC algorithm can outperform other RL-based competitors (even by focusing on a simpler objective such as minimizing latency only). Even if some approaches are single-agent, the authors could also experiment with multi-agent RL-based baselines to showcase the benefits of their framework (even in a basic setup to demonstrate the scalability challenges with traditional multi-agent approaches). 1. 
Why have the authors not included any RL-based baseline (except for the double-loop version of NAC)? Multi-agent RL-based approaches without the mean-field assumption may suffer from limited scalability, so the authors could show the benefit of MFG NAC in a rather basic setting. Is it also possible to compare single-agent RL-based approaches (e.g., FIRM) to the proposed framework, e.g., by looking at the problem from the perspective of the single agent?\n3. In Equation (1), right in the middle, do the authors perhaps mean V^{\pi,\lambda}_{\mu^*}(\rho) instead of V^{\pi}_{\mu^*}(\rho)? Shouldn't the \lambda appear somewhere?\n\n---AFTER REBUTTAL---\nThe authors addressed my concerns; I am therefore willing to raise my recommendation to Weak Accept. The authors have addressed the limitations of their work.", " The competition and scheduling of cloud service resources is a significant research problem with economic value. Due to the emergence of new cloud service paradigms such as serverless computing, it is natural to consider the aid of multi-agent reinforcement learning (MARL) to address new management challenges. To this end, the authors design a mean-field game (MFG) approach that can efficiently manage large-scale cloud users and applications. They propose an online natural actor-critic algorithm with function approximation for mean-field games and theoretically demonstrate the finite-time convergence of the algorithm. The study evaluates the solution's effectiveness on OpenWhisk using workloads captured from real production systems, showing advantages in scalability, latency, and resource utilization. Strengths:\n- The cloud resource competition and scheduling problem has real research significance and economic value\n- Comprehensive theoretical basis and proof of properties\n- The efficiency advantage of the proposed solution makes it adaptable to real system scenarios\n\nWeaknesses:\n- The background and problem formalization are insufficient, leaving the problem background and the methodology disjointed\n- Validation in large-scale system scenarios/cases Overall, this paper constructs a comprehensive multi-agent reinforcement learning system and demonstrates the properties of online natural actor-critic algorithms in detail. Still, the most prominent weakness is the lack of necessary problem formulation (i.e., cloud resource management). Therefore, at the very beginning of the paper, it is suggested to supplement a problem case study and a formalization that casts the resource scheduling problem as a multi-agent reinforcement learning process in the context of a serverless computing scenario. This issue involves a lot of real details and needs to be explained. For example, cloud users do not need to manage the computing resources in a serverless computing scenario, so how can self-interested users compete for resources? Not applicable.", " The paper presents a mean field game approach to the many-selfish-agent problem. The paper contextualises this around the resource management problem for FaaS before moving on to a theoretical evaluation of the problem and finishes with experimental results. The paper presents a clear introduction to the FaaS problem along with a detailed mathematical analysis of the MFG solution. \n\nHowever, the paper is very dense and feels like two papers rammed into one. The introduction is all about the Cloud and the FaaS problem. The next section of the paper is a dense mathematical work on the MFG problem - with no mention of the FaaS problem. 
The FaaS only returns as one of the examples in the results section. It would have probably been better to have couched this paper as a fully theoretical paper and just keep the FaaS as one of the results examples. This would have given more space for covering the 'middle paper'. This could have helped in making it clear what the 'NeurIPS' element of this work was. The paper is in general well written. No small questions. This was not discussed.", " This paper aims to solve the problem of improving latency and resource utilization in serverless platforms. The paper proposes a mean-field game approach where they design a natural actor-critic (NAC) learning paradigm for MFGs with function approximation. The evaluation shows the convergence of their approach, and shows some latency and resource efficiency improvement on the OpenWhisk platform.\n\nNote: I am not an expert on the mean-field game theory side so I am not sure about the theoretical contribution, but I am confident in my evaluation of the serverless system side. I appreciate the authors' effort to try to formalize the resource allocation problem in serverless.\n Strengths\n1. Interesting approach to map the resource allocation (or autoscaling) problem in serverless to MFGs, and the authors try to solve the system challenge in a theoretical way.\n2. The paper applies this approach to a real system (OpenWhisk).\n\nWeaknesses\n1. The mapping between the idealized theoretical setup and the real system is a bit vague. E.g., the major part considers homogeneous agents and only the OpenWhisk experiments consider heterogeneous agents.\n2. The paper does not evaluate or prove the scalability of the proposed algorithm.\n 1. The majority of the paper assumes homogeneous agents, and the theorems and proofs are all based on this assumption. However, the evaluation on OpenWhisk assumes heterogeneous agents. Could you provide more details on how you adapted the algorithms for real deployment? And are those proofs still valid for heterogeneous agents? Please clarify.\n2. It is unclear in the paper why this MFG approach is uniquely tailored for serverless. The theorems did not include any assumptions from the serverless system.\n3. This paper only considers a very coarse-grained resource allocation type – vertically adding CPUs or memory, or horizontally adding containers. How does it scale to many different resource allocation knobs such as CPU frequency, memory bandwidth, I/O bandwidth, etc.? \n4. The reward function in Line 332 seems to be constructed naively by averaging CPU and memory utilizations; could you clarify that?\n5. How does the resource allocation part interact with the scheduler? Do you use round-robin or least-loaded worker? How do you decide which functions are co-located in one worker (function placement issue)? Those are unclear in the paper.\n6. Usually resource allocation needs some time (seconds to minutes) to take effect, so the reward would only be observed after a certain delay. Therefore, resource allocation needs to be proactive, predicting what will happen next. However, the algorithm does not seem to consider such delay.\n7. In prior work such as FIRM, they consider a workflow of serverless functions and incorporate dependencies between functions. How does MFG work considering dependencies?\n8. Though the authors claim that scalability is the main issue of resource allocation in the cloud, the paper does not provide proof or evaluate the scalability of their approach. 
In the evaluation, only 50 functions are tested, which indicates only 50 agents are running concurrently. But the production environment usually serves thousands of different functions simultaneously.\n The authors discussed some limitations and potential societal impacts in the paper. I found some limitations after reading the paper (mentioned in the above Questions section). " ]
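A schematic reading of the Equation (1) clarification in the author responses above, stated in display math; the bound \varepsilon(\lambda) and the exact arrangement are hedged reconstructions from the rebuttal's wording, not the paper's actual Equation (1):

```latex
% Both values below are defined in the UNregularized MFG, so no \lambda
% superscript appears on V; \lambda enters only implicitly through the
% regularized NE pair (\pi^\star, \mu^\star), which changes with the
% regularization level.
\[
  \max_{\pi} V^{\pi}_{\mu^\star}(\rho) \;-\; V^{\pi^\star}_{\mu^\star}(\rho)
  \;\le\; \varepsilon(\lambda),
  \qquad \varepsilon(\lambda) \to 0 \ \text{as} \ \lambda \to 0 .
\]
```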
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 2 ]
[ "RXmjQBk6v9f", "Srpze0q2Fmm", "n5ThODp-KED", "uP1olfpnIbL", "tl512tHeJCH", "wrxrVmFBlL", "nrJ8aC7-Hv", "9aQuAoHcu9R", "w0DRwt-4Kp", "dgUbSxAFuKG", "4cqXLWViweC", "nips_2022_E3LgJdPEkP", "nips_2022_E3LgJdPEkP", "nips_2022_E3LgJdPEkP", "nips_2022_E3LgJdPEkP" ]
nips_2022__vfyuJaXFug
Translation-equivariant Representation in Recurrent Networks with a Continuous Manifold of Attractors
Equivariant representation is necessary for the brain and artificial perceptual systems to faithfully represent the stimulus under some (Lie) group transformations. However, it remains unknown how recurrent neural circuits in the brain represent the stimulus equivariantly, nor the neural representation of abstract group operators. The present study uses a one-dimensional (1D) translation group as an example to explore the general recurrent neural circuit mechanism of the equivariant stimulus representation. We found that a continuous attractor network (CAN), a canonical neural circuit model, self-consistently generates a continuous family of stationary population responses (attractors) that represents the stimulus equivariantly. Inspired by the Drosophila's compass circuit, we found that the 1D translation operators can be represented by extra speed neurons besides the CAN, where speed neurons' responses represent the moving speed (1D translation group parameter), and their feedback connections to the CAN represent the translation generator (Lie algebra). We demonstrated that the network responses are consistent with experimental data. Our model for the first time demonstrates how recurrent neural circuitry in the brain achieves equivariant stimulus representation.
Accept
The paper constructs recurrent neural circuits that represent stimuli equivariantly with respect to a given symmetry, taking the example of the 1D translation group. Most Reviewers were positively impressed by the general framing of the problem in terms of group theory and the elucidation of a connection between Continuous Attractor Networks and Lie groups. The main weakness pointed out by the Reviewers was that the high-level exposition of the paper tended to conceal the distinction between novel original contributions and the previous literature. This concern was however resolved in the rebuttals in a way that seems to satisfy all Reviewers. Further concerns about the biological plausibility of the proposed general construction and the potential difficulty of extending it beyond the restricted 1D case analyzed in the paper were also raised by several comments in the reviews, but also in this case the clarifications in the rebuttals seemed to convince Reviewers, who by and large expressed optimistic views about the significance of the work for its potential for future developments and applicability for modeling other neural systems.
train
[ "Ls4JqrnHKlB", "akCK7uZpfe", "spPuxrMYP2", "yPoM_UHVRt4", "LTHpjhS3SZ7", "EYWPN5QzOB0", "LideWf1GSS0L", "q3tERdXMRaL", "Do5YKCcthOK", "pSqGTJ4eHjR", "-hBUcuS8RZH", "B_XkWhXjuzF", "BhssC8rjE_", "wpRyVOiyDSi" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your responses. I appreciate the additional discussions that the authors have promised. It is clear to me that this submission is an important contribution.", " Thanks for the reviewer's suggestion. We will definitely comment on the first paper in our revised manuscript.\nAlso, the temporal scaling is quite relevant to our future work and it is pretty interesting to study the temporal scaling in recurrent dynamics from a scaling group point of view. ", " This seems *very* relevant to the discussion of grid cells and should probably be cited in the paper.\nhttps://link.springer.com/article/10.1007/s10827-020-00742-9\n\nNot necessarily for this paper, but the authors may be interested in work by Lindeberg and colleagues on temporal scaling. For instance:\nhttps://link.springer.com/article/10.1007/s10851-015-0613-9 \n\n", " Thank you for your clarifying the novelty and main contributions of your paper. Reading through the paper again with these clarifications was very helpful and I now believe this is an important contribution. I have raised my score accordingly.", " Thank you for your insightful comments and valuable suggestions. We will revise our paper accordingly. \n\n### Comparison with previous works\n\nThere were earlier works having similar ideas that a separate population of neurons representing transformations, e.g., grid cells represent the 2d translation of positions (e.g., Stachenfeld, Nat. Neurosci., 2017; Whittington et al., NeurIPS 2018;), but to our best knowledge, none of the previous works used theoretical analysis which directly starting from Lie group to derive a concrete, biologically plausible nonlinear recurrent circuit dynamics.\n\n### Multiplicative modulation\n\nThank you for the interesting suggestion that shunting inhibition might be a way to achieve multiplicative modulation. We will consider and discuss this possibility in our revision. Another reason motivating us to consider the multiplicative modulation of speed neurons’ firing rate is from the observation of drosophila. We believe that the multiplicative modulation of firing rate results from inhibitory neurons in the circuit, as a form of gain control widely observed in cortical circuits.\n\n### Neural variability and heterogeneity\n\nTo emphasize the main mechanism and simplify the analysis, we consider homogeneous neurons (lines 93) as well as no internal neural variability in the network. We do acknowledge that real cortical circuits have heterogeneity among neurons and there exists large neuronal response variability. It has been suggested that the CAN is resistant to input noise by removing the noise component that is perpendicular to the attractor manifold (Deneve, Nat. Neurosci., 1999; Wu, Neural Computation, 2002), and then the equivariant representation can be still held. The heterogeneity will break the perfect equivariant representation in recurrent network. One possibility is that the neural heterogeneity and neural noise can help the brain implement sampling-based Bayesian inference (e.g., Orban, Neuron 2016; Echeveste, Nat. Neurosci., 2020), where the instant state of the CAN and the activity of speed neurons represent the samples of head direction and rotation speed, respectively (see more details in our reply to Reviewer 2shN).\n\n### Generalization to grid cells\n\nThank you for the useful references, and for pointing out the complexities of place cells and grid cells. We shall add more discussions about place cells and grid cells in the revised manuscript. 
\n\nA recent study (Gardner, et al., Nature 2021) reported that the population activities of grid cells appear to reside in a 2D torus manifold, which is a finite 2D domain with periodic boundary conditions. CAN based on 2D torus has also been proposed in the literature (e.g., Burak and Fiete, PLoS Comput. Biol. 5, 2009). The 2D torus is the 2D version of the ring (which is 1D torus) that is being studied in present study. The 2D additive group with clock arithmetic (due to periodic boundary conditions) is a Lie group and is still abelian. The grid periodicity may be caused by \"rolling\" the 2D torus over the 2D spatial domain. Therefore it is possible to generalize our current theoretical framework to the 2D domain to partially explain the properties of grid cells in future work. \n\nMeanwhile, we also notice there is a separate population of grid cells that are more selective to speed (Sun, PNAS 2014). Besides, the continuous spectrum of speed neurons’ time constant might form a basis to explain the temporal process, which may potentially be explained by a temporal scaling group.\n", " We are deeply grateful for your positive review and insightful comments. We shall follow your valuable suggestions to improve the presentation of our paper. \n\n### 1. About non-Gaussian profile. \n\nThis is an interesting point. The profile of tuning curve is determined by the recurrent connection profile (Eq. 9) in the network model. In principle, we can vary the recurrent connection profile, to let the recurrent circuit self-consistently generate different tuning curves. \nIn particular, the high kurtosis profile may be considered a scale-mixture of Gaussians, i.e., Gaussians with different scales. And then the recurrent connections could also be set as the mixture of many Gaussian connection profiles to generate the high kurtosis tunings. We shall explore this possibility in future work. \n\nMathematically, we can use the math trick from the conjugate prior in Bayesian statistics to ensure that the recurrent connections and the firing rates of neurons are in the same family of profiles, e.g., Gaussian or generalized Gaussian.\n\nIn the present study, the neuronal selectivity does not matter when there are an infinite number of neurons. But if the tuning curve is monotonic, then the network dynamics will be heavily affected by infinite number of neurons, resulting in that the integral of the monotonic tuning diverges.\n\n### 2. Improving the notation. \n\nThank you so much for the detailed suggestions on improving the notation and presentation. We will follow your suggestions in revising our paper. We will differentiate the notation for discrete and continuous variables clearly in the revised manuscript.\n\n### 3. About the infinite number of neurons. \n\nThere are two main reasons for considering infinite number of neurons in the present study. First, the 1d translation operator is defined to act on a continuous function $\\bar{u}(s)$ (Hilbert space, lines 25), which is interpreted as the network response. Second, considering the limit of infinite number of neurons brings a lot of benefit to theoretical analysis, e.g., multiplication of the recurrent connection matrix with neuronal firing rates can be treated as an integration. In the simulation, as long as the number of neurons is not too small, the simulation results can be approximately regarded as achieving equivariant representation. 
But if the neuron number is too small, the network responses cannot translate along the attractor manifold smoothly and will exhibit a zigzag effect. \n\nThe Drosophila's heading direction circuit contains about 40 E-PG neurons, as mentioned by reviewer 2shN. For a finite number of neurons, the network model can still have equivariant representation, except that the profile and the convolution kernel may slightly deviate from Gaussian. The profile and kernel may be obtained by numerical optimization, as suggested by recent work (Norman et al., bioRxiv 2022). Note that Norman's study did not formulate the problem from a group-equivariant-representation point of view.\n\nWe shall discuss this point in the revised manuscript. \n\n### 4-5 typos \nIn line 22, it should be “instantaneous orientation”. We will correct the typos in the revised manuscript. Thank you!\n", " \n### On the multiplicative modulation \n\nWe are concerned about whether multiplicative modulation directly on synaptic weights is biologically plausible, and whether this modulation is fast enough to match the neuronal dynamics. Therefore, we consider multiplicative modulation on the firing rate of speed neurons, which is supported by the Drosophila experiment. In reality, multiplicative modulation of neuronal firing rate can be done by inhibitory neurons in the circuit, in the form of gain control as widely observed in experiments.\n\n### On the title of the paper\n\nBased on your comment, we would like to narrow down the title of the paper to be “Translation-equivariant representation in recurrent networks with a continuous manifold of attractors”. In the brain, there exist 1D CANs (orientation, head direction, etc.) and 2D CANs (spatial location), and rarely 3D CANs. Our results can be extended to 2D CANs.\n\n### Reply to the reviewer's question about the challenges the authors faced.\nThere are several challenges we face in extending our current work. For example, our work assumes an infinite number of neurons to simplify the theoretical analysis, and it is difficult to obtain closed-form solutions for a finite number of neurons, although numerical approaches work (see our detailed reply above). Moreover, our work does not consider noise and neural heterogeneity, and it is a fundamental question to study how the equivariant representation is compatible with those features (see our detailed reply above). Besides, we are still working on extending our method to explain grid cells and place cells, and/or developing a biologically plausible recurrent circuit model to represent a *non-commutative* group structure.", " Thank you for your insightful comments and helpful suggestions on improving the presentation of the paper. We shall revise our paper by following your suggestions. \n\n### On the novelty and main contributions of the paper\n\nThe novelty and main contributions of this paper are that it provides overarching connections between the Lie group, the continuous attractor network (CAN), and the Drosophila neural circuit, and demonstrates for the first time how recurrent neural circuitry in the brain implements equivariant stimulus representation. 
Here, we present a new perspective from the Lie group theory to understand the CAN as a way for the brain to implement equivariant representation of features: we start from the requirements of equivariant representation to derive step by step how the structure of the CAN and the extra speed neurons for realizing translation operators are determined.\n\n Specifically, the novelty of our study includes:\n- We start from Eq. 5 (Lie group) to derive Eqs. 21 and 22 (the model of the compass circuit) to show that the compass circuit is able to achieve equivariant representation.\n- Eq. 22 is a novel theoretical result that gives a theoretical prediction of implementing path integration. This theoretical result is not presented in earlier works, e.g., Refs. 30-33.\n- The network model in Sec. 4 is directly derived from the Lie group. Although a similar model was proposed in Refs. 20 and 28, the earlier work did not study the Lie group.\n- We formulate the concrete path integration problem in neuroscience using the concept of commutation in the Lie group.\n\nWe believe the present work gives new insight into understanding the well-established CAN dynamics and the Drosophila's compass circuit, and motivates us to apply group theory to interpret more brain functions (see discussions in lines 290-295). We will revise our paper to further clarify and emphasize our contributions relative to past work. \n\n### Comparison to previous works\n\nDue to the page limit, we did not systematically compare our work with previous works in detail. To our best knowledge, no previous works connected the Lie group to a well-established recurrent circuit model (i.e., the CAN), nor to a concrete neural circuit in the brain. \n\nSpecifically, the network model in Sec. 3 is inspired by Ref. 27, the model in Sec. 4 is similar to Refs. 20 and 28, and the model in Sec. 5 is similar to Refs. 30-33. However, previous works have mainly focused on studying the dynamics of the CAN and its roles in brain functions, and have not interpreted the CAN from the point of view of group equivariant representation.\n\n### On finite number of neurons\n\nThank you for pointing out this important issue. Indeed, there are only 40 E-PG cells in the Drosophila head-direction circuit. For the convenience of theoretical analysis, we have considered an idealized CAN with an infinite number of neurons, with the result that the translation operator (Eq. 1) acts on a continuous function (Hilbert space, lines 25). \n\nFor a finite number of neurons, the network model can still have equivariant representation, except that the profile and the convolution kernel may slightly deviate from Gaussian. The profile and kernel may be obtained by numerical optimization, as suggested by a recent work (Norman et al., bioRxiv 2022). Note that Norman's study did not formulate the problem from a group-equivariant-representation point of view. We will discuss this point in the revision. \n\n### On the neuronal response variability\n\nTo highlight the basic mechanism of implementing equivariant representation, the current study has not included neural noise. It has been suggested that the CAN is resistant to input noise by removing the noise component that is perpendicular to the attractor manifold (Deneve, Nat. Neurosci., 1999; Wu, Neural Computation, 2002), and then the equivariant representation can still hold. \n\nClearly, the noise will break the perfect structure of equivariant representation in a CAN. 
It has been suggested that the internal noise is essential for the brain implementing sampling-based Bayesian inference (e.g., Orban, Neuron 2016; Echeveste, Nat. Neurosci., 2020), where the instantaneous state of the CAN and the activity of speed neurons represent the samples of head direction and rotation speed, respectively. There are recent studies suggesting that a CAN with internal Poisson-like variability performs sampling-based inference on the attractor manifold (Zhang, et al., bioRxiv 2020). \n", " ### Reply to the reviewer's questions\n\n1. The missing citation in line 37: Deneve, J. Neurosci., 2007.\n\n2. The derivation of Eq. 7 from Eq. 6 relies on two assumptions, as listed in lines 93 – 95. We assume homogeneous neurons in the population (line 93), i.e., every neuron has the same profile of tuning curve, and the preferred stimuli $x_j$ of all neurons are uniformly distributed in the stimulus space. The two assumptions are heavily used in neural coding studies (e.g., Pouget, Neural Computation 1998; Dayan&Abbott, 2001; Wu, Neural Computation 2002).\n\n3. The $u(x-s)$ is just a change of notation for $\bar{u}(x)$, in order to emphasize that the neuronal response depends on the difference between x and s.\n\n4. The discussion of introducing an external input is presented above.\n\n5. The free parameter s in line 131 means that whatever the value of s is in Eq. 10, $\bar{u}(x-s)$ and $\bar{r}(x-s)$ are the stationary states of Eqs. 8a-8b. This indicates that the network holds a continuous manifold of attractors, and the location of each attractor in the attractor manifold is characterized by s. This is a unique property of continuous attractor networks.\n\n6. Yes, Eq. 13 is derived from Eq. 5: exponentiating Eq. 5 gives $\hat{T}(a)=\exp(a\hat{p})$.\n\n7. The $\tau v \partial_x W_r$ in Eqs. 15 and 16 is replaced by the rightmost term in Eq. 19a. If we set the $\tau$ on the leftmost side of Eq. 19b to zero, Eqs. 19a-b will be mathematically equivalent to Eqs. 15 and 16. The introduction of Eq. 19 is to replace the multiplicative modulation on the synaptic weight by the multiplicative modulation on the firing rate, while the latter, we think, can be more easily implemented by neural circuits and also has direct evidence from the Drosophila experiment.\n\n8-9. We will fix the typos in the revised manuscript.\n\n10. The notation $U_j$ in Eq. 6 is not a union, but a scalar representing the peak firing rate (Eq. 92).\n\n11. We will fix the different definitions in Eqs. 7 and 10. They only differ by a coefficient in the denominator inside the exponential function, and this difference does not change our later results.\n\n", " Thank you for the insightful review and interesting questions. We shall follow your suggestions to improve the presentation of our paper. \n\n### Novelty and main contributions\n\nThe novelty and main contributions of this paper are that it provides overarching connections between the Lie group, the continuous attractor network (CAN), and the Drosophila neural circuit, and demonstrates for the first time how recurrent neural circuitry in the brain implements equivariant stimulus representation. 
Here, we present a new perspective from the Lie group theory to understand the CAN as a way for the brain to implement equivariant representation of features: we start from the requirements of equivariant representation to derive step by step how the structure of the CAN and the extra speed neurons for realizing translation operators are determined.\n\n Specifically, the novelty of our study includes:\n- We start from Eq. 5 (Lie group) to derive Eqs. 21 and 22 (the model of the compass circuit) to show that the compass circuit is able to achieve equivariant representation.\n- Eq. 22 is a novel theoretical result that gives a theoretical prediction of implementing path integration. This theoretical result is not presented in earlier works, e.g., Refs. 30-33.\n- The network model in Sec. 4 is directly derived from the Lie group. Although a similar model was proposed in Refs. 20 and 28, the earlier work did not study the Lie group.\n- We formulate the concrete path integration problem in neuroscience using the concept of commutation in the Lie group.\n\nWe believe the present work gives us new insight into understanding the well-established CAN dynamics and the Drosophila's compass circuit, and motivates us to apply group theory to interpret more brain functions (see discussions in lines 290-295). We will revise our paper to further clarify and emphasize our contributions relative to past work. \n\nThe present work is postdicting available evidence by using the 1D translation group. In the future, we believe that we can use other groups to predict neural circuits (discussed in lines 290-295). \n\n### Comparison with previous works\n\nDue to the page limit, we did not systematically compare our work with previous works in detail. To our best knowledge, no previous works connected the Lie group to a well-established recurrent circuit model (i.e., the CAN), nor to a concrete neural circuit in the brain.\n\nSpecifically, the network model in Sec. 3 is inspired by Ref. 27, the model in Sec. 4 is similar to Refs. 20 and 28, and the model in Sec. 5 is similar to Refs. 30-33. However, previous works have mainly focused on studying the dynamics of the CAN and its roles in brain functions, and have not studied the CAN from the point of view of group equivariant representation.\n\n### Introducing external input to the neural circuit\n\nOur work emphasizes an internal mechanism which actively induces the translation of neural representation (continuous attractors), and hence the external input is not included for the sake of simplicity. We are also interested in introducing external inputs representing the instantaneous stimulus direction and the instantaneous velocity into Eqs. 19a and 19b, respectively. Some examples can be found in earlier results (Refs. 30 and 33). Specifically, an external input which has a bump profile located at the instantaneous stimulus direction can be introduced in Eq. 19a. And the external speed input can be introduced to all speed neurons; e.g., Ref. 30 introduced to all speed neurons a spatially constant term whose value depends on speed. \n\nFrom the Drosophila study, we know that the external input to speed neurons conveys the speed information, which can be computed from sensory inputs or inherited from the efference copy of the motor command. 
It is also possible that the connections from the CAN to speed neurons also help to compute the speed information, but further experimental evidence is needed.\n\n", " The paper introduces a translation-equivariant neural model where the neural response to a stationary stimulus is equivalent to convolving with a Gaussian; however, through recurrence, the response is maintained in the absence of a stimulus. It introduces the use of Lie algebra for deriving the translation generator.\n\n\n Strengths:\n- the framing in terms of group theory, equivariant representations, and the exposition is fairly clear. \n\nWeaknesses:\n- it is unclear to me what parts of the paper are mostly for exposition (drawing on previous work of Kechen Zhang), and what parts are novel contributions. Most of the paper consists of theoretical derivations, and it’s often not made clear where results were proved in other papers.\n- given the focus on the 1D case I think the title is a bit too general.\n\nFor example:\n- Section 3.2 shows that the fixed points of continuous attractors are equivariant representations, but I’d be surprised if this result was not already known or demonstrated\n- Section 4.1 uses the derivative as the translation operator, as does Kechen Zhang’s (1996) paper, which is cited here but not really discussed. It seems that the real contribution is the connection between this derivative and group actions, but I wish this were more clearly laid out.\n\nAnd some thoughts on the circuit model:\n- The discussion of biological (im)plausibility seems weak. There’s not really a problem of multiplicative modulation — synaptic weight multiplied by variable input would do fine (as the authors do in eq 19b). And the other problem they mention — Dale’s law — is easily circumvented by having excitatory and inhibitory neurons, again as the authors do.\n- The more critical biological concerns, I think, are (1) biological noise (spike timing, synaptic noise, etc.) and (2) that the Drosophila E-PG cells number about 40, whereas the attractors considered are in the infinite-neuron limit. I think some acknowledgement of these points/experiments proving the analysis holds up under these constraints would strengthen the paper.\n- in particular there’s a fair amount of literature on CANs and how they deal with noise, drift, and limited precision (e.g. Burak & Fiete, 2009) — and it’s surprising to see very little of this discussed here.\n\nGiven that velocity-controlled inputs are already the inputs to many CANs for path integration, and examples beyond integrators aren’t discussed here, it seems the main contribution is an interpretation of/twist on existing models, rather than a new model per se. For this reason, I would have appreciated more suggestions on what to do with this Lie group understanding: whether on novel neural circuits, new neural mechanisms that implement the transport operator, or even learning of group structure.\n\n I am curious about challenges the authors faced - i.e., what did not work?\n general limitations, see above.\nsocietal impact: n/a\n", " The authors provide an exciting extension to the classic bump circuit model [20]. The work provides a connection between Lie group translations and continuous differential recurrent circuits (or continuous attractor networks, CANs) to demonstrate that CANs can effectively encode a continuous variable shift, such as position. 
They derive the circuit equations, implement numerical simulations that match the theoretical assumptions, and suggest modifications to more appropriately accommodate known biological constraints. **Originality:** Of course the concept of using a recurrent circuit to encode an environment variable, such as position, is not new. Additionally, the application of Lie algebra to achieve equivariance in neural circuits is not new. However, as the authors point out and as far as I know, the connection that they provide between these two concepts is both novel and exciting. I am fairly confident that the additional clarity and rigor that the present study provides over previous work in the space is an original contribution.\n\n**Quality:** I believe the mathematics are technically sound, and the supporting experiments demonstrate the applicability of the theory. I have some minor comments below about including more discussion on weaknesses in applying the theory.\n\n**Clarity:** I include minor comments below to improve clarity, but overall the exposition is clearly communicated.\n\n**Significance:** I believe the submitted work is a significant theoretical contribution. It draws on several ideas from iconic work in neural coding [20], and the application of Lie algebra lends itself to several future extensions.\n 1) Implications of the Gaussian tuning curve assumption (lines 87-88) – A Gaussian profile is certainly a fair assumption; it is used in a majority of neural sensitivity research. But there’s also good evidence to suggest that at least visually selective neurons (probably others as well) are better fit by leptokurtic tuning curves, which demonstrate an increased selectivity for preferred stimuli [1]. I think this could be covered in a future publication (i.e. is not necessary to implement in the rebuttal phase), but I am curious if you have thoughts on how one might apply the derivation in section 2 of the appendix to a generalized normal distribution? Additionally, does neuron selectivity have an impact when you are considering an infinite set?\n\n2) notation clarifications – I initially ran into some confusion with the notation, which I think could have been avoided with a more gentle introduction at the top of section 2. \n\n2.a) There are a lot of conceptually overlapping variables that correspond to the stimulus & its perturbations (s, s’), neuron preferences & their perturbations (x, x’), the addition of an index when referring to individual neurons (x_j), and resultants from perturbing a stimulus (e.g. ‘a’ in T(a)*s = s+a). I understand the need for all of them, but an early exposition on what they are to orient the reader would have helped me.\n\n2.b) I also had confusion coming from switching between a discrete (countable) notation for the neurons and a continuous notation based on the preferred stimulus. As some examples of the former, consider the pictorial description in Fig 2.a; the use of the term “index” in the horizontal axis label in fig 2f; and the use of the subscript j in equation 6. An example of the latter is found in the use of an integral over alternate stimuli in the denominator of equation 8b, and the explanation of an “excitatory neuron preferring stimulus s=x” on lines 112-113. 
This is explained briefly on lines 121-126, although there you walk back the infinite neuron description and describe it as “a large population of neurons.” I am not suggesting that any of this is wrong, per se, but more recommending that you spend a moment earlier (for example near line 86 or at the top of section 2) to explain what is going on here and when/why you will be flexible with notation.\n\n3) What are the empirical implications of the infinite neuron assumption? I would appreciate a discussion at the end of the paper on what it means to implement such a system with a finite set of neurons. I may have missed this in the paper, but what were the implementation details for the experiments shown in figures 3 and 4? How does the number of neurons (i.e. resolution of the discretization of the stimulus space) influence the analysis?\n\n4) Line 22 “decode the instant orientation” – did you mean instantaneous?\n\n5) Typo on line 35 of the supplement – “clear to se”\n\n[1] Ringach, D. L., Shapley, R. M., & Hawken, M. J. (2002). Orientation selectivity in macaque V1: Diversity and laminar dependence. Journal of Neuroscience, 22(13), 5639–5651.\n The authors note limitations in sections 5 and 6. I note some questions that touch on limitations of the presented work (e.g. the mismatch between theory & experimentation).", " Many artificial neural networks should obey symmetry relationships. This paper derives a translation operator for a ring attractor network inspired by recent findings from the Drosophila head direction system. The resulting system is designed to be equivariant to translation and is at least roughly neurobiologically plausible.\n Strengths:\nThis paper strikes a nice balance between abstractness and specificity. The math is initiated at a very high level, but is then implemented for a very specific circuit.\n\nThe model makes a number of predictions that could be tested with existing technologies.\n\nWeaknesses:\nMy major concern about this paper is that perhaps something similar was already worked out, perhaps for the grid cell system, but I'm unaware of that work.\n \n35: ...explicitly represent...\n77-8: ...the above derivation...\n92: Not convinced it's strictly necessary that the units have identical tuning curves. Agree that it's convenient.\n115: Divisive normalization...\n138: ...disrupted by noise.\n171: ...utilize the ...\n213: Don't know that much about Drosophila specifically, but in general, multiplicative modulation of weights is not challenging in neurobiological systems. Keywords are ``shunting inhibition'' and ``cortical gain control''. \n\nWould be good to note that there are in fact speed cells in the grid system, e.g., Sargolini et al., (2006). However, their firing rate profiles are relatively complex (see for instance, Dannenberg, et al., 2019, J Neuro; 2020, eLife).\n\nAuthors might be interested to compare these results to Tanaka & Nelson (2019, PRE). In thinking about 3-D transformations, Shepard (1994, Psychonomic Bulletin & Review) may be helpful.\n\n \nThe paper is far too sanguine about extending this approach to place cells/grid cells. This is likely to be much more complicated than one might expect from this paper. Place cells do not monotonically map the 2-D environment. Locations nearer to reward locations and boundaries are overrepresented. In addition, the grid system is much more complicated than a 2-D surface. 2-D position is represented in conjunction with head direction and speed. 
Moreover, there appears to be a continuous spectrum of time constants in the speed system (Dannenberg, et al., 2019).\n", " The authors propose a recurrent neural circuit model with a continuous attractor state that models the heading direction system in Drosophila and includes a circuit mechanism for translating the heading direction at arbitrary speeds. Moreover, their model is closely related to the Drosophila heading direction circuit. Originality: As someone who is not an expert in the field of continuous attractor networks, it is difficult for me to assess the originality of this work, in part because it was not clear what exactly the authors' contributions were. My impression is that the main contributions were stated in sections 4 and 5, but even within those sections it is not clear which parts are novel (e.g., equation 19 appears to be known). While they include numerous references and the second paragraph of the conclusion discusses the novelty of this work, it would still be useful (for me) if the authors further delineated what their contribution is versus what has already been established in the literature.\n\nQuality: As far as I can tell, the paper is technically sound.\n\nClarity: I did not find the presentation particularly clear, but this could be due to my lack of expertise in the area. I've listed clarifying questions below.\n\nSignificance: The paper addresses an interesting problem in neuroscience; however, it is difficult for me to assess exactly what the authors' contributions are versus what was already known. I readily acknowledge this may be my own lack of expertise, but I would like the authors to clarify this for me. - Line 37: The sentence \"For example, ...\" needs a citation.\n- Line 96: Why is eq 6 the mean response of all neurons? Are you assuming infinitely many neurons? Is there a citation justifying this? Am I missing something obvious?\n- Eq (7): What does $\overline u(s)\equiv\overline{u}(x-s)$ mean? This doesn't appear to be true based on the definition of $\overline{u}(x-s)$.\n- Sec 3.1: Is there no external input to the system? Is it easy to include? How would that affect the system?\n- Line 131: What is the definition of \"free parameter\"?\n- Eq (13): Can I interpret this effectively as a restatement of the fact that $\hat T(a)=\exp(a\hat p)$?\n- To what extent are eqs (16) and (19) equivalent? The authors discuss the relationship, but it's not clear to me what the precise relationship is.\n\nMinor:\n- Line 20: reflects -> reflect\n- Line 43: whereas -> however?\n- Eq (6): I find the notation $\cup_j$ confusing. It looks like you're taking the union over $j$.\n- Eq (10): The definition of $\overline{u}$ is different here than in eq. (7) - Eq (19ab): Can external stimulus inputs be included in this model? It seems like those would be reconciled with the speed inputs via some sort of Kalman filter. Is the speed $v$ assumed to be an external stimulus that feeds into all of the speed neurons? Presumably the speed is computed in part from the CAN responses? I'm not suggesting that this circuit should include these computations, rather I'm trying to understand to what degree this model is flexible enough to include these considerations." ]
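As a reading aid for the Eq. (15)/(19)-style dynamics debated in the reviews above, the following is a minimal runnable sketch of a 1D ring attractor whose bump is shifted by a velocity-gated asymmetric recurrent term; the kernel shapes, the divisive normalization, and every constant here are illustrative assumptions of ours, not the paper's exact model:

```python
import numpy as np

# Toy 1D ring attractor with a speed-gated shift term (illustrative only).
N, tau, dt = 128, 1.0, 0.01
x = np.linspace(-np.pi, np.pi, N, endpoint=False)

def ring_kernel(sigma=0.4):
    d = np.angle(np.exp(1j * (x[:, None] - x[None, :])))  # wrapped distance
    return np.exp(-d ** 2 / (2 * sigma ** 2)) / N

W = ring_kernel()                         # symmetric weights sustain the bump
dW = np.gradient(W, x[1] - x[0], axis=1)  # derivative kernel ~ translation generator

u = np.exp(-x ** 2 / (2 * 0.4 ** 2))      # initial bump centered at 0
v = 1.0                                    # constant drive from a "speed" population
for _ in range(500):
    r = np.maximum(u, 0.0) ** 2
    r = r / (1.0 + 0.1 * r.sum())         # divisive normalization of firing rates
    u = u + (dt / tau) * (-u + W @ r + tau * v * (dW @ r))

print("bump peak is now near x =", x[np.argmax(u)])  # should have drifted from 0
```

The asymmetric term `dW @ r` plays the role of the Lie-algebra translation generator discussed above; gating it by `v` moves the bump at a rate that scales with `v`, which is the circuit-level reading of velocity-controlled path integration.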
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 8, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 2 ]
[ "EYWPN5QzOB0", "spPuxrMYP2", "LTHpjhS3SZ7", "pSqGTJ4eHjR", "BhssC8rjE_", "B_XkWhXjuzF", "-hBUcuS8RZH", "-hBUcuS8RZH", "wpRyVOiyDSi", "wpRyVOiyDSi", "nips_2022__vfyuJaXFug", "nips_2022__vfyuJaXFug", "nips_2022__vfyuJaXFug", "nips_2022__vfyuJaXFug" ]
nips_2022_RO0wSr3R7y-
3DILG: Irregular Latent Grids for 3D Generative Modeling
We propose a new representation for encoding 3D shapes as neural fields. The representation is designed to be compatible with the transformer architecture and to benefit both shape reconstruction and shape generation. Existing works on neural fields are grid-based representations with latents being defined on a regular grid. In contrast, we define latents on irregular grids which facilitates our representation to be sparse and adaptive. In the context of shape reconstruction from point clouds, our shape representation built on irregular grids improves upon grid-based methods in terms of reconstruction accuracy. For shape generation, our representation promotes high-quality shape generation using auto-regressive probabilistic models. We show different applications that improve over the current state of the art. First, we show results of probabilistic shape reconstruction from a single higher resolution image. Second, we train a probabilistic model conditioned on very low resolution images. Third, we apply our model to category-conditioned generation. All probabilistic experiments confirm that we are able to generate detailed and high quality shapes to yield the new state of the art in generative 3D shape modeling.
Accept
All reviewers agree to accept this work, which presents a creative new shape representation for 3D generative modeling. The negative aspects raised by the reviewers are fairly minor, and most were addressed during the rebuttal phase (please be sure to incorporate all comments/additional results into the final camera-ready version). During the post-rebuttal discussion, reviewers suggested nominating for a spotlight given the new results on realistic data.
train
[ "77169KRl2XV", "8cmN-0rM6bN", "fqF8YzxvzX0", "7Uw9QhaY9Ay", "kCgZYCvsK5P", "0bvJKyXZcD", "kB8PK959tdT8", "EXOojKbopJ5", "QXa1WljqFPg", "YzLqcqNaKn2", "xMJOIOzK9uU", "rY6lB1VEEQe", "HMyyucKlkJk", "4Tqf7Ah7MRr" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewers,\n\nThanks again for the review. Since the author/reviewer discussion phase is ending tomorrow, we would like to ask if our comments helped clarify your concerns or if there are additional questions we can help with.", " We thank the reviewers for their comments. We are happy to see that reviewers found our papers to bring an “**interesting**” idea to 3D generation (Reviewer uFdG and tLeQ), that our method is “**novel**” (Reviewer jKmw) and explores “**sparse nature**” of 3D shapes (Reviewer tLeQ). We are also glad our work was found to show results which are “**impressive, accurate, detailed**” (Reviewer jKmw), “**comprehensive**” (Reviewer jKmw), “**promising**” (Reviewer tLeQ), and “**qualitatively good**” (Reviewer uMXb). Finally, we are delighted to see that our technical presentation is “**excellent**” (Reviewer uMXb), and “**clear**” (Reviewer tLeQ and Reviewer jKmw). We would like to post a summary of all new experiments for this rebuttal here and answer to the individual reviewers separately.\n\n \n\n### Summary of new experiments\n\n1. Comparison with traditional surface reconstruction, Poisson Surface Reconstruction. (Reviewer uFdG)\n \n2. Generalization results on real point clouds, D-FAUST human models (Reviewer uFdG)\n \n3. Generalization results on real world images conditioned generation, ABO images (Reviewer uFdG)\n \n4. Generative model trained on ABO models (Reviewer uFdG)\n \n5. Memory and runtime comparison (Reviewer uFdG and jKmw)\n \n6. Reconstruction performance on non-uniformly sampled point clouds (Reviewer jKmw)\n \n7. Alternative choice of interpolation method (Reviewer uMXb)", " > l.141 What is the motivation behind the non linear interpolation between local latent codes? Could it be ablated?\n\nNadaraya–Watson estimator is a non-parametric regression estimator. We choose it for the following reasons:\n\n1. It is widely used in machine learning ([4] Sec 14.7.4)\n \n2. It can be easily implemented with the softmax function (most deep learning frameworks provided a numerically stable version of softmax)\n \n3. The attention operator used in Transformers can be seen as a special case of the Nadaraya-Watson estimator [5]. From this perspective, we allow any point to attend all learned latent locations.\n \n4. Most importantly, it is efficient to backpropagate through the interpolation to design an end to end learning framework.\n \n\nThere are many other interpolation methods that exist, e.g., kriging in geo-statistics. However, making them differentiable is not that straightforward. We do agree that changing the interpolation might even improve our method further so we follow the reviewer suggestion. We show results of another simple interpolation methods which only considers k nearest neighbors for the interpolation. It is called knn interpolation in PointNet++ [3]. As shown in the table, the results are very close (improved 0.004). The network can automatically learn latents that are suitable for the interpolation method. So we believe that the interpolation method will be a minor issue in our work and orthogonal to our contribution. However, there is definitely some potential for future work.\n| | Nadaraya-Watson | KNN | Improvements |\n|-------|:---------------:|:-----:|:------------:|\n| IoU ↑ | 0.953 | 0.957 | 0.004 |\n\n> Compared to autodecoders, this probabilistic pipeline can no longer represent a shape with a latent code.\n\nHowever, we can represent a shape with a set of latent codes. 
The set is much smaller compared to ConvOccNet and IF-Net. Also, an autoencoder may use a latent code with many dimensions.\n\n\n\n### References\n\n[1] Nash, Charlie, Yaroslav Ganin, SM Ali Eslami, and Peter Battaglia. \"Polygen: An autoregressive generative model of 3d meshes.\" In International conference on machine learning, pp. 7220-7229. PMLR, 2020.\n\n[2] Fu, Huan, Mingming Gong, Chaohui Wang, Kayhan Batmanghelich, and Dacheng Tao. \"Deep ordinal regression network for monocular depth estimation.\" In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2002-2011. 2018.\n\n[3] Qi, Charles Ruizhongtai, Li Yi, Hao Su, and Leonidas J. Guibas. \"Pointnet++: Deep hierarchical feature learning on point sets in a metric space.\" Advances in neural information processing systems 30 (2017).\n\n[4] Murphy, Kevin P. Machine learning: a probabilistic perspective. MIT press, 2012.\n\n[5] Zhang, Aston, Zachary C. Lipton, Mu Li, and Alexander J. Smola. \"Dive into deep learning.\" arXiv preprint arXiv:2106.11342 (2021).", " > Quantizing 3D positions of local latents sounds very similar to using a grid. What is the difference here?\n\nWe agree with this in the sense that we could easily predict continuous point coordinates, making quantization unnecessary. However, some works have found that quantization leads to high accuracy, e.g., PolyGen [1] and DORN [2]. The quantization converts a regression problem to a classification problem. We conjecture that a classification problem might be easier than a regression problem for deep networks. But digging into the reason behind this is beyond the scope of our paper. We follow (current) best practices suggested by previous papers [1] [2].\n\nOn the other hand, we are using 8-bit quantization which leads to a 256x256x256 grid. In this sense, the resolution is larger than in all previous works, according to Table 1. Not to mention, we can still increase the bit length, e.g., to 14-bit. Thus, we have a higher precision in localizing latents than previous work using only a grid.\n\n> Some very related papers such as \"Local Deep Implicit Functions for 3D Shape\" [CVPR 2020] are not mentioned. In this one, \"the global implicit function is decomposed into the sum of N local implicit functions.\", not on a grid...\n\nWe have added the missing reference.\n\n> The intro+related Sections mention shape representation and generation, while the method section starts with an input point cloud.\n\nThanks for pointing this out. Sampling point clouds from surfaces is a data preprocessing step. We did as suggested and now mention surfaces at the beginning of this section as the input to our method.\n\n> l. 33 : Why is fixed length representation a benefit? Wouldn't an adaptive representation be more useful, with a tiny representation for simple shapes, and long representation for complex ones?\n\nWe consider the fixed-length representation to be a trade-off with distinct benefits in our application. If the representation is variable length, we need to go through all data samples in the dataset to find a maximum length. Also, in training, we have to pad shorter sequences to the largest length in the data batch, just as people do in natural language processing. It’s more convenient to deal with fixed-length representations in auto-regressive models or models using masked tokens.\n\n", " 
\"Pointgrow: Autoregressively learned point cloud generation with self-attention.\" In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pp. 61-70. 2020.\n\n \n\n[2] Nash, Charlie, Yaroslav Ganin, SM Ali Eslami, and Peter Battaglia. \"Polygen: An autoregressive generative model of 3d meshes.\" In International conference on machine learning, pp. 7220-7229. PMLR, 2020.\n\n \n\n[3] Esser, Patrick, Robin Rombach, and Bjorn Ommer. \"Taming transformers for high-resolution image synthesis.\" In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12873-12883. 2021.\n\n \n\n[4] Van Den Oord, Aaron, and Oriol Vinyals. \"Neural discrete representation learning.\" Advances in neural information processing systems 30 (2017).\n\n \n\n[5] Mittal, Paritosh, Yen-Chi Cheng, Maneesh Singh, and Shubham Tulsiani. \"Autosdf: Shape priors for 3d completion, reconstruction and generation.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 306-315. 2022.\n\n \n\n[6] Yan, Xingguang, Liqiang Lin, Niloy J. Mitra, Dani Lischinski, Daniel Cohen-Or, and Hui Huang. \"Shapeformer: Transformer-based shape completion via sparse representation.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6239-6249. 2022.\n\n \n\n[7] Chang, Huiwen, Han Zhang, Lu Jiang, Ce Liu, and William T. Freeman. \"Maskgit: Masked generative image transformer.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 11315-11325. 2022.\n\n \n\n[8] Yu, Jiahui, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan et al. \"Scaling Autoregressive Models for Content-Rich Text-to-Image Generation.\" arXiv preprint arXiv:2206.10789 (2022).", " > Maybe I missed it, but it was not clear how the autoregressive generation ordering was determined (L156-158). I think the authors should also provide some discussions on whether (or how) the model is / is not invariant to the permutation of such ordering; for completeness, results comparing different input permutations may also give a better understanding.\n\nThe current permutation we are using follows related work, e.g., PointGrow [1] (Section 3) and PolyGen [2] (Section 2.2). We sort points according to their coordinates and flatten them. This is also referred to as scanline order in other papers. Such a model is not permutation invariant. We choose the ordering according to these existing works. For example, in VQGAN [3] (Fig. 47), different orderings were studied. Scan line order works best.\n\n> For the \"image-conditioned generation\" experiments comparing single-image shape reconstruction, I would like to understand more where the actual performance gain comes from. Besides the new irregular latent grid representation, the entire backbone architecture (GPT) is also much more powerful than OccNet (ResNet-18); even the pretraining data is entirely different. Therefore, I am not totally convinced that all the benefit comes from the new representation. I think a critical missing experiment is to ablate this factor; specifically, a baseline that uses GPT but predicting occupancy like OccNet should be included to verify that the irregular latent grids do have its merits.\n\nTo make a fair comparison, we use the same encoder as in OccNet (ResNet-18). The different thing is the decoder part. OccNet is a deterministic method. Given an image feature vector, it outputs a shape surface deterministically. 
For our method, we can sample output shape probabilistically given the same image feature vector. GPT is an autoregressive text generation model. It is often used in VQVAE-like models for discrete token generation, e.g., VQGAN [3]. In general, we argue that probabilistic methods give results sampled from a probabilistic distribution, while deterministic method tends to produce averaged results (mean of the distribution). We would also like to highlight that the focus of our contribution is in generative modeling and OccNet is not a generative modeling method. Directly using GPT to predict occupancy at a dense resolution would be virtually impossible even with an excessively large cluster. For example, we currently predict 512 tokens. The large scale GPT3 model can process less than 5000 tokens. Since self-attention scales quadratically, we see no way to scale to 128 * 128 * 128 tokens.\n\n> For the image-conditioned experiments (following the above concern), it was not clear whether there are still input point clouds being conditioned. If yes, then the comparison against OccNet becomes yet more unfair; if not, it is not very clear what the input pipeline would look like. My guess is that a full shape reconstruction network (Fig 3) is trained first, then only the Transformer part is taken for the experiment (with the PointNet discarded); it would be good if the authors could clarify.\n\nOnce the shape reconstruction network (Stage One) was trained, it can be used in all generative models (Stage Two), e.g., category-conditioned, image-conditioned and point-conditioned. This two-stage training strategy is also used in all VectorQuantization + AutoRegressive models, e.g., VQVAE[4], VQGAN [3], MaskGIT[7], Parti[8], AutoSDF [5], and ShapeFormer[6] (some is concurrent work).\n\n- When training, we need image-surface pairs (which is the same as in OccNet). The shape reconstruction network (Fig 3) converts surface to discrete tokens.\n \n- When doing inference, we do not need shape surfaces (so the comparison with OccNet is fair). Given sampled discrete tokens conditioned on images, we need the shape reconstruction network to decode the tokens to a final surfaces.\n \n\nWe will make the process clearer in the next revision.\n\n", " \n> In Section 5, further comparison with IF-NET is needed: the proposed method slightly improves over IF-NET. How does it compare to IF-NET (and other methods) in other respects - resource consumption, train and inference time, etc.?\n\n \n\n> In A.4., how do the methods compare in terms of memory and runtime?\n\nWe measure the memory consumption and runtimes of ConvOccNet and IF-Net. The metrics are compared with two variants of our method (Nadaraya–Watson estimator and knn interpolation). The occupancies are evaluated on a 128x128x128 grid. Then, we perform MarchingCubes on the grid to get triangle meshes. For both IF-Net and Ours(NW), we are unable to fit the 128x128x128=2097152 points to an 80GB A100 GPU. So we feed 50k points in a forward pass. The results can be found in the row Multiple Pass. The quality would remain the same for Multi Pass and Single Pass. 
The runtime is comparable, but our memory consumption is higher.\n| | | ConvOccNet | IF-Net | Ours(NW) | Ours(KNN) |\n|----------------|-------------|:----------:|:------:|:--------:|:---------:|\n| Multiple Pass | Time(sec) | 0.3119 | 0.5743 | 2.03 | 0.3367 |\n| | Memory (MB) | 500 | 1600 | 25343 | 509 |\n| Single Pass | Time(sec) | 0.104 | - | - | 0.302 |\n| | Memory (MB) | 1359 | - | - | 17689 |\n\n> Explain the purpose of Eq.(8) and Eq.(9).\n\nEq (8) is the likelihood of generation we are going to maximize. Eq (9) is the detailed expansion of each term on the right hand of Eq (8).\n\n \n\n> In A.6., ll.62-63 - “Bad” in what sense? Non-realistic generations by the proposed method should be highlighted as well.\n\nExamples of undesirable results are: 1) Missing thin structures. 2) Overly blurred / smoothing of sharp features or smaller details. 3) Creation of spurious and unnatural details. We highlighted more samples as requested in the current revision.\n\n> In l.130, explicitly state which positional encodings are used, for completeness of the presentation.\n\nWe followed the suggestion and now formulated the positional encodings in the appendix A.11.\n\n> Introduction section needs to be revised to more clearly describe the main aspects of the proposed approach.\n\nWe will make it more clear in the final revision.\n\n> Missing references\n\nWe have added the mentioned references in the current revision.\n\n> Nit: In Table 1, consider writing all numbers in the same manner such that it will be clear which parameters are identical for different methods.\n\nWe discussed this suggestion, but would prefer to keep the numbers as is. Otherwise, it is hard to understand the grid resolutions, if the numbers are multiplied.\n\n### References\n\n[1] Van Den Oord, Aaron, and Oriol Vinyals. \"Neural discrete representation learning.\" Advances in neural information processing systems 30 (2017).\n\n \n\n[2] Esser, Patrick, Robin Rombach, and Bjorn Ommer. \"Taming transformers for high-resolution image synthesis.\" In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pp. 12873-12883. 2021.\n\n \n\n[3] Mittal, Paritosh, Yen-Chi Cheng, Maneesh Singh, and Shubham Tulsiani. \"Autosdf: Shape priors for 3d completion, reconstruction and generation.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 306-315. 2022.\n\n \n\n[4] Yan, Xingguang, Liqiang Lin, Niloy J. Mitra, Dani Lischinski, Daniel Cohen-Or, and Hui Huang. \"Shapeformer: Transformer-based shape completion via sparse representation.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6239-6249. 2022.\n", " > A possible weakness of the proposed approach may be its sensitivity to shape sampling. E.g., how would the method work if the input point clouds were not all sampled at N = 2048 points? How were these points obtained? Would irregular sampling density affect the results of the proposed method? This possible limitation needs to be addressed in the paper, with some experimental results and a discussion. Parts of the method's description which require further classification.\n\n> The authors need to address further a potential limitation of the method, namely, its sensitivity to changes in sampling of the input point clouds.\n\nWe followed the reviewers suggestion and conducted additional experiments. Our method is actually surprisingly robust when used with different sampling patterns compared to previous work. 
To illustrate problems that might be caused by irregular density sampling, we design a simple experiment. We randomly choose an “anchor” point on the surface. We want points near the “anchor” point to have a high probability of being sampled. The probabilities are defined by a Gaussian function exp(-beta * dist(p, anchor)). Here a small beta gives rise to uniform sampling, while a large beta assigns a large sampling probability near the “anchor” point (in an extreme case, some areas on the surface will never be sampled from). We use this function to simulate irregular density sampling on the surface. The results can be found in the following table. We observed a significant drop in IF-Net's results. In contrast, we do not see a large performance drop in our method. Also see Appendix A.10 (Fig. 20) for reconstructed meshes.\n\n| | | IF-Net | | | Ours | |\n|----------|:-------:|:------:|:------:|:-------:|:------:|:------:|\n| | Uniform | beta=1 | change | Uniform | beta=1 | change |\n| IoU↑ | 0.934 | 0.902 | -0.032 | 0.953 | 0.954 | +0.001 |\n| Chamfer↓ | 0.041 | 0.050 | +0.009 | 0.040 | 0.040 | +0.000 |\n| F1↑ | 0.967 | 0.937 | -0.030 | 0.966 | 0.964 | -0.002 |\n\n\n> In Figure 4, what’s the output of the model? How is a point cloud or another shape representation decoded from it?\n\nThis figure illustrates the autoregressive generation of discrete tokens.\n\n* Training\n \n\t* The inputs are the conditional information C and discrete tokens. The tokens are obtained from the vector quantization (Figure 3 Lower Right and Eq (5)).\n\t* The outputs are the probabilities of discrete tokens (Eq (8)).\n \n\n* Inference\n \n\n\t* In each sampling step, we feed the conditional information C and the sampled token set to the network (Eq (8)).\n \n\t* In each sampling step, a discrete token is sampled from the output probability distribution.\n \nWhen we have all tokens, we decode the discrete tokens to latents and then to the final implicit surfaces (Figure 3).\n \n\nThe pipeline is commonly used in VectorQuantization+Autoregressive models (e.g., VQVAE [1], VQGAN [2], AutoSDF [3] and ShapeFormer [4]).\n\n> Figure 5 is inconsistent with Figure 4. Show a block diagram of the proposed architecture, as you do in Figure 4.\n\nThe training of the two methods is very different. Figure 4 shows an architecture that generates tokens one by one in an autoregressive fashion. Figure 5 shows an architecture that fills in / replaces masked tokens in a given grid of tokens. Some of the replaced tokens are kept and others (low-probability ones) discarded. The discarded tokens are again masked and the process proceeds iteratively.\n\n> In Section 4, how are x_{i,k} obtained for different i-s? What is the difference between x_{i,k} and x_{j,k} for different i, j? Please explain.\n\nx_i is a 3D coordinate; x_{i,1}, x_{i,2} and x_{i,3} are the 3 components of the coordinate. This is explained in Sec 4.1 L157. The coordinates x_i are obtained from FPS, which is explained in L121 and Eq (1). 
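\n\nFor completeness, a textbook farthest point sampling (FPS) sketch is given below; it illustrates the standard procedure and is not a copy of our implementation:\n\n```python\nimport numpy as np\n\ndef farthest_point_sampling(points, m, seed=0):\n    # points: (n, 3) input cloud; returns indices of m well-spread points.\n    rng = np.random.default_rng(seed)\n    idx = [int(rng.integers(len(points)))]\n    d = np.linalg.norm(points - points[idx[0]], axis=1)\n    for _ in range(m - 1):\n        idx.append(int(d.argmax()))  # pick the point farthest from the selected set\n        d = np.minimum(d, np.linalg.norm(points - points[idx[-1]], axis=1))\n    return np.array(idx)\n```\n\n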
Both x_i and x_j are the points belonging to the subsampled point set in Eq (1).\n\n", " > The proposed generative tasks are only on shape-net, no real images or point clouds.\n\n> Suggestion: Having examples with non-shape-net data to illustrate robustness and utility in practice.\n\n> If no good results with real data for generative tasks, I would drop that claim and focus on the new representation's ability for reconstruction, efficiency, spatial adaptivity, etc.\n\nWe clarify that, we already have an additional experiment on scene level reconstruction in the original submission which is shown in the appendix A.4. However, it is true that more additional datasets on real data could improve the paper. We therefore conducted more experiments as follows:\n\n1. We show real world images as conditional input to our generative model trained on ShapeNet. The images are from the dataset ABO [4]. The results can be found in the Appendix A.7 (Fig. 17).\n \n2. We show real world scans reconstruction results with our reconstruction network trained on ShapeNet. These scans are taken from the dataset D-FAUST [5]. We evaluate both IF-Net and our method. The results can be found in the following table. Also see Appendix A.8 for a visualization of reconstructed meshes.\n \n| | IF-Net | Ours |\n|-----------|--------|--------|\n| Chamfer ↓ | 0.0238 | **0.0210** |\n| F-Score ↑ | 0.9940 | **0.9953** | \n\n3. We can run our methods (category-conditioned generation) on the dataset ABO [4] from scratch. The results can be found in the Appendix A.9 (Fig. 19). Note that due to the short time limit of the rebuttal, we only choose a subset of the dataset ABO to train our network. In this experiment, the train/val/test set are composed of 1184 chair models in total. The size is much smaller than ShapeNet (52472).\n \n\n> Memory consumption and running times are not clear.\n\nWe measure the memory consumption and runtimes of ConvOccNet and IF-Net. The metrics are compared with two variants of our method (Nadaraya–Watson estimator and knn interpolation). The occupancies are evaluated on a 128x128x128 grid. Then, we perform MarchingCubes on the grid to get triangle meshes. For both IF-Net and Ours(NW), we are unable to fit the 128x128x128=2097152 points to an 80GB A100 GPU. So we feed 50k points in a forward pass. The results can be found in the row Multiple Pass. The quality would remain the same for Multi Pass and Single Pass. The runtime is comparable, but our memory consumption is higher.\n\n| | | ConvOccNet | IF-Net | Ours(NW) | Ours(KNN) |\n|----------------|-------------|:----------:|:------:|:--------:|:---------:|\n| Multiple Pass | Time(sec) | 0.3119 | 0.5743 | 2.03 | 0.3367 |\n| | Memory (MB) | 500 | 1600 | 25343 | 509 |\n| Single Pass | Time(sec) | 0.104 | - | - | 0.302 |\n| | Memory (MB) | 1359 | - | - | 17689 |\n### References\n[1] Peng, Songyou, Michael Niemeyer, Lars Mescheder, Marc Pollefeys, and Andreas Geiger. \"Convolutional occupancy networks.\" In European Conference on Computer Vision, pp. 523-540. Springer, Cham, 2020.\n\n[2] Chibane, Julian, Thiemo Alldieck, and Gerard Pons-Moll. \"Implicit functions in feature space for 3d shape reconstruction and completion.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 6970-6981. 2020.\n\n[3] Kazhdan, Michael, Matthew Bolitho, and Hugues Hoppe. \"Poisson surface reconstruction.\" In Proceedings of the fourth Eurographics symposium on Geometry processing, vol. 7. 
2006.\n\n[4] Collins, Jasmine, Shubham Goel, Kenan Deng, Achleshwar Luthra, Leon Xu, Erhan Gundogdu, Xi Zhang et al. \"Abo: Dataset and benchmarks for real-world 3d object understanding.\" In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 21126-21136. 2022.\n\n[5] Bogo, Federica, Javier Romero, Gerard Pons-Moll, and Michael J. Black. \"Dynamic FAUST: Registering human bodies in motion.\" In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 6233-6242. 2017.", " >The advantage over point cloud-based generative modeling is not well established, both conceptually and experimentally.\n\n\nWe agree that different 3d representations (e.g., point clouds, voxels, implicit surfaces) have their own advantages and disadvantages. Our work is an implicit surface generation method (via neural fields, often called neural implicit representations or coordinate-based networks). When we have an implicit surface, the corresponding point cloud representation can be obtained by sampling on the surface. However, obtaining a surface-based representation for a given point cloud is much more difficult and the process may result in errors. In general, a surface-based representation (such as ours) contains more information about topology and local neighborhoods and it is therefore more appropriate for downstream tasks, such as rendering, analysis, and editing. Therefore, we believe it would be unusual to compare surface-based representations to point-cloud based representations in actual experiments. While the specific advantage over point-based generative modeling may not be established in our paper, the general advantage of surfaces over point-clouds is well established in general. We therefore do not think that pure point-based modeling can be seen as a competitor.\n\n \n\nA distinct advantage of modern data structures for generative modeling is that they interpolate high-dimensional features and then post-process the high-dimensional feature using an MLP. If the suggestion is to adapt point clouds to be used as backbone of a neural field, there are multiple ingredients that are unspecified. How to interpolate features from a point cloud, how to incorporate high-dimensional features, and how to integrate everything so it becomes trainable end to end. This is a research project on its own and requires multiple design decisions to be made. In particular, it seems difficult to integrate Poisson Surface Reconstruction in such a framework. While recent research from NeurIPS 2021 shows how to make Poisson Surface Reconstruction differentiable, this still requires quite a bit of work to build a complete system and it is not evident how it could be competitive. It seems much more intuitive to build on Neural Fields. We will clarify this in the final revision.\n\n> It is a bit of a stretch to call this a grid-based algorithm and only consider grid-based alternatives.\n\n \n\nThe focus of our work is to study shape representations in the context of generative modeling based on neural fields. Again, it is important that a data structure is compatible with generative modeling, e.g. can be trained with a VQ-VAE and be used by an auto-regressive transformer. Further, the data structure / framework needs to have the ability to interpolate high-dimensional latents post-processed with an MLP and the framework needs to be trainable end-to-end including the interpolation. 
In this context, there are no other competing data structures and we compare to the state of the art that uses regular grids (ConvOccNet [1]) and a multi-scale grid (IF-Net [2]). Also, 2D grids are a popular representation for generative image modeling and many state of the art methods build on 2D grids, e.g. Parti, Dalle2, VQGAN. This also gives additional support to our claim that it is reasonable to assume that 3D grids would be the current state of the art.\n\n \n\n> The reconstructions are only compared against other networks, there are no comparisons to surface reconstruction methods e.g. Poisson or edge-preserving MLS.\n\n>Suggestion: Having comparisons with surface reconstruction techniques other than only neural grids. The point clouds look clean and seem to sample the shapes well.**\n\nPoisson Surface Reconstruction is only a small part of a potential competing framework. If the suggestion is to use Poisson Surface Reconstruction as an independent post-process combined with a generative method to generate point clouds (not sure which one), there are many open design decisions and this would require additional research as discussed above. If the suggestion is simply to compare only the reconstruction part, this is a good idea to make the paper more complete and we are happy to provide this additional information below.\n\nWe compare our method with Poisson Surface Reconstruction (PSR) [3]. The results are shown in the following Table. Best results are highlighted in **bold** and second best results are shown in _italic_. Note that PSR requires per-point normals as input. We also updated the results in the current revision (Appendix A.12). In summary, Poisson Surface Reconstruction performs well, but not as well as our method or IF-Net.\n\n| | PSR | OccNet | ConvOccNet | IF-Net | Ours(3DILG) |\n|-----------|:-----:|:------:|:----------:|:------:|:-----------:|\n| Chamfer ↓ | 0.043 | 0.072 | 0.052 | _0.041_ | **0.040** |\n| F-Score ↑ | 0.922 | 0.858 | 0.933 | **0.967** | _0.966_ |", " This paper proposes a geometry representation based on irregular grids for generative shape modeling. Strengths:\n- It is an interesting idea, especially when combined with transformers and vector quantization.\n- Some good results on reconstructed and generated shapes, although I am not convinced with the comparisons.\n\nWeaknesses:\n- The advantage over point cloud-based generative modeling is not well established, both conceptually and experimentally.\n- It is a bit of a stretch to call this a grid-based algorithm and only consider grid-based alternatives.\n- The reconstructions are only compared against other networks, there are no comparisons to surface reconstruction methods e.g. Poisson or edge-preserving MLS.\n- The proposed generative tasks are only on shape-net, no real images or point clouds.\n- Memory and runtimes are not clear.\n My suggestions for improvement are:\n- Having examples with non-shape-net data to illustrate robustness and utility in practice.\n- Having comparisons with surface reconstruction techniques other than only neural grids. The point clouds look clean and seem to sample the shapes well.\n- If no good results with real data for generative tasks, I would drop that claim and focus on the new representation's ability for reconstruction, efficiency, spatial adaptivity, etc. Please see above.", " The paper proposes a method for detailed shape reconstruction from point clouds and images, as well as shape generation. 
The method utilizes FPS and KNN together with a PointNet to create patch embeddings of training shapes, and then it encodes a shape indicator function using a transformer architecture. The paper demonstrates impressive shape reconstruction and generation results for all of the aforementioned tasks. Strengths\n- The paper presents a novel method for encoding local shape information using irregular latent spaces and using transformer-based autoregressive models for shape reconstruction from point clouds and images.\n- The paper presents impressive - accurate and detailed - shape generation results from high-res and blurred images, using compact shape latent spaces.\n- The presentation is very clear.\n- The paper presents a comprehensive set of experimental results illustrating the advantages of the proposed method and comparing it to current SOTA methods.\n\nWeaknesses:\n- A possible weakness of the proposed approach may be its sensitivity to shape sampling. E.g., how would the method work if the input point clouds were not all sampled at N = 2048 points? How were these points obtained? Would irregular sampling density affect the results of the proposed method? This possible limitation needs to be addressed in the paper, with some experimental results and a discussion.\nParts of the method's description which require further clarification.\n- In Figure 4, what’s the output of the model? How is the point cloud, or another shape representation, decoded from it?\n- Figure 5 is inconsistent with Figure 4. Show a block diagram of the proposed architecture, as you do in Figure 4.\n- In Section 4, how are x_{i,k} obtained for different i-s? What is the difference between x_{i,k} and x_{j,k} for different i, j? Please explain.\n- In Section 5, further comparison with IF-NET is needed: the proposed method slightly improves over IF-NET. How does it compare to IF-NET (and other methods) in other respects - resource consumption, train and inference time, etc.?\n- Explain the purpose of Eq.(8) and Eq.(9).\nComments regarding supplementary material.\n- In A.4., how do the methods compare in terms of memory and runtime?\n- In A.6., ll.62-63 - “Bad” in what sense? Non-realistic generations by the proposed method should be highlighted as well.\nOther comments.\n- In l.130, explicitly state which positional encodings are used, for completeness of the presentation.\n- Introduction section needs to be revised to more clearly describe the main aspects of the proposed approach.\n- A related work which should be cited: Y. Li, S. Pirk, H. Su, C. R. Qi, L. J. Guibas, FPNN: Field Probing Neural Networks for 3D Data, Neural Information Processing Systems (NIPS 2016).\n- In Section 6.1, l.214, provide a reference for GPT.\n- Nit: In Table 1, consider writing all numbers in the same manner such that it will be clear which parameters are identical for different methods. Please see and address the questions raised in the previous section. The authors need to address further a potential limitation of the method, namely, its sensitivity to changes in sampling of the input point clouds.", " 
Experiments on 3D shape generation conditioned on different context is performed on ShapeNet and shows promising results.\n Strengths:\n- The authors proposed a very interesting idea of bringing recent advances on generative image modeling into 3D shapes, specifically VQ-GAN. It nicely brings in the sparse nature of 3D shapes, and bridges together commonly accepted differentiable point cloud processing operations with autoregressive generation.\n- The experimental results are pretty promising. I especially like the fact that, conditioned on a single input image, the model is capable of generating different shape modalities, which is something previous deterministic shape reconstruction methods cannot achieve.\n- The paper is nicely written and the presentation is clear.\n\nWeaknesses:\n- Maybe I missed it, but it was not clear how the autoregressive generation ordering was determined (L156-158). I think the authors should also provide some discussions on whether (or how) the model is / is not invariant to the permutation of such ordering; for completeness, results comparing different input permutations may also give a better understanding.\n- For the \"image-conditioned generation\" experiments comparing single-image shape reconstruction, I would like to understand more where the actual performance gain comes from. Besides the new irregular latent grid representation, the entire backbone architecture (GPT) is also much more powerful than OccNet (ResNet-18); even the pretraining data is entirely different. Therefore, I am not totally convinced that all the benefit comes from the new representation. I think a critical missing experiment is to ablate this factor; specifically, a baseline that uses GPT but predicting occupancy like OccNet should be included to verify that the irregular latent grids do have its merits.\n- For the image-conditioned experiments (following the above concern), it was not clear whether there are still input point clouds being conditioned. If yes, then the comparison against OccNet becomes yet more unfair; if not, it is not very clear what the input pipeline would look like. My guess is that a full shape reconstruction network (Fig 3) is trained first, then only the Transformer part is taken for the experiment (with the PointNet discarded); it would be good if the authors could clarify.\n It would be great if the authors could address all the concerns listed above. The limitations and potential societal impact have been adequately addressed.", " This submission proposes to irregularly distribute local SDF latent codes in 3D, to adapt to shapes and not be restricted to a 3D grid with fixed resolution.\nIn addition, authors present ways to quantize their representation, and use it in the generative setting with uni- and bi-directional transformers. Strengths:\n- excellent technical presentation. The use of colors in the text and equations to refer to different pipeline parts is a great idea.\n- generating multiple plausible outputs given 1 single conditioning signal (eg. Fig. 10) is a very good use of probabilistic models\n- results look qualitatively good, and the proposed pipeline is properly compared against recent models (IF-Net, ConvOccNet) that distribute latent codes on fixed grids.\n\n\nWeaknesses:\n- Quantizing 3D positions of local latents sounds very similar to using a grid. What is the difference here?\n- Some very related papers such as \"Local Deep Implicit Functions for 3D Shape\" [CVPR 2020] are not mentioned. 
In this one, \"the global implicit function is decomposed into the sum of N local implicit functions.\", not on a grid...\n- The intro+related Sections mention shape representation and generation, while the method section starts with an input point cloud. l. 33 : Why is fixed length representation a benefit? Wouldn't an adaptive representation be more useful, with a tiny representation for simple shapes, and long representation for complex ones?\n\nl.141 What is the motivation behind the non linear interpolation between local latent codes? Could it be ablated? Compared to autodecoders, this probabilistic pipeline can no longer represent a shape with a latent code." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 4 ]
[ "nips_2022_RO0wSr3R7y-", "nips_2022_RO0wSr3R7y-", "7Uw9QhaY9Ay", "4Tqf7Ah7MRr", "0bvJKyXZcD", "HMyyucKlkJk", "EXOojKbopJ5", "rY6lB1VEEQe", "YzLqcqNaKn2", "xMJOIOzK9uU", "nips_2022_RO0wSr3R7y-", "nips_2022_RO0wSr3R7y-", "nips_2022_RO0wSr3R7y-", "nips_2022_RO0wSr3R7y-" ]
nips_2022_N0tKCpMhA2
Coresets for Vertical Federated Learning: Regularized Linear Regression and $K$-Means Clustering
Vertical federated learning (VFL), where data features are stored distributively across multiple parties, is an important area in machine learning. However, the communication complexity for VFL is typically very high. In this paper, we propose a unified framework by constructing \emph{coresets} in a distributed fashion for communication-efficient VFL. We study two important learning tasks in the VFL setting: regularized linear regression and $k$-means clustering, and apply our coreset framework to both problems. We theoretically show that using coresets can drastically alleviate the communication complexity, while nearly maintaining the solution quality. Numerical experiments are conducted to corroborate our theoretical findings.
Accept
The reviewers have converged around the idea that the paper proposes an interesting approach to vertical federated learning; they also conclude that the authors' replies answered questions and provided useful clarifications, which supports the acceptance of the paper. I will stress the need for the authors to properly include further updates in the camera-ready version of the paper; in their replies, they indeed make several promises to reviewers and it is important that such updates be properly included (last comment to NZai, intermediary comment to XK2T).
train
[ "mr7ZzPRVDv", "q08fGS38zS", "biHk5a9hUv2", "UyOeA-P_A2k", "R6Tx3FCcP_t", "-o33SwYDY8", "aft5knh6wgd", "vTRGBXodyS0", "peLxfWfXQAi", "RjJkufbFxlR" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks a lot for your support! We will address your comments in the updated version.", " Thanks a lot for your support!", " I have read the author's rebuttal and I keep my score. \n\n", " Thanks for the response, which addresses my concerns and questions. I will maintain my scores. \n\nBTW, it would be better if the authors could describe the security model in the revised paper. Besides, add a discussion that the central server can be replaced with any party (e.g. Party $T$), since in practical VFL applications, there is usually no such a central server. ", " Dear Reviewer XK2T,\n\nThank you for your positive review and feedback. We answer your specific questions below.\n\n> ''Can you please explain (intuitively) the relation between Theorem 4.2 to the parameter which represents the maximum sensitivity gap overall points. Same for $\\tau$ in Theorem 5.2. Can you give please more details about the parameter which represents the maximum sensitivity gap, how is it related to the data e.g., in linear regression?''\n\nFor VRLR in theorem 4.2, as the parameter $\\gamma$ becomes larger, the sensitivity gap $\\zeta$ becomes smaller (Lemma E.2). Intuitively, $\\gamma\\in (0,1]$ represents the degree of orthonormal among data in different parties. As the larger $\\gamma$ is, the more orthonormal among the column spaces of $X^{(j)}$, and thus $U$ is more close to the orthonormal basis computed on $X$ directly. Consequently, we have \n$$\\sup\\limits_{\\theta\\in \\mathbb R^d}\\frac{cost^R_i(X,\\theta)}{cost^R(X,\\theta)}\\approx \\sum_{j\\in [T]} \\sup\\limits_{\\theta\\in R^d}\\frac{cost^R_i(X^{(j)},\\theta)}{cost^R(X^{(j)},\\theta)},$$ \nwhich implies that the sensitivity gap $\\zeta$ is more close to 1 if each $g_i^{(j)}\\approx \\sup\\limits_{\\theta\\in R^d}\\frac{cost^R_i(X^{(j)},\\theta)}{cost^R(X^{(j)},\\theta)}$ is close to the local sensitivity. \n\nFor VKCM in Theorem 5.2, as the parameter $\\tau$ becomes smaller, the sensitivity gap $\\zeta$ becomes smaller (Lemma F.2). Intuitively, as $\\tau$ is more close to 1, Assumption 5.1 implies that there exists a party $t\\in [T]$ whose local pairwise distances $\\|x_i^{(t)} - x_j^{(t)}\\|$s are close to the corresponding global pairwise distances $\\|x_i - x_j\\|$s. Then the global sensitivity is approximately dominated by the local sensitivity $g_i^{(t)}$, which implies that $g_i=\\sum_{t\\in [T]} g_i^{(t)}$ is indeed an approximate upper bound of the global sensitivity.\n\nWe will add more discussions in the future version.", " Dear Reviewer 4KgV,\n\nThanks for your time and effort! We have fixed typos according to your suggestions and will move more contents of robust coreset to the main text in the future version.\n\n> ''I could not find the details of the hardware system used for the implementation. Was it really a distributed set up or was the data just broken into parts and experiments were done on single system?''\n\nWe conduct experiments on a single system that simulates the distributed settings and reports the training loss/test loss/communication complexity.\n\n> ''Will the results for regularized regression also extend to the setting when the data matrix is low rank? Is there any assumption here that data is full rank?'' \n\nAssumption 4.1 does not require each local matrix $X^{(j)}$ to be full rank. For each party $j$, the number $d_j'$ of the orthonormal basis $U^{(j)}$ might be smaller than the number of features $d_j$, which implies that the matrix $X^{(j)}$ can be low rank. 
However, we do need some kind of ''full rank'' assumption between parties, i.e., the combination $U$ of the $U^{(j)}$s should be ''full rank''. \n\n> ''Do the theoretical results also hold for linear regression (no regularization)?''\n\nFor linear regression without a regularizer, our theorems/results also hold.\n\n> ''Instead of giving the calculated values of communication complexity in the graphs, is it possible to compute the time required for the communication in the experimental setup?''\n\nSince we use a single system to simulate the distributed setting, it may be inaccurate to measure the communication time. Thus, we report the communication complexity, which also matches our theoretical results.", " Dear Reviewer Nzai,\n\nThanks for your time and effort! We will revise our paper according to your suggestions by moving more of the content on robust coresets to the main text and adding more discussion on privacy analysis. \n\n## Definition\n\n> ''How is the cost evaluated in Section 6? In Definition 2.3 and Definition 2.4, the cost is evaluated on the coresets (i.e. $\\sum_{i\\in S}$). However, in my opinion, although the models are trained/computed on the coresets, the cost should be evaluated on the entire dataset (i.e., $\\sum_{i\\in [n]}$) when assessing the performance.''\n\nIn Section 6, we partition the YearPredictionMSD dataset into a training set and a testing set. For the ridge regression task, we train using the coreset/uniformly sampled subset from the training set to learn a model $\\theta$, and report the regression loss $cost^R(T, \\theta)$ on the testing set $T$. For the $k$-means clustering task, because it is an unsupervised learning problem, we optimize on the training set to get the model $C$, then report the clustering cost $cost^C(X, C)$ on the full training set $X$.\n\nWe have clarified this in the revised PDF version.\n\n## Privacy Analysis\n\n> ''I suggest the authors discuss the privacy of the proposed algorithms. To be specific, analyze what can be leaked from the messages transmitted in Algorithm 1. I also suggest the authors clarify the security assumption. It seems the server and clients are assumed to be semi-honest. But when they become malicious, is the proposed algorithm safe?''\n\nWe agree that privacy is very important in federated learning and will add more discussion later. The whole framework can be decomposed into two parts: the coreset construction and the model training. \n\nAs for the coreset construction part (Algorithm 1), the privacy leakage comes from the \"sensitivity score\" $g_i^{(j)}$ of the data points in different parties. To tackle this problem, we can use multi-party computation/secure aggregation such as [Bonawitz et al., Practical Secure Aggregation for Privacy-Preserving Machine Learning, CCS'17] to transmit the sum $g_i=\\sum_{j=1}^T g_i^{(j)}$ to the server without letting the server know the exact values of the $g_i^{(j)}$s (Line 7 of Algorithm 1). By applying secure aggregation, the server only knows $(S,w)$ and the $\\mathcal{G}^{(j)}$s. We added a comment in the revised PDF to illustrate this (footnote, page 5).\n\nFor the model training part, we can apply secure VFL algorithms where they exist, e.g., using homomorphic encryption on SAGA for regression (it is an extension from SGD to SAGA [27]). For clustering in the VFL setting, we currently do not know of any secure methods. \n\nThe previous discussion assumes the semi-honest model. Suppose some parties are malicious. 
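\n\nTo see concretely what a malicious party can distort, it helps to first sketch the honest behaviour of Algorithm 1. The allocation and weights below are standard sensitivity-sampling choices and simplify the corresponding lines of the algorithm; the variable names and the proportional allocation are illustrative assumptions:\n\n```python\nimport numpy as np\n\ndef coreset_sample(local_scores, m, seed=0):\n    # local_scores: list over parties j of per-point scores g_i^{(j)}, each of length n.\n    rng = np.random.default_rng(seed)\n    G = np.array([g.sum() for g in local_scores])             # per-party totals G^{(j)} (Line 2)\n    a = np.maximum(1, np.round(m * G / G.sum())).astype(int)  # per-party counts a_j (Line 4, simplified)\n    S = []\n    for g, a_j in zip(local_scores, a):\n        S.extend(rng.choice(len(g), size=a_j, p=g / g.sum()))  # sample within each party\n    S = np.array(S)\n    g_sum = np.sum(local_scores, axis=0)                      # aggregated g_i (Line 7)\n    w = g_sum.sum() / (len(S) * g_sum[S])                     # inverse-probability weights\n    return S, w\n```\n\nNote that the server trusts the reported totals $\\mathcal{G}^{(j)}$ when computing the $a_j$. 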
For instance, party $j$ can report a large enough $\mathcal{G}^{(j)}$ (Line 2 of Algorithm 1) such that the server sets the number of samples $a_j\approx m$ in party $j$ (Line 4 of Algorithm 1), where $m$ is the coreset size of $S$. Consequently, party $j$ can sample a large multiset $S^{(j)}$ which heavily affects the resulting coreset $S$. By, e.g., reporting an $S^{(j)}$ of uniform samples, party $j$ can make $S$ close to uniform sampling and lose the theoretical guarantees of Theorem 3.1. \n\n\n## Experiment\n\n> ''How does the CENTRAL baseline solve the ridge regression problem? Does it solve it by the closed-form solution or by gradient descent? If by the closed-form solution, how does it solve the Lasso regression and elastic nets in Appendix A.2?''\n\nIn the experiment, we use the scikit-learn package to solve the problems. For ridge regression, scikit-learn uses the lbfgs solver (a quasi-Newton second-order method). For lasso and elastic net, scikit-learn uses the SAGA solver. \n\n> ''In Table 1, why does the communication complexity of C-SAGA decrease w.r.t. the size of coresets? As indicated by Theorem 2.5, the communication complexity should be $\Omega(mT)$ where $m$ is related to the size of coresets.'' \n\nWe are sorry that we forgot to multiply by the coreset size when computing the communication complexity of SAGA in Table 1. A larger coreset size indeed has a larger communication complexity. Table 1 is fixed and the graph in the main content is updated in the revised PDF. \n\n> ''Moreover, I suggest the authors report the communication of coreset construction and model training, respectively. It would help readers understand the overhead of your method.''\n\nThanks for your suggestion. We will report the communication complexity of coreset construction and model training separately in the future version. In general, the communication complexity of coreset construction is very small compared to that of model training.", " This work considers how to subsample datasets via coresets under the vertical federated learning (VFL) setting. By doing so, the communication cost of VFL algorithms can be sharply reduced. The authors first introduced a unified coreset construction algorithm in VFL, which requests an importance value for each sample, and then proposed how to estimate the importance values for regularized linear regression and k-means clustering, respectively. Experimental results show that the proposed algorithms can reduce the communication complexity without harming the model performance significantly. ## Strengths\nOverall, this is a solid work for me. The techniques are well introduced and the paper is easy to follow. Although the authors made two ideal assumptions when proposing algorithms for regularized linear regression and k-means clustering, they also provided theoretical analysis for when the assumptions are not satisfied. The experimental results are positive: the proposed algorithms work well when using less than 4% of the training data. \n## Weakness\nPrivacy is an important issue in VFL, but the authors did not analyze the privacy of the proposed algorithms. And the experimental results could be explained in more depth. \n ## Definition\nHow is the cost evaluated in Section 6? In Definition 2.3 and Definition 2.4, the cost is evaluated on the coresets (i.e. $\sum_{i \in S}$). 
However, in my opinion, although the models are trained/computed on the coresets, the cost should be evaluated on the entire dataset (i.e., $\sum_{i \in [n]}$) when assessing the performance.\n\n## Privacy analysis\nI suggest the authors discuss the privacy of the proposed algorithms. To be specific, analyze what can be leaked from the messages transmitted in Algorithm 1. I also suggest the authors clarify the security assumption. It seems the server and clients are assumed to be semi-honest. But when they become malicious, is the proposed algorithm safe? \n\n## Experiments\nHow does the CENTRAL baseline solve the ridge regression problem? Does it solve it by the closed-form solution (i.e., $(X^TX + \lambda I)^{-1} X^T y$) or by gradient descent? If by the closed-form solution, how does it solve the Lasso regression and elastic nets in Appendix A.2? \n\nIn Table 1, why does the communication complexity of C-SAGA decrease w.r.t. the size of coresets? As indicated by Theorem 2.5, the communication complexity should be $\Omega(mT)$ where $m$ is related to the size of coresets. \n\nMoreover, I suggest the authors report the communication of coreset construction and model training, respectively. It would help readers understand the overhead of your method. \n\n## Some other points\n- The caption of Figure 1 is missing.\n- The term “robust coreset” is not introduced. \n- In many previous VFL studies, there is no such central server. However, it seems the central server can be substituted by Party T in Algorithm 1, so it looks good to me. \n The authors have discussed future directions in Section 7.", " The paper gives a unified framework to construct coresets for regularized regression and k-means clustering in the vertical federated learning setting, where the features of the data are distributed over multiple parties. The aim is to make the communication complexity sublinear in the number of data points. The high-level idea is to calculate local sensitivities for points at each party, use them to sample points at each party, and finally take a union of these samples. The main contribution is to show that this intuitive technique gives a coreset with much lower communication complexity under some assumptions on the data. When there are no assumptions, the authors also give some lower bounds. The authors also validate their theoretical claims with empirical results on a real-world dataset. Strengths:\n1) The use of coresets in the VFL setting to reduce communication complexity is, to the best of my knowledge, novel. It is an important and interesting application.\n2) The idea is simple and intuitive.\n3) The theory and the proofs appear sound and correct. I did not go through the proofs in the appendix in detail but had a quick look.\n4) Empirical evaluations validate the theoretical claims in terms of the cost.\n\nWeaknesses:\n1) The writing is sloppy at a few places. Below are a few examples:\n i) Line 22: \"Most of the VFL literature focus on the privacy issue, and design secure...\" should be and designing. \n ii) Line 66 \"how to defend again\" should be against. \n iii) Line 68 last sentence is not proper.\n iv) Algorithm 1, line 3 summation should be over j.\n2) I could not find the details of the hardware system used for the implementation. 
Was it really a distributed set-up or was the data just broken into parts and experiments done on a single system?\n3) Most of the proof techniques (other than the lower bounds) and the coreset construction techniques are very well known in the literature, and there is a lack of novelty in that sense.\n4) The remarks regarding robust coresets are given without even the definition of a robust coreset in the main paper. I believe at least the definition and some intuition/discussion as to why the robust coresets are possible without assumptions should be in the main body; otherwise the main paper becomes less readable. Please clarify the following questions:\n1) Will the results for regularized regression also extend to the setting when the data matrix is low rank? Is there any assumption here that the data is full rank? Do the theoretical results also hold for linear regression (no regularization)? You have already shown some empirical results in the appendix.\n2) Instead of giving the calculated values of communication complexity in the graphs, is it possible to compute the time required for the communication in the experimental setup?\n Please see the weaknesses/questions section. Also, please do a complete proofreading and try to make some sentence constructions better.", " This paper forges a link between vertical federated learning, where the attributes of the data are spread over multiple machines, and coresets - a data summarization that approximates the cost of every query on the data.\nSpecifically speaking, they focus on the problems of k-means and least mean squares regression.\n\nThe main idea is to reduce the communication complexity of an algorithm by computing a coreset for the distributed data - to do so, the authors extend the framework of importance (sensitivity) sampling to the vertical federated learning setting; see Algorithm 1 and Theorem 3.1.\n\nFocusing on least squares regression, the authors demonstrate that it is often challenging to build a \"strong\" (for every query) coreset for VRLR - the lower bound on the communication complexity is Ω(n).\nThen, they show how, under a loose assumption, this can be done efficiently. According to this presumption, the subspace produced by the data on any one party should not be entirely contained inside the subspace produced by the other parties. Under such an assumption, one can get a coreset of size independent of n and polynomial in the dimension d and the approximation error.\n\nThe same scenario holds for k-means - which in general requires a communication complexity of Ω(n). However, this can be avoided by assuming that any two data points that can be distinguished can also be distinguished on a party that is \"important\" to some extent.\n\nFinally, experimental results show that coresets reduce the communication complexity and maintain a good quality solution.\n I like the paper, mainly the idea of using coresets in this new realm. The ideas are very interesting, and the findings are robust.\n\n\n\nSome comments and questions:\n1. Theorem 3.1 - S, W is not defined in the theorem.\n2. The empirical results are good but can be extended, e.g., try a larger number of parties, other datasets, and more.\n3. Please define (or at least explain) a robust coreset in the manuscript. \n\nCan you please explain (intuitively) the relation between $\gamma$ in Theorem 4.2 and the parameter which represents the maximum sensitivity gap over all points? 
Same for \\tau in Theorem 5.2.\n\n\nCan you give please more details about the parameter which represents the maximum sensitivity gap, how is it related to the data e.g., in linear regression? The coreset requires a specific assumption on the data (that may not be satisfied in real-world)." ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "UyOeA-P_A2k", "biHk5a9hUv2", "R6Tx3FCcP_t", "aft5knh6wgd", "RjJkufbFxlR", "peLxfWfXQAi", "vTRGBXodyS0", "nips_2022_N0tKCpMhA2", "nips_2022_N0tKCpMhA2", "nips_2022_N0tKCpMhA2" ]
nips_2022_5pvB6IH_9UZ
CHIMLE: Conditional Hierarchical IMLE for Multimodal Conditional Image Synthesis
A persistent challenge in conditional image synthesis has been to generate diverse output images from the same input image despite only one output image being observed per input image. GAN-based methods are prone to mode collapse, which leads to low diversity. To get around this, we leverage Implicit Maximum Likelihood Estimation (IMLE) which can overcome mode collapse fundamentally. IMLE uses the same generator as GANs but trains it with a different, non-adversarial objective which ensures each observed image has a generated sample nearby. Unfortunately, to generate high-fidelity images, prior IMLE-based methods require a large number of samples, which is expensive. In this paper, we propose a new method to get around this limitation, which we dub Conditional Hierarchical IMLE (CHIMLE), which can generate high-fidelity images without requiring many samples. We show CHIMLE significantly outperforms the prior best IMLE, GAN and diffusion-based methods in terms of image fidelity and mode coverage across four tasks, namely night-to-day, 16x single image super-resolution, image colourization and image decompression. Quantitatively, our method improves Fréchet Inception Distance (FID) by 36.9% on average compared to the prior best IMLE-based method, and by 27.5% on average compared to the best non-IMLE-based general-purpose methods. More results and code are available on the project website at https://niopeng.github.io/CHIMLE/.
Accept
This paper introduces a conditional image synthesis method based on Implicit Maximum Likelihood Estimation (IMLE). Compared to previous work CIMLE, the paper has introduced a divide-and-conquer method to accurately estimate latent code without evaluating many samples. The paper has received consistently positive reviews. Reviewers found the idea intuitive and interesting, and the method effective (especially compared to CIMLE). The rebuttal further addressed the concerns and included comparisons with the same backbone architecture and additional baselines and evaluations. The AC agreed with the reviewers’ consensus and recommended accepting the paper.
train
[ "AXC-myKlLiT", "rmRMyYCXHw", "YZuYGmxH4B-", "15jk5Et9mY", "WpYb705hDdW", "CYs5k9tfwTs", "NlhJehHcrVZ", "P-hRK30QSiV", "jeny33SzRsa", "v9PQrIvts9KW", "9zDuvpVFTwr", "0gUioh31eDT", "CFbRucya0p", "baCBlUW6kv", "3YnjSVpya-", "YRp69Ep-Qn-", "ND3Oh4C5ssH" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " **For Q1**: We want to remind the reviewer that while the generation procedure is hierarchical, the number of such levels of hierarchy is a lot smaller than the number of parallel generated samples, so it does not compromise generation speed. On the contrary, the hierarchical generation procedure greatly improves the generation efficiency, as shown in Figure 2.\n\n**For Q2**: As discussed on lines 85-89 in Section 2.2, achieving high generated image quality requires generating many samples. However, sample generation is expensive and this limits the performance of cIMLE in practice. The proposed method overcomes this limitation by generating samples more efficiently. The results show that this is critical to improving generated image quality — our method substantially improves in FID by 33.6% on average compared to “cIMLE + our architecture” (which only differs in the sampling procedure from our method).\n\nHope this helps clarify your questions; please let us know if you have any further comments or questions.", " For Q1: the hierarchical conditioned generation process does deprecate the parallelism of inference within the model. \n\nFor Q2: the author still does not discuss too much about why high efficiency/low cost is vital for this problem (as described in weakness).\n\nFor Q3: the author compared with multiple methods with the same architecture to show the effectiveness of proposed strategy, which solves my question. \n\nThe author has solved some of my questions, while missing to answer Q2 well. ", " We have obtained the final results for SRFlow -- please refer to our original response for details. We would appreciate any feedback and are happy to clarify any further concerns if there are any.", " Thank you for the responses. I think these address my concern in raised in the review.", " We hope our response addressed your concerns. We would greatly appreciate any feedback and if you have any remaining concerns, we would be more than happy to clarify them before the discussion deadline on Aug 9th 8pm UTC / 4pm ET. Regarding the results on SRFlow, we are expecting to have the final results within the next 24hrs, thank you for your patience.", " We hope our response addressed your concerns. We would greatly appreciate any feedback and if you have any remaining concerns, we would be more than happy to clarify them before the discussion deadline on Aug 9th 8pm UTC / 4pm ET.", " We hope our response addressed your concerns. We would greatly appreciate any feedback and if you have any remaining concerns, we would be more than happy to clarify them before the discussion deadline on Aug 9th 8pm UTC / 4pm ET.", " We hope our response addressed your concerns. We would greatly appreciate any feedback and if you have any remaining concerns, we would be more than happy to clarify them before the discussion deadline on Aug 9th 8pm UTC / 4pm ET.", " ## Q1: The method is presented in a much more general way, although only one way is considered\n\nA1: This work considers the latent code search problem in the context of image generation, but the underlying principles (latent code division, partial evaluation, iterative combination) are more broadly applicable beyond image generation. 
We want to keep the description of the methodology general in order to help readers who might want to apply the method to other domains.\n\n## Q2: Improved Precision and Recall Metric\n\nA2: As suggested, we computed the Improved Precision and Recall metric [a] and show the results compared to baselines in the table below.\n\n| | Night-to-day | Night-to-day | SR | SR | Col | Col | DC | DC |\n|------------|---------------------|------------------|---------------------|------------------|---------------------|------------------|---------------------|------------------|\n| | Precision$\uparrow$ | Recall$\uparrow$ | Precision$\uparrow$ | Recall$\uparrow$ | Precision$\uparrow$ | Recall$\uparrow$ | Precision$\uparrow$ | Recall$\uparrow$ |\n| BicycleGAN | $0.522$ | $0.041$ | $0.615$ | $0.159$ | $0.744$ | $0.518$ | $\underline{0.869}$ | $\underline{0.486}$ |\n| MSGAN | $0.479$ | $0.003$ | $0.545$ | $0.156$ | $0.694$ | $0.578$ | $0.766$ | $0.346$ |\n| DivCo | $0.611$ | $0.007$ | $0.561$ | $0.153$ | $0.759$ | $0.484$ | $0.845$ | $0.310$ |\n| MoNCE | $\textbf{0.818}$ | $0.008$ | $0.699$ | $0.120$ | $\textbf{0.787}$ | $\underline{0.624}$ | $0.830$ | $0.244$ |\n| cIMLE | $0.578$ | $\underline{0.054}$ | $\underline{0.827}$ | $\underline{0.278}$ | $0.638$ | $0.423$ | $0.853$ | $0.441$ |\n| CHIMLE | $\underline{0.785}$ | $\textbf{0.352}$ | $\textbf{0.934}$ | $\textbf{0.697}$ | $\underline{0.761}$ | $\textbf{0.757}$ | $\textbf{0.941}$ | $\textbf{0.717}$ |\n\nAs shown in the table above, our proposed method outperforms all baselines by a significant margin across all tasks in recall, and in precision in most cases. In the few remaining cases, only one baseline outperforms our method, and it does so at the expense of a lower recall.\n\n## Q3: What is the image size?\n\nA3: For Super-Resolution, our input is $32\times32$ and our output size is $512\times512$. For all other tasks, the input and target resolutions are $256\times256$ and we downsample the input to the corresponding operating resolution at each level of the hierarchy. We will include this in the camera-ready. Regarding scaling up to an image size of 1K, one can simply add an additional level in the hierarchy to reach that resolution. \n\n## Q4: Comparison to SRFlow\n\nA4: As suggested, we started the training of SRFlow on super-resolution. ~~Although the training has not yet converged, we include the results we got so far and we will update them as the training progresses. The results table below shows the FID and faithfulness-weighted variance (FwV) achieved by SRFlow so far, alongside our results.~~ \n\n[UPDATE]: The training has finished and we show the FID and faithfulness-weighted variance (FwV) results in the table below.\n\n| | FID$\downarrow$ | FwV ($\sigma=0.2$)$\uparrow$ |\n|--------|------------------|------------------------------|\n| SRFlow | $91.55$ | $0.89$ |\n| CHIMLE | $\textbf{16.01}$ | $\textbf{5.61}$ |\n\n~~From the results shown in the table, so far our method is outperforming SRFlow.~~ \n\nAs shown in the table, our model outperforms SRFlow, which validates the effectiveness of the proposed approach.\n\n[a] Kynkäänniemi et al. Improved Precision and Recall Metric for Assessing Generative Models. 
NeurIPS 2019.", " ## Q1: Does the hierarchical conditional generation process deprecate the parallelism of this method?\n\nA1: The proposed method is still parallelized over $m=100$ samples for each conditioning input —while it is not parallelized over different levels of the hierarchy, there are only $L=4$ of such levels which is much smaller than $m$.\n\n## Q2: How does sampling efficiency connect to the proposed strategy?\n\nA2: The proposed divide-and-conquer strategy reduces the search space for the latent code to a more promising region. Because the region is smaller, there are more samples generated within a given area within the region than outside of it. This makes it more likely to find a sample that is close to the observed image, which leads to better sampling efficiency.\n\n## Q3: Whether the improved architecture or the method contributes more to the SOTA performance?\n\nA3: We performed the suggested ablation study and trained cIMLE using the same architecture our method uses on two tasks (Super-resolution and Colourization) to disentangle the effect of the sampling strategy and network architecture. We find that our method still outperforms cIMLE by 33.6% on average with the same network architecture, which validates the effectiveness of our method.\n\nIn addition, we retrained various GAN-based baselines (BicycleGAN, MSGAN and MoNCE) with our architecture to further validate our method’s effectiveness. We observed that the GAN-based baselines failed to converge when trained from scratch with our architecture, so we pretrained their generator using our method which gave them an advantage. We show the FID results in the table below.\n\n| | Super-Resolution (SR) | Colourization (Col) |\n|-------------------------------|-----------------------|---------------------|\n| BicycleGAN + our architecture | $53.30$ | $66.32$ |\n| MSGAN + our architecture | $57.94$ | $81.86$ |\n| MoNCE + our architecture | $31.72$ | $\\underline{27.85}$ |\n| cIMLE + our architecture | $\\underline{21.13}$ | $42.67$ |\n| CHIMLE | $\\textbf{16.01}$ | $\\textbf{24.33}$ |\n\nAs shown above, our method consistently outperforms the baselines which demonstrates the effectiveness of our method.", " ## Q1: Improved Precision and Recall Metric\n\nA1: As suggested, we computed the Improved Precision and Recall metric [a] and show the results compared to baselines in the table below.\n\n| | Night-to-day | Night-to-day | SR | SR | Col | Col | DC | DC |\n|------------|---------------------|------------------|---------------------|------------------|---------------------|------------------|---------------------|------------------|\n| | Precision$\\uparrow$ | Recall$\\uparrow$ | Precision$\\uparrow$ | Recall$\\uparrow$ | Precision$\\uparrow$ | Recall$\\uparrow$ | Precision$\\uparrow$ | Recall$\\uparrow$ |\n| BicycleGAN | $0.522$ | $0.041$ | $0.615$ | $0.159$ | $0.744$ | $0.518$ | $\\underline{0.869}$ | $\\underline{0.486}$ |\n| MSGAN | $0.479$ | $0.003$ | $0.545$ | $0.156$ | $0.694$ | $0.578$ | $0.766$ | $0.346$ |\n| DivCo | $0.611$ | $0.007$ | $0.561$ | $0.153$ | $0.759$ | $0.484$ | $0.845$ | $0.310$ |\n| MoNCE | $\\textbf{0.818}$ | $0.008$ | $0.699$ | $0.120$ | $\\textbf{0.787}$ | $\\underline{0.624}$ | $0.830$ | $0.244$ |\n| cIMLE | $0.578$ | $\\underline{0.054}$ | $\\underline{0.827}$ | $\\underline{0.278}$ | $0.638$ | $0.423$ | $0.853$ | $0.441$ |\n| CHIMLE | $\\underline{0.785}$ | $\\textbf{0.352}$ | $\\textbf{0.934}$ | $\\textbf{0.697}$ | $\\underline{0.761}$ | $\\textbf{0.757}$ | $\\textbf{0.941}$ | 
$\textbf{0.717}$ |\n\nAs shown in the table above, our proposed method outperforms all baselines by a significant margin across all tasks in recall, and in precision in most cases. In the few remaining cases, only one baseline outperforms our method, and it does so at the expense of a lower recall.\n\n## Q2: GAN-based Method Trained with ADA\n\nA2: As suggested, we compared to a GAN trained with ADA in the limited-data setting. We chose the task with the fewest training images (980 images), image decompression, for the comparison under this setting. We chose the best-performing GAN-based baseline for that task, BicycleGAN, and trained it with ADA. The FID result is shown in the table below.\n\n| | FID$\downarrow$ |\n|------------------|------------------|\n| BicycleGAN | $87.35$ |\n| BicycleGAN + ADA | $\underline{86.55}$ |\n| CHIMLE | $\textbf{73.69}$ |\n\nAs shown in the table above, our proposed method outperforms BicycleGAN trained with ADA. This result also demonstrates that our proposed method performs well in the limited training data case.\n\n## Reference\n\n[a] Kynkäänniemi et al. Improved Precision and Recall Metric for Assessing Generative Models. NeurIPS 2019.", " ## Q1: Baseline Comparison with the Same Backbone Architecture\n\nA1: As suggested, we retrained BicycleGAN and MSGAN on super-resolution (SR) and colourization (Col) using the same generator architecture used by our method. Furthermore, we also retrained two other baselines, cIMLE and MoNCE, with the same architecture. We observed that the GAN-based baselines failed to converge when trained from scratch with our architecture, so we pretrained their generator using our method (which gave them an advantage over the vanilla randomly initialized versions). We show the FID results in the table below.\n\n| | Super-Resolution (SR) | Colourization (Col) |\n|------------|-----------------------|---------------------|\n| BicycleGAN + our architecture | $53.30$ | $66.32$ |\n| MSGAN + our architecture | $57.94$ | $81.86$ |\n| MoNCE + our architecture | $31.72$ | $\underline{27.85}$ |\n| cIMLE + our architecture | $\underline{21.13}$ | $42.67$ |\n| CHIMLE | $\textbf{16.01}$ | $\textbf{24.33}$ |\n\nAs shown above, our method still consistently outperforms the baselines with the same network architecture, thereby validating the effectiveness of our method.\n\n## Q2: What is the backbone architecture?\n\nA2: Please refer to Section 3 of the paper and Section A of the supplementary materials for details.\n\n## Q3: PSNR for Super-Resolution (SR)\n\nA3: As suggested, we computed the PSNR metric for our method and the baseline methods on SR and show the results in the table below.\n\n| | PSNR$\uparrow$ |\n|-------------------------------|------------------|\n| BicycleGAN | $15.99$ |\n| BicycleGAN + our architecture | $17.83$ |\n| MSGAN | $16.18$ |\n| MSGAN + our architecture | $15.67$ |\n| MoNCE | $18.47$ |\n| MoNCE + our architecture | $19.42$ |\n| RFB-ESRGAN | $\underline{20.13}$ |\n| cIMLE | $20.11$ |\n| cIMLE + our architecture | $19.97$ |\n| CHIMLE | $\textbf{20.30}$ |\n\nAs shown above, our method outperforms the baselines on SR in terms of PSNR, which validates our method’s performance.\n\n## Q4: Citation Format\n\nA4: Good catch, we will fix it in the camera-ready.", " We thank all reviewers for your time, constructive comments and unanimous appreciation of the proposed method and the good results. 
In particular, the reviewers remarked “the proposed divide-and-conquer strategy […] is very interesting” (R VssD), “the paper is well-written, and the idea […] is sound” (R WPok), “this paper proposes an interesting idea […] sounds quite reasonable” (R LstA), “this work demonstrates that CHIMLE significantly outperforms the prior best IMLE-based method […] achieves superior image fidelity and mode coverage” (R UYaB) and “the results show great improvement on multiple tasks” (R VssD).\n\nWe found the comments and questions very helpful, and they add value to our work. Here, we provide a one-sentence summary of our response to questions raised by the reviewers; please refer to our individual responses to each review for the details. \n\n### Q1: What is the effect of using the proposed architecture for the baselines? (R UYaB, R VssD)\n\nA1: We have tried this, and found that our method still consistently outperformed the baselines. \n\n\n### Q2: Improved Precision and Recall Metric [a]? (R WPok, R LstA)\n\nA2: Our method achieves the best or nearly the best performance across tasks.\n\n### Q3: PSNR for Super-Resolution (SR)? (R UYaB)\n\nA3: As suggested, we computed the PSNR metric on super-resolution and found our method outperformed the baselines. \n\n### Q4: Comparison to GAN trained with ADA on limited training data (R WPok)\n\nA4: We tried this, and found that our method outperformed the GAN-based baseline under this setting.\n\n### Q5: Does the hierarchical conditioned generation process deprecate parallelism of this method? (R VssD)\n\nA5: No, the method parallelizes over the generation of different samples.\n\n### Q6: The method is presented in a general way, although only one way is considered (R LstA)\n\nA6: We presented it in a general way to make it easier for readers who might want to apply the underlying ideas to contexts beyond image generation.\n\n### Q7: Not sure whether this approach may be scaled up to an image size of 1K? (R LstA)\n\nA7: Yes, it can, by adding one more level to the hierarchy.\n\n### Reference \n[a] Kynkäänniemi et al. Improved Precision and Recall Metric for Assessing Generative Models. NeurIPS 2019.", " This work proposes a new method, Conditional Hierarchical IMLE, to get around the requirement of a large number of samples for generating high-fidelity images.\nThe proposed CHIMLE is shown to improve generated image fidelity, with a clear reduction in Fréchet Inception Distance compared to the prior best IMLE-based method. \nStrengths:\nTo generate samples in a way such that the best sample is about as similar to the observed image as if a large number of samples had been generated, without actually generating that many samples,\nthe author proposes several methods, including partitioning the latent code, partial evaluation of latent code components, and iterative construction of the latent code. These ideas give rise to the method of Conditional Hierarchical IMLE, or CHIMLE. \nThis work demonstrates that CHIMLE significantly outperforms the prior best IMLE-based method in terms of both fidelity and diversity on a variety of tasks. Besides, they also show that CHIMLE achieves superior image fidelity and mode coverage compared to leading general-purpose multimodal and task-specific methods.\n\n\n\nWeaknesses:\n1. In the experiments, what backbones are adopted in CHIMLE for different tasks, e.g., colorization, super resolution? Is the comparison fair, as different methods use different generator structures, e.g., BicycleGAN, MSGAN?\n2. 
Some proper metrics should be used for different tasks, e.g., PSNR for super-resolution.\n3. Please check the citation format; I find some citations are in the wrong format, e.g.:\nYingchen Yu Rongliang Wu Shijian Lu Fangneng Zhan, Jiahui Zhang. Modulated contrast for versatile image synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022. See above comments. The limitations and social impact have been discussed in the manuscript.", " The paper focuses on \"one-shot\" (so the SOTA autoregressive transformer and diffusion models are not considered for the comparison in this paper) multi-modal conditional generative tasks. The proposed method is based on the IMLE algorithm. The authors reduce the computational cost (in the training stage) of the previous conditional IMLE framework by a divide-and-conquer strategy. Experiments are conducted on night-to-day, monocular image super-resolution, image colorization, image decompression, and layout-to-image tasks to validate the effectiveness of the proposed method over the previous conditional IMLE scheme. *Strengths*\n1. The paper is well-written, and the idea of divide and conquer for improving cIMLE is sound.\n2. The authors provide a clear introduction to the previous cIMLE method, which is very helpful for readers unfamiliar with this field.\n3. According to extensive experimental results, the proposed method synthesizes more realistic and diverse images compared to several baseline approaches.\n*Weakness*\n1. Several metrics are proposed to evaluate the realism and diversity of the generated images, e.g., precision and recall. Can the authors provide more intuition as to why these metrics are not used in measuring the diversity in conditional generative modeling?\n- Kynkäänniemi et al., \"Improved Precision and Recall Metric for Assessing Generative Models.\"\n2. Recent data augmentations can significantly improve the performance of GAN-based approaches, e.g., ADA, especially in the limited training data case. Can the authors comment on 1) how the proposed IMLE-based method performs against the GAN-based methods trained with ADA and 2) how the proposed algorithm performs in the limited training data case?\n- Karras et al., \"Training Generative Adversarial Networks with Limited Data.\" Please see Weakness. The authors address the limitations and potential social impact.", " This paper aims to generate diverse and high-fidelity images in conditional image synthesis with a relatively small number of samples. Compared with the previous method, conditional IMLE, which counters mode collapse, this paper improves the tradeoff between sampling efficiency and quality by introducing a divide-and-conquer selection mechanism by dimensions in the latent space. The proposed strategy brings better fidelity as well as better diversity of the generated results. Strength:\n+ The proposed divide-and-conquer strategy for latent code selection is very interesting. The division of the latent code by dimensions is simple yet effective, and the partial evaluation of latent code components can well address the sub-problems with a modified criterion. When combining the latent code components, this method also considers the correlation between them and proposes to solve the sub-problems from low to high resolutions.\n+ The intuition behind this proposed method is clearly explained. 
\n+ The experimental results show great improvement on multiple tasks (16x super-resolution, night-to-day image translation, image colorization, extreme compression artifacts reduction)\n\nWeakness:\n- The divide-and-conquer algorithm aims to solve conceptually difficult problems with improved efficiency and parallelism. However, since this method requires constructing the latent code at lower resolutions first, parallelism cannot be achieved. Besides, although the adoption of this time-tested algorithm into latent space sampling is very interesting, the author does not discuss in depth why high efficiency/low cost is vital for this problem. While the author does mention that the sampling efficiency of CHIMLE is better than cIMLE's, it is not very clear how it connects to the proposed strategy.\n- This paper changes both the network structure and the sampling strategy, while the experimental results do not include ablation studies that disentangle the two. In the IMLE paper, the authors at least compare SRIM and BicycleGAN with the same generator architecture to show the effectiveness of the proposed method, which could serve as a good example for this paper. 1. As expressed in the weakness part, I have the following concern: does the hierarchical conditioned generation process deprecate the parallelism of this method? \n2. And, how does the sampling efficiency connect to the proposed strategy that uses divide-and-conquer?\n3. It is unclear whether the improved network architecture or the CHIMLE method contributes more to the SOTA performance. Can this be verified in new ablation studies that adopt the same generator architecture while using cIMLE, or adopt cIMLE's RRDB structure while using CHIMLE? The author has addressed the limitations and potential negative societal impact at the end of this paper. ", " This paper presents a novel way of applying a non-adversarial image generation technique called IMLE to image-to-image problems.\nNamely, the authors propose to split a latent code into several parts corresponding to different image scales. These sub-codes are selected coarse-to-fine: First, the code corresponding to the lowest resolution is found among the randomly sampled candidates. Afterward, this latent sub-code gets frozen, and subsequent parts of the code are chosen. As demonstrated in the paper, this hierarchical approach achieves the same quality as the baseline, with a lower number of generated samples.\n\nAccording to the reported results of the evaluation, the presented approach outperforms several general-purpose image-to-image models, as well as some task-specific methods. 1. Strengths.\n\n This paper proposes an interesting idea of incorporating multi-scale techniques into the IMLE-based generative model.\n This sounds quite reasonable and it's a pleasure to see that this trick helps to reduce the number of required candidate samples, which is a common downside of IMLE.\n Moreover, the obtained results show that in the considered setting the presented solution outperforms even the task-specific baselines, which proves that CHIMLE may be useful for applications.\n Also, the authors have submitted the code, which is a good service for the community. \n\n1. Weaknesses.\n\n * The explanation is sometimes unnecessarily wordy: while the method finally just employs the multi-scale search of latent codes, it is presented in a much more general way, although only one way of latent division is considered. \n * The evaluation uses F-measures computed from precision and recall. 
As shown in [1], these metrics suffer from known issues, and it is more common to compute the so-called Improved Precision and Recall [1].\n * Unfortunately, the resolution of generated images is not specified explicitly in the paper. Therefore, I am not sure if this approach may be scaled up to the image size of 1K. I ask the authors to provide the image size the model operates at. \n * Why were models with diverse outputs not considered as task-specific baselines? E.g., for the case of super-resolution, SRFlow [2] may be a suitable one. This paper presents a novel way of applying a non-adversarial image generation technique called IMLE to image-to-image problems.\n\n\n1. References\n\n [1] Kynkäänniemi et al. Improved Precision and Recall Metric for Assessing Generative Models. NeurIPS 2019.\n\n [2] Lugmayr et al. SRFlow: Learning the Super-Resolution Space with Normalizing Flow. ECCV 2020.\n\n1. Post-rebuttal comments.\n\n I thank the authors for their feedback. It has addressed my main concerns. Therefore, I am inclined to increase the score. I ask the authors to address the concerns listed above. The authors have adequately addressed the limitations and potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 5, 5, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 3, 4 ]
[ "rmRMyYCXHw", "v9PQrIvts9KW", "WpYb705hDdW", "NlhJehHcrVZ", "jeny33SzRsa", "v9PQrIvts9KW", "9zDuvpVFTwr", "0gUioh31eDT", "ND3Oh4C5ssH", "YRp69Ep-Qn-", "3YnjSVpya-", "baCBlUW6kv", "nips_2022_5pvB6IH_9UZ", "nips_2022_5pvB6IH_9UZ", "nips_2022_5pvB6IH_9UZ", "nips_2022_5pvB6IH_9UZ", "nips_2022_5pvB6IH_9UZ" ]
nips_2022_xK6wRfL2mv7
Sharpness-Aware Training for Free
Modern deep neural networks (DNNs) have achieved state-of-the-art performances but are typically over-parameterized. The over-parameterization may result in undesirably large generalization error in the absence of other customized training strategies. Recently, a line of research under the name of Sharpness-Aware Minimization (SAM) has shown that minimizing a sharpness measure, which reflects the geometry of the loss landscape, can significantly reduce the generalization error. However, SAM-like methods incur a two-fold computational overhead of the given base optimizer (e.g. SGD) for approximating the sharpness measure. In this paper, we propose Sharpness-Aware Training for Free, or SAF, which mitigates the sharp landscape at almost zero additional computational cost over the base optimizer. Intuitively, SAF achieves this by avoiding sudden drops in the loss in the sharp local minima throughout the trajectory of the updates of the weights. Specifically, we suggest a novel trajectory loss, based on the KL-divergence between the outputs of DNNs with the current weights and past weights, as a replacement of the SAM's sharpness measure. This loss captures the rate of change of the training loss along the model's update trajectory. By minimizing it, SAF ensures the convergence to a flat minimum with improved generalization capabilities. Extensive empirical results show that SAF minimizes the sharpness in the same way that SAM does, yielding better results on the ImageNet dataset with essentially the same computational cost as the base optimizer.
Accept
This paper proposes a novel optimization method, called SAF, for reaching flat minima. The main claim is that the proposed method does not suffer from the computational overhead of SAM-like methods, which is typically 2x that of SGD. The proposed method is based on a novel loss that minimizes the KL-divergence between the outputs of the network with previous weights and with current weights. This allows avoiding the extra computational overhead of SAM. The authors show that their method can achieve better empirical results in a compute-constrained regime. Reviewers are in agreement about the novelty of the proposed method and that the paper is well-written and easy to follow. The empirical results also show a clear advantage over other methods in a compute-constrained regime. The main concern about accepting the paper is due to some mismatch between the reported numbers in the paper and those of the original SAM paper, as well as a lack of clarity on some experimental details. I am leaning towards acceptance given the advantages mentioned above, but I strongly recommend the authors (and want to see this implemented for the camera-ready) to at the very least make the following changes (as well as the ones proposed by reviewers) to adhere to the clarity and reproducibility standards of publications in computer science and increase the impact of their paper: 1- In Table 1 (and perhaps Figure 1?), add ALL reported results for SAM, including but not limited to a) the results reported in the original SAM paper, b) the results reported by other papers running SAM themselves, and c) the results of running SAM by the authors. 2- In Table 1 (and perhaps everywhere), always make it extra easy for the reader to know if a number is taken from another paper or it is a result you reproduced yourself. 3- Add all experimental details (including the augmentation techniques used and which hyper-parameters are tuned) for all experiments. When there are mismatches that make the comparison more difficult, this becomes even more important and allows the reader to make up their mind about where the improvement might be coming from. 4- If you have concerns about the original SAM paper's numbers not being reproduced by other papers, you can add them as a footnote or discussion, but it is still important to report them.
train
[ "2zAxlUm3Kti", "L3jgQMXH15g", "vIKGV4z6l5x", "mEX6EDV6Cx", "TkvVkaa4Yqw", "ty2xiB6ifDX", "HJMF1c05cW", "m87V0qVB34h", "ZSyF0kYHZL", "YB8WeDS1UhL", "SQoPLdPvB86", "TJnObd90-Lt", "s8jyIpPVK5j", "t0AEd_w-uRz", "RJaATRurbu0", "5xZbbcF1Ams", "3qx2-s-ERgq", "ahZOHFzJn98", "EBEODdjDYMU", "tXCVWY5tWaH" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response and the additional experiments.", " Continued of A11 \n\nFor the gap between the reported results of [2,37] and [21], the reported results of ViT-S/32-SAM are precisely listed as\n\n|Vit-S/32-SAM Reported in [2] | Vit-S/32-SAM Reported in [37] |Vit-S/32-SAM Reported in [21] |Vit-S/32-SAM Reported in our submission| \n| :-----------: | :-----------: |:-----------:|:-----------:|\n|70.5|70.5|68.9|68.9|\n\nAs we stated in our response, \n\n>We carefully reproduced the ViT-SAM results using only basic data augmentation (inception style) and only obtained 68.9% accuracy for ViT S/32, which **exactly** matches the reported results in LookSAM paper [21].\n\nThe most plausible reason why the 70.5 in [2,37] are higher than 68.9 in [21] and our submission is the jax framework and TPU difference. We consulted the authors of [37] via email during the rebuttal period to ask about the implementation of ViT-S/32. The author of [37] replied us as follow, \n\n>Another tricky part is my experiment used AdamW whose weight decay is multiplied by learning rate, but seems the latest jax implementation fixs weight decay (not multiply by lr), so you would need to take care with this.\n Another tricky part is, SAM/GSAM is per-worker (per-GPU) perturbation, which means the perturbation based on gradient is different across GPUs. So in PyTorch it means the gradient is not sychronized until you finished the second pass of backward; in jax grad is not synchronized by default unless you call \"jax.pmean\"; but in PyTorch, I think it's synchronized by default unless you do some specific modifications.\n Because grad is not synchronized, the number of workers and n_samples_per_worker is important, so you might need to check both batchsize and n_TPU_cores (8x8 in my eperiment) in the appendix. For example, with 8x8 TPUs, fix batchsize=4096, each worker has 4096//64=64 images; with 4x4 TPUs, this number is 256, and the result could be worse. \n\n| |Framework (AdamW trick)|TPU or GPU (m-sharpness trick)|\n| ----------- | :-----------: |:-----------:|\n|[37]|Jax| 8x8 TPUs |\n|Ours|Pytorch|8 GPUs |\n\nThe two tricks explain the difference of Vit-S/32-SAM's performance between [2,37] with [21] and ours. \n\nLast, we have typos in the first table of our response A1. We sincerely and unreservedly apologize for this. The ResNet-50-SAM result should be 76.9 instead of 76.0. In our submission, we report the 76.9 for ResNet-50-SAM in Table 1 of Page 7. Furthermore, the entries 77.5 (SAM reported) and 70.5 [2] should not under the column of strong data augmentation. However, the results of the table in A2 are under the same setting of [2,6,37]; this ensure fair comparison throughout. \n", " **Q11** On reproducing SAM. \n\n**A11** First, we would like to re-emphasize (as we have arleady stated in Line 230 of our submission) that we religiously followed the experimental settings of SAM’s follow-up works [2,21,6,37]. The comparison are thus fair and reliable. Our proposed SAF and MESA have been verified to **improve the efficiency** of the SAM-like methods with comparable or even better generalization performance. Improving the efficiency of SAM to the base optimizer without sacrificing generalization performance is the main contribution of our submission. 
\n\nIn more detail, the main point of our response is also to clarify the fairness of our experimental setting, which is \n\n>As we stated on Line 230, we followed the experimental settings of SAM’s follow-up works [2,21,6,37] for fair comparison with them.\n\nThe reported accuracies of ResNet 50 trained by SAM on ImageNet, in the SAM paper [8] and in SAM's follow-up works [2,6,37], are listed as \n\n|ResNet 50-SAM in [8]| ResNet 50-SAM in [2] | ResNet 50-SAM in [6] |ResNet 50-SAM in [37] |ResNet 50-SAM in our submission| \n| :-----------: | :-----------: |:-----------:|:-----------:|:-----------:|\n|77.5|76.7|76.7|76.9|76.9|\n\nOur reported result matches the reported results of ResNet 50-SAM in [2,6,37] up to 0.2%. This unavoidable slight discrepancy is due to the randomness of initialization and the selection of batches at each gradient descent step. \n\nThe reason SAM [8] reports a higher accuracy (77.5) has been stated in our response. \n\n>More specifically, SAM uses 100 epochs (vs 90 epochs in our setting) and SAM uses label smoothing of 0.1 (vs no label smoothing in our setting). Thus SAM reports 77.1% accuracy with ResNet 50 SGD while [2,21,6,37] and our paper report 76.0% accuracy. The mentioned paper [2] (cited as [2] in our submission) is the first work to use such different settings than SAM.\n\nAs we also claimed in Appendix A.4, we strictly follow the setting of [2,21,6,37]: basic data augmentation (resize and crop images to 224-pixel resolution and normalize them), a cosine learning rate schedule, the SGD optimizer with momentum 0.9, weight decay 0.0001, **no label smoothing**, and **only 90 epochs for ResNets** (300 epochs for ViT).\n\n\nSecond, our result of ViT-S/32-SAM matches the reported results in the LookSAM paper [21] (exactly 68.9 vs 68.9!). Under the same setting, our proposed SAF and MESA have been **verified to be more efficient** (training speed: SAF 5,108 img/s vs LookSAM 4,273 img/s) and to **achieve better performance** (SAF 69.5 vs LookSAM 68.8) than LookSAM.\n\nTo be continued in response to the additional questions (4/4)\n\n\n", " **Q9** Why is $\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})$ a constant? Is there any derivation of this argument in the original paper or their answer?\n\n**A9** We stated in our response \"$\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})$ is a constant **with respect to the variable $\theta_t$**\". This is clearly true as $\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})$ does not involve the term $\theta_t$. \n\nWe **did not claim** \"$\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})$ is a constant\". It clearly depends on some variables such as $\gamma_i$ and $\theta_i$. \n\n---\n\n**Q10** On A4: The answer does not explain why SAF/MESA found flatter local minima than SAM.\n\n**A10** We **did not claim** either in our submission or our response that \"SAF/MESA found flatter local minima than SAM\". What we claimed in our submission in line 281 is \n\n>The visualized loss landscape clearly demonstrate that SAF can converge to a region as flat as SAM does.\n\nWhat we claimed in our response A5 is \n\n>and showed that SAF’s loss landscape is as flat as MESA’s.\n\nYour question and claim that \"why SAF/MESA found flatter local minima than SAM\" is thus not truly reflective of our claim. We didn't make this claim. Pictorially, we show that SAF/MESA's loss landscapes are as flat as SAM's. 
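For reference, 1D landscape comparisons of this kind are typically produced by evaluating the loss along a random, norm-matched direction around the converged weights (in the spirit of the filter normalization of Li et al., 2018). The sketch below is our own simplified illustration of this standard procedure, not the exact script behind Figure 1; `model`, `loss_fn` and `batch` are placeholders, and a per-parameter rescaling stands in for full filter normalization:

```python
import copy
import torch

@torch.no_grad()
def loss_curve_1d(model, loss_fn, batch, radius=1.0, steps=21):
    base = copy.deepcopy(model.state_dict())
    # One random direction, rescaled so each parameter's perturbation
    # has the same norm as the parameter itself.
    direction = {}
    for name, w in base.items():
        if torch.is_floating_point(w):
            d = torch.randn_like(w)
            direction[name] = d * (w.norm() / (d.norm() + 1e-12))
    inputs, targets = batch
    curve = []
    for alpha in torch.linspace(-radius, radius, steps):
        perturbed = {n: w + alpha * direction[n] if n in direction else w
                     for n, w in base.items()}
        model.load_state_dict(perturbed)
        curve.append(loss_fn(model(inputs), targets).item())
    model.load_state_dict(base)  # restore the original weights
    return curve  # a flat minimum yields a slowly rising curve around alpha = 0
```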
We have updated Figure 1 in Appendix A3 to show that SAF/MESA find local minima which are **as flat as SAM's**. \n\n\n\nWe re-emphasize here that the motivation and main contribution of this submission is to address the efficiency issue of SAM. We do not claim to find flatter local minima than SAM or its variants. \n\n---", " **Q6** This equation is not equivalent to the original equation in L164-165 $\underset{\theta_t}{\arg\min}R_{\mathbb{B}\_t}(f_{\theta_t}) = \underset{\theta_i \sim \mathrm{Unif}(\Theta)}{\mathop{\mathbb{E}}}[ \gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})]$.\n\n**A6** Our response first stated that \n>$\underset{\theta_t}{\arg\min}R_{\mathbb{B}\_t}(f_{\theta_t})=\underset{\theta_t}{\arg\min}[ \gamma_t R_{\mathbb{B}\_t}(f_{\theta_t})R_{\mathbb{B}\_t}(f_{\theta_t})+ \gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})]$.\n\nThis holds as the added term does not involve $\theta_t$ (it only involves $\theta_i$ and $i\ne t$).\n\nThen we stated in our response\n>\"Note that for $i \ne t$, $\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})$ is a constant with respect to the variable $\theta_t$ as it does not involve $\theta_t$. Therefore, we enumerate all the terms $\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})$ that satisfy $i \neq t$, and thus we obtain the LHS of Equation (7) \" \n\nSpecifically, $\underset{\theta_t}{\arg\min}R_{\mathbb{B}\_t}(f_{\theta_t})=\underset{\theta_t}{\arg\min}[ \gamma_t R_{\mathbb{B}\_t}(f_{\theta_t})R_{\mathbb{B}\_t}(f_{\theta_t})+ \sum_{i<t} \gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})] = \underset{\theta_t}{\arg\min} \sum_{i=1}^t \gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})=\underset{\theta_t}{\arg\min} \underset{\theta_i \sim \mathrm{Unif}(\Theta)}{\mathop{\mathbb{E}}}[ \gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})]$.\n\nThis gives the LHS of Equation (7), $\underset{\theta_i \sim \mathrm{Unif}(\Theta)}{\mathop{\mathbb{E}}}[ \gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})]$. That is how we derive $\underset{\theta_t}{\arg\min}[ \gamma_t R_{\mathbb{B}\_t}(f_{\theta_t})R_{\mathbb{B}\_t}(f_{\theta_t})+\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})]=\underset{\theta_t}{\arg\min} \underset{\theta_i \sim \mathrm{Unif}(\Theta)}{\mathop{\mathbb{E}}}[ \gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})]$.\n\nTherefore, it is equivalent to the original equation, which is \n> $\underset{\theta_t}{\arg\min}R_{\mathbb{B}\_t}(f_{\theta_t}) = \underset{\theta_i \sim \mathrm{Unif}(\Theta)}{\mathop{\mathbb{E}}}[ \gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})]$.\n\n\n\n---\n**Q7** Also, the product $\gamma_t R_{\mathbb{B}\_t}(f_{\theta_t})R_{\mathbb{B}\_t}(f_{\theta_t})$ is not equivalent to the product term in the original paper $\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})$ since $\gamma_i \ne \gamma_t$ in general. \n\n**A7** Throughout the submission, we use $t$ to index the current model update step, and $i < t$ to index the previous steps. 
The natural number **$i$ represents an iteration index**, which indexes $\theta_i \sim \mathrm{Unif}(\Theta)$ in Equation (7) of our submission. When the iteration index $i$ reaches $i=t$ (i.e., when the previous-step index coincides with the current step $t$), the term $\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})$ is clearly equivalent to the term $\gamma_t R_{\mathbb{B}\_t}(f_{\theta_t})R_{\mathbb{B}\_t}(f_{\theta_t})$.\n\n---\n**Q8** Furthermore, $\underset{\theta_t}{\arg\min}R_{\mathbb{B}\_t}(f_{\theta_t})=\underset{\theta_t}{\arg\min}[ \gamma_t R_{\mathbb{B}\_t}(f_{\theta_t})R_{\mathbb{B}\_t}(f_{\theta_t})]$ in their answer has an error. \n\n**A8** The esteemed reviewer stated that\n> $\underset{\theta_t}{\arg\min}R_{\mathbb{B}\_t}(f_{\theta_t})=\underset{\theta_t}{\arg\min} \gamma_t R_{\mathbb{B}\_t}(f_{\theta_t})R_{\mathbb{B}\_t}(f_{\theta_t})=\underset{\theta_t}{\arg\min} \frac{\eta_t}{\rho^2} R_{\mathbb{B}\_t}(f_{\theta_t})^2$ which means $R_{\mathbb{B}\_t}(f_{\theta_t})=\frac{\eta_t}{\rho^2} R_{\mathbb{B}\_t}(f_{\theta_t})^2$\n\nHowever, the equation you claimed, \"$R_{\mathbb{B}\_t}(f_{\theta_t})=\frac{\eta_t}{\rho^2} R_{\mathbb{B}\_t}(f_{\theta_t})^2$\", **does not hold**. Note that the equation we claimed, \"$\underset{\theta_t}{\arg\min}R_{\mathbb{B}\_t}(f_{\theta_t})=...=\underset{\theta_t}{\arg\min} \frac{\eta_t}{\rho^2} R_{\mathbb{B}\_t}(f_{\theta_t})^2$\", is about the solution of the **arg min**. The equation does not hold without the \"**arg min**\". One cannot \"cancel\" $\arg\min$ on both sides of any equation. Therefore, the equation you derived is not correct, and we do not have an error in this part, to the best of our knowledge. 
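As a simple toy illustration of this point (our own example, not taken from the submission), consider $f(x)=x^2$ and $g(x)=2x^2$ over the real line:

$\underset{x}{\arg\min}\, x^2 = \underset{x}{\arg\min}\, 2x^2 = 0, \qquad \text{yet } x^2 \neq 2x^2 \text{ for every } x \neq 0.$

The minimizers coincide even though the objectives differ everywhere except at $x=0$; equality of the minimizers therefore never licenses cancelling the $\arg\min$ to equate the objectives themselves.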
\n * Furthermore, $\\arg\\min_{\\theta_t}R_{\\mathbb{B_t} }(f_{\\theta_t})=\\arg\\min_{\\theta_t}\\gamma_{t} R_{\\mathbb{B_t}}(f_{\\theta_t}) R_{\\mathbb{B_t}}(f_{\\theta_t})$ in their answer have an error: Since author defined $\\gamma_t := \\frac{\\eta_t}{\\rho^2} \\cos(\\Phi_t)$, $\\gamma_t = \\frac{\\eta_t}{\\rho^2}$ as the angle between the gradient using the same mini-batch should 1 (Also, I would recommend authors to define $\\cos(\\Phi_t)$ in mathematical notation to prevent ambiguity for readers (e.g., State of the parameter($\\theta_i? \\theta_t?$) is not clear to compute the gradient)). By plugging this to the equation $\\arg\\min_{\\theta_t}R_{\\mathbb{B_t} }(f_{\\theta_t})=\\arg\\min_{\\theta_t}\\gamma_{t} R_{\\mathbb{B_t}}(f_{\\theta_t}) R_{\\mathbb{B_t}}(f_{\\theta_t})$, $\\arg\\min_{\\theta_t}R_{\\mathbb{B_t} }(f_{\\theta_t})=\\arg\\min_{\\theta_t}\\gamma_{t} R_{\\mathbb{B_t}}(f_{\\theta_t}) R_{\\mathbb{B_t}}(f_{\\theta_t}) = \\arg\\min_{\\theta_t}\\frac{\\eta_t}{\\rho^2}R_{\\mathbb{B_t}}(f_{\\theta_t})^2$ which means $R_{\\mathbb{B_t} }(f_{\\theta_t}) = \\frac{\\eta_t}{\\rho^2}R_{\\mathbb{B_t}}(f_{\\theta_t})^2 arrow R_{\\mathbb{B_t} }(f_{\\theta_t}) = \\frac{\\rho^2}{\\eta_t}$. However, **the last equation does not hold in general** since $\\rho$ is the radius of the neighborhood and $\\eta_t$ is the learning rate that is defined as arbitrary.\n * Also, equation $\\arg\\min_{\\theta_t}\\gamma_t R_{\\mathbb{B_t}}(f_{\\theta_t}) R_{\\mathbb{B_t}}(f_{\\theta_t})=\\arg\\min_{\\theta_t}[\\gamma_t R_{\\mathbb{B_t}}(f_{\\theta_t}) R_{\\mathbb{B_t}}(f_{\\theta_t})+\\gamma_i R_{\\mathbb{B_t}}(f_{\\theta_i}) R_{\\mathbb{B_i}}(f_{\\theta_i})]$ is not clear, neither. Why $\\gamma_i R_{\\mathbb{B_t}}(f_{\\theta_i}) R_{\\mathbb{B_i}}(f_{\\theta_i})$ is constant? Is there **any derivation of this argument** in the original paper or their answer?\n\n\n\n* **On A4**\n * The answer does not explain 'why SAF/MESA found flatter local minima **than SAM**: They only explain the motivation (as shown in Fig. 3) behind SAF and MESA. Although this motivation can explain how SAF and MESA can find the flat minima, this motivation does not explain SAF and MESA find flatter minima than SAM. Is there any specific case (even for Toy example) where SAM would fail but SAF/MESA would not? Concrete numerical examples (even in low-dimensional cases) would help the readers' understanding.", " * **On reproducing SAM**\n\n While authors argued that previous works [2, 6, 21, 37] exactly match the results of thier table for SAM, I find **this is not true** for [2, 21, 37]:\n\n * Xiangning et al. [2] reported the performance of ResNet50-SAM with Inception-style preprocessing as 76.7 %(See the first row in Table 2 in [2]), which is 0.7% better than 76.0% in their answer. \n * Yong et al. [21] **did not report the performance of ResNet50-SAM** with Inception-style preprocessing.\n * Juntang et al. [37] reported the performance of ResNet50-SAM with Inception-style preprocessing as 76.9 %(See the first row in Table 1 in [37]), which is 0.9% better than 76.0% in their answer. \n\n Also, the 76.0% in author response does not match to the reported value in their paper (76.9% in Table 1), neither. 
Also I would like to point out that original SAM paper use basic data augmentation (See page 5 of original SAM: '*In this setting, following prior work (He et al., 2015; Szegedy et al., 2015), we resize and crop images to 224-pixel resolution, normalize them, and use batch size 4096, initial learning rate 1.0, cosine learning rate schedule, SGD optimizer with momentum 0.9, label smoothing of 0.1, and weight decay 0.0001.*') to get 77.5% in their paper (Also, see the open sourced code of SAM: https://github.com/google-research/sam/blob/main/sam_jax/datasets/dataset_source_imagenet.py) \n\n Also, second Table in the A1 have some typos. Does the first row of this Table mean SAM? If so, then I think author incorrectly explain the result of ViT-S/32-SAM: **Table 2 in [2] (70.5%) reports the results of Inception-style preprocessing (with resolution 224) rather than a combination of strong data augmentations** (See the caption of Table 2 in [2]). Also this 70.5% matches to the results of VIT-S/32-SAM in [37] (See Table 1 in [37]).Therefore, I think the authors misunderstand their main references [2, 8, 21, 37] and refer results of these references incorrectly. For the similar reason, credibility of the first row in second Table in A2 should be replenished.", " Dear Reviewer 3mAW,\nThanks for the comments and for raising the score of our paper. \nBest regards,\nThe authors", " Thank you for the response. I am surprised that an inference forward pass is only 15% of the time for a standard forward and backward pass so would recommend double checking this. I also appreciate the experiments with additional data augmentation, though the models are still far from SoTA despite the SoTA recipes being public. Despite these concerns, I am changing my recommendation to accept but I am not confident in this recommendation. ", " Thank you for the clarifications.", " \nThank you for your feedback. Our responses to the weak points and questions are as below.\n\n---\n**Q1:** Why is the ResNet baseline for SAM much lower than that reported in the SAM paper? Are the methods in this paper additive with more modern approaches.\n\n\n**A1:** This is because of the difference in the setting. The recent follow-up works for SAM adopt a different setting from the original SAM work. As we stated on Line 230, we followed the experimental settings of SAM’s follow-up works [2,21,6,37] for fair comparison with them. Our reported results of SAM and SAM’s variants **exactly** match the results in [2,21,6,37] (and the experimental results in [2,21,6,37] are all the same).\n\nDetails on the settings: SAM uses 100 epochs (vs 90 epochs in our setting) and SAM uses label smoothing of 0.1 (vs no label smoothing in our setting). Thus SAM reports 77.1% accuracy on ResNet 50 SGD while [2,21,6,37] and our paper reports 76.0% accuracy. \n\nFor comparison to the original SAM work, we conducted experiments to use stronger data augmentation (Mix Up and Cut Mix only) for our proposed MESA. Our SAF achieves a consistent 1.2% accuracy improvement over the reported results of the original SAM work. The results are listed below.\n\n|ResNet-50|SAM Basic data augmentation | Augmentation with Mixup and Cut Mix | \n| ----------- | :-----------: |:-----------:|\n|SAM|76.0|77.5|\n|SAF|77.5|78.7|\n\n\nMESA can still be effective for the training with stronger data augmentation. The architecture of ResNet 50 model in https://arxiv.org/pdf/2110.00476.pdf has been improved and trained with all data augmentation strategies to achieve 80% accuracy. 
We will evaluate SAF and MESA with the improved ResNets in a revised version of the paper. As we stated in footnote 2 of page 7, we failed to reproduce GSAM on ImageNet by modifying the example on the CIFAR 10 dataset.\n\n---\n**Q2:** Why is MESA only 15% more expensive than standard training?\n\n **A2:** MESA only requires one more forward pass, which requires **no computation of any gradients**, to construct the trajectory loss (line 9 in Alg 1). The forward propagation is conducted under “with torch.no_grad()” in the PyTorch framework, which only requires 15% more computation than the standard forward and backward propagation. SAF does not have such an additional forward pass as SAF loads the saved outputs (line 5 in Alg 1). We will emphasize this point when we discuss MESA in the revised version of the paper. \n\n---\n\n**Q3:** Why is GSAM much better than all other methods for the ViT model? (minor)\n**A3:** This is possibly because of the data augmentation and tricks such as Random Erase [1]. The authors of GSAM did not release their code for their experiments on the ImageNet dataset (see also footnote 2 on page 7 of our submission).\n\n[1] Zhong, Zhun, et al. \"Random erasing data augmentation.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 07. 2020.\n\n---\n**Q4:** I did not understand very well how SAF and MESA relate to SAM. In particular, Equation 8 is not well motivated and I did not see where the KL came from.\n\n**A4:** Our methodology section details how we derive SAF and MESA from SAM to address its efficiency issue. We aim to derive a term equivalent to the sharpness term defined by SAM in Equation (5). The equivalent term should require much less computation than SAM's. Equation (7) shows that the loss difference term is precisely the equivalent term we are looking for. As we stated in Line 175, the vanilla loss will be canceled out if we minimize the equivalent term in Equation (7) directly. Thus, we propose the trajectory loss in Equation (8) for SAF, which is derived from Equation (7). The reason for using KL is that minimizing the CE loss is equivalent to minimizing the KL loss. The proof is delineated below. \n\nWe are given that $\\{\hat{y_i}\\}\_{i=1}^n$ are the ground truths and $\\{f_\theta(x_i)\\}_{i=1}^n$ are the outputs of the neural network $f_\theta$. 
The definition of the KL loss is as follows,\n\n$\mathrm{KL}(\hat{y},f_\theta(x)) =\sum\limits_{i=1}^n \big[\hat{y_i} \log\hat{y_i} - \hat{y_i} \log f_\theta(x_i)\big]$\n\nThe definition of the Cross-Entropy loss is as follows,\n\n$\mathrm{CE}(\hat{y},f_\theta(x))= \sum\limits_{i=1}^n -\hat{y_i} \log f_\theta(x_i) =\mathrm{KL}(\hat{y},f_\theta(x)) - \sum\limits_{i=1}^n \hat{y_i} \log\hat{y_i}$\n\nBecause the term $H(\hat{y})=\sum\limits_{i=1}^n \hat{y_i} \log\hat{y_i}$ is a constant with respect to the variable $\theta$, we have, \n\n$\underset{\theta}{\arg\min} \mathrm{KL}(\hat{y},f_\theta(x)) = \underset{\theta}{\arg\min} [\mathrm{CE}(\hat{y},f_\theta(x))+H(\hat{y})]= \underset{\theta}{\arg\min}\mathrm{CE}(\hat{y},f_\theta(x))$ \n\nTherefore, we have proven that minimizing the CE loss is equivalent to minimizing the KL loss.\n", " **Q5:** Although the results showed that SAF/MESA found flatter local minima than SAM, one cannot find the clear intuition behind these empirical results.\n\n**A5:** Intuitively, our proposed trajectory loss prevents the model from overfitting to some critical training samples by suppressing the change of the training loss on them, and thus helps the model in seeking a flatter minimum (Figure 3 in our submission). More specifically, for those samples whose predictions (or instance-wise losses) change fast, minimizing our trajectory loss would suppress their change in losses and the consequent overfitting by enforcing their predictions to be close to the ones from the past model (SAF) or EMA model (MESA). \n\nThis motivation is supported by our empirical observations. Some training samples’ predictions change significantly between adjacent epochs. For example, about 25% of training samples’ predictions are different from the previous epoch on the CIFAR 100 dataset. Among them, half (12.5% of all training samples) change from correct to wrong predictions. SAF and MESA can decrease the percentage of these samples from 25% to 21%.", " **Q3:** It would be recommended to provide the detailed derivations of L163-L167. \n\n**A3:** In our submission, we have already given a detailed derivation of Equation (11) (Lines 202 to 203) in the supplementary document. This derivation is analogous to the derivation of Equation (7) (Lines 166 to 167). We provide the detailed derivation of Lines 163 to 167 here. \n\nOur motivation has been clearly stated in the paper, which is to derive an equivalent term of the sharpness $R_{\mathbb{B}\_t}(f_{\theta_t})$ **without additional computations**. The derivations in Lines 164 and 165 are meant to find such an equivalent term. Since $R_{\mathbb{B}\_t}(f_{\theta_t})$, as defined in Equation (5), is always non-negative, we have, \n\n$\underset{\theta_t}{\arg\min} R_{\mathbb{B}\_t}(f_{\theta_t})= \underset{\theta_t}{\arg\min} \gamma_t R_{\mathbb{B}\_t}(f_{\theta_t})R_{\mathbb{B}\_t}(f_{\theta_t})$\n\n$=\underset{\theta_t}{\arg\min}[ \gamma_t R_{\mathbb{B}\_t}(f_{\theta_t})R_{\mathbb{B}\_t}(f_{\theta_t})+\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})]$\n\n\n\nNote that for $i \ne t$, $\gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})$ is a constant with respect to the variable $\theta_t$. 
Therefore, we traverse all the terms that satisfy $i \neq t$, and thus we obtain the LHS of Equation (7) in Lines 164 to 165.\n\nFor Equation (7) (Lines 166 to 167), we intend to demonstrate that $\mathop{\mathbb{E}}_{\theta_i \sim \mathrm{Unif}(\Theta)}[ \gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})]$ is the equivalent term we are looking for. We have,\n\n$\mathop{\mathbb{E}}\_{\theta_i \sim \mathrm{Unif}(\Theta)}[ \gamma_i R_{\mathbb{B}\_t}(f_{\theta_i})R_{\mathbb{B}\_i}(f_{\theta_i})]$\n\n$\approx \mathop{\mathbb{E}}\_{\theta_i \sim \mathrm{Unif}(\Theta)} \big[ \eta_i \cos(\Phi_i) \| \nabla_{\theta_i} L_{\mathbb{B}\_t}(f_{\theta_i})\| \, \| \nabla_{\theta_i} L_{\mathbb{B}\_i}(f_{\theta_i}) \| \big]\qquad$ Substitute Equation (5) \n\n$= \mathop{\mathbb{E}}\_{\theta_i \sim \mathrm{Unif}(\Theta)} \big[\eta_i \nabla_{\theta_i} L_{\mathbb{B}\_t}(f_{\theta_i})^\top \nabla_{\theta_i} L_{\mathbb{B}\_i}(f_{\theta_i}) \big]$\n\n$\approx \mathop{\mathbb{E}}\_{\theta_i \sim \mathrm{Unif}(\Theta)}\big[ L_{\mathbb{B}\_t}(f_{\theta_i}) - L_{\mathbb{B}\_t}(f_{\theta_{i+1}}) \big]\qquad$ Apply a first-order Taylor expansion \n\n$=\frac{1}{t-1} \big[\big(L_{\mathbb{B}\_t}(f_{\theta_1}) -L_{\mathbb{B}\_t}(f_{\theta_2})\big)+\big(L_{\mathbb{B}\_t}(f_{\theta_2}) -L_{\mathbb{B}\_t}(f_{\theta_3})\big)+\cdots+\big(L_{\mathbb{B}\_t}(f_{\theta_{t-1}})-L_{\mathbb{B}\_t}(f_{\theta_t})\big)\big]$\n\n$= \frac{1}{t-1} \big[ L_{\mathbb{B}\_t}(f_{\theta_1}) -L_{\mathbb{B}\_t}(f_{\theta_t})\big]\qquad$ The intermediate terms telescope \n\nTherefore, we have shown that minimizing the loss difference $L_{\mathbb{B}\_t}(f_{\theta_1}) -L_{\mathbb{B}\_t}(f_{\theta_t})$ is approximately equivalent to minimizing the sharpness. We have added this detailed derivation to the revised appendix. \n\n\n\n---\n\n**Q4:** Although the results showed that SAF/MESA found flatter local minima than SAM, one cannot find the clear intuition behind these empirical results.\n\n**A4:** Intuitively, our proposed trajectory loss prevents the model from overfitting to some critical training samples by suppressing the change of the training loss on them, and thus helps the model in seeking a flatter minimum (Figure 3 in our submission). More specifically, for those samples whose predictions (or instance-wise losses) change fast, minimizing our trajectory loss would suppress their change in losses and the consequent overfitting by enforcing their predictions to be close to the ones from the past model (SAF) or EMA model (MESA). \n\nThis motivation is supported by our empirical observations. Some training samples’ predictions change significantly between adjacent epochs. For example, about 25% of training samples’ predictions are different from the previous epoch on the CIFAR 100 dataset. Among them, half (12.5% of all training samples) change from correct to wrong predictions. SAF and MESA can decrease the percentage of these samples from 25% to 21%.\n\n \n\n", " Thank you for the comments and suggestions! We answer your questions below. \n\n\n---\n\n**Q1:** I found that there exists a gap between reported SAM accuracies in previous studies and this paper. \n\n**A1:** This is because of the differences in the settings. The recent follow-up works for SAM adopt different settings from the original SAM work. As we stated on Line 230, we followed the experimental settings of SAM’s follow-up works [2,21,6,37] for fair comparison with them. 
Our reported results of SAM and SAM’s variants **exactly** match the results in [2,21,6,37] (and the experimental results in [2,21,6,37] are all the same).\n\nMore specifically, SAM uses 100 epochs (vs 90 epochs in our setting) and SAM uses label smoothing of 0.1 (vs no label smoothing in our setting). Thus SAM reports 77.1% accuracy with ResNet 50 SGD while [2,21,6,37] and our paper report 76.0% accuracy. The mentioned paper [2] (cited as [2] in our submission) is the first work to use such settings different from SAM's. \n\nTo further clarify this point, we conducted experiments with stronger data augmentation (Mix Up and Cut Mix only) for our MESA. Our MESA achieves a 1.2% accuracy improvement over the reported results of the original SAM work. The results are listed below.\n\n\n|ResNet-50|basic data augmentation | strong data augmentation | \n| ----------- | :-----------: |:-----------:|\n|SAM|76.0 [2,21,6,37] |77.5 (SAM reported)|\n|MESA|77.5|78.7|\n\n\nThe discrepancy between the reported results for ViT-S/32 in [2] is also due to the differences in the settings. The ViT paper reports ViT results with strong data augmentations while [2] reports ViT results without the augmentations of Mix up and RandAugment. We carefully reproduced the ViT-SAM results using only basic data augmentation (inception style) and only obtained 68.9% accuracy for ViT S/32, which **exactly** matches the reported results in the LookSAM paper [21]. \n\nWe also conducted experiments to use stronger data augmentation (Cut Mix and Random Erase) for our SAF. Our SAF achieves a 1.7% accuracy improvement over the reported results of [2]. The results are listed below.\n\n|ViT-S/32| basic data augmentation |strong data augmentation |\n| ----------- | :-----------: |:-----------:| \n|ViT|68.9 [21] |70.5 [2] |\n|SAF|69.6|72.2|\n\n---\n**Q2:** It would be recommended to add the results of SAF/MESA for longer training and stronger augmentation. \n\n**A2:** Thanks for the suggestions. We conducted the experiments on training the ResNet 50 by SAF for longer epochs (90 -> 200 epochs) with the basic data augmentation (only Inception-style augmentation). We cite the 200-epoch SGD results with the same setting in [1] for reference. The results are listed below. \n\n\n|ResNet 50 |90 epochs|200 epochs | \n| ----------- | :-----------: |:-----------:|\n|SGD|76.0|76.4 [1] |\n|SAF|77.8|78.5|\n\nThe results indicate that our SAF is still much better (a 2.1% improvement) than SGD in the longer training (200 epochs) setting. \n\nWe also conducted experiments to train ResNet 50 by MESA with stronger data augmentation (Mix up and Cut mix). The results are listed below. \n\n\n|ResNet-50|Basic data augmentation | Augmentation with Mixup and Cut Mix | \n| ----------- | :-----------: |:-----------:|\n|SAM|76.0|77.5|\n|MESA|77.5|78.7|\n\nThe results show that our MESA is still effective in the stronger data augmentation setting.\n\n\n[1] Kwon, Jungmin, et al. \"Asam: Adaptive sharpness-aware minimization for scale-invariant learning of deep neural networks.\" International Conference on Machine Learning. PMLR, 2021.\n\n[2] Foret, Pierre, et al. \"Sharpness-aware Minimization for Efficiently Improving Generalization.\" International Conference on Learning Representations. 2020.\n", " We appreciate your valuable comments and answer your questions in order.\n\n---\n**Q1:** Is the replacement of cross-entropy with KL (in Sec. 3.2) really necessary?\n\n**A1:** Minimizing the KL loss is equivalent to minimizing the cross-entropy loss, which is proved below. 
\n\n\nWe are given that $\\\\{\\hat{y_i}\\\\}\\_{i=1}^n$ are the ground truths and $\\\\{f_\\theta(x_i)\\\\}_{i=1}^n$ are the outputs of the neural network $f_\\theta$. The definition of the KL loss is as follows,\n\n$\\mathrm{KL}(\\hat{y},f_\\theta(x)) =\\sum\\limits_{i=1}^n \\hat{y_i} \\log\\hat{y_i} - \\hat{y_i} \\log f_\\theta(x_i)$\n\nThe definition of the Cross-Entropy loss is as follows,\n\n$\\mathrm{CE}(\\hat{y},f_\\theta(x))= \\sum\\limits_{i=1}^n- \\hat{y_i} \\log f_\\theta(x_i) =\\mathrm{KL}(\\hat{y},f_\\theta(x)) - \\sum\\limits_{i=1}^n \\hat{y_i} \\log\\hat{y_i}$\n\nBecause the term $H(\\hat{y})=\\sum\\limits_{i=1}^n \\hat{y_i} \\log\\hat{y_i}$ is a constant with respect to the variable $\\theta$, we have, \n\n$\\underset{\\theta}{\\arg\\min} \\mathrm{KL}(\\hat{y},f_\\theta(x)) = \\underset{\\theta}{\\arg\\min} [\\mathrm{CE}(\\hat{y},f_\\theta(x))+H(\\hat{y})]= \\underset{\\theta}{\\arg\\min}\\mathrm{CE}(\\hat{y},f_\\theta(x))$ \n\n\n---\n**Q2:** Is the reduction of computation cost from 2x to 1x SGD really meaningful? \n\n**A2:** Yes, it is extremely meaningful, especially for training large-scale neural networks and for the commercialization of deep learning models. For example, if one trains a ResNet 152 model on the ImageNet dataset with 8 Nvidia V-100 GPUs for 3 days, considering that Google Cloud Platform charges $2.28 per GPU per hour ([Refer](https://venturebeat.com/2021/10/15/ai-weekly-ai-model-training-costs-on-the-rise-highlighting-need-for-new-solutions/)), the total cost of SAM will be \\$2,626 (\\$2.28 $\\times$ 8 GPUs $\\times$ 2 $\\times$ 3 days $\\times$ 24 hours), while the total cost of SAF will be \\$1,313. There are significant cost savings from implementing SAF instead of SAM. \n\nThus, significant efforts over the past couple of years have been devoted to halving the computational resources required for SAM (to match the base optimizer SGD), and our work shows that we can obtain the generalization ability of SAM without suffering from the curse of double computation.", " Thank you for your constructive comments! We give point-to-point replies to your questions in the following. \n\n\n\n---\n\n**Q1:** It is mentioned that before epoch E_start the outputs are not stable. What would happen if from the first epoch this method is used? Does this instability affect the performance?\n\n**A1:** In the submission, we used $E_\\mathrm{start} = 5$ for both SAF and MESA on the CIFAR datasets. It is worth noting that computing the proposed trajectory loss requires information from previous training epochs. Hence our method is **not** applicable in the first few epochs. Following your kind suggestion, we applied our method from the earliest valid training epochs. Specifically, the earliest epoch of SAF is $E_\\mathrm{start} = 4$, as SAF takes the output from $\\tilde{E}=3$ epochs ago to compute the trajectory loss. The earliest epoch of MESA is $E_\\mathrm{start} = 1$, as MESA takes the output of the Exponential Moving Average (EMA) model to compute the trajectory loss. \n\nWith the above setting, we conducted additional experiments with ResNet18 to investigate the model's performance with different $E_\\mathrm{start}$'s. The results are stated below. 
\n\n| |CIFAR-10|CIFAR-100 | \n| ----------- | :-----------: |:-----------:|\n|SAF ($E_\\mathrm{start}$ = 4) | 96.35 $\\pm$ 0.04|80.11 $\\pm$ 0.07|\n|SAF ($E_\\mathrm{start}$ = 5) | 96.37 $\\pm$ 0.02|80.06 $\\pm$ 0.05|\n|MESA ($E_\\mathrm{start}$ = 1) | 96.18 $\\pm$ 0.06|79.54 $\\pm$ 0.04|\n|MESA ($E_\\mathrm{start}$ = 5) | 96.24 $\\pm$ 0.02|79.79 $\\pm$ 0.09|\n\nThe above experimental results demonstrate that the instability affects the final performance only marginally. MESA’s performance decreased by 0.25% on the CIFAR 100 dataset but is unchanged on the CIFAR 10 dataset. We will add these experimental results to our updated version.\n\n\n\n---\n\n**Q2:** How should the hyper-parameters ($\\tau$, E, and $\\lambda$) be selected for your method? The method introduces a few hyper-parameters. \n\n**A2:** We choose these optimal parameters via standard grid search on an isolated CIFAR-100 validation set. This has been detailed in Appendix A.3. We find some parameters ($\\tau$, $\\tilde{E}$, $E_\\mathrm{start}$) are consistent among different architectures and datasets, and only the coefficient $\\lambda$ needs to be tuned for different architectures and datasets. \n\n\n---\n\n**Q3:** In Figure 4b, why does MESA have higher sharpness at the beginning of training but lower sharpness at the end of training compared to SAF? Also, from the visualization, it appears that MESA has the flattest landscape.\n\n**A3:** As explained in Lines 203 and 204, MESA computes the trajectory loss using the output of the EMA model, which averages the past model weights in an exponentially decaying fashion and thus weighs the more recent weights higher. Thus, compared with SAF, the sharpness of MESA will be affected more by the latest model that is updated by vanilla SGD. Therefore, when the sharpness of the vanilla (SGD) model is increasing/decreasing (resp.), MESA’s sharpness would be higher/lower (resp.) than that of SAF. \n\nAs for the visualization that MESA appears to have the flattest landscape, this is because to produce the visualization, we (and others) use a large amount of Gaussian perturbation in the parameter space. Thus, the visualization involves not only the central landscape around the converged minimum, but also the landscape from **several epochs** ago. As we can see in Figure 4b of our submission, MESA's sharpness is lower than SAF's from epochs 140 to 185. However, only the sharpness of the central landscape **around the converged minimum** is predictive of the generalization ability. We also visualized the loss landscape with a tiny amount (value of 0.07 vs 0.2) of Gaussian perturbation to visualize the central landscape around the converged minimum, and showed that SAF’s loss landscape is as flat as MESA’s. We have updated the visualization with a tiny amount of adversarial perturbation in Figure 1 of the appendix. \n\n\n\n", " This paper proposes a sharpness-aware minimization method that adds almost no computational overhead. The authors propose to use the training loss differences (more precisely, the KL divergence of output predictions) to proxy the sharpness term. The advantage of using this new metric would be that it only requires additional memory but no additional computations. They also propose an adjusted method that addresses the memory problem of the first method. The effectiveness of the proposed methods is then shown experimentally for ImageNet and CIFAR datasets. Strengths:\n\n1- The paper is very well-written. 
\n\n2- The limitation of the initial method in terms of memory has been properly addressed in the follow-up method.\n\n3- The method is simple and efficient to use.\n\n4- The experiments use state-of-the-art settings.\n\nWeaknesses:\n\n1- The method doesn't always outperform SAM in terms of accuracy. This is not so much a weakness because the method is mainly intended to improve computational overhead.\n\n2- The experiments are only for image-classification tasks.\n\n3- The method introduces a few hyper-parameters. 1- It is mentioned that before epoch E_start the outputs are not stable. What would happen if from the first epoch this method is used? Does this instability affect the performance?\n\n2- How should the hyper-parameters (tau, E, and lambda) be selected for your method? Have you studied various hyper-parameters? It is mentioned that a fixed value is used for these hyper-parameters.\n\n3- In Figure 4b, why does MESA have higher sharpness at the beginning of training but lower sharpness at the end of training compared to SAF? Also, from the visualization, it appears that MESA has the flattest landscape.\n The limitation of the method with regards to memory is mentioned and properly addressed.\n", " This paper proposes efficient implementations of sharpness-aware training. Specifically, this paper proposed Sharpness-Aware Training for Free (SAF), which requires cached predictions from previous checkpoints, and Memory-Efficient Sharpness-Aware (MESA) training, which requires two forward propagations and one backpropagation for each optimization step to achieve sharpness-aware training. While these approaches require relatively light computation costs compared to the original SAM, they achieved competitive accuracy in various tasks including CIFAR-10/100 and ImageNet-1k. * Originality: This paper provides a novel idea to estimate the sharpness by leveraging the trajectory of weights (Eq. 7). Based on this idea, they proposed time-efficient (SAF) and memory-efficient (MESA) implementations of sharpness-aware training [+].\n\n* Clarity: While Fig. 3 is intuitive to understand the key idea of this paper [+], the equation in L164 and the second approximation in Eq. 7 are still unclear to me. Also, it is unclear at which parameters of the optimization step \\Phi_t is defined [-].\n\n* Significance: I found that there exists a gap between reported SAM accuracies in previous studies and this paper. In Table 1 of this paper, the authors reported SAM accuracy as 76.9/78.6/79.3 for ResNet-50/101/152. However, the original SAM paper reported their accuracy as 77.5/79.8/80.8 with 100 epoch training (See Table 2 of [1]). I think this gap is critical: with SAF at 77.8/79.3/80.0, a minor improvement only exists for ResNet-50, and with MESA at 77.5/79.1/80.0, there is no improvement over SAM. Also, the ViT-S/32 results in Table 1 cause some confusion: In Table 2 of [2], they reported the accuracy of ViT-S/32-SAM as 70.5, which performs considerably better than the 68.9 in this paper. The authors should clarify this for a fair comparison.\n\n[1] Foret, Pierre, et al. \"Sharpness-aware minimization for efficiently improving generalization.\" arXiv preprint arXiv:2010.01412 (2020).\n\n[2] Chen, Xiangning, Cho-Jui Hsieh, and Boqing Gong. \"When vision transformers outperform ResNets without pre-training or strong data augmentations.\" arXiv preprint arXiv:2106.01548 (2021). Q1. In Fig. 2 and Fig. 
5, the authors provide visualization of the loss landscape for various sharpness-aware training methods. Although the results showed that SAF/MESA found flatter local minima than SAM, one cannot find the clear intuition behind these empirical results. Also, while these visualizations tell us that MESA found flatter local minima than SAF for WRN28-10/CIFAR-100, the test accuracy of SAF is better than MESA. Therefore, this could be interpreted to mean that the flatter local minima considered in this paper do not result in better test accuracy in practice. The authors should clarify this paradox to remove the confusion of readers.\n\nQ2. As mentioned in the original SAM paper, the SAM optimizer can find local minima with better test accuracy with longer training (See Table 2 of [1]). Can SAF and MESA also utilize the longer training without overfitting? I think this property would be important to the scalability of SAF & MESA. While this paper provides a novel interpretation of sharpness by leveraging the trajectory, several points are needed to improve the clarity and significance of the paper.\n\n1. It would be recommended to provide the detailed derivations of L163-L167. While the intuition behind these lines is clear, further clarity can be attained by providing the entire derivation of equations to readers.\n\n2. It should be clarified why SAM accuracy in Table 1 is severely lower than in the original SAM paper [1]. Since they did not report any confidence interval in Table 1, this accuracy gap can change the interpretation of experimental results in Section 4.1. \n\n3. It would be recommended to add the results of SAF/MESA for longer training and stronger augmentation. Since many SOTA models in computer vision [3,4] exploit longer training and stronger augmentation, the scalability of SAF/MESA should be validated with those settings.\n\n[3] https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives\n\n[4] Liu, Zhuang, et al. \"A convnet for the 2020s.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.\n\n** After discussion phase ** \n\nThanks to the clarification of the authors, I resolved my misunderstanding of this paper. However, I think these discussion points should be reflected in the main paper: Their key trick using $\\arg \\min$ is not specified in their derivation of the main paper. Also, the differences from the main references (e.g., label smoothing, number of epochs) should be clarified in their main paper since the gap between their performances is sufficiently large to change the interpretation of experimental results.\n", " This paper focuses on the computation cost perspective of Sharpness-Aware Optimization (SAM), which has 2x cost compared with SGD. \nThe first proposed algorithm, SAM for Free (SAF), derives an alternative/surrogate optimization objective which uses the past optimization trajectory to estimate the sharpness loss. The SAF algorithm has almost the same computation cost as SGD, while inducing some extra memory usage. To mitigate the memory cost, the authors propose the second algorithm, Memory-Efficient Sharpness-Aware Training (MESA), which reduces the memory usage significantly at the tradeoff of 15% more computation cost. The empirical results of SAF and MESA are pretty good compared with SAM and SGD on popular benchmarks (CIFAR and ImageNet), indicating that the proposed algorithms indeed reduce the computation costs of SAM while maintaining the performance. 
Strengths:\n\n1. The SAF algorithm is proposed in a principled way: The authors studied the SAM loss carefully, and obtained some interesting observations (Sec. 3.1), naturally leading to an elegant algorithm.\n\n2. The authors indeed make the algorithm practical: Even though the general idea (Sec. 3.1) is elegant, it may still have some limitations when being used as a practical optimizer. The authors tackled some numerical challenges in Sec. 3.2, and moreover, they mitigated the memory cost issues by using the trick of exponential moving average (EMA), leading to the memory-efficient version (MESA). I really appreciate the efforts the authors spent on making the algorithm practically useful.\n\n3. The empirical results are good: The proposed algorithms, SAF and MESA, have slightly better performance than SAM with ~1/2 computation cost. Also, the extra memory cost of MESA is indeed negligible compared with SAF.\n\nWeakness: \n\n1. Is the replacement of cross-entropy with KL (in Sec. 3.2) really necessary? I think cross-entropy is still the best choice for classification, so I guess the replacement may have some negative impacts. I think it's better to have an ablation study on this replacement. 1. Is the reduction of computation cost from 2x to 1x SGD really meaningful? \nTheoretically speaking, 1x and 2x costs are about the same. Empirically, I know there are many factors affecting the computation cost of training, and I'm not sure if 50% cost reduction is very meaningful practically. \n\n2. See Weakness-1 above. That's one of my questions. See the Questions section above.", " This paper proposes two methods for sharpness aware minimization (SAM) without the cost of SAM, as SAM doubles training cost. The methods are 1) SAF which adds a distillation loss from the predictions of prior epochs and 2) MESA which adds a distillation loss from the exponential moving average of the weights. For ResNets on ImageNet, SAF and MESA match or outperform other methods including SAM, ESAM, and Vanilla SGD. Strengths:\n- The paper tackles an interesting and practical problem, which is how to make training better without the added cost of methods like SAM.\n- For ResNets on ImageNet, the empirical performance when compared to baselines is good. Moreover, this is without the cost.\n\nWeaknesses:\n- I do not understand how MESA is only 15% more expensive than standard training. If I am understanding correctly, MESA requires a forward pass through both the EMA weights and the standard weights. Doesn't this require two forward passes where there was previously only one?\n- It seems like free should not be in the name of SAF because there is an additional memory overhead (minor)\n- I did not understand very well how SAF and MESA relate to SAM. In particular, Equation 8 is not well motivated and I did not see where the KL came from.\n- I have a few concerns with the ResNet baselines: 1) The ResNet baseline for SAM is below that reported in the SAM paper -- could this perhaps be because perturbations are synced? 2) Modern ResNets, e.g. with techniques from the timm library which is the library used in this paper can get to 80% accuracy (https://arxiv.org/pdf/2110.00476.pdf). Are the methods presented in this paper additive with these modern approaches?\n- I understand that there is not space to answer all of these questions.\n- Why is the ResNet baseline for SAM much lower than that reported in the SAM paper? 
\n- Why is MESA only 15% more expensive than standard training?\n- The best ResNet50 in the paper is < 78%. However, modern-day ResNets (e.g., from https://arxiv.org/pdf/2110.00476.pdf in the popular timm library, which is used in this paper) can get ~80%. Are the methods in this paper additive with more modern approaches?\n- Why is GSAM much better than all other methods for the ViT model (minor). The paper claims to address this in Section 3.3 but a revision could benefit from an explicit limitations section (minor)." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 3, 6, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 2 ]
[ "5xZbbcF1Ams", "vIKGV4z6l5x", "HJMF1c05cW", "TkvVkaa4Yqw", "ty2xiB6ifDX", "TJnObd90-Lt", "TJnObd90-Lt", "ZSyF0kYHZL", "SQoPLdPvB86", "RJaATRurbu0", "tXCVWY5tWaH", "s8jyIpPVK5j", "t0AEd_w-uRz", "ahZOHFzJn98", "EBEODdjDYMU", "3qx2-s-ERgq", "nips_2022_xK6wRfL2mv7", "nips_2022_xK6wRfL2mv7", "nips_2022_xK6wRfL2mv7", "nips_2022_xK6wRfL2mv7" ]
nips_2022_m8YYs8nJF3T
Distributional Convergence of the Sliced Wasserstein Process
Motivated by the statistical and computational challenges of computing Wasserstein distances in high-dimensional contexts, machine learning researchers have defined modified Wasserstein distances based on computing distances between one-dimensional projections of the measures. Different choices of how to aggregate these projected distances (averaging, random sampling, maximizing) give rise to different distances, requiring different statistical analyses. We define the \emph{Sliced Wasserstein Process}, a stochastic process defined by the empirical Wasserstein distance between projections of empirical probability measures to all one-dimensional subspaces, and prove a uniform distributional limit theorem for this process. As a result, we obtain a unified framework in which to prove sample complexity and distributional limit results for all Wasserstein distances based on one-dimensional projections. We illustrate these results on a number of examples where no distributional limits were previously known.
Accept
After the rebuttal period, the reviewers have come to an agreement that the paper is novel and interesting and that the contributions are significant. The rebuttal also addressed most of the concerns, though I agree with reviewer 84Jv's comment that experiments in non-compact settings would be a plus to see the limits of the theory. Overall, I believe this is a nice continuation of the prior art and I recommend acceptance of the paper.
train
[ "cwgP2zx2Vii", "k9xW54zb86z", "o4YFl0mqXrkK", "F6xLuGl4f0", "4mlQmbiPOug", "19M2B1exfom", "JiMF7Fl-YVz", "3GlabsjbJwT", "9xtqoZl2AD1", "RXqFTdE_Yl", "C9TBUJ-yeW4" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your detailed response and the changes made to the document (especially the clarification for the example with $p=1$). Also, I agree with you on the particular issue raised in 2, thanks for pointing out my misunderstanding. \n\nAfter reading the other reviews and responses, the authors' goal of proposing a general strategy to study the limiting distributions of the sliced Wasserstein processed was made more clear.\n\nI have therefore increased my score.", " I would like to thanks the authors for their answer to my questions. \n\nI especially appreciate the answer about the centering in expectation, where they explain that their method is naturally centered in $W_p^p(P_u,Q_u)$. I however truly think that having the Subgaussian case would have strengthened the paper much more and make it more complete.\n\nI therefore decide to raise my score by 1. ", " Thank you for providing answers to my questions. I understood the theoretical difficulties that the authors have for the non-compact support distributions and I believe that the available technique may not resolve it easily. I think the paper is a good contribution to the statistical optimal transport theory community. ", " We thank the reviewer for the thorough reading and thoughtful comments. We believe the reviewers' comments have helped us strengthen the draft considerably, and we invite the reviewer to reconsider their recommendation in light of our revision.\n\n1. Centering at expectation: \n\nWe appreciate the reviewer raising this important point.\n\nFor the purposes of inference in practice, it is much better to obtain a limit result centered at $W_p^p(P_u, Q_u)$ (which is the population-level quantity of interest) rather than at $\\mathrm{E} W_p^p(P_{nu}, Q_{nu})$ (which does not possess a practical interpretation in terms of the measures $P$ and $Q$).\n\nHowever, most of the distributional limit results in the OT literature are centered at the latter quantity, and developing results centered at the former, population-level quantity often requires significant effort.\n(For a further discussion of the difficulty of obtaining limits with the correct centering, see the introduction to ref. [21] in the original submission.)\n\nThe fact that our process is centered at $W_p^P(P_u, Q_u)$ is therefore an important benefit of our results and approach, and we have emphasized this fact in our revision.\n\nTo connect this with the results obtained in the referenced paper of Nadjahi et al., we note that their Corollary 2 is $\\textit{not}$ sharp under assumption $\\textbf{(CC)}$ which holds throughout out work, and in fact under this assumption $|\\mathrm{E} SW_p^p(P_n, Q_n) - SW_p^p(P, Q)| = o(n^{-1/2})$ for $p > 1$. \n\n2. Possible extension to a more general setting where P and Q are subgaussian: \n\nThe assumption of compact support is indeed restrictive.\nHowever, it represents a relatively common state of affairs in optimal transport, where theorems are first established in the compactly supported case and only later extended to the unbounded setting, often with different techniques.\nObtaining this extension for the sliced Wasserstein process is an attractive question for future work.\n\n The compactness assumption is used to guarantee that the set of Kantorovich potentials corresponding to $P_u$ and $Q_u$ for any $u \\in \\mathbb{S}^{d-1}$ is uniformly Lipschitz, and is therefore a subset of a Donsker class. 
If the supports of $P$ and $Q$ were unbounded, in order to deduce the Donsker property, we would need additional assumptions on the $1$-dimensional projections of $P$ and $Q$ as well as the cost function (see e.g. Theorem 5.2 of [hundrieser2022unifying]) that do not hold for $p$-Wasserstein distances ($p > 1$) in general. We have added a remark clarifying this point after Theorem 2.1.\n\n3. Code: \n\nIn a late stage of editing, we removed a link to a Github version of the code due to anonymity concerns. We have uploaded an anonymized version with the revised supplement and apologize for creating confusion.\n\n4. Remaining space and small typos: \n\nWe thank the reviewer for pointing out the typos and the problem of remaining space. We have used the additional space to offer another application of our techniques (to the distributional sliced Wasserstein distance) and further discussion on the implications of our results throughout.\n\n[hundrieser2022unifying] Marcel Klatt and Carla Tameling and Axel Munk, Empirical Regularized Optimal Transport: Statistical Theory and Applications, SIAM J. Math. Data Sci., 2, 419-443, 2020.", " We thank the reviewer for the thorough reading and thoughtful comments. \n\n1. Novelty of results:\n\nThe reviewer argues that our main results appear already in references [20, 24]. To restate our goal with this paper: we aim to define a new object (the sliced Wasserstein process) whose convergence explains, and yields as a corollary, the results of [20] and [24]. We view this as an important conceptual step forward in the understanding of the sliced Wasserstein distance and its variants, and it allows direct derivation of limit results for variants for which no limit results currently exist. For example, we are aware of no limit theorems for the Distributional Sliced Wasserstein distance defined in [27]. Corollaries 2.2 and 2.3 immediately imply a central limit theorem and bootstrap consistency result for this distance, which we have added as an additional example to the paper. We believe the ease with which these and other theorems can be derived from our main result illustrates the impact of our approach.\n\n2. Necessity of the assumption of compactness of the supports of the probability measures:\n\nThe assumption of compact support is indeed restrictive.\nHowever, it represents a relatively common state of affairs in optimal transport, where theorems are first established in the compactly supported case and only later extended to the unbounded setting, often with different techniques.\nObtaining this extension for the sliced Wasserstein process is an attractive question for future work.\n\nThe compactness assumption is used to guarantee that the set of Kantorovich potentials corresponding to $P_u$ and $Q_u$ for any $u \\in \\mathbb{S}^{d-1}$ is uniformly Lipschitz, and is therefore a subset of a Donsker class. If the supports of $P$ and $Q$ were unbounded, in order to deduce the Donsker property, we would need additional assumptions on the $1$-dimensional projections of $P$ and $Q$ as well as the cost function (see e.g. Theorem 5.2 of [hundrieser2022unifying]) that do not hold for $p$-Wasserstein distances ($p > 1$) in general. We have added a remark clarifying this point after Theorem 2.1.\n\n3. 
State the assumption as absolutely continuous probability measures instead of bounded connected supports:\n\nWe respectfully disagree with the reviewer's assertion that our results are limited to absolutely continuous probability measures.\nFor instance, there exist probability measures that are not absolutely continuous with respect to the Lebesgue measure on $\\mathbb{R}^d$ but whose one-dimensional projections have support which is an interval (e.g., the uniform measure over the unit sphere in $\\mathbb{R}^d$).\nThe one-dimensional projections need not even be absolutely continuous; for instance, consider the distribution on $[0, 1]$ whose CDF is $F(x) = x/2$ for $x \\leq 1/2$ and $F(x) = x/2 + 1/2$ for $x > 1/2$.\nIn short, though the assumption $\\textbf{(CC)}$ does exclude some examples, it does not restrict the application of the theorem to absolutely continuous measures.\n\n4. Trimmed sliced Wasserstein distance: \n\nWe thank the reviewer for pointing out that this was not clearly explained in our original submission. We have borrowed this terminology from [24], and have added more context to the revision.\n\n5. Extension to sliced-Sinkhorn divergences: \n\nExtending our framework to the case of entropy-regularized transport is indeed possible and relatively straightforward, to obtain a \"sliced Sinkhorn process\". We have not pursued this direction in the interest of keeping focus on the un-regularized sliced distances (which are more common in practice), but agree that the extension to the regularized case is interesting.\n\n6. Example corresponding to $p = 1$ in Appendix B.1: \n\nThis is a good point that we should have clarified. The only obstacle to including $p = 1$ in the main result is that there do not seem to be general conditions under which the Kantorovich potentials under $p = 1$ are unique. The assumption $\\textbf{(CC)}$ only works for $p > 1$. In the simulation we present in Appendix B.1, we have verified directly that the potentials corresponding to all one-dimensional projections are unique, so that the main distributional convergence result still holds. We will clarify this.\n\n[hundrieser2022unifying] Marcel Klatt and Carla Tameling and Axel Munk, Empirical Regularized Optimal Transport: Statistical Theory and Applications, SIAM J. Math. Data Sci., 2, 419-443, 2020.", " We thank the reviewer for the thorough reading and thoughtful comments. \n\n1. Discrete SWD:\n\nThe fact that the discrete SWD enjoys a Gaussian limit can indeed be deduced from the CLT for the Wasserstein distance, though this CLT for the Wasserstein distance does not directly imply what the limiting variance for the discrete SWD will be. Obtaining the limiting variance requires a joint convergence result like Theorem 2.1.\n\nThere is stability in the limiting distribution for various values of $L$; a quick Monte Carlo sketch of this stabilization is given below, before the formal argument that follows. 
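As a purely illustrative sketch (our own, not the paper's released code; the dimension, sample sizes, and mean shift are arbitrary assumptions), one can estimate the discrete sliced $W_2^2$ with $L$ random directions and observe the estimates stabilizing as $L$ grows:

```python
import numpy as np

def sliced_w2_sq(X, Y, L, rng):
    """Monte Carlo sliced W_2^2 with L uniform directions on the sphere.
    For equal-size empirical measures, the 1-D W_2^2 is the mean squared
    difference of the sorted projections (order statistics)."""
    U = rng.standard_normal((L, X.shape[1]))
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # directions on S^{d-1}
    vals = [np.mean((np.sort(X @ u) - np.sort(Y @ u)) ** 2) for u in U]
    return float(np.mean(vals))

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 5))
Y = rng.standard_normal((2000, 5)) + 0.5  # shifted second sample
for L in (10, 100, 1000):
    print(L, sliced_w2_sq(X, Y, L, rng))  # estimates concentrate as L grows
```

As $L$ increases, the printed estimates concentrate around a common value, consistent with the almost-sure convergence described in the formal argument next.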
Examining the form of the limiting covariance in Eq.~(7), conditioned on the (potentially random) directions $u_1, \\dots, u_L$, we obtain that $\\sqrt{n}(\\widehat{SW}\\_p^p(P_n, Q_n) - \\widehat{SW}\\_p^p(P, Q)) \\rightsquigarrow \\mathcal{N}(0, \\sigma^2)$ where \n\n$\\sigma^2 = (1-\\lambda)\\mathrm{var}_{P}\\left(\\frac{1}{L}\\sum_{i=1}^L f_{u_i}(u_i^\\top X)\\right) + \\lambda\\, \\mathrm{var}_{Q}\\left(\\frac{1}{L}\\sum_{i=1}^L f^c_{u_i}(u_i^\\top Y)\\right)$,\n\nwith the variances taken over $X \\sim P$ and $Y \\sim Q$.\n\nAs $L \\to \\infty$, if the $u_i$ are drawn independently at random from the uniform distribution on the sphere, then this quantity converges almost surely to the limiting variance corresponding to the standard SWD.\nIn particular, for $L, L'$ sufficiently large, the limiting variances will be close with high probability.\n\n2. Rate of convergence in Theorem 2.4:\n\nWe presented Theorem 2.4 to permit the construction of asymptotic confidence intervals. By Slutsky's theorem, a consistency result (e.g., Theorem 2.4) suffices for this purpose, and we have clarified this point in our revision. Since our focus was on asymptotic inference, we did not pursue a quantitative version of this result, but we believe it to be a quite interesting question for future work. Such a result would necessitate the development of quantitative convergence bounds for Kantorovich potentials, which to our knowledge are not known.\n", " We thank the reviewer for the thorough reading and thoughtful comments. \n\n1. Possible extension to a more general setting where P and Q have arbitrary supports:\n\nThe assumption of compact support is indeed restrictive.\nHowever, it represents a relatively common state of affairs in optimal transport, where theorems are first established in the compactly supported case and only later extended to the unbounded setting, often with different techniques.\nObtaining this extension for the sliced Wasserstein process is an attractive question for future work.\n\nAs the reviewer notes, the compactness assumption is used to guarantee that the set of Kantorovich potentials corresponding to $P_u$ and $Q_u$ for any $u \\in \\mathbb{S}^{d-1}$ is uniformly Lipschitz, and is therefore a subset of a Donsker class. If the supports of $P$ and $Q$ were unbounded, in order to deduce the Donsker property, we would need additional assumptions on the $1$-dimensional projections of $P$ and $Q$ as well as the cost function (see e.g. Theorem 5.2 of [hundrieser2022unifying]) that do not hold for $p$-Wasserstein distances ($p > 1$) in general. We have added a remark clarifying this point after Theorem 2.1.\n\n2. Simulation studies about the Max-sliced Wasserstein distance:\n\nWe included an example of the max-sliced Wasserstein distance in section 4.2 where we call it ``Wasserstein projection pursuit,'' which is alternate terminology (proposed by ref. [28]) for the max-sliced Wasserstein distance. We have modified this section to clarify that it illustrates the limit for the max-sliced Wasserstein distance. We have also added an example with a non-Gaussian limit in section B.2 of the appendix.\n\n[hundrieser2022unifying] Marcel Klatt and Carla Tameling and Axel Munk, Empirical Regularized Optimal Transport: Statistical Theory and Applications, SIAM J. Math. Data Sci., 2, 419-443, 2020.", " The paper provides a Donsker-type theorem for the projected Wasserstein distance in the $l^{\\infty}$ norm over the sphere of projection directions $S^{d-1}$. From that, it derives limit theorems for some well-known Sliced Wasserstein distances. 
Some simulation studies are carried out to support the theoretical findings. **Strengths**: \n\n1. The results in this paper are important for Optimal Transport theory and application. The presentation of the paper is clean and easy to read. \n\n2. All the proofs are solid and written carefully. \n 1. Throughout the paper, the distributions of interest $P$ and $Q$ are assumed to be compactly supported. A natural question is \"can we extend the result to a more general setting where $P$ and $Q$ have arbitrary supports?\". It seems like the compact condition of the supports is only needed to show the existence of the potential and to bound the entropy number. If we can extract those refined conditions and put them together into a more general result (maybe in Appendix), it would be more impactful to OT research. If it is not possible, it is helpful to have a paragraph to explain why.\n\n2. It would be interesting to see some simulation studies about the limiting distribution of the empirical process of Max-Sliced Wasserstein distances, even in low-dimensional cases. It would be helpful to build some intuition about this non-Gaussian limit. The paper is well written and has no major limitations. ", " Optimal transport distances (a.k.a. Wasserstein distance) have recently drawn ample attention in statistics and machine learning communities as powerful discrepancy measures for probability distributions. One of the bottlenecks of the Wasserstein distance is its expensive computational cost. An alternative that makes this problem computationally tractable relies on the fact that it can be recast as averaging calculations of $1$-D Wasserstein distances. This latter approach is referred to as the Sliced Wasserstein distance (SWD). In a nutshell, SWD relies on projecting the data sample onto some direction $u$ from the unit sphere ($u \\in \\mathbb{S}^{d-1}$).\n\nThis paper addresses the unified theoretical problem of proving a Central Limit Theorem for the empirical SWD under a compact-support condition satisfied by the distributions in question. The authors show that the asymptotic distribution is a tight-centered Gaussian process on the unit sphere $\\mathbb{S}^{d-1}$.\n\n The paper is easy to follow, and it presents a unified framework for proving central limit theorems for a family of SWDs, including vanilla SWD, Max-Sliced, Trimmed SWD, etc. This result in itself is solid, especially for statistical applications involving SWD. \n\n \n- For the discrete SWD:\n - can one directly get the central limit theorem from the one satisfied by the Wasserstein distance?\n - is this limit process uniform with respect to $L$, the number of random projection directions $u$? Namely, if there are $L \\leq L'$ random projections, is the limit (approximately) the same?\n- In Theorem 2.4, can one derive an order of the convergence rate to zero? This is not applicable to the paper.", " The authors present a unified method for providing distributional limits for processes constructed from the empirical Wasserstein distance between one-dimensional projections of empirical distributions. To do so, they rely on the theory of empirical processes and on the Hadamard directional differentiability of various functions. This article follows a natural set of results that have been proven for both classical and entropic optimal transport. The presentation is clear and the article is easy to read. 
I develop the strengths and weaknesses further in the following.\n\nStrengths:\n- The paper proposes a unified method to prove distributional limits for distances based on sliced distributions and the 1D Wasserstein distance. The method is based on a previous work [20], which addressed the case of empirical optimal transport.\n\nWeaknesses:\n- The novelty of this article seems to be limited as most of the major results have been proven in [19,23].\n- The assumption (CC) that the support of the projected measures is connected for all vectors in the unit sphere is very restrictive. I agree that the general case involves a complicated geometry of the distributions, however, the unified framework on which the paper is based concerns very few applied cases.\n- The method for proving the results of the paper uses classical tools of weak convergence of empirical processes and does not particularly differ from the case of classical optimal transport.\n\nOverall, the results presented here are restricted to probability measures under strict assumptions and are therefore rather limited: every one-dimensional projection of the probability distributions along directions of the unit sphere must have an interval as support. Moreover, this paper seems to have been published in a hurry (see \"Minor comments\" below); there is notably no conclusion and no perspective on the use of such results, for example.\n\nMinor comments:\n- The labels are visible in the margin.\n- The reference [28,29] appears twice.\n- l.94 : \"uniquness\"\n- In the proof of Theorem 2.1 and in Corollary 2.3, the random variables $X_i$ and $Y_i$ are not defined.\n- In section 3.2, MSW, $\\tilde{W}$ and WPP (which is not defined) are all discussed. Standardisation of terminology is necessary.\n- In Theorem 3.1, the dependence on $\\delta$ of the trimmed SW disappeared.\n- l. 190 : \"given in section 4.1\" should be removed/rewritten. - Can you specify where the assumption of compactness of the support of the probability measure in $\\mathbb{R}^d$ is necessary?\n\n- As proved in [Nadjahi], the sliced-Sinkhorn divergences do not depend on the entropy regularization parameter, unlike the entropy regularized optimal transport (or Sinkhorn divergence). Therefore, considering sliced-Sinkhorn divergences could also be of interest. As similar empirical process methods have been used for entropy regularized OT, would it be possible to extend your result to this particular framework?\n\n- It could be made clear from the beginning that the proposed method is in any case limited to absolutely continuous probability measures, as their support must be connected.\n\n- Since your results apply for various functions F, it might have been interesting to propose a new divergence to demonstrate the interest of your generalisation. I however acknowledge that it is not an easy task to propose a new satisfactory distance.\n\n- More could be said about the trimmed sliced Wasserstein distance, and the precise meaning of \"robustification\".\n\n- In the appendix B.1, you consider the case $p=1$, which is not included in the Wasserstein distance setting that you develop (Section 2). In particular Theorem 2.1 is stated for the case $p>1$. Can the results of Appendix B.1 be obtained directly? If so, this could be clarified.\n\n[Nadjahi] - \"Statistical and Topological Properties of Sliced Probability Divergences\", Nadjahi et al. 
The authors have specified that their method is quite restrictive in the sense that it only applies to a rather small set of probability distributions.", " This paper defines the Sliced Wasserstein process, a stochastic process defined by the empirical Wasserstein distance between slices of empirical probability measures. A uniform distributional limit theorem is derived. Experimental results demonstrate the convergence towards the limiting Gaussian process. Strengths: \n- This work builds upon the mathematical techniques developed in [20] and adapts them well for the SW process. \n- The results seem correct (I just skimmed through the proofs).\n\nNeutral:\n- Numerical results are convincing but I would have expected more examples, maybe ones that show the theory also holds (or not?) in the unbounded case.\n\nWeaknesses:\n- The paper is not very clear: more space could have been given for defining mathematical objects; a sketch of the proof of the main theorem could have been given, knowing that there is a lot of space not used at the end of the paper.\n- Contrary to what is said in the Checklist, the code is not given (it is specified it is given in a footnote shown on page 6), and I would have liked to look at it.\n- The authors limit themselves to compactly supported probability measures P and Q (I am not certain if it’s possible to go beyond compact, maybe sub-Gaussian?). At least the authors could have developed a bit on why they considered this assumption and where the difficulties lie for the unbounded case.\n- The authors did not verify the documents they sent: all the labels of equations and sections are visible…\n- There are a lot of typos (for instance the title of the SM does not correspond to the paper's title).\n- I don't see any real applications for these results: I know NeurIPS accepts very theoretical results, but it would be useful to know how this could improve or help us understand SW distances better given the already known results on the theory of SW.\n - A question I have regarding the SW process is the following: G_n defined in Equation 2 is said to enjoy a uniform central limit theorem. My naive question is: given the fact that $\\mathbb{E}[W_p^p(P_{nu}, Q_{nu})]\\neq W_p^p(P_u,Q_u)$ (due to the permutation/transport plan of the Wasserstein distance), is G_n then defined according to the \"wrong expectation\"? (see Corollary 2 of https://proceedings.neurips.cc/paper/2020/file/eefc9e10ebdc4a2333b42b2dbb8f27b6-Paper.pdf for instance where you need to compute a sample complexity). Maybe what I am saying is not true because the authors are considering a single slice. I would like to have their opinion on this.\n\n- How hard would it be to go beyond the compact case? At least Sub-Gaussian.\n\n- I suggest the authors use the remaining space they have (1.5 pages!) to develop sketches of proof in the main paper, or do more experiments, maybe to show that their theory is valid for the unbounded case numerically?\n\nI am willing to raise my score accordingly if the answers to my questions are satisfying and the weaknesses addressed. No potential negative impact" ]
[ -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "4mlQmbiPOug", "F6xLuGl4f0", "JiMF7Fl-YVz", "C9TBUJ-yeW4", "RXqFTdE_Yl", "9xtqoZl2AD1", "3GlabsjbJwT", "nips_2022_m8YYs8nJF3T", "nips_2022_m8YYs8nJF3T", "nips_2022_m8YYs8nJF3T", "nips_2022_m8YYs8nJF3T" ]
nips_2022_Iksst2czYoB
Stochastic Multiple Target Sampling Gradient Descent
Sampling from an unnormalized target distribution is an essential problem with many applications in probabilistic inference. Stein Variational Gradient Descent (SVGD) has been shown to be a powerful method that iteratively updates a set of particles to approximate the distribution of interest. Furthermore, when analysing its asymptotic properties, SVGD reduces exactly to a single-objective optimization problem and can be viewed as a probabilistic version of this single-objective optimization problem. A natural question then arises: ``Can we derive a probabilistic version of the multi-objective optimization?''. To answer this question, we propose Stochastic Multiple Target Sampling Gradient Descent (MT-SGD), enabling us to sample from multiple unnormalized target distributions. Specifically, our MT-SGD conducts a flow of intermediate distributions gradually orienting to multiple target distributions, which allows the sampled particles to move to the joint high-likelihood region of the target distributions. Interestingly, the asymptotic analysis shows that our approach reduces exactly to the multiple-gradient descent algorithm for multi-objective optimization, as expected. Finally, we conduct comprehensive experiments to demonstrate the merit of our approach to multi-task learning.
Accept
The paper presents a particle-based method to approximate multiple target distributions simultaneously. The proposed particle-updating dynamics is shown to decrease the KL to every target (making a Pareto improvement), and the resulting particles prefer the intersection of all targets (the Pareto common), which distinguishes it from a related method (which prefers the Pareto front). Although the technical framework is not completely novel (it follows MGDA (MOO)), reviewers agree that the proposed method for multi-distribution approximation is inspiring and that the paper implements the idea well. Nevertheless, there still remain a few imprecise statements that the authors need to address upon acceptance. 1. The precise meaning of Eqs. (1,2): it is impossible for a single $q$ to simultaneously minimize each individual KL in general. Reviewer yFUe mentioned this but the reply did not clearly define this. The notation/formulation needs clarification even if it follows previous work. 2. The equation below Line 114 is only true if $\\phi$ is in the RKHS. 3. Some statements on MOO-SVGD might be improper. It seems conflicting that MOO-SVGD “updates the particles individually and independently”, while it also “employs a repulsive term”, which is an interaction among particles. A more precise description is expected of the claim that MOO-SVGD “encourages the particle diversity without any theoretical-guaranteed principle to control the repulsive term”: MOO-SVGD is not originally intended for multi-distribution approximation, and it also provides a stationary distribution characterization.
train
[ "cupTJmaE6q4", "04yPUOg10rQ", "PsAWGzbNwdt", "poWZ6ak3Ou", "Cyqoskf1jqP", "i5XNcvNIhB4", "koGlyUM6G-X", "1Bfufgou0Nx", "0lgSxt6XD0l", "y8Gdyrr4TB9", "XhH5KTWEHfj", "h8q2OUzC4GI", "x2IxHQwXoKQ", "1Z-7GKS_AIC", "ViKY-WVuwGb", "yn0fQ1UXHHh", "ZpUQy03IVbA" ]
[ "author", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to thank the reviewer for spending their time evaluating our paper and providing detailed feedbacks. We will include the intuition behind our objective function in the next version of our manuscript, as you suggested.\n\nBest,\n\nAuthors.\n", " Thank you for recognizing our efforts and providing constructive reviews. We greatly appreciate your endorsement.\n\nBest regards,\n\nAuthors.", " The authors' response has addressed most of my concerns. I'll raise my score accordingly.", " We greatly appreciate your reconsideration!\n\nBest regards, \n\nAuthors.", " Thanks so much for the reply and I apologize for the late response! I think the response was detailed and addressed most of my concerns. I have increased my score accordingly!", " I appreciate the author's detailed response. It addresses most of my concerns. Personally, I think in the revised version, adapting the current presentation of the proposed method to a format like MOO will be easier for the readers to understand. For example, although the objective $w^TUw$ is inspired by MOO, it is less clear from reading the main text until the author's reply. ", " Dear Reviewer 6tPS,\n\nWe would like to thank you again for spending your time evaluating our paper.\n \nAs the discussion period is expected to conclude shortly, we look forward to hearing your feedback about whether we have addressed your concerns in the rebuttals.\n \nBest regards,\n \nAuthors", " Dear Reviewer yFUe,\n\nWe would like to thank you again for spending your time evaluating our paper.\n \nAs the discussion period is expected to conclude shortly, we look forward to hearing your feedback about whether we have addressed your concerns in the rebuttals.\n \nBest regards,\n \nAuthors\n", " Dear Reviewer TqCR,\n\nWe would like to thank you once again for spending your time evaluating our paper.\n \nAs the discussion period is expected to conclude shortly, we look forward to hearing your feedback about whether we have addressed your concerns in the rebuttals.\n \nBest regards,\n \nAuthors\n", " **Error bar** \n\nWe already plotted the accuracy scores in the upper part of Figure 5. Regarding your suggestion of running the experiments over a number of realizations, we have run all the methods on three different random seeds and updated the result on the revision. Thanks for pointing this out to make our comparison more reliable.\n\n\n**Increasing the number of particles leads to performance drop** \n\nAn intuitive explanation for this behaviour is that the more particles are included in training, the more among them are ejected from the high-likelihood region by the repulsive force. Therefore, these models can hurt the ensemble performance. Also, we provide some empirical examples with 3/10/50 particles (Figures 1-3) here: https://sites.google.com/view/mt-sgd-rebuttal/. Thus, one can observe from Figure 3 that many of the fifty particles are outside the common mode region.\n\n**Prior of model parameter** \n\nFollowing the previous work, the initial weights distributions $p(θ)$ is chosen following the default initialization of torch (e.g. kaiming_uniform for [fully connected](https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/linear.py#L103) and [conv layers](https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/conv.py#L150)). 
Thank you for your suggestion, we have clarified this in the revision.\n\n**Complexity** \n\nAs can be seen from Algorithm 1 and the Equation from Line 142, the main difference in terms of complexity between our proposed MT-SGD and the MGDA baseline is that MT-SGD requires computing the $U$ matrix, which has an $O(K^2M^2d)$ complexity, where the number of particles $M$ is usually set to a small positive integer. Furthermore, computing $U$'s entries can be accelerated in practice by calculating them in parallel, as there is no interaction between them during the forward pass.\n\nHowever, the computation of back-propagation is typically more costly than the forward pass. Thus, the main bottlenecks in our method lie in the backward pass and in solving the quadratic programming problem, which requires an iterative method. We have run one more experiment to measure the running time in each epoch of MT-SGD and the baselines. The result is reported in the supplementary and presented in Figure 4: https://sites.google.com/view/mt-sgd-rebuttal/\n\nThank you for your willingness to increase our paper's score. Please let us know if you would like us to do anything else.\n\n\n", " Furthermore, to extend our current experimental evaluation and include the comparison against your suggested uniform scaling method, we have run the experiment on the full large-scale CelebA dataset and reported the details in the supplementary material, with the performance of previous work reported from the MGDA paper:\n\n\n| Attribute | Uniform scaling | Single task | Uncertainty | Gradnorm | MGDA | MT-SGD (ours) |\n|-|-|-|-|-|-|-|\n| 5 O'clock Shadow | 7.11 | 7.16 | 7.18 | 6.54 | 6.47 | **6.03** |\n| Arched Eyebrows | 17.30 | 14.38 | 16.77 | 14.80 | 15.80 | **14.11** |\n| Attractive | 20.99 | 19.25 | 20.56 | 18.97 | 19.21 | **18.62** |\n| Bags Under Eyes | 17.82 | 16.79 | 18.45 | 16.47 | 16.60 | **15.91** |\n| Bald | 1.25 | 1.20 | 1.17 | 1.13 | 1.32 | **1.09** |\n| Bangs | 4.91 | 4.75 | 4.95 | 4.19 | 4.41 | **4.02** |\n| Big Lips | 20.97 | 14.24 | 15.17 | 14.07 | 15.32 | **13.82** |\n| Big Nose | 18.53 | 17.74 | 18.84 | 17.33 | 17.70 | **17.14** |\n| Black Hair | 10.22 | 8.87 | 10.19 | 8.67 | 9.31 | **8.22** |\n| Blond Hair | 5.29 | 5.09 | 5.44 | 4.68 | 4.92 | **4.42** |\n| Blurry | 4.14 | 4.02 | 4.33 | 3.77 | 3.90 | **3.61** |\n| Brown Hair | 16.22 | 15.34 | 16.64 | 14.73 | 15.27 | **14.63** |\n| Bushy Eyebrows | 8.42 | 7.68 | 8.85 | **7.23** | 7.69 | 7.42 |\n| Chubby | 5.17 | 5.15 | 5.26 | 4.75 | 4.82 | **4.59** |\n| Double Chin | 4.14 | 4.13 | 4.17 | 3.73 | 3.74 | **3.35** |\n| Eyeglasses | 0.81 | 0.52 | 0.62 | 0.56 | 0.54 | **0.47** |\n| Goatee | 4.00 | 3.94 | 3.99 | 3.72 | 3.79 | **3.34** |\n| Gray Hair | 2.39 | 2.66 | 2.35 | 2.09 | 2.32 | **2.00** |\n| Heavy Makeup | 8.79 | 9.01 | 8.84 | 8.00 | 8.29 | **7.65** |\n| High Cheekbones | 13.78 | 12.27 | 13.86 | 11.79 | 12.18 | **11.45** |\n| Male | 1.61 | 1.61 | 1.58 | 1.42 | 1.72 | **1.26** |\n| Mouth Slightly Open | 7.18 | 6.20 | 7.73 | 6.91 | 6.86 | **5.91** |\n| Mustache | 4.38 | 4.14 | 4.08 | 3.88 | 3.99 | **3.55** |\n| Narrow Eyes | 8.32 | 6.57 | 8.80 | **6.54** | 6.88 | 6.64 |\n| No Beard | 5.01 | 5.38 | 5.12 | 4.63 | 4.62 | **4.25** |\n| Oval Face | 27.59 | 24.82 | 26.94 | 24.26 | 24.28 | **23.78** |\n| Pale Skin | 3.54 | 3.40 | 3.78 | 3.22 | 3.37 | **3.13** |\n| Pointy Nose | 26.74 | 22.74 | 26.21 | 23.12 | 23.41 | **22.48** |\n| Receding Hairline | 6.14 | 5.82 | 6.17 | 5.43 | 5.52 | **5.28** |\n| Rosy Cheeks | 5.55 | 5.M | 5.40 | 5.13 | 5.10 | **4.82** |\n| Sideburns | 3.29 | 3.79 
| 3.24 | 2.94 | 3.26 | **2.87** |\n| Smiling | 8.05 | 7.M | 8.40 | 7.21 | 7.19 | **6.74** |\n| Straight Hair | 18.21 | 17.25 | 18.15 | **15.93** | 16.82 | 16.32 |\n| Wavy Hair | 16.53 | 15.55 | 16.19 | 13.93 | 15.28 | **13.19** |\n| Wearing Earrings | 11.12 | **9.76** | 11.46 | 10.17 | 10.57 | 10.17 |\n| Wearing Hat | 1.15 | 1.13 | 1.08 | **0.94** | 1.14 | 0.95 |\n| Wearing Lipstick | 7.91 | 7.56 | 8.06 | 7.47 | 7.76 | **7.15** |\n| Wearing Necklace | 13.27 | 11.90 | 13.47 | 11.61 | 11.75 | **11.32** |\n| Wearing Necktie | 3.80 | 3.29 | 4.04 | 3.57 | 3.63 | **3.27** |\n| Young | 13.25 | 13.40 | 13.78 | 12.26 | 12.53 | **11.83** |\n| Average | 9.62 | 8.77 | 9.53 | 8.44 | 8.73 | **8.17** |\n", " We thank the reviewer for the constructive feedback on our paper. Below we provide further clarification with respect to your main concerns:\n\n**Multi-task learning formulation** \n\nThank you for your comment; it is true that using your proposed derivation is a common way to tackle the multi-task learning problem:\n\n$$p\\left(\\alpha \\mid \\mathbb{D}, \\beta^{1}, \\ldots, \\beta^{K}\\right) \\propto p(\\alpha) \\prod_{j=1}^{K} \\prod_{i=1}^{N} p\\left(y_{i j} \\mid x_{i}, \\alpha, \\beta^{j}\\right)$$\n$$ \\Leftrightarrow \\log p\\left(\\alpha \\mid \\mathbb{D}, \\beta^{1}, \\ldots, \\beta^{K}\\right) = \\text{const} + \\log p(\\alpha) + \\sum_{j=1}^{K} \\sum_{i=1}^{N} \\log p\\left(y_{ij} \\mid x_{i}, \\alpha, \\beta^{j}\\right)$$\n\nTherefore, it can be clearly seen that this formulation is equivalent to uniformly weighting all tasks, which is a straightforward and commonly-used approach in practice. \n\nHowever, our motivation in this work is to incorporate the MOO scheme into the sampling problem, allowing us to sample the shared part simultaneously in the Pareto common of multiple tasks. By contrast, simply using uniform weights for all tasks could lead to a deterioration in performance, as carefully investigated in previous literature (MGDA, ParetoMTL, MOO-SVGD, etc.). \n", " We greatly appreciate the reviewer's detailed and constructive comments and suggestions. In the following, we provide the main response to your comments:\n\n**Clarity**\n1. **The objective function $w^TUw$** \n\n In our paper, the derived objective function $w^TUw$ is inspired by the MGDA (MOO) method. Specifically, MOO operates on a space of gradients, whereas our optimal pushforward function $\\phi^*$ acts on $\\mathcal{H}_k^d$, where $\\mathcal{H}_k$ is a reproducing kernel Hilbert space corresponding to the kernel $k$. \n\n In the paper, we explicitly show the connection to MOO and state our problem as a multi-objective optimization problem on a probability space, as shown in Equation (2). Additionally, Theorem 2 dictates the Pareto stationary condition and indicates that if the steepest descent directions $\\phi_i^*$ are linearly dependent, we reach a Pareto stationary point where the obtained pushforward function $\\phi^*$ becomes the zero function and cannot help to further decrease the divergences.\n \n2. **Comma in Equation (1)** \n\n The comma here acts as a separator between Equation (1) and the notation explanation below. In terms of the minimization of a list of $D_{KL}$, we are interested in simultaneously minimizing all objectives (i.e., gradually updating the particles toward the common high-likelihood region). That is the reason why we cast this problem as an MOO problem, from which we can derive a principled method to achieve this goal. Due to space constraints, we will include more clarification in a later revision.\n \n3. 
**In line 49, do you mean empirical distributions** \n\n Yes. We mean an empirical (uniform) distribution over the set of particles.\n\n4. **In line 107, is the optimal transformation $T$ for the current step?** \n\n Yes, it is. We explicitly present this information in Lines 103 to 107 of the paper. Nevertheless, we have restated it in the revision to avoid confusion for the readers.\n \n5. **In line 112, should it be $O(\\epsilon^2)$?** \n\n Thank you for spotting this typo; it should be $O(\\epsilon^2)$. We have fixed it in the revised version.\n \n6. **Can the objective $w^TUw$ in lemma 1 be translated to an optimization problem of the weighted combination of $D_{KL}$?** \n\n The objective function $w^TUw$ in lemma 1 cannot be translated to an optimization problem of a weighted combination of $D_{KL}$. The reason is that although the optimal $\\phi^*$ is a linear combination of the steepest descent directions $\\phi_i^*$, when we apply them to push forward the current distribution, the linear dependency is not preserved in the KL divergences. \n \n7. **In line 175, using a different subscript for $w$** \n\n Thanks for your suggestion. We have updated the revised version accordingly.\n \n8. **The captions of Figure 4** \n\n Thanks. We have corrected it in the revised version.\n\n**SVGD-related** \n\n1. **Mode collapse problem** \n\n Thanks for your question. We did not encounter the mode collapse problem on the synthetic datasets or in multi-task learning. We conjecture that in our approach, the particles need to spread out sufficiently to cover multiple target distributions (or tasks), and the inherent repulsive forces further prevent collapse. Therefore, particle collapse is less likely to happen. However, we certainly believe that carefully investigating the behaviour of SVGD on high-dimensional data is a necessary next step to pursue in future work.\n\n2. **Asymptotic behavior when $\\sigma$ goes to $\\infty$** \n\n Yes, in that case, the formulation might be asymptotically relevant to some extent but does not exactly reduce to MOO.\n\n**Limitation** \n\n Finding the Pareto front is preferred when users want to obtain a collection of diverse Pareto optimal solutions with different trade-offs among all tasks. Thus, one can select their preferred trained model from the solution set at inference time. For instance, [2] utilized MOO to jointly train the main task with another auxiliary task and then selected the trained model that had outstanding performance on the main task only. However, some of them might be *extreme* models (i.e., models that work well on one task while performing poorly on the others). \n\n[2] Yim, Jonghwa, and Sang Hwan Kim. \"Learning boost by exploiting the auxiliary task in multi-task domain.\" arXiv preprint arXiv:2008.02043 (2020).", " We greatly appreciate the reviewer's detailed and constructive comments and suggestions. Below we address the main concerns raised in your review.\n\n1. **Lack of convergence analysis of MT-SGD** \n\n We totally agree with your observation. Currently, we only prove that if the particles are updated according to the proposed algorithm, then at each step the KL divergences to the target distributions are decreased by a certain amount. However, we do not have any theoretical results regarding the convergence analysis at this point. \n\n Moreover, Theorem 2 characterizes the Pareto stationary condition reached when our approach converges. 
This theorem shows that the proposed approach halts when the steepest descent directions $\\phi_i^*$ are linearly dependent, hence the resulting pushforward function $\\phi^*$ is zero and the current distribution no longer changes. \n\n Additionally, developing a convergence analysis for our approach is a challenging theoretical problem for further study. Evidently, for non-linear objective functions (e.g., a deep learning empirical loss), we need to make strong assumptions about their local smoothness to guarantee convergence to a local minimum. For non-linear multi-objective optimization, there has not been any work addressing its convergence to a Pareto stationary point, to the best of our knowledge.\n\n2. **Equation line 166-167** \n\n Yes, you're right. In detail, the formulation is written as $p(\\theta \\mid D) \\propto p(D \\mid \\theta)\\, p(\\theta)$ (following Bayes' theorem). However, for a fair comparison, the prior (initial weight distribution) $p(\\theta)$ is retained from previous work (MGDA, ParetoMTL, MOO-SVGD), which is the default initialization of torch (e.g. kaiming_uniform for [fully connected](https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/linear.py#L103) and [conv layers](https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/conv.py#L150)). Therefore, we treat the prior $p(\\theta)$ as a constant term and omit it from the main objective for brevity. We have updated the revised version to explicitly state this omission.\n\nWe hope that you can reconsider the review score. Please let us know if you would like us to do anything else.", "This paper proposes MT-SGD for multi-objective optimization problems, which leverages the idea of SVGD to generate diverse particles w.r.t. the joint high-likelihood region for all distributions. Experiments on both synthetic and real datasets (Multi-Fashion+Multi-MNIST and CelebA) demonstrate the effectiveness of MT-SGD compared to MGDA, Pareto MTL and MOO-SVGD. Strength:\n+ Writing is good and the paper is easy to follow\n+ Finding diverse particles on the Pareto front is a well-motivated topic.\n+ The proposed method is novel and inspiring\n\nWeakness:\n- Lack of convergence analysis of MT-SGD. Theorems 1 & 2 prove that MT-SGD reduces every KL divergence simultaneously, but it is unclear what the variational distribution $q$ or the particle set $\\{\\theta_i\\}_{i=1}^M$ will be at convergence.\n- The equation between line 166 and line 167 seems weird. It should be $p(\\theta|D) \\propto p(D|\\theta) p(\\theta)$. Besides, how is the prior $p(\\theta)$ chosen in the experiments? None None", " This paper proposes a variant of SVGD for multi-task learning. In particular, instead of driving particles towards the high-density region of a single distribution like SVGD, the proposed MT-SGD drives the particles towards the joint region of all densities. The main novelty of this paper is to propose a weighted combination of the driving directions from each SVGD such that the combined direction is guaranteed to decrease all KL divergences between the current distribution and each target distribution. Additionally, like the connection of SVGD to SGD, the authors showed the connection of MT-SGD to MOO. Empirically, the proposed MT-SGD consistently achieves better results than the baselines. **Strength**:\nThis paper adopts the framework of MOO and extends it to SVGD. 
The basic idea is very similar to MOO; the novelty comes from the incorporation of SVGD. The objective for searching $w^*$ can also be viewed as an adaptation of the MOO objective. In summary, the proposed approach is novel to some extent, but as mentioned by the author, this is not a completely novel framework. In terms of significance, the proposed framework demonstrates clear advantages compared to baselines and seems to be a good contribution to Bayesian multi-task learning. \n\n**Weakness**:\nI have checked the proofs in the appendix, which seem to be correct. Thus, my main criticism is its presentation. I will elaborate more in the question section. **Clarity**:\n1. From my understanding, the main novelty is the objective in lemma 1. Is there an intuition on why $w^TUw$ is a reasonable objective? It seems that this objective is very similar to the one used in MOO (Eq.3), but with the derivative of $L$ replaced by the functional derivative of $D_{KL}$. Is this true? In that case, personally, I think it would be better if MT-SGD were introduced from the MOO point of view. In this way, it is also easier for the reader to understand the intuition. Also, can MT-SGD be derived based on the Pareto stationary KKT condition?\n\n2. In Eq.1, what does the comma mean? I understand this is a direct adaptation of MOO, but can you explain this minimization, since the typical minimization is targeted at a single value, but here, we have a list of $D_{KL}$. \n\n3. In line 49, do you mean empirical distributions?\n\n4. In line 107, is the optimal transformation $T$ for the current step?\n\n5. In line 112, should it be $O(\\epsilon^2)$?\n\n6. Just curious, can the objective $w^TUw$ in lemma 1 be translated to an optimization problem of the weighted combination of $D_{KL}$? \n\n7. In line 175, I suggest using a different subscript for $w$, since $t$ is already reserved for the number of SVGD steps in algorithm 1. \n8. For the captions of Figure 4, it should be the red curve. \n\n**SVGD-related**:\n1. One of my concerns about using SVGD is its mode collapse problem, which has been extensively reported [1]. This means that without proper tuning of the repulsive force, the particles tend to collapse to a single point in high dimensions. Have you noticed this behaviour? If not, why?\n2. In line 143, the authors mentioned that when $\\sigma \\rightarrow \\infty$, $U$ reduces to the inner product of the score function. But it seems that the update of MT-SGD, in this case, does not recover MOO, since the kernel matrix in SVGD is now a matrix of ones, so the update takes the gradients of the other particles into consideration.\n\n\n[1]: Gong, Wenbo, Yingzhen Li, and José Miguel Hernández-Lobato. \"Sliced kernelized Stein discrepancy.\" arXiv preprint arXiv:2006.16531 (2020).\n This work does not have any potential negative societal impact. In terms of the potential limitation, the authors argue that the proposed MT-SGD tries to find the Pareto common, whereas the other baselines try to find the Pareto front. I am curious under what scenario the Pareto front is preferred to the Pareto common? ", "In this work, the authors introduce Stochastic Multiple Target Sampling Gradient Descent (MT-SGD), an extension of Stein variational gradient descent (SVGD) that tries to approximately sample from multiple distributions *simultaneously*. The authors then demonstrate that MT-SGD is well suited for Bayesian multi-task learning. # Strengths\n\nMT-SGD is a natural extension of SVGD that is theoretically motivated. 
Moreover, the authors were also able to demonstrate that for RBF kernels, as the scale goes to infinity, MT-SGD reduces to multiple-objective optimization, which is amazing! In addition, the computational complexity of the algorithm seems to be on the order of SVGD, with the main difference being solving the quadratic programming problem for finding the optimal combination of the gradients. While I'm not familiar with the SVGD and multi-task learning literature, this approach seems both highly novel and theoretically grounded. The experiments section is compelling as well.\n\n# Weaknesses\n\nFirstly, I'm a little confused about the multi-task learning formulation. Given the data, one would want to compute the following posterior distribution\n$$ p(\\alpha, \\beta^1, \\ldots, \\beta^K \\mid \\mathbb{D}) \\propto p(\\alpha) \\prod_{j=1}^K p(\\beta^j) \\prod_{i=1}^N p(y_{ij} \\mid x_i, \\alpha, \\beta^j) $$\nConditioned on the non-shared parts, $\\beta^1, \\ldots, \\beta^K$, the posterior for the shared parts is\n$$ p(\\alpha \\mid \\mathbb{D}, \\beta^1, \\ldots, \\beta^K) \\propto p(\\alpha) \\prod_{j=1}^K \\prod_{i=1}^N p(y_{ij} \\mid x_i, \\alpha, \\beta^j) $$\nwhere we can see that this conditional posterior already attempts to find an $\\alpha$ that is good for all tasks simultaneously. Moreover, just regular SVGD can be used for this. I don't see the advantage, nor the intuition, of formulating the problem as $K$ individual posteriors. I think it would strengthen the approach if the authors compared SVGD on the posterior I have outlined.\n\nNext, while the experiments section is great, it is **very** concerning that there are no error bars for any of the experiments in section 4.2, especially given how close the accuracies are for each of these tasks. Thus, I think it is imperative that the authors run the experiments over a number of realizations to make me more confident in their results.\n\nPlease let me know if I have misunderstood anything! I will also be happy to raise my score if the authors get back to me about my concerns. 1. In section 2.2.1 in the appendix, there is an interesting phenomenon in Figure 2 where increasing the number of particles, $M$, seems to **deteriorate** the performance substantially. A priori, I don't see why this should be the case. I would appreciate it if the authors could clear this up.\n2. In the section on multi-task learning, the authors should state that they are using a non-informative prior for the parameters of the model. While on the checklist the authors stated that they discuss the limitations in the appendix, I don't see them anywhere. To me, the two biggest limitations are the scaling with the number of tasks, $K$, and the number of particles, $M$ (which is inherent to SVGD). I think the authors should mention them in the work, though it doesn't detract from the novelty of the work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 2 ]
[ "i5XNcvNIhB4", "PsAWGzbNwdt", "0lgSxt6XD0l", "Cyqoskf1jqP", "koGlyUM6G-X", "1Bfufgou0Nx", "y8Gdyrr4TB9", "x2IxHQwXoKQ", "1Z-7GKS_AIC", "XhH5KTWEHfj", "h8q2OUzC4GI", "ZpUQy03IVbA", "yn0fQ1UXHHh", "ViKY-WVuwGb", "nips_2022_Iksst2czYoB", "nips_2022_Iksst2czYoB", "nips_2022_Iksst2czYoB" ]
nips_2022_wjSHd5nDeo
Multi-Sample Training for Neural Image Compression
This paper considers the problem of lossy neural image compression (NIC). Current state-of-the-art (SOTA) methods adopt a uniform posterior to approximate quantization noise, and a single-sample pathwise estimator to approximate the gradient of the evidence lower bound (ELBO). In this paper, we propose to train NIC with a multiple-sample importance weighted autoencoder (IWAE) target, which is tighter than the ELBO and converges to the log likelihood as the sample size increases. First, we identify that the uniform posterior of NIC has special properties, which affect the variance and bias of pathwise and score function estimators of the IWAE target. Moreover, we provide insights on a commonly adopted trick in NIC from a gradient variance perspective. Based on these analyses, we further propose multiple-sample NIC (MS-NIC), an enhanced IWAE target for NIC. Experimental results demonstrate that it improves SOTA NIC methods. Our MS-NIC is plug-and-play, and can be easily extended to neural video compression.
Accept
This paper studies the problem of neural image compression (NIC). Standard methods for NIC use a "single-sample pathwise estimator" to estimate the ELBO gradients when optimizing the rate-distortion loss function. The paper improves the estimation by using multiple samples, leading to better compression results. Experimental results show that multi-sample methods improve compression performance in many cases. The reviewers' comments are appropriately addressed, and all reviewers appreciate the contribution of this paper to neural image compression, so I recommend acceptance.
train
[ "56y96darhdk", "PQoHuGSmBi", "Ny2RouvxdTy", "v4aJmL50Ttl", "8CNsGglSXvi", "lgSh_2GBplB", "HRQ06AI7Qtq", "TleKjGrsrq", "cRRrwb8PzY", "1GM-KYtKbMi", "J9yMpqSeTu", "4Uj0Cz5AjFGV", "LgKnwGh4iYFM", "3snQs7Rmrlj", "rZtzMELI-_7", "LacQHX8kFGh", "Zi7t_8odtLC" ]
[ "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the detailed response, which has addressed most of my concerns and provided interesting new insights to the problem. \n\nI think the paper made a good contribution that connects and fills in the blank of existing literature, and I've raised my score. ", " Hi, thanks for your feedback.\n\nWe are trying to express that \"UN+UQ\" performs much worse than \"UN+Q\", which is aligned with Fig. 2A of [Agustsson and Theis, 2020]. And we mean that \"UQ improves R-D performance.\" is dubious, not the original UQ paper is dubious.\n\nI think the word \"dubious\" is abused here. We will amend this in future version.\n\n### Reference:\nE. Agustsson and L. Theis. Universally quantized neural compression. Advances in neural information processing systems, 33:12367–12376, 2020.\n", " I appreciate the thorough response of the authors, which addresses most of my concerns and adds an interesting discussion.\n\nHowever, I find the additional appendix A.11.3 to unfairly characterise UQ.\n\nThe authors write in l.658 of the appendix:\n\"\nAs a matter of fact, the assumption of UQ that resolving training-testing distribution mismatch improves R-D performance is dubious. \n\"\n\nI find this claim really strange, given that in the UQ paper (https://arxiv.org/pdf/2006.09952.pdf, sec 5.3) it is written:\n\"\"\"\nWhen comparing the UN + UQ model which uses universal quantization to the test-time quantization baseline UN + Q, we see that despite the train-test mismatch using quantization improves the RD-performance at test-time (hatched area).\n\"\"\"\n\nSo this has already been observed and discussed in the original UQ paper, so what exactly is dubious?\n\nClearly there is a train-test mismatch as the authors have also observed (plain rounding does not give a uniform noise distribution as used in the noise proxy of Balle et al). The UQ paper shows that while it can be eliminated, performance drops unless soft-rounding is introduced.", " I appreciate that the authors took the time to write a thorough response (here and for the other reviewers). The comments addressed my questions/concerns, and the added pseudocode also adds clarity to the paper. I maintain my rating of 7 and support accepting this paper.", " Thank you to all the reviewers for the great effort in reviewing the paper and the authors for the responses. \n\nAs in the discussion period, I want to ensure that reviewers have read the authors' responses and engage with the authors if needed. \n\nIf you haven't done this, could you please take a moment to read through the authors' responses, update the reviews to indicate that you have read the authors' responses, or communicate with the authors if needed? You can also share in private conversations with the reviewing team.\n\nPlease continue to share your thoughts. Thank you!", " Thanks for your detailed review. We have uploaded the revised main text and supplementary material, with all the revisions marked in blue. Due to the space limitation, we could not include all the amendment in the main text. Below is a summary of revisions:\n\n### Main Text\n* Sec 2.2: We revise the expression about absolute continuity and expand the discussion on the impact of gradient estimators on IWAE-DReG and add detailed formulas explaining why it does not work. (as suggested by XrNr)\n* Sec 3.2: We revise the assumption of convergence condition. 
(as suggested by XrNr)\n* Sec 4.2: We add discussion that stochastic lossy encoding remains dubious, and we provide a pointer to an in-depth discussion in the Appendix. (as suggested by XrNr, 5bm9)\n* Sec 5.3: We move a table to the appendix to make room for extra discussions.\n\n### Appendix\n* Appendix A.4: We revise the assumption and proof of the convergence condition. (as suggested by XrNr)\n* Appendix A.9: We add additional discussion on why the performance improvement of the MS approach is limited (as suggested by XrNr) and why the MS-SSIM result on [Cheng et al. 2020] is negative.\n* Appendix A.11.1: A new section discussing the inference-time ELBO and softmin coding (as suggested by XrNr, 5bm9).\n* Appendix A.11.2: A new section discussing the stochastic lossy encoder and Universal Quantization (as suggested by XrNr, 5bm9).\n* Appendix A.11.3: A new section discussing the training-testing distribution mismatch and Universal Quantization (as suggested by XrNr, 5bm9).\n* Appendix A.12: A new section discussing the impact of our method on training time and how to balance the batchsize and sample size (as suggested by 8LQV, 5bm9).\n* Appendix A.13: A new section providing implementation guidance (as suggested by 8LQV).\n\n### Reference\n* Y. Yang, R. Bamler, and S. Mandt. Improving inference for neural image compression. Advances in Neural Information Processing Systems, 33:573–584, 2020.\n* M. Song, J. Choi, and B. Han. Variable-rate deep image compression through spatially-adaptive feature transform. In 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, pages 2360–2369. IEEE, 2021.", " 
In fact, sampling from posterior is much cheaper than inferring the posterior parameters. Similar spirit has also been adopted in improving the efficiency of sampling from Gumbel-Softmax relaxed posterior [Paulus et al., 2020].\n* From the above discussions, one might conclude that we should use as much multiple samples as possible and set batchsize to $1$. However, further investigation is required. As stated in [Rainforth et al., 2018], the gradient SNR of encoder (inference model) scales with $\\Theta(MK)$, and the gradient SNR of decoder (generative model) scales with $\\Theta(M/K)$, where M is the batchsize and K is the sample size. Another assumption required prior to further discussion is that the suboptimality of VAE mainly comes from inference model [Cremer et al. 2018], which means that the encoder is harder to train than the decoder. With those assumption in mind, combined with the sample efficiency between our approach and batchsize, we can formulate the problem of finding optimal batshsize $M$, sample size $K$, given limited time $T$, into a quadratic integer programming problem: $M,K\\leftarrow\\arg\\max MK, s.t. T(M)+T(K)<T$, where $T(M)$ and $T(K)$ are the time spend on $M$ batchsize and $K$ multiple sample size, and $T$ is the overall time limit. That is the simplistic case when we exclude the effect of tighter ELBO and consider inference suboptimality only. In practice the overall performance is determined by both inference suboptimality and ELBO-likelihood gap. In a word, we believe there is no general answer for all problem. But a reasonable balance between sample size and batchsize is the golden rule to maximize performance (as $T(M)$ and $T(K)$ grow linearly with batchsize/sample size). And the obvious case is that neither setting batchsize to $M=1$ and give all resources to $K$, nor setting sample size $K=1$ and give all resources to $M$ is optimal.\n* __For revision__: we will include those discussions.\n\n### Reference\n* J. Ballé, D. Minnen, S. Singh, S. J. Hwang, and N. Johnston. Variational image compression with a scale hyperprior. In International Conference on Learning Representations, 2018.\n* Z. Cheng, H. Sun, M. Takeuchi, and J. Katto. Learned image compression with discretized gaussian mixture likelihoods and attention modules. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 7939–7948, 2020.\n* Mark Schmidt, Lecture Notes on CPSC 540 - Machine Learning https://www.cs.ubc.ca/~schmidtm/Courses/540-W19/ 2019\n* M. B. Paulus, C. J. Maddison, and A. Krause. Rao-blackwellizing the straight-through gumbel-365\n* T. Rainforth, A. Kosiorek, T. A. Le, C. Maddison, M. Igl, F. Wood, and Y. W. Teh. Tighter variational bounds are not necessarily better. In International Conference on Machine Learning, pages 4277–4285. PMLR, 2018.\n* C. Cremer, X. Li, and D. Duvenaud. Inference suboptimality in variational autoencoders. In International Conference on Machine Learning, pages 1078–1086. PMLR, 2018.\n", " ### W1 Why multi-sample methods would perform worse on MS-SSIM and the [Cheng et al. 2020]?\n* One possible explanation is that the gradient property of [Cheng et al. 2020] is not as good as [Ballé et al. 2018]. As a reference, the training of [Burda et al., 2016] totally fails on [Cheng et al. 2020] and produces garbage R-D results (See Tab. 10). This bad gradient property might account for the bad results of MS-SSIM on [Cheng et al. 
2020], as the gradient of IWAE and MS-NIC is certainly tricker than the gradient of single sample approaches.\n* As evidence, when we are studying the stability of the network in [Cheng et al. 2020], we find that without limitation of entropy model (imagine setting $\\lambda$ to $\\infty$) and quantization, [Cheng et al. 2020] produces $nan$ before $2000$ epochs of training while [Ballé et al. 2018] converges properly. Under such setting, [Ballé et al. 2018] achieve PSNR of $48.54$db, while the best of [Cheng et al. 2020] is $43.27$db. This result indicates that the backbone of [Cheng et al. 2020] is not as good as [Ballé et al. 2018] as an auto-encoder. Moreover, when we finetune these pre-trained model into a lossy compression model, Cheng et al. [2020] produces nan results while Ballé et al. [2018] converges. This result indicates that the backbone of Cheng et al. [2020]’s gradient is probably more difficult to deal with than Ballé et al. [2018].\n* __For revision__: we will include those discussions.\n### W3 Either actual code or more guidance on how to implement MS-NIC-DMS would strengthen the paper. I think I can figure this out by reviewing the IWAE and SGVB algorithms.\n* We are glad to provide the detailed pytorch-style sudo code for implementation guidance. We extract and rewrite the core code for computing the loss of MS-NIC-DMS/MS-NIC-MIX.\n```python\nimport torch\nfrom torch.nn import functional as F\n\ndef IWAELoss(minus_elbo):\n '''\n args\n ----\n minus_elbo: tensor, [b, k], which is R + \\lambda D\n\n return\n ------\n local iwae loss\n '''\n # this is the minus ELBO related to y part, to get the real ELBO:\n log_weights = - minus_elbo.detach() # no gradient given to weights\n weights = F.softmax(log_weights, dim=1) # B, K\n loss_b = torch.sum(minus_elbo * weights, dim=1, keepdim=False) # B\n loss_iwae = torch.mean(loss_b)\n return loss_iwae\n\ndef DMSLoss(x, x_hat, y_likelihood, z_likelihood, lam):\n '''\n args\n ----\n x: original image: [b, c, h, w]\n x_hat: reconstructed image: [b, k, c, h, w], k is the number of samples\n y_likelihood: [b, 192/320, h//8, w//8, k^2], as original paper of [Ballé et al. 2018], the number of channels 192/320 is determined by lambda, k^2 is the number of samples in DMS setting, with MS-NIC-MIX, this k^2 is k\n z_likelihood: [b, 128/192, h//64, w//64, k], as original paper of [Ballé et al. 2018], the number of channels 128/192 is determined by lambda, k is the number of samples\n\n return\n ------\n total iwae loss\n '''\n b, c, h, w = x.shape\n k = x_hat.shape[0] // x.shape[0]\n x = torch.repeat_interleave(x, repeats=k, dim=0)\n x = x.reshape(b, k, c, h, w)\n x_hat = x_hat.reshape(b, k, c, h, w)\n d_loss = torch.mean(lam * 65025 * (x - x_hat)**2, dim=(2,3,4), keepdim=False)\n yz_loss = -torch.sum(torch.log2(y_likelihood), dim=(1,2,3)).reshape(b, -1) / (h * w)\n z_loss = -torch.sum(torch.log2(z_likelihood), dim=(1,2,3)).reshape(b, -1) / (h * w)\n local_d = IWAELoss(d_loss)\n local_yz = IWAELoss(yz_loss)\n local_z = IWAELoss(z_loss)\n loss_total = local_d + local_yz + local_z\n\n return loss_total\n```\n* __For revision__: we will include those discussions.", " ### Q3 How the proposed objective affects training time\n* The MS-NIC-MIX and MS-NIC-DMS is more time efficient (in terms of FLOPS/MACS) than simply increase batchsize. For MS-NIC-MIX with $k$ samples, the $y$ encoder $q(y|x)$ is inferred with only 1 sample, and the $z$ encoder, decoder and entropy model $q(z|x),p(y|z),p(z)$ is inferred with only 1 sample. 
And only the $y$ decoder $p(x|y)$ is inferred $k$ times. This sample efficiency makes the training time grows slowly with $k$. In our experiment, the MS-NIC-MIX with 8 samples only increases the training time by $\\times 1.5$, the MS-NIC-MIX with $16$ samples only increases the training time by $\\times 3$. The MS-NIC-DMS is slightly slower, as the $z$ entropy model and decoder $p(z),p(y|z)$ also requires $k$ times inference. However, it is still much more efficient than batchsize $\\times k$ as all the encoders $q(y|x),q(z|x)$ requires only 1 inference. In fact, sampling from posterior is much cheaper than inferring the posterior parameters. Similar spirit has also been adopted in improving the efficiency of sampling from Gumbel-Softmax relaxed posterior [Paulus et al., 2020].\n* From the above discussions, one might conclude that we should use as much multiple samples as possible and set batchsize to $1$. However, further investigation is required. However, the trade-off between batchsize and sample number might be much more subtle and complicated despite its innocent look. As stated in [Rainforth et al., 2018], the gradient SNR of encoder (inference model) scales with $\\Theta(MK)$, and the gradient SNR of decoder (generative model) scales with $\\Theta(M/K)$, where M is the batchsize and K is the sample size. Another assumption required prior to further discussion is that the suboptimality of VAE mainly comes from inference model [Cremer et al. 2018], which means that the encoder is harder to train than the decoder. With those assumption in mind, combined with the sample efficiency between our approach and batchsize, we can formulate the problem of finding optimal batshsize $M$, sample size $K$, given limited time $T$, into a quadratic integer programming problem: $M,K\\leftarrow\\arg\\max MK, s.t. T(M)+T(K)<T$, where $T(M)$ and $T(K)$ are the time spend on $M$ batchsize and $K$ multiple sample size, and $T$ is the overall time limit. That is the simplistic case when we exclude the effect of tighter ELBO and consider inference suboptimality only. In practice the overall performance is determined by both inference suboptimality and ELBO-likelihood gap. In a word, we believe there is no general answer for all problem. But a reasonable balance between sample size and batchsize is the golden rule to maximize performance (as $T(M)$ and $T(K)$ grow linearly with batchsize/sample size). And the obvious case is that neither setting batchsize to $M=1$ and give all resources to $K$, nor setting sample size $K=1$ and give all resources to $M$ is optimal.\n\n* __For revision__: we will include those discussions.\n\nReference:\n* J. Townsend, T. Bird, and D. Barber. Practical lossless compression with latent variables using bits back coding. In International Conference on Learning Representations, 2018\n* L. Theis and E. Agustsson. On the advantages of stochastic encoders. arXiv preprint arXiv:2102.09270, 2021.\n* T. Ryder, C. Zhang, N. Kang, and S. Zhang. Split hierarchical variational compression. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 386–395, 2022.\n* M. B. Paulus, C. J. Maddison, and A. Krause. Rao-blackwellizing the straight-through gumbel-365\nsoftmax gradient estimator. arXiv preprint arXiv:2010.04838, 2020.\n* G. Flamich, M. Havasi, and J. M. Hernández-Lobato. Compressing images by encoding their latent representations with relative entropy coding. Advances in Neural Information Processing Systems,33:16131–16141, 2020.\n* T. 
Rainforth, A. Kosiorek, T. A. Le, C. Maddison, M. Igl, F. Wood, and Y. W. Teh. Tighter variational bounds are not necessarily better. In International Conference on Machine Learning, pages 4277–4285. PMLR, 2018.\n* C. Cremer, X. Li, and D. Duvenaud. Inference suboptimality in variational autoencoders. In International Conference on Machine Learning, pages 1078–1086. PMLR, 2018.\n", " ### Q2 How the quantization vs noise mismatch interacts with the whole analysis\n* That is another really interesting problem. As we stated above, the benefits of our method come from avoiding posterior collapse rather than from achieving a tighter ELBO at inference time. We are quite aware of the issue of the training/inference gap of $q(y|x)$, and that is the reason why we analyze the latent space in both the continuous (AUN) case and the discrete (quantization) case (see Sec. 5.3). It is shown that the multiple-sample latent is consistently richer in both the continuous space and the discrete space, which means that our method is effective regardless of the mismatch. \n* However, as far as we know, this training/inference mismatch is more severe than the UQ paper has described. We examined the quantization error of $y$, and we were surprised to find that the quantization error distribution is not a uniform distribution. Instead, its histogram has a slab-and-spike shape (see histogram at https://ibb.co/GxkwvNT). This histogram indicates that the basic assumption that the quantization error is a uniform distribution is dubious. We believe this mismatch is another reason why UQ does not work well when distortion is measured in MSE.\n* __For revision__: we will include those discussions.", " Thanks for your detailed review. We are glad to answer your questions and offer a few clarifications on some misunderstandings: \n\n### Q1 Do we have any guarantees that the \"tighter ELBO\" that we get from the multi-sample NIC objective actually improves the inference objective?\n* This is a very interesting question. The inference-time tighter ELBO is another under-explored issue. In fact, the training-time tighter ELBO and the inference-time ELBO are independent: we can train a model with a tighter ELBO and infer with the single-sample ELBO, or conduct multiple-sample inference on a model trained with a single sample. The general idea is: \n * The training-time tighter ELBO benefits the performance in terms of avoiding posterior collapse, as we state and empirically verify in Sec. 5.3. We adopt deterministic rounding at inference time, and there is no direct connection between the training-time tighter ELBO and the inference-time R-D cost. However, we indeed end up with a richer latent space (Sec. 5.3), which means more active latent dimensions and less bitrate waste.\n * The inference-time tighter ELBO sounds really alluring for the compression community. However, two pending issues remain to be resolved before the inference-time tighter ELBO can be applied:\n * How this inference-time multiple-sample ELBO relates to the R-D cost remains under-explored. 
In other words, whether entropy coding itself can achieve the R-D cost defined by the multiple-sample ELBO is an open question.\n * The inference-time multiple-sample ELBO only makes sense with a stochastic encoder (one cannot importance-weight copies of the same deterministic ELBO), whose impact on lossy compression remains dubious.\n* For the first pending issue, softmin coding [Theis and Ho 2021] was proposed to achieve a multiple-sample ELBO based on Universal Quantization (UQ) [Agustsson and Theis, 2020]. However, it is not a general method and is tied to UQ. Moreover, as we stated in Sec 4.3, its computational cost is prohibitively high and its improvement is marginal. But those are not the real problem of softmin coding. Instead, the real problem is the second pending issue: the stochastic lossy encoder. Softmin coding relies on UQ, and UQ relies on a stochastic lossy encoder.\n* It is known to the lossless compression community that a stochastic encoder benefits compression performance [Theis and Agustsson, 2021] with the aid of bits-back coding [Townsend et al., 2018], while bits-back coding is not applicable to lossy compression. For lossy compression, currently we know that a stochastic encoder degrades R-D performance, especially when distortion is measured in MSE [Ryder et al., 2022]. In the original UQ paper, the performance decay of vanilla UQ over deterministic rounding is obvious (~1 dB). While writing this paper, we also found that the performance decay of UQ is quite high:\n\n | | y bpp | z bpp | MSE | RD Cost |\n | ---------------------- | ------ | ------- | ----- | ------- |\n | Deterministic Rounding | 0.3347 | 0.01418 | 26.86 | 0.7552 |\n | Universal Quantization | 0.5379 | 0.01431 | 23.94 | 0.9080 |\n\n* We can see that compared with deterministic rounding, UQ's bpp ($\\log p(y)$ in the ELBO) is higher, and its MSE ($\\log p(x|y)$ in the ELBO) is lower. In general, we find that UQ's performance decay is much more significant than the potential performance gain from softmin coding.\n* In our humble opinion, this performance decay is brought by the stochastic encoder itself. For lossless compression, the deterministic encoder and the stochastic encoder are just two types of bit allocation preferences:\n * The deterministic encoder allocates less bitrate to $\\log p(y)$, more to $\\log p(x|y)$, and $0$ to $\\log q(y|x)$.\n * The stochastic encoder allocates more bitrate to $\\log p(y)$, less to $\\log p(x|y)$, and negative bitrate to $\\log q(y|x)$.\n* Therefore, for lossless compression, it is reasonable that the bitrate increase in $\\log p(y)$ and $\\log p(x|y)$ can be offset by the bits-back coding bitrate $\\log q(y|x)$, while for lossy compression, there is no way to bits-back $\\log q(y|x)$ (as we cannot reconstruct $q(y|x)$ without $x$). If the bitrates of $\\log p(y)$ and $\\log p(x|y)$ increase, the R-D cost simply increases for lossy compression. Until other entropy coding techniques that can achieve an R-D cost equal to the minus ELBO with $E_q[\\log q]\\neq 0$ become mature (such as relative entropy coding [Flamich et al., 2020]), we have no way to implement a stochastic lossy encoder with reasonable R-D performance. For now, we have no good way to achieve a tighter ELBO at inference time.\n* __For revision__: we will include those discussions.", " ### Weakness 2 part 2: \n* We think that the performance improvement of our approach is bounded by how severe the posterior collapse is in neural image compression. We measure the per-dimension variance of the latent using the data in Fig. 3. 
From that figure, it can be observed that the major divergence between IWAE and VAE happens when the variance is very small, while in the region where the variance is reasonably large, the gain of IWAE is not that large. This probably indicates that the posterior collapse in neural image compression is only alleviated to a limited extent.\n\n* __For revision__: we will extend Sec 5.3 to include these discussions. \n\n### Question 1,3: \n\n* The problem of taking $k$ samples from $q(z,y|x)=q(z|x)q(y|x)$ is that $q(z|x)$ and $q(y|x)$ are independent. Assuming we sample $y_{1:k}$ from $q(y|x)$, we still only have one distribution for $z$, namely $q(z|x)$. Again, if we also draw $z_{1:k}$ from $q(z|x)$, it would actually be a waste of samples to pair $z_1$ with $y_1$, …, $z_k$ with $y_k$, as they are essentially independent. Moreover, the numbers of samples that we draw from $q(y|x)$ and $q(z|x)$ do not need to be the same, as $z,y$ factorize given $x$. Assuming we sample $y_{1:m}$ and $z_{1:n}$, a smarter way is to pair each $y$ with all $n$ samples of $z$, which means that we can generate $m\\times n$ sample pairs with $m+n$ draws. In fact, this leads to the MS-NIC-DMS described in Sec 3.2. \n\n* Thus, the plain IWAE sampling from $q(z,y|x)=q(z|x)q(y|x)$ is unreasonable, and the natural condition where IWAE fits is another form of factorization: $q(z,y|x)=q(z|y)q(y|x)$. Now, assuming we sample $y_{1:k}$ from $q(y|x)$, we have $k$ distributions $q(z|y_i)$. However, this type of factorization brings severe performance decay due to the gradient variance issue, as we show in Sec 2.3, and the plain IWAE is tied to this type of factorization. That is the reason why its performance is much worse in Tab. 5. \n\n### Question 2: \n* Thanks for pointing this out. Probably the proper expression for Lines 187-188 is:\n * $\\mathcal{L}^{DMS}_{k,l}\\xrightarrow{as} \\log p(x)$, under the assumption that $p(x|y_i)p(y_i|z_j)/q(y_i|x)$ and $p(x|z_j)p(z_j)/q(z_j|x)$ are bounded.\n* And the proof follows the Strong Law of Large Numbers instead of the Weak Law of Large Numbers.\n* __For revision__: we will revise the proposition and proof. We are OK to abandon this claim if you still find it problematic, as it does not harm our central claims.\n\n### Question 4: \n* Thanks for pointing it out.\n* __For revision__: we will revise this as much as we can.\n\n### Reference:\n* G. Tucker, D. Lawson, S. Gu, and C. J. Maddison. Doubly reparameterized gradient estimators for Monte Carlo objectives. 2018.\n* R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3):229–256, 1992.\n* A. Mnih and D. Rezende. Variational inference for Monte Carlo objectives. In International Conference on Machine Learning, pages 2188–2196. PMLR, 2016.\n* L. Theis and J. Ho. Importance weighted compression. In Neural Compression: From Information Theory to Applications – Workshop @ ICLR 2021, 2021.\n* E. Agustsson and L. Theis. Universally quantized neural compression. Advances in Neural Information Processing Systems, 33:12367–12376, 2020.\n* Y. Burda, R. B. Grosse, and R. Salakhutdinov. Importance weighted autoencoders. In ICLR (Poster), 2016.\n* J. Townsend, T. Bird, and D. Barber. Practical lossless compression with latent variables using bits back coding. 
In International Conference on Learning Representations, 2018.", " ### Weakness 2 part 1: \n\n* The major reason why we did not include results for softmin coding [Theis and Ho 2021] lies not in softmin coding itself, but in the Universal Quantization (UQ) [Agustsson and Theis, 2020] method it relies on. The original softmin paper only compares the R-D performance of softmin+UQ and UQ, but does not include deterministic scalar quantization. In fact, UQ makes the encoder stochastic, which brings an R-D performance decay compared with deterministic scalar quantization when distortion is measured by MSE. As shown in the original paper, UQ performs worse than scalar quantization without a trick named Soft Rounding. However, the Soft Rounding trick changes the variational posterior from a uniform distribution toward a spike-like distribution with weights concentrated in integer bins, which makes softmin coding inapplicable. The performance decay of UQ alone looks quite significant (~1 dB in PSNR), and the performance gain from softmin coding looks only marginal to us. \n\n* The inference-time multi-sample coding only makes sense with a stochastic encoder. However, in our humble opinion, the stochastic encoder itself is the reason why UQ does not work. While a stochastic encoder outperforms a deterministic encoder in lossless compression with bits-back coding [Townsend et al., 2018], it remains dubious in lossy compression, as the entropy of the stochastic encoder (the $\\log q(y|x)$ term in the ELBO) cannot be bits-back coded: bits-back coding is known to be incompatible with lossy compression. Thus, even if the ELBO of a stochastic encoder is tighter than that of a deterministic encoder, without bits-back coding we cannot remove the $\\log q(y|x)$ term from the bit stream. In fact, the stochastic encoder for lossy compression and lossy bits-back coding are quite interesting and relatively under-explored. There are indeed other approaches, such as relative entropy coding (REC), that can achieve a similar effect to lossy bits-back coding. However, it requires another dedicated entropy coding technique to achieve a tighter ELBO, just like softmin for UQ. We believe that the stochastic lossy encoder is out of the scope of our paper and deserves a separate, dedicated paper. And we think that multi-sample coding at inference time is not ready until the issue of the stochastic lossy encoder is addressed. \n\n* In fact, during the paper writing process, we explored this problem preliminarily. We found that applying UQ at inference time greatly harms the R-D performance compared with deterministic rounding:\n\n| | y bpp | z bpp | MSE | RD Cost |\n| ---------------------- | ------ | ------- | ----- | ------- |\n| Deterministic Rounding | 0.3347 | 0.01418 | 26.86 | 0.7552 |\n| Universal Quantization | 0.5379 | 0.01431 | 23.94 | 0.9080 |\n\n* And we find that the major reason UQ fails is that at inference time the quantization noise is not a uniform distribution at all! In fact, the real distribution of the quantization noise is pretty much a highly concentrated distribution around $0$ (see histogram at https://ibb.co/GxkwvNT), quite far away from a uniform distribution. We also find that this concentrated distribution is caused by most latent dimensions being quite close to $0$: if we remove the latent dimensions $y^i\\in [-0.5,0.5]$, then the quantization noise looks similar to a uniform distribution (see histogram at https://ibb.co/j57QB3T). 
So, if we apply direct rounding, these dimensions are kept at $0$ and the latent is sparse; adding uniform noise, however, loses this sparsity. As a matter of fact, the assumption of UQ does not hold at inference time. To wrap up, those observations on UQ really hindered us from exploring softmin coding while writing the paper.\n* __For revision__: we will extend Sec 4.3 to include these discussions. \n", " Thanks for your detailed review. We are glad to answer your questions and offer a few clarifications on some misunderstandings: \n\n### Weakness 1: \n\n* The major result and conclusion of Sec 2.1-2.2 is not that REINFORCE [Williams, 1992] should not be used (which is obvious), but that the DReG [Tucker et al., 2018] estimator should not be used (which is much less obvious). We agree that no one would adopt REINFORCE when the reparameterization trick is available. However, the DReG estimator at first glance indeed looks like an innocent improvement over IWAE [Burda et al., 2016]: it is a path-wise estimator instead of a score-function estimator. However, its unbiasedness is built upon the equivalence between REINFORCE and the reparameterization trick. In fact, we wrote Sec 2.1 and Sec 2.2 because we first found the surprising result that DReG fails (Tab 1). Then, we carefully examined the derivation of DReG and, with the help of [Mohamed et al., 2020], found that it is not compatible with the uniform posterior. We then determined to fully discuss this issue and wrote Sec 2.1 and Sec 2.2 to go over the applicability of common gradient estimators to neural compression. We think that Sec 2.1 and 2.2 are informative for neural image compression, not because straightforward REINFORCE might be adopted, but because more advanced estimators such as VIMCO [Mnih and Rezende, 2016] and DReG could be adopted. \n* To be specific, let $x$ be the observed evidence and $y$ be the latent, and denote $w_i=\\frac{p_{\\theta}(x,y_i)}{q_{\\phi}(y_i|x)}$. With reparameterization $y_i=y(\\epsilon_i,\\phi)$, the total derivative of a $k$-sample IWAE target can be expanded as $\\nabla_{\\phi} E_{q_{\\phi}(y|x)}[\\log \\frac{1}{k}\\sum_{i=1}^{k}w_i]= E_{p(\\epsilon)}[\\sum_{i=1}^{k}\\frac{w_i}{\\sum_{j=1}^{k}w_j}(-\\frac{\\partial}{\\partial \\phi}\\log q_{\\phi}(y_i|x)+\\frac{\\partial \\log w_i}{\\partial y_i}\\frac{\\partial y_i}{\\partial \\phi})]$. The DReG estimator observes that the $\\frac{\\partial}{\\partial \\phi}\\log q_{\\phi}(y_i|x)$ part of this path-wise gradient resembles a score function estimator, and proposes to further reparameterize it as $E_{q(y_i|x)}[\\frac{w_i}{\\sum_{j=1}^{k}w_j}\\frac{\\partial}{\\partial \\phi}\\log q_{\\phi}(y_i|x)]= E_{p(\\epsilon_i)}[\\frac{\\partial}{\\partial y_i}(\\frac{w_i}{\\sum_{j=1}^{k}w_j})\\frac{\\partial y_i}{\\partial \\phi}]$. However, this step requires the equivalence between the score function gradient and the path-wise gradient: $E_{q_{\\phi}(y|x)}[f(y)\\frac{\\partial}{\\partial \\phi}\\log q_{\\phi}(y|x)]= E_{p(\\epsilon)}[\\frac{\\partial f(y)}{\\partial y}\\frac{\\partial y}{\\partial \\phi}]$. Such equivalence does not hold in general in our case.\n\n* __For revision__: we will add the detailed derivation of why DReG fails in the revised version of the paper. \n", " This paper proposes to train neural compression VAEs with multi-sample IWAE-style objectives, and analyzes the effect of the choice of variational distribution (constant-width uniform posterior; the choice between q(y|x) vs. 
q(y|z)) on various gradient estimators. They theoretically showed that the uniform posterior in neural compression automatically implements a form of the STL estimator [Roeder et al., 2017], and that naively applying the score function gradient estimator would give incorrect results. They propose two IWAE-style objectives that are potentially tighter than the standard NELBO objective, and show empirically that training on them yields a small improvement in R-D performance (5% rate savings). Strengths:\n\nThe paper is mostly clear and easy to follow; the proposed method is somewhat original (see Theis and Ho [2021]) and conceptually straightforward. Even though the experimental results are somewhat negative, I believe a better understanding of the connection between variational inference and compression would be of significant interest to researchers in both fields. \n\n\nWeaknesses:\n\n1. The authors made several observations regarding different gradient estimators for IWAE, specialized to the VI setup of neural image compression (NIC), but it's unclear what impact/significance they have. The first observation (sec 2.1; \"STL is unbiased because in NIC the posterior entropy is constant\") seems trivial, and the second observation (sec 2.2; the score function gradient doesn't work because the NIC posterior has bounded support) also seems straightforward after checking the conditions under which the score function gradient applies ([Mohamed 2020]). As for impact, the first observation appears to be a sanity check for the proposed multi-sample bound, and the second observation appears to caution against an approach (the score function gradient estimator) which nobody seems to be using in neural compression ...\n\n2. The experimental results are not very informative/satisfying. It's perhaps unsurprising that training a NIC model with a tighter bound does not yield significantly improved compression performance, unless an accompanying encoding/inference scheme is used to realize such a bound. It would be interesting to see if softmin coding (Theis and Ho [2021]) can fulfill the potential of the proposed multi-sample training scheme, even on a small scale. Similarly, it'd be helpful to provide some insight into why increasing the number of IWAE samples has a limited impact on performance. \n\n3. Also see below about potentially questionable claims. There are a few claims in this paper that I'm unsure about. \n\n1. I'm not following the discussions around line 153, regarding why the direct-y trick poses an issue for IWAE. Here we simply have a factorized posterior q(z, y|x) = q(z|x) q(y|x), so why can't we just use k samples of (z, y) from this joint posterior to form an IWAE estimator?\n\n2. The authors' statements about the IWAE estimator converging to log p(x) (e.g., line 187-189) can be misleading, as they may not hold for the neural compression VAEs. Again this has to do with the posterior having bounded support, whereas the prior and likelihood models have full support. \n\n3. Is there an obvious explanation for why the multi-sample IWAE [Burda et al., 2016] baseline is so much worse in Table 5? I could not find it. \n\n4. This is a nitpick: the statement \"q(z˜|y˜) and q(y˜|x˜) are not absolute continuous\" is technically incorrect (line 103), although I understand the authors meant to say they don't satisfy the \"absolute continuity condition of [Mohamed 2020]\". The uniform posterior distributions *are* absolutely continuous (w.r.t. Lebesgue measure) since they have densities. 
To be fair, the language used in [Mohamed 2020] is also confusing in this regard. It would be helpful to also discuss the computation/implementation complexity of the multi-sample approach. ", " Most work on lossy neural image compression (NIC) uses a "single-sample pathwise estimator" to estimate the ELBO gradients to optimize the core rate-distortion loss function. This paper observes that multi-sample methods provide a tighter bound than the ELBO and thus may lead to better compression results (a lower rate-distortion loss). The paper explores different formulations (IWAE, MS-NIC-MIX, and MS-NIC-DMS) that incorporate multiple samples, offers an explanation for why IWAE does not work here (\tilde{z} depends on \tilde{y}, which increases gradient variance compared to models where \tilde{z} depends on y), and shows empirically that both multi-sample methods (MIX and DMS) boost compression performance in most cases (the exception is an autoregressive entropy model optimized for MS-SSIM).\n\nGlossary:\nELBO = evidence lower bound\nIWAE = importance-weighted autoencoder\nMS = multi-sample\nMIX = combine multi- and single-sampling\nDMS = double multi-sampling\nMS-SSIM = multiscale structural similarity\n Strengths:\n1. This paper looks at a fundamental aspect of neural image compression (tighter bounds than the ELBO when uniform noise is used for latents) that, as far as I know, has not been previously discussed. The discussion of multi-sample methods is thorough.\n\n2. The approach is quite general in the sense that it is applicable to most neural image compression methods (models that use VQ are an exception) and the empirical benefits are non-trivial (2.38% - 4.93% rate savings for MS-NIC-DMS when optimizing for PSNR). (For reference, a 10% rate gain is considered huge, less than 1% is not very interesting since it's on par with the random variation across training runs, and a new generation of a standard codec typically provides a 30% rate savings.)\n\nWeaknesses:\n1. It's not obvious to me why multi-sample methods would perform worse on MS-SSIM and the (Cheng 2020) model. Additional exploration is warranted (as the authors say in the conclusion).\n\n2. Confidence intervals for the empirical results would strengthen the paper. The authors do address this in the "Checklist" section, and I agree that training models can be expensive and "error bars" are not commonly seen in NIC papers (though they should be). Perhaps a reasonable compromise would be to estimate the confidence interval in just a single case (e.g. PSNR for (Balle 2018)).\n\n3. Either actual code or more guidance on how to implement MS-NIC-DMS would strengthen the paper. I think I can figure this out by reviewing the IWAE and SGVB algorithms.\n\n Presumably, the training time for MS-NIC-DMS will be higher per step compared to the baseline single-sample approach. If so, does multi-sample optimization make sense with a fixed training budget? In other words, if I have a compute (or time) budget for training, should I prefer fewer MS steps or more SS steps? I don't see any potential negative societal impacts from this work. The authors did discuss limitations (the poor results on MS-SSIM with the (Cheng 2020) model are the primary limitation).", " Neural Image Compression (NIC) is typically formulated with a VAE objective for optimization, with the rate-distortion objective corresponding to the ELBO, but with some peculiarities: the posterior of the inference network is assumed to be uniformly distributed to simulate quantization noise. 
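(Concretely, as I read the standard NIC convention: training relaxes hard rounding into additive noise, $\tilde{y} = y + u$ with $u \sim \mathcal{U}(-\tfrac{1}{2}, \tfrac{1}{2})$, while inference applies actual quantization $\hat{y} = \lfloor y \rceil$; this is the train/test mismatch the peculiarity introduces.)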
With this in mind, the paper carefully studies a) the impact of this on various gradient estimators and b) how to properly adapt multi-sample objectives such as the IWAE target.\nThe proposed MS-NIC objective results in 2-5% BD-rate savings in terms of PSNR for two well-known approaches, (Balle et al, 2018) and (Cheng et al, 2020). Note: While very familiar with neural image compression, I am not an expert on multi-sample VAEs and gradient estimators.\n\nI found the paper well written. It gives a very detailed analysis of how the NIC objective connects to VAEs, going into details such as the direct-y trick of the hyperprior, how the uniform posterior is discontinuous, and how that affects various gradient estimators.\n\nThe resulting MS-NIC objective seems well motivated to me and the empirical results are promising, given that only the training objective has been changed. This means we get better systems with exactly the same architecture.\n\nHowever, this also brings me to a point of confusion: as far as I understand, the multi-sample objective is training-only, while at inference time we use the standard single-forward-pass encoder and quantization.\nSo do we have any guarantees that the "tighter ELBO" that we get from the multi-sample NIC objective actually improves the inference objective?\n\nThe other concern is how the quantization vs. noise mismatch interacts with the whole analysis, as indeed the NIC objective is only a proxy for quantization (while it can be implemented at test time with Universal Quantization).\n\nA third concern is how the proposed objective affects training time, since another way to improve results is to train longer or with a larger batch size; it would be interesting to understand the trade-offs there. See above. Yes." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2 ]
[ "rZtzMELI-_7", "Ny2RouvxdTy", "Zi7t_8odtLC", "HRQ06AI7Qtq", "nips_2022_wjSHd5nDeo", "nips_2022_wjSHd5nDeo", "LacQHX8kFGh", "LacQHX8kFGh", "Zi7t_8odtLC", "Zi7t_8odtLC", "Zi7t_8odtLC", "rZtzMELI-_7", "rZtzMELI-_7", "rZtzMELI-_7", "nips_2022_wjSHd5nDeo", "nips_2022_wjSHd5nDeo", "nips_2022_wjSHd5nDeo" ]
nips_2022_duBoAyn9aI
Controllable and Lossless Non-Autoregressive End-to-End Text-to-Speech
Some recent studies have demonstrated the feasibility of single-stage neural text-to-speech, which does not need to generate mel-spectrograms but generates the raw waveforms directly from the text. Single-stage text-to-speech often faces two problems: a) the one-to-many mapping problem due to multiple speech variations and b) insufficient high-frequency reconstruction due to the lack of supervision of ground-truth acoustic features during training. To solve problem a) and generate more expressive speech, we propose a novel phoneme-level prosody modeling method based on a variational autoencoder with normalizing flows to model the underlying prosodic information in speech. We also use the prosody predictor to support end-to-end expressive speech synthesis. Furthermore, we propose the dual parallel autoencoder to introduce supervision of the ground-truth acoustic features during training to solve problem b), enabling our model to generate high-quality speech. We compare the synthesis quality with state-of-the-art text-to-speech systems on an internal expressive English dataset. Both qualitative and quantitative evaluations demonstrate the superiority and robustness of our method for lossless speech generation while also showing a strong capability in prosody modeling.
Reject
I am in agreement with the last 2 reviewers: 1) there are many concerns about the technical correctness of the paper that need to be addressed, and 2) more thorough evaluations and experiments are needed. I am marking this as reject, and I encourage the authors to address the reviewer comments and resubmit.
train
[ "WV0tv4qeW4", "yFt4cs2CFWh", "iBiKeYD3qeZ", "DIxy90qzpB7", "Gw5EC0UvIW", "9MapiaTetKZ", "p0QN4RQprK-", "NdwOQdChi4F", "GNQWvVoFPsK", "jOS9v0Q5Pyk", "956tD4Lppgm", "IA7XbuUFB_H", "bfpfXvp0_aq" ]
[ "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer xNqf:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest.", " Dear Reviewer 1KcR:\n\nWe thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.\n\nBest.", " > Why do the authors use TFGAN for fastspeech2 and tacotron2 baselines when there's HiFi-GAN vocoder that is used very often in many recent works? I don't think I have seen TTS works that use TFGAN for the vocoder.\n\nAfter evaluation, our TFGAN performs better than the open source HiFi-GAN V1. Our TFGAN is able to generate higher quality waveform with sharper high frequency and less glitch. For a fairer comparison and to show the high quality of the waveform generated by CLONE, we chose TFGAN.\n\n> Why do the authors evaluate the proposed model using internal datasets?\n\nWe used an internal dataset because the internal dataset has the following advantages:\n1. Firstly, our internal data has rich prosody diversity, which puts forward higher requirements for the prosody modeling capability of the model.\n2. Secondly, our internal dataset has high-quality recordings and 48k audio, which can fully demonstrate the model's capability of waveform generation.\n\n### Summary\n\nIn particular, our paper proposes a new prosody modeling method. We model the phoneme-level prosody in an implicit way, assuming that the prosody obeys a non-standard normal distribution. It cannot fully characterize prosody information to use pitch or energy to model prosody explicitly such as FastSpeech 2, FastPitch and etc. In addition, we use the prosody predictor to predict the prosody distribution instead of the fixed prosody vector, which can avoid the over-smoothness of the prosody prediction, because for the same text, there will be different prosody distributions. Therefore, we assume that prosody obeys a non-standard normal distribution, which increases the upper bound of prosody representational power, and CLONE's prosody predictor avoids the over-smoothness of prosody prediction, improving the expressiveness of the generated audio in end-to-end synthesis. CLONE can achieve rich prosody control capabilities, in addition to those mentioned in the paper, CLONE can use the timbre of speaker A to synthesize the speech of speaker B's speaking style by replacing the speaker ID of the prosody predictor with speaker B's. Samples are at [here](https://github.com/CLONEneurips2022/samples).\n\nBesides, we introduce the dual parallel autoencoder to solve the problem of lacking supervision of ground-truth acoustic features during training, which allows the single-stage model to generate higher quality speech.\n\nWe hope our replies solve all your concerns about the paper. If we successfully address your concerns, we would strongly appreciate an increased score; otherwise, we are happy to provide additional discussion and address any further questions. Thanks!\n", " Thanks for your comments on our paper. 
We reply to your questions as follows:\n\n> line 121, \"We use the obtained duration of phonemes to construct the hard alignment matrix\" → The process of obtaining the duration of phonemes was not mentioned before this.\n\nDuring training, we construct the hard alignment matrix $\textbf{A}$ from the ground-truth durations $\mathbf{d} = \{d_1, ..., d_{T_p}\}$ of phonemes, where $\mathbf{A} \in \mathbb{R}^{T_p \times T_f}$ and each element of $\mathbf{A}$ is 0 or 1:\n\n$A_{ij} = 1$ when $\sum_{k=1}^{i-1}d_k \leq j < \sum_{k=1}^{i}d_k$,\n\n$A_{ij} = 0$ otherwise. \n\nIn other words, $A_{ij} = 1$ denotes that the i-th phoneme is being pronounced at the j-th spectrogram frame (e.g., for $\mathbf{d} = (2, 3)$, the first two frames align to phoneme 1 and the next three frames to phoneme 2).\n\n> How do the authors safely assume that the output of prosody predictor is actually prosody latent variable?\n\n1. Firstly, the learning target of the prosody predictor is the distribution of phoneme-level prosody generated by the posterior encoder, and we assume that the posterior encoder extracts the phoneme-level prosody latent variable.\n2. Secondly, in order to allow the posterior encoder to accurately extract prosody information and filter out other information, such as linguistic features, we apply average pooling to downsample the frame-level prosody features. Average pooling eliminates timing information between phonemes and ensures that linguistic information is removed.\n3. Our experiments demonstrate the effectiveness of our method. In prosody reconstruction, the pitch and energy of the generated audio are extremely similar to those of the reference audio. By feeding different sampled values into the invertible transformation of the normalizing flow, the generated audio achieves high pitch variation while the linguistic information is not affected.\n\n> Why is text information included in both reference and source text the same? This is not a practical scenario and not a fair way to test the actual performance.\n\nWe performed prosody transfer and prosody reconstruction experiments to demonstrate CLONE's prosody modeling capability. This capability has two aspects: one is to accurately extract the prosody, and the other is that the extracted prosody is independent of the text and speaker and can be transferred between different speakers and different texts. For the first aspect, the prosody reconstruction experiment shows that CLONE can accurately extract the prosody, and the extracted prosody is very similar to that of the reference audio. For the second aspect, our prosody transfer experiment proves that prosody can be transferred between speakers, and the prosody variation experiment proves that prosody is independent of the text.\n\n> Why is approximate posterior distribution denoted as q(z|x,c)? Isn't it q(z|x)?\n\nHere we use a conditional VAE to design our model. Following the theory of conditional VAEs, we introduce the conditioning linguistic feature $c$ in the posterior encoder and prior encoder, so in CLONE the posterior encoder is $q_{\phi}(z \mid x, c)$. Intuitively, we provide the posterior encoder with linguistic features, which help it filter out linguistic information from the input linear spectrogram and model the prosody.\n\n> It seems not technically correct to call the transformed prior with normalizing flow a conditional prior (p(z|c)) because no condition was actually given. 
It makes sense in the paper VITS because the conditional prior distribution is constructed by feeding text information to the network.\n\nThere is no technical error here. First of all, our assumption is similar to variational inference with normalizing flows [1] and VITS [2]: the posterior encoder outputs a normal distribution, which is then transformed by the normalizing flow into another normal distribution. Unlike VITS, whose normalizing flow transforms the input distribution into the non-standard normal distribution predicted by the text encoder, we transform the input distribution into the standard normal distribution. Besides, $z$ is the latent prosody variable, which is independent of the linguistic information $c$; thus, $p_{\theta}(z \mid c) = p_{\theta}(z)$. Therefore, we have $p_{prior} = p_{\theta}(z \mid c) = p_{\theta}(z) = \mathcal{N}\left(f_{\theta}(z) ; \mathbf{0}, \boldsymbol{I}\right) \left|\operatorname{det} \frac{\partial f_{\theta}(z)}{\partial z}\right|$.\n\n- [1] Variational Inference with Normalizing Flows.\n- [2] Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech.\n", " > Most recent work use widely used public datasets and author's official implementations and pre-trained weights for fair and unambiguous comparisons. The models used by the authors for comparison are models with well-performing public implementations and pre-trained weights. By training the proposed model with the public dataset(eg, LJSpeech), authors can easily compare it with other models. This is the fairest and most reliable comparison method that can claim that the proposed method shows state-of-the-art performance.\n\nThe LJSpeech dataset is an excellent public dataset, but it does not meet our needs.\n\n1. Firstly, the prosody diversity of LJSpeech is relatively weak, and it cannot demonstrate the capability of prosody modeling. Our internal data has rich prosody diversity, which places higher demands on the model's prosody modeling capability.\n2. Secondly, the sound quality of LJSpeech is relatively poor, with reverberation and high-frequency breaking. To explore the upper bound of generated audio quality, we chose an internal dataset. Our internal dataset has high-quality recordings and 48 kHz audio, which can fully demonstrate the model's waveform generation capability.\n\n> Some figures need to be improved, such as font size.\n\nThank you for your suggestion.\n\n### Summary\n\nIn particular, our paper proposes a new prosody modeling method. We model phoneme-level prosody in an implicit way, assuming that prosody obeys a non-standard normal distribution. Modeling prosody explicitly with pitch or energy, as in FastSpeech 2, FastPitch, etc., cannot fully characterize prosody information. In addition, we use the prosody predictor to predict the prosody distribution instead of a fixed prosody vector, which avoids over-smoothness in the prosody prediction, because for the same text there will be different prosody distributions. Therefore, assuming that prosody obeys a non-standard normal distribution increases the upper bound of prosody representational power, and CLONE's prosody predictor avoids the over-smoothness of prosody prediction, improving the expressiveness of the generated audio in end-to-end synthesis.\n\nWe hope our replies solve all your concerns about the paper. 
If we successfully address your concerns, we would strongly appreciate an increased score; otherwise, we are happy to provide additional discussion and address any further questions. Thanks!\n", " > It is necessary to clarify whether the inability to synthesize high frequencies with high quality is a common problem that occurs with other models or a problem that only occurs with the proposed model. In addition, in order to claim that DPA is a general method to increase the quality of high frequency bands, it must be shown that it is effective even when applied to other models.\n\n1. The high-frequency generation problem also exists in other single-stage models, such as FastSpeech 2s and EATS. The fundamental reason is that there is a strong mapping relationship between textual information and vocal-band information (such as the mel spectrogram), but the mapping relationship between textual information and high-frequency information is not significant. The mel spectrogram loss constrains the model to reconstruct the human voice information, but it is difficult for it to constrain the model to learn the high frequencies. As mentioned above, the prediction error and over-smoothness of the intermediate representation affect the convergence of the second part of the single-stage model, which enlarges the search space of the second part and weakens the quality of the generated audio. Compared with vocal-band information, which is easy to learn, high-frequency information is difficult to model efficiently, which manifests as the single-stage model's poor high-frequency generation ability. It is worth noting that VITS can solve this problem by introducing a VAE to generate waveforms from the linear spectrogram, and we have introduced a new idea to solve this problem.\n2. The DPA architecture can be used in other single-stage models to improve high-frequency modeling capabilities. A single-stage model can add a posterior wave encoder that takes the linear spectrogram as input and outputs an intermediate representation, compute a loss against the intermediate representation generated by the first part of the model, and optimize jointly; this narrows the search space of the second part and improves the quality of the generated audio.\n\nIn general, our motivation for designing the DPA structure is that we found that the pipeline of \"text->acoustic model->spectrogram-length intermediate representation->upsampler->waveform\" cannot generate high-quality waveforms, and the fundamental reason is that the quality of the \"spectrogram-length intermediate representation\" is poor, which slows down the convergence of the upsampler and affects its high-frequency reconstruction. So we designed the posterior wave encoder to introduce information from the ground-truth linear spectrogram into the training process of the \"upsampler\" and provide more supervision to the \"spectrogram-length intermediate representation\".\n\n> The authors did not specify how the comparative models were trained. Referring to the quality of audio samples, the performance of some models differs in quality from the audio samples presented in the original work or best-performing open source implementations. Given that the human recording samples presented by the authors show uniform quality than the widely used public dataset, some models appear to be compared without convergence. 
All comparisons should be made between sufficiently converged models, and the authors should specify the training settings of the comparison models, and if they did not converge to the level suggested in the original work, the authors should re-experiment and revise the comparison results.\n\nThe comparative models we provide (VITS, FastSpeech 2, and Tacotron 2) are all open-source implementations trained on our internal data. TFGAN [7] is an internal implementation; in our evaluation, its performance is stronger than that of the open-source HiFi-GAN V1. For the fairness of the comparison, we specifically selected the best-converged checkpoints. Thus, we believe that the comparative models we provide can be compared with CLONE fairly.\n\n- [7] TFGAN: Time and Frequency Domain Based Generative Adversarial Network for High-fidelity Speech Synthesis.\n", " > The authors claim the following: \"Unlike the mel-spectrogram used in two-stage methods, the intermediate representation in single-stage methods is predicted by the model without training supervision, subject to prediction errors and over-smoothness.\"\nIt is not clear which intermediate features in single stage models are claimed, and the authors should provide scientific analysis of the problem of intermediate features pointed out by the authors.\n\nIn most previous two-stage neural network TTS models, the mel spectrogram is first predicted by the acoustic model, and then the mel spectrogram is converted into a raw waveform by the vocoder. There is no such process in single-stage TTS models, but most of them maintain a similar two-stage structure: they first extend the input text information to an intermediate representation of spectrogram length, and then predict the waveform from the intermediate representation with a vocoder-like module, as in FastSpeech 2s, VITS [5], EATS [6], etc.\n\nIn two-stage neural network TTS models, the predicted mel spectrogram usually suffers from prediction error and over-smoothness, and these problems weaken the vocoder's ability to generate high-quality waveforms. Thus, we need to use the ground-truth mel spectrogram to train the vocoder, and during fine-tuning we use the ground-truth-aligned (GTA) mel spectrogram to fine-tune the vocoder, as in Tacotron and Tacotron 2. Just as the two-stage model predicts the mel spectrogram with prediction error and over-smoothness, the single-stage model has the same problem when predicting intermediate representations. These predicted intermediate representations suffer from over-smoothness and are not completely aligned with the raw waveform. These problems also slow down the convergence of the second part (intermediate representation to waveform) of the single-stage model and weaken the generated audio quality. Specifically, these problems expand the search space of the second part of the single-stage model and produce errors in the mapping relationship between the intermediate representation and the waveform.
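To make the data flow of the dual parallel autoencoder concrete, here is a runnable toy sketch of the training signal described above (all module shapes are trivial stand-ins, and the loss weighting and the gradient stop on the matched representation are our illustrative assumptions, not CLONE's actual architecture or losses):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T, d_text, d_spec, d_ir = 50, 32, 80, 64         # frames and feature dims (placeholder sizes)
acoustic_encoder = nn.Linear(d_text, d_ir)       # first part: text features -> ir
posterior_wave_encoder = nn.Linear(d_spec, d_ir) # ground-truth linear spectrogram -> ir_hat
wave_generator = nn.Linear(d_ir, 1)              # stand-in for the upsampler / wave generator

text_feat = torch.randn(T, d_text)
linear_spec = torch.randn(T, d_spec)             # ground-truth acoustic features
waveform_target = torch.randn(T, 1)              # stand-in waveform frames

ir = acoustic_encoder(text_feat)                 # predicted intermediate representation
ir_hat = posterior_wave_encoder(linear_spec)     # representation anchored to ground truth

# the wave generator consumes ir_hat during training, so it learns the mapping
# "linear spectrogram -> waveform"; matching ir to ir_hat supervises the first part
loss = F.l1_loss(ir, ir_hat.detach()) + F.mse_loss(wave_generator(ir_hat), waveform_target)
loss.backward()
```

At inference, only the "text -> acoustic encoder -> wave generator" path can be used, since the ground-truth spectrogram is unavailable; the anchoring therefore matters only for narrowing the training-time search space.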
\n\n> The authors claim the following: \"we carefully study the problem of unsatisfactory high-frequency information generation in single-stage end-to-end speech synthesis, which is caused by the lack of supervision of ground truth acoustic features during training.\"\nConsidering that other models are also supervised using mel-spectrogram loss similar to the proposed method, it is difficult to see that other models lack the supervision of the ground truth, and the method proposed is fundamentally the same as the previous work in the point of view.\n\nThe \"lack of supervision\" mentioned here does not refer to the lack of supervision from the loss function, but to the lack of supervision over the model's search space during training. As above, the prediction error and over-smoothness of the intermediate representation affect the convergence of the second part of the single-stage model, which enlarges the search space of the second part and weakens the quality of the generated audio. In the two-stage model, the vocoder takes the ground-truth mel spectrogram as input in the training phase and learns the mapping relationship between the ground-truth mel spectrogram and the raw waveform, so its search space is smaller. In the single-stage model, although the intermediate representation has the mel spectrogram loss as its loss function, the prediction error and over-smoothness always exist. We need \"supervision\" from the ground-truth signal, so we designed the posterior wave encoder. This module takes the ground-truth linear spectrogram as input, and its output is the input of the second part of the single-stage model. In the pipeline \"linear spectrogram -> parallel wave encoder -> intermediate representation -> wave generator -> waveform\", the model learns the mapping relationship between the linear spectrogram and the waveform. Compared with the original pipeline, this method narrows the search space and speeds up convergence. In general, we introduce the ground-truth signal into the second part in the form of a parallel encoder, thereby reducing the search space of the second part and finally improving the generated audio quality.\n\n- [5] Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech.\n- [6] End-to-End Adversarial Text-to-Speech.\n", " Thanks for your comments on our paper. We reply to your questions as follows:\n\n> The authors claim that the proposed method generates lossless speech, but comparing the human recording and synthesized audio presented by the authors clearly shows that there is a quality loss. Also, considering that it is a generative model based on neural networks, there is an error in the authors' claim.\n\n48 kHz CLONE can generate audio with a 48 kHz sample rate and 16-bit depth. Generally speaking, 48 kHz, 16-bit audio is considered \"lossless audio\", so we consider that CLONE can synthesize lossless audio. At the same time, we also admit that there is a gap between the audio generated by 48 kHz CLONE and the ground-truth 48 kHz audio. 
It should be emphasized that CLONE is the first single-stage TTS model capable of generating 48 kHz audio, and we show the possibility of single-stage TTS models generating audio at higher sample rates.\n\n> The authors claim that the proposed method achieves high controllability, but I would like to ask what can be controlled. There is no evidence in the paper that the desired speech audio can be synthesized by controlling the model proposed in this work.\n\nCLONE can achieve control of prosody through the following methods:\n1. Sampling different values from the standard normal distribution as the input of the normalizing flow controls the generated audio. As shown in Figure 2(b), when the input of the flow is set to all -1 or all 1, the generated audio exhibits high pitch variation. You can also refer to the [\"Prosody Variation\" section](https://cloneneurips2022.github.io/CLONE/) of the demo page. Varying the generated speech by providing different sampled values has been shown in many previous works, such as Flowtron [1], Parallel Tacotron, and Glow-TTS [2], and these works also claim to achieve controllability.\n2. CLONE can use speaker A's audio as reference audio to generate speaker B's audio with speaker A's prosody. The samples are shown in the \"Prosody Transfer\" section of the demo page, and we analyze the pitch and energy similarity in Figure 2 (c) & (d) and Table 3. Unlike some works (FastSpeech 2 [3], FastPitch [4], etc.) that explicitly model pitch and energy, this experiment proves that our method can implicitly model prosody to achieve the transfer of pitch and energy.\n3. CLONE can use the timbre of speaker A to synthesize speech in speaker B's speaking style by replacing the speaker ID of the prosody predictor with speaker B's. Samples are available [here](https://github.com/CLONEneurips2022/samples).\n\n> The authors claim the following: \"Unlike the mel-spectrogram used in two-stage methods, the intermediate representation in single-stage methods is predicted by the model without training supervision, subject to prediction errors and over-smoothness.\"\nIt is not clear which intermediate features in single stage models are claimed, and the authors should provide scientific analysis of the problem of intermediate features pointed out by the authors.\n\nIn most previous two-stage neural network TTS models, the mel spectrogram is first predicted by the acoustic model, and then the mel spectrogram is converted into a raw waveform by the vocoder. There is no such process in single-stage TTS models, but most of them maintain a similar two-stage structure: they first extend the input text information to an intermediate representation of spectrogram length, and then predict the waveform from the intermediate representation with a vocoder-like module, as in FastSpeech 2s, VITS, EATS, etc.\n\nIn two-stage neural network TTS models, the predicted mel spectrogram usually suffers from prediction error and over-smoothness, and these problems weaken the vocoder's ability to generate high-quality waveforms. Thus, we need to use the ground-truth mel spectrogram to train the vocoder, and during fine-tuning we use the ground-truth-aligned (GTA) mel spectrogram to fine-tune the vocoder, as in Tacotron and Tacotron 2. Just as the two-stage model predicts the mel spectrogram with prediction error and over-smoothness, the single-stage model has the same problem when predicting intermediate representations. 
These predicted intermediate representations suffer from over-smoothness and are not completely aligned with the raw waveform. These problems also slow down the convergence of the second part (intermediate representation to waveform) of the single-stage model and weaken the generated audio quality. Specifically, these problems expand the search space of the second part of the single-stage model and produce errors in the mapping relationship between the intermediate representation and the waveform.\n\n- [1] Flowtron: an Autoregressive Flow-based Generative Network for Text-to-Speech Synthesis.\n- [2] Glow-TTS: A Generative Flow for Text-to-Speech via Monotonic Alignment Search.\n- [3] FastSpeech 2: Fast and High-Quality End-to-End Text to Speech.\n- [4] FastPitch: Parallel Text-to-Speech with Pitch Prediction.\n", " > What does CMOS stand for on line 276?\n\nCMOS stands for comparative mean opinion score. We conducted a CMOS evaluation ([speech quality assessment](https://ecs.utdallas.edu/loizou/cimplants/quality_assessment_chapter.pdf)), which compares the voice quality of two audio samples. Each rater scores the comparison as -2, -1, 0, 1, or 2; the higher the score, the better the quality of the test audio relative to the reference audio. We compared the 48 kHz ground-truth audio with samples from 48 kHz CLONE, 24 kHz CLONE, and 24 kHz VITS. The CMOS evaluations demonstrate that 48 kHz CLONE can generate audio close to the 48 kHz ground-truth audio and outperforms both 24 kHz CLONE and 24 kHz VITS.\n\n> In section 4.5.2, Prosody Variation, you mentioned a high variance from standard normal distribution sampling. I was wondering if you have considered the mismatch between training and inference (mode (b) in this case). During the training time, it seems that the prior is not standard.\n\nDuring training, the posterior encoder predicts the phoneme-level prosody distributions from the input linear spectrogram and the conditioning linguistic feature; these are non-standard normal distributions. We then apply a normalizing flow to transform the non-standard normal distribution into the standard normal distribution. Thus, using the invertible transformation of the flow model, we can sample a value from the standard normal distribution and transform it back into the non-standard normal distribution. In mode (b) inference, we sample a value from the standard normal distribution and transform it into the non-standard normal distribution via the flow model.\n\n> This is not a question but a personal opinion, which is not related to the quality of the paper. While I understand the authors would like to choose a good abbreviation (CLONE in the paper), the paper title is not super related to the novelty and strength of the model, which is prosody modeling. (Controllable and lossless sound very generic.)\n\nThank you very much for your suggestion. We will consider using a new model name in future work.", " Thanks for your comments on our paper. We reply to your questions as follows:\n\n> Could you add variable notations in Figure 1 (e.g. where are x, c, z etc)\n\n$c$ and $z$ denote the linguistic feature and the phoneme-level prosody latent variable, respectively, and $x$ denotes a data point in our training dataset. $c$ is the \"Linguistic Feature\" in the figure and $x$ is the spectrogram in the figure. $z$ is not visualized in the figure, but it is the result of \"Reparameterization\". $ir$ and $\hat{ir}$ come from the acoustic encoder and the posterior wave encoder, respectively. 
$ \\mu_{pp}, \\sigma_{pp} $ and $ \\mu_{\\phi}, \\sigma_{\\phi} $ is the predicted distribution from prosody predictor and posterior encoder respectively. $ PF_{phone} $, $ PF_{frame} $ and $ \\widehat{PF}_{frame} $ denote phoneme-level prosody feature, frame level prosody feature and frame-level prosody feature expanded from $ z $.\n\n> What do you mean by \"severe linguistic information leakages\" in line 118? Could you illustrate more?\n\nLinguistic information leakage here means that frame-level prosody feature $PF_{frame}$ is not downsampled, so it contains too much information causing the acoustic encoder predicts intermediate representations directly from $ PF_{frame} $ without the use of linguistic features (just like the linguistic information leaks through $PF_{frame}$) during the training of the model. Note that the prosody modeling method (as shown in Figure 1 (b)) is based on VAE, so the downsampling operation is critical to extract prosody information and filter out linguistic information. Thus, we extract the prosody feature at the frame level and downsample it to the phoneme level before putting it into the acoustic encoder.\n\n> How do you get the hard alignment matrix A in equation (2)? I found a term called hard alignment prosody extractor in section 4.5.3. Could you illustrate more?\n\nDuring training, we get the hard alignment matrix $\\textbf{A}$ from the ground-truth durations $\\mathbf{d} = \\{d_1, ..., d_{T_p}\\} $ of phonemes. $\\mathbf{A} \\in \\mathbb{R}^{T_p \\times T_f}$ and each element in A is 0 or 1.\n\n$A_{ij} = 1$, when $\\sum_{k=1}^{i-1}d_k \\leq j < \\sum_{k=1}^{i}d_k$\n\n$A_{ij} = 0$, when otherwise.\n\nIn other words, $A_{ij} = 1$ denotes The i-th phoneme is being pronounced at the j-th spectrogram frame. Hard alignment prosody extractor means we extract prosody by using the hard alignment matrix $\\mathbf{A}$ rather than a soft attention mechanism e.g. parallel tacotron[1], which represents the operation of converting $PF_{frame}$ to $PF_{phone}$.\n\n> In equation (4), what is normalizing flow f_theta? Is there a reference to such design of prior? If not, could you illustrate more?\n\nThe normalizing flow $f_{\\theta}$ (Real NVP[2]) is a stack of affine coupling layers consisting of a stack of non-causal WaveNet residual blocks. Same as VITS[3], we simply the normalizing flow with a volume-preserving transformation with the Jacobian determinant of one.\n\n> There are six weights in loss functions and the values are pretty diverse according to appendix. And only one set of values is provided. Is model performance sensitive to weight values?\n\nThe efficiency of prosody modeling is sensitive to the coefficient of reconstruction loss $L_{recon}$, which is $ \\lambda_{1} $. Thus, we increase the $ \\lambda_{1} $ to ensure the modeling of prosody and we use the same coefficient as HiFi-GAN[4], which is 45.0. The other coefficients of loss functions are not sensitive to model performance.\n\nReference\n\n- [1] Parallel Tacotron: Non-Autoregressive and Controllable TTS.\n- [2] Density estimation using Real NVP.\n- [3] Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech.\n- [4] HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis\n\n", " The paper introduces a novel approach for prosody control in text-to-speech models. A sequence of prosody representations are introduced as latent variables in variational training. 
Different from prior work, where prosody information is explicitly represented, the proposed method learns general and implicit representations. The general representation, on the other hand, enables better modeling of prosody information, which is crucial for the one-to-many mapping problem in TTS systems. The paper also provides numerical results compared with recent works, showing significant improvements in naturalness. The authors also provide system outputs for the baseline and proposed models, which show the advantages of the proposed model. Strengths\n1. The paper in general is well written. The introduction of methodology and motivation is straightforward.\n2. The methods, including prosody modeling and the dual parallel autoencoder, are fairly novel.\n3. The numerical results and system outputs support the advantage of the proposed model.\n4. The analysis and ablation study are thorough.\n\nWeaknesses \nI don't see significant weaknesses in the paper. However, there are some minor issues I will address in the questions section. 1. Could you add variable notations in Figure 1 (e.g. where are x, c, z etc)\n2. What do you mean by \"severe linguistic information leakages\" in line 118? Could you illustrate more?\n3. How do you get the hard alignment matrix A in equation (2)? I found a term called hard alignment prosody extractor in section 4.5.3, could you illustrate more?\n4. In equation (4), what is normalizing flow f_theta? Is there a reference to such design of prior? If not, could you illustrate more?\n5. There are six weights in loss functions and the values are pretty diverse according to appendix. And only one set of values is provided. Is the model performance sensitive to weight values?\n6. What does CMOS stand for on line 276?\n7. In section 4.5.2, Prosody Variation, you mentioned a high variance from standard normal distribution sampling. I was wondering if you have considered the mismatch between training and inference (mode (b) in this case). During the training time it seems that the prior is not standard.\n8. This is not a question but a personal opinion, which is not related to the quality of the paper. While I understand the authors would like to choose a good abbreviation (CLONE in the paper), the paper title is not super related to the novelty and strength of the model, which is prosody modeling. (Controllable and lossless sound very generic.) The authors address their concerns about limitations and potential negative societal impact.\nAs to the limitations, I would suggest the authors provide more training details, especially on hyperparameter tuning, which can be super useful for future research to replicate the work.\n\n ", " The authors propose a controllable and lossless method of Non-Autoregressive End-to-End Text-to-Speech synthesis. * Weaknesses\n1. The authors claim that the proposed method generates lossless speech, but comparing the human recording and synthesized audio presented by the authors clearly shows that there is a quality loss. Also, considering that it is a generative model based on neural networks, there is an error in the authors' claim.\n1. The authors claim that the proposed method achieves high controllability, but I would like to ask what can be controlled. There is no evidence in the paper that the desired speech audio can be synthesized by controlling the model proposed in this work.\n1. 
The authors claim the following:\n\"Unlike the mel-spectrogram used in two-stage methods, the intermediate representation in single-stage methods is predicted by the model without training supervision, subject to prediction errors and over-smoothness.\"\nIt is not clear which intermediate features in single stage models are claimed, and the authors should provide scientific analysis of the problem of intermediate features pointed out by the authors.\n1. The authors claim the following:\n\"we carefully study the problem of unsatisfactory high-frequency information generation in single-stage end-to-end speech synthesis, which is caused by the lack of the supervision of ground truth acoustic features during training.\"\nConsidering that other models are also supervised using mel-spectrogram loss similar to the proposed method, it is difficult to see that other models lack the supervision of the ground truth, and the method proposed is fundamentally the same as the previous work in the point of view.\n1. It is necessary to clarify whether the inability to synthesize high frequencies with high quality is a common problem that occurs with other models or a problem that only occurs with the proposed model. In addition, in order to claim that DPA is a general method to increase the quality of high frequency bands, it must be shown that it is effective even when applied to other models.\n1. The authors did not specify how the comparative models were trained. Referring to the quality of audio samples, the performance of some models differs in quality from the audio samples presented in the original work or best-performing open source implementations. Given that the human recording samples presented by the authors show uniform quality than the widely used public dataset, some models appear to be compared without convergence. All comparisons should be made between sufficiently converged models, and the authors should specify the training settings of the comparison models, and if they did not converge to the level suggested in the original work, the authors should re-experiment and revise the comparison results.\n1. Most recent work use widely used public datasets and author's official implementations and pre-trained weights for fair and unambiguous comparisons. The models used by the authors for comparison are models with well-performing public implementations and pre-trained weights. By training the proposed model with the public dataset(eg, LJSpeech), authors can easily compare it with other models. This is the fairest and most reliable comparison method that can claim that the proposed method shows state-of-the-art performance.\n1. Some figures need to be improved, such as font size. Included in what was described above. The authors mentioned limitations in the appendix.", " This paper introduces a text-to-speech model that combines several generative models such as VAE, GAN, and Normalizing Flow. The authors propose to use a phoneme-level latent variable to generate a prosodic latent variable. Normalizing Flow was used to create more complex probability distributions for the latent space of the VAE. The evaluation on an internal dataset shows that the proposed method works better than the recent end-to-end SOTA model VITS. - strengths\n - The authors try to disentangle latent information that was entangled in a single random variable in VITS.\n \n- weakness\n - more thorough subjective evaluations are required to evaluate how good the prosody modeling of the proposed method is. 
Simple MOS is not enough.\n - Some technical details seem wrong. See Questions. - line 121, \"We use the obtained duration of phonemes to construct the hard alignment matrix\" → The process of obtaining the duration of phonemes was not mentioned before this.\n- How do the authors safely assume that the output of the prosody predictor is actually the prosodic latent variable?\n- Why is text information included in both reference and source text the same? This is not a practical scenario and not a fair way to test the actual performance.\n- Why is approximate posterior distribution denoted as q(z|x,c)? Isn't it q(z|x)?\n- It seems not technically correct to call the transformed prior with normalizing flow a conditional prior (p(z|c)) because no condition was actually given. It makes sense in the paper VITS because the conditional prior distribution is constructed by feeding text information to the network.\n- Why do the authors use TFGAN for fastspeech2 and tacotron2 baselines when there's HiFi-GAN vocoder that is used very often in many recent works? I don't think I have seen TTS works that use TFGAN for the vocoder.\n- Why do the authors evaluate the proposed model using internal datasets? Both the limitations and potential negative societal impact are written very briefly in the Appendix." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 3, 3 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4 ]
[ "bfpfXvp0_aq", "IA7XbuUFB_H", "bfpfXvp0_aq", "bfpfXvp0_aq", "IA7XbuUFB_H", "IA7XbuUFB_H", "IA7XbuUFB_H", "IA7XbuUFB_H", "956tD4Lppgm", "956tD4Lppgm", "nips_2022_duBoAyn9aI", "nips_2022_duBoAyn9aI", "nips_2022_duBoAyn9aI" ]
nips_2022_pZsAwqUgnAs
Implicit Regularization or Implicit Conditioning? Exact Risk Trajectories of SGD in High Dimensions
Stochastic gradient descent (SGD) is a pillar of modern machine learning, serving as the go-to optimization algorithm for a diverse array of problems. While the empirical success of SGD is often attributed to its computational efficiency and favorable generalization behavior, neither effect is well understood and disentangling them remains an open problem. Even in the simple setting of convex quadratic problems, worst-case analyses give an asymptotic convergence rate for SGD that is no better than full-batch gradient descent (GD), and the purported implicit regularization effects of SGD lack a precise explanation. In this work, we study the dynamics of multi-pass SGD on high-dimensional convex quadratics and establish an asymptotic equivalence to a stochastic differential equation, which we call homogenized stochastic gradient descent (HSGD), whose solutions we characterize explicitly in terms of a Volterra integral equation. These results yield precise formulas for the learning and risk trajectories, which reveal a mechanism of implicit conditioning that explains the efficiency of SGD relative to GD. We also prove that the noise from SGD negatively impacts generalization performance, ruling out the possibility of any type of implicit regularization in this context. Finally, we show how to adapt the HSGD formalism to include streaming SGD, which allows us to produce an exact prediction for the excess risk of multi-pass SGD relative to that of streaming SGD (bootstrap risk).
Accept
The paper addresses an important question regarding the trajectories of SGD in high-dimensional settings. The paper's theoretical derivations build on prior work but are nonetheless sound. Most reviewers agree that the paper advances knowledge in this area.
train
[ "0MS6RX4G3PJ", "vrNdYZgPcI6", "p-pZC3fgnvB", "0hM9xrKDuo", "GoH2AGk-S5t", "m99_GzrFNgV", "VZFxqPhjXHn", "DT95QKDYHj7Q", "t6iXHfSOZXL8", "26dzc896CsR9", "XKDJvTnLpq", "8aeIVgnkdpI", "zzUb5uZUKMf", "Qz48jbosjRh", "jnzDMhrNDdj", "UEkYuZs1qfg" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We agree with the reviewer that it is very interesting to examine the evolution of the ICR during training. We have added an experiment that shows the ICR is roughly constant over training **[see Supplementary Materials zip file]**, yielding at least some preliminary empirical support for the practical utility of our analysis. We are currently running additional experiments on more architecture and datasets, which we will include in the final version (the computational burden is significant and these results may take a while to be finalized).\n\n\nWe would like to emphasize though that conducting a rigorous and compelling analysis is more challenging than it might seem, owing to a number of reasons:\n\n1. It is not immediately clear which matrix to compute the ICR of in the non-convex setting, owing to the presence of negative eigenvalues. Perhaps the most natural choices would be the Gauss-Newton approximation to the Hessian or the Neural Tangent Kernel. In the case of squared error, these matrices have the same spectrum (and as such was the choice for the above experiments), but for other loss functions they may differ and there is some ambiguity regarding which matrix to study.\n\n2. For realistic models and dataset sizes, there are serious computational hurdles to extracting the ICR, especially in obtaining high-precision estimates of the minimum eigenvalue. Indeed, constructing efficient estimators for spectral statistics of curvature matrices has been the subject of an active recent research direction [1-2]. While these advances have made possible the estimation of the spectrum during training, they are still not able to reliably achieve high-precision information about the smallest eigenvalues, which would be necessary for estimating the ICR.\n\n3. There is no analogue of the Nesterov optimal momentum parameters for the non-convex setting, and therefore it is not clear how to construct the appropriate baselines against which to compare SGD. Grid search is a possibility, but it is expensive and still there would be ambiguity with respect to the criterion for optimality, since the late-time asymptotics will not be accessible experimentally.\n\n4. For natural datasets and realistic architectures, there is not always a well-motivated “knob” that would allow for tuning the ICR while leaving everything else fixed. (We have looked at a number of preprocessing operations, e.g. regularized ZCA whitening, but for many architectures they do not have a strong impact on the initial ICR.) As such, it is challenging to directly investigate the impact of ICR on the favorability of SGD vs M-GD.\n\n[1] Ghorbani, B., Krishnan, S., and Xiao, Y. *An Investigation into Neural Net Optimization via Hessian Eigenvalue Density*, ICML, 2019\n\n[2] Adams, RP., Pennington, J., Johnson, MJ, Smith, J, Ovadia, Y, Patton, B, and Saunderson J., *Estimating the Spectral Density of Large Implicit Matrices*, 2018", " Dear reviewers and AC:\n\nThe links to the anonymous github are proving unreliable. The extra figures have been included in the supplementary material ZIP file.\nThey are:\n1) A figure showing the effect of ICR on training in a synthetic least-squares setup. This was generated with data covariance which is power-law but bounded below. By changing the power law exponent, we are able to tune the ICR and hence illustrate that low-ICR implies SGD is favored.\n2) A figure showing loss curves and ICR in a 3-layer fully connected neural network with width 512 for CIFAR classification (planes vs. automobiles). 
The loss is the mean squared error. SGD was applied with batch size 100 and learning rate 0.1 over 40 epochs. Losses/Validation appear on the right axis. The ICR remains largely stable over the course of training.\n\nThanks for your consideration!", " Thanks for the authors' response! Most of my concerns have been addressed. In particular, I like the introduction to Volterra dynamics in Appendix A. Hence, I decided to increase my score from 4 to 5. \n\nHowever, about the first point, I still have some concerns. As pointed out by the authors, the ICR changes over time in the optimization. However, if the argument is correct, then it might not be difficult to show that the ICR has a consistent pattern along the training process. For example, the ICR is always smaller than some threshold in the cases where SGD outperforms GD. Hence, I highly suggest the authors add a simulation of the evolution of the ICR during the training process for realistic applications.", " I would like to thank the authors for their rebuttal and their work. ", " **Response to Weaknesses**\n\n*1. In Figure 2, they run random feature models for realistic applications, i.e., CIFAR-10, CIFAR-5m, showing that in these cases the Hessian is ill-conditioned, supporting their main arguments that SGD outperforms GD on these realistic applications. However, note that the random feature model is a convex optimization problem, which differs a lot from realistic neural networks. Hence, small ICR on the random feature model cannot be used to show that SGD outperforms GD on realistic optimization problems **directly**.*\n\nIndeed, for non-least-squares problems the ICR changes over time in the optimization, so in some sense we only show the problem at initialization. For neural networks with wide hidden layers whose dynamics are well approximated by the neural tangent kernel, the ICR of the NTK will not vary over time and our analysis should apply. Of course, one could debate whether such a problem is in fact realistic, and for situations in which the NTK does evolve substantially over time our analysis would not directly apply. \n\nHaving said that, we would like to add that part of the motivation to do a thorough analysis of linear regression is that when presented with a truly challenging problem, like the analysis of a deep neural network, having good information about linear regression provides an important prior and useful insight about what we might expect to observe because of high-dimensionality itself, even in the convex quadratic setting. In fact, we find it quite remarkable that there are still things to learn about the optimization of convex quadratics, and our novel results highlight the importance of studying optimization in high dimensions.\n\n*2. Next, the author should provide the "real" convergence comparison between SGD and GD even for the random feature models to support their main observation. ICR is just an indicator for the comparison between SGD and GD. There might be an approximation error due to the usage of HSGD, and also because the dimension and iteration time are finite. Hence, without this simulation, I am not convinced that SGD does outperform GD in terms of convergence speed.*\n\nWe think you make a fair point, and so present a simulation that we believe adds some strength to the meaning of the ICR, showing how SGD vs. GD performance changes as we vary the ICR. 
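For concreteness, here is a minimal runnable sketch of this kind of experiment; all dimensions, the exponent, and the step-size choices below are illustrative placeholders, not the exact settings behind the linked figure:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, beta = 2000, 200, 1.2              # samples, features, spectral decay (assumed values)
evals = (1.0 + np.arange(d)) ** -beta    # power-law eigenvalues of the data covariance
X = rng.standard_normal((n, d)) * np.sqrt(evals)   # rows ~ N(0, diag(evals))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

H = X.T @ X / n                              # Hessian of the least-squares objective
lr_gd = 1.0 / np.linalg.eigvalsh(H).max()    # GD's stable step is set by the top eigenvalue
lr_sgd = 1.0 / np.trace(H)                   # a conservative, trace-governed step for SGD

w_gd, w_sgd = np.zeros(d), np.zeros(d)
for epoch in range(50):
    # one full-batch GD step per epoch
    w_gd -= lr_gd * X.T @ (X @ w_gd - y) / n
    # n single-sample SGD steps per epoch (matched dot-product budget)
    for i in rng.permutation(n):
        w_sgd -= lr_sgd * (X[i] @ w_sgd - y[i]) * X[i]
    print(epoch,
          0.5 * np.mean((X @ w_gd - y) ** 2),
          0.5 * np.mean((X @ w_sgd - y) ** 2))
```

The contrast between the two step sizes is where the spectrum's conditioning enters: GD's stability is governed by the largest eigenvalue of $H$, while single-sample SGD's stability is governed by the trace; this is the flavor of comparison the ICR is meant to capture, and the paper's exact ICR definition should be used when reproducing the figure.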
The following is the performance on a least-squares problem of fixed aspect ratio, where we can change the ICR by varying the power-law exponent of the covariance spectra.\n\n[https://anonymous.4open.science/r/revisionstuff-C899/BetaSmallICR_overdetermined.pdf](https://anonymous.4open.science/r/revisionstuff-C899/BetaSmallICR_overdetermined.pdf)\n\n(Notes: dotted (GD), dashed (SGD), solid (MGD). All overdetermined, but with an L2 regularizer to stabilize MGD. ICR plotted at the bottom. Time measured in dot products.)\n\n*3. The argument that there is no implicit regularization might be trivial. For convex optimization, **any** algorithms, if convergent, then they should converge to the same function value (for strongly convex cases, they converge exactly to the same point). To my best knowledge, all the previous papers studying the implicit regularization focus on nonconvex optimization.*\n\nWe agree that Lemma 1 is a triviality, and we are not trying to claim this as a novelty. Your argument is very clear, and our proof is a direct consequence of Jensen's inequality and is a 1-line proof. We have formulated it as a lemma to emphasize that in this case, implicit regularization cannot occur. \n\n*4. The introduction of Volterra Dynamics is limited, which is very unfriendly to those unfamiliar with it. It would be good if additional background information can be provided.*\n\nAdditional background and references for Volterra dynamics, which appear frequently in filtering, population dynamics, and the renewal problem, to name a few, are now available in the new version of the uploaded Supplementary Materials (Appendix A).\n\n*5. Missing title in the reference [36].*\n\nThanks. We have fixed this.\n", " **Response** (2/2)\n\n*4. The paragraph on **Diffusion approximations and homogenized SGD** should be rewritten. A lot of work has been done to model SGD as a diffusion. [3] properly explains how to model SGD with a SDE, [4] analyses it respecting the geometry of the noise and the sentence we lack a precise connection between a concrete SLD and a practical nonconvex learning problem is simply wrong as [5] studied exactly a SDE model in a non-convex setting. Finally, to be consistent with the rest of the paper, it would be great that γ appears in the noise term of equation (5).*\n\n*[3] Qianxiao Li, Cheng Tai, and Weinan E. Stochastic modified equations and dynamics of stochastic gradient algorithms i: Mathematical foundations. Journal of Machine Learning Research, 20(40):1–47, 2019.*\n\n*[4] Stephan Wojtowytsch. Stochastic gradient descent with noise of machine learning type. Part II: Continuous time analysis, Preprint, 2021.*\n\n*[5] Scott Pesme, Loucas Pillaud-Vivien, and Nicolas Flammarion. Implicit bias of sgd for diagonal linear networks: a provable benefit of stochasticity. Advances in Neural Information Processing Systems, 34, 2021.*\n\nThanks for the references. We discussed [3] and the related body of work in a response above. Paper [4] we agree should be added. [5] is very nice work, and we agree it should be referenced somewhere, though it may not really constitute "a practical nonconvex learning problem," at least as we intended that phrase, and we would object somewhat to characterizing that sentence as "simply wrong." Nevertheless, we agree with the spirit of the reviewer's complaint and will reword that sentence and rewrite the entire paragraph. 
To that end, we have added a subsection on *Diffusion Approximations and SGD* into the new version of the Supplemental Materials (see Appendix A) in which we have a non-exhaustive description of prior work on SDEs and SGD and a detailed description of differences between SME and HSGD. We encourage the reviewer to look at the new uploaded Supplementary Materials, Appendix A. We will incorporate this material into the main text using the additional page available for the camera ready version.\n\n*5. Note also that in the isotropic case, if $\Sigma$ is proportional to the identity, the invariant measure is not proportional to $e^{-f}$ but to $e^{- C_{\gamma} f}$, with $C_{\gamma}>0$ some constant.*\n\nThanks! This is fixed in the new uploaded version of the paper. \n\n**Response to Questions:**\n\n*1. I have one question for the SDE limit of the model: could the authors explain why the quadratic case is special to derive the SDE limit? How could it be extended to a general convex loss? Or even to a non-convex setup?*\n\nGLMs (or some class of them) might be solvable. We thought about the linear hidden-layer setup some, but there are important phenomenological differences. The model of [5] could possibly be approached.\n", " **Response** (1/2)\n\nWe thank the reviewer for their careful reading and constructive feedback. We have fixed the minor errors found by the reviewer in the new version of the main paper and supplementary materials. **We encourage the reviewer to look at the new revisions.** \n\n**Comments on Weaknesses**\n\n*1. Perhaps the first weakness is the limitation of the result: the authors are very precise in the specific setting of linear regression, provide some good material to analyse it, but it is hard to conclude anything deep from their analysis.*\n\nIndeed our analysis is restricted to the case of linear regression, and we agree that it is hard to draw general conclusions beyond this setting. We might expect our results to extend to the specific case of neural networks with wide enough layers and for which the dynamics are well approximated by the neural tangent kernel. But it is true that truly non-convex models, the dynamics of feature learning, etc., are all well outside the scope of our analysis.\n\nHaving said that, we would like to add that part of the motivation to do a thorough analysis of linear regression is that when presented with a truly challenging problem, like the analysis of a deep neural network, having good information about linear regression provides an important prior and useful insight about what we might expect to observe because of high-dimensionality itself, even in the convex setting. In fact, we find it quite remarkable that there are still things to learn about optimization of convex quadratics, and our novel results highlight the importance of studying optimization in high dimensions.\n\n*2. I know this is a large tendency in the ML community, but I think that overselling the results is counterproductive when writing a paper. A good example is the title of the paper, where the fact that the focus is on least squares should be written. Also, trying to systematically conclude that SGD is "better" or that this can explain the amazing performance of SGD in practice is a bit overselling. The results of the authors being already strong, there is no need to oversell the paper like this.*\n\nWe completely agree with the reviewer about this point and are very sorry that our paper came across this way. 
Our intention was not to argue that SGD is generally "better," and in fact our results do not even support that conclusion in the least squares setting. On the contrary, we sought to argue that SGD actually incurs a mild generalization performance loss, but point out that it *can* have optimization advantages that become evident in high dimensions. To fully understand the performance of SGD, it is necessary to examine its effect on both generalization and on convergence rates, though it is often hard to disentangle these effects in empirical analyses of practical models. Our in-depth analysis in the least squares setting allows us to do this theoretically, provides some baselines for future analyses in the non-convex setting, and also underscores the importance of accounting for high-dimensionality.\n\n*3. In the same direction, I find the introduction confusing and the references to the literature incomplete. I think that the authors should be more specific and really focus on the least squares literature, and not try to refer to the deep learning one (or maybe just at the end to motivate further investigations). In terms of literature, line 50, the multipass is not properly covered and [1,2] among others are worth mentioning.*\n\n*[1] Junhong Lin and Lorenzo Rosasco. Optimal rates for multi-pass stochastic gradient methods. Journal of Machine Learning Research, 18(97):1–47, 2017.*\n\n*[2] Loucas Pillaud-Vivien, Alessandro Rudi, and Francis Bach. Statistical optimality of stochastic gradient descent on hard learning problems through multiple passes. In Advances in Neural Information Processing Systems, pages 8125–8135, 2018.*\n\nThank you for the references. We have added them to the new version of the Supplemental Materials (see Appendix A), in which we have a non-exhaustive additional related work section. We encourage the reviewer to look at the new uploaded Supplementary Materials. \n", " **Comments on Weaknesses** (2/2)\n\n*3. In section 3.2, a detailed explanation about why SGD is more efficient than GD is missing. It seems that the authors only claim that SGD with a constant learning rate can match the convergence of GD, but how this leads to the argument that SGD is more efficient is missing. The authors may need to provide a rigorous comparison between the efficiency of SGD and GD/M-GD.*\n\nTheorems 4 and 5 establish the asymptotic convergence rates for SGD and M-GD, and lines 228-229 provide the interpretation of these results: SGD will converge in an ICR-multiple number of epochs that M-GD requires (so lower ICR favors SGD). Of course, these are just asymptotic rates, and the reviewer is absolutely right that they might not fully reflect what happens in practice or at finite $t$. As such, we have conducted some additional simulations that we believe add some strength to the meaning of ICR.\n\nThe following is the performance on a least-squares problem of fixed aspect ratio, where we can change the ICR by varying the power-law exponent of the covariance spectra.\n\n[https://anonymous.4open.science/r/revisionstuff-C899/BetaSmallICR_overdetermined.pdf](https://anonymous.4open.science/r/revisionstuff-C899/BetaSmallICR_overdetermined.pdf)\n\n(Notes: dotted (GD), dashed (SGD), solid (MGD). All overdetermined, but with an L2 regularizer to stabilize MGD. ICR plotted at the bottom. Time measured in dot-products.)\n\nWe would like to emphasize that we are not trying to argue that SGD 'is more efficient' than GD as a blanket statement. 
It's rather that there is a type of problem which is poorly conditioned (meaning low ICR) in which full-batch methods struggle and SGD will perform better. We're open to other suggestions on how to provide a rigorous comparison between the efficiency of SGD and GD/M-GD, especially at finite $t$.\n\n*4. Lemma 1 is not new in the literature (at least in the case of $\delta=0$). The following work (see their Theorem 1) has shown that for any feasible learning rate, the excess risk of multi-pass SGD must be greater than or equal to GD; then taking the limit learning rate $\rightarrow 0$ can imply the results of the Lemma for the case of $\delta=0$.*\n\n*The authors may need to comment on this in the surrounding text.* \n\n*D. Zou, J. Wu, V. Braverman, Q. Gu, and S. M. Kakade. Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime. arXiv preprint arXiv:2203.03159, 2022.*\n\nZou-Wu-Braverman-Gu-Kakade is a nice paper, and we're happy to discuss it further. Lemma 1 is a 1-sentence application of Jensen's inequality, and we don't mean to claim it as a novelty. We have added the reference (actually their other paper, as it deals with multi-pass SGD) to the new version of the main paper. \n", " **Comments on Weaknesses** (1/2)\n\n*1. Although the authors provide a precise characterization of the generalization error of SGD, the formulas for $\Psi_t$ and $\Omega_t$ are still difficult to follow.*\n\nWe agree that $\Omega_t$ and $\Psi_t$ may at first appear complicated as they are not defined through closed-form expressions but rather in terms of solutions to an integral (Volterra) equation. The level of complexity here is similar to that of a linear differential equation, and as such we believe the presentation is not unduly complicated or unwarranted given the power of the results. Having said that, we recognize that not everyone has familiarity with Fredholm theory or the theory of integral equations, and we added some background material to Appendix A. We have also provided code for the evaluation of these quantities: they can be numerically evaluated efficiently by what amounts to the power method. We have also analyzed their asymptotic behavior, which can be examined somewhat more explicitly. We believe that these various efforts are sufficient to help readers follow and understand these formulas. If the reviewer has additional specific requests in this direction we would be happy to accommodate.\n\n*There lacks a good interpretation of the developed results, especially in the non-asymptotic setting where $t$ is not approaching infinity.*\n\nWe agree that the behavior for intermediate $t$ is important, and one of the main strengths of our results is that our formulas give precise predictions for the entire learning trajectory. As is often the case in the perhaps more familiar setting of linear differential equations, some of the details at intermediate $t$ may not be determinable without solving the equations numerically; we have performed such numerical evaluations in many settings, see e.g. Figs 1 and 3, which we believe should aid in the interpretation of the results. In certain cases, some high-level conclusions can be determined more generally, and we have developed those analyses as well; see e.g. Lemma 1, which shows that SGD generalizes worse than gradient flow at all intermediate times. 
\n\n*Note that for many high-dimensional linear regression problems, GD/SGD with early stopping can give good generalizable solutions while overfitting will occur if $t \rightarrow \infty$.*\n\nYes, it is often the case in problems with low SNR or insufficient explicit regularization that both GD and SGD generalize better with early stopping than if $t \rightarrow \infty$ (see e.g., Figure 1). But it's also true that SGD, with early stopping, generalizes worse than the corresponding gradient flow (Lemma 1), and MGD actually struggles even more on problems with large ICR. Note that MGD performance gains usually appear when solving for the 'last eigenmodes', which are frequently exactly the ones being screened out by the practitioner through early stopping; this actually appears nicely in\n\n[https://anonymous.4open.science/r/revisionstuff-C899/BetaSmallICR_overdetermined.pdf](https://anonymous.4open.science/r/revisionstuff-C899/BetaSmallICR_overdetermined.pdf)\n\n(Notes: dotted (GD), dashed (SGD), solid (MGD). All overdetermined, but with an L2 regularizer to stabilize MGD. ICR plotted at the bottom. Time measured in dot-products.)\n\nSo the story at early stopping is largely the same as at $t = \infty$.\n\n*2. Additionally, in some cases the condition number $\kappa$ could be extremely large (when the matrix $T$ or $A$ has a fast decaying eigenspectrum). Then the convergence rate to $\Psi_\infty$ and $\Omega_\infty$ may not be that interesting as they will be super slow and people tend to early stop the optimization algorithm.*\n\nThis is certainly an excellent point. And we give the actual finite-time formula, which is expressed using the Volterra equation. But as you note, this is complicated, and so it's helpful to have a simplification that allows for an easy proxy for the behavior of $\Psi$ and $\Omega$, which is helpful in some cases.\n\nAlso, early stopping can be interpreted as screening some of the eigenvalues from the problem, as they are too small to have participated in the optimization. In this case, you are actually using the smallest 'effective' eigenvalue. In every real-world optimization analysis, you have to do this. An example of this is MNIST itself, which has a few eigenvalues $10^{-5}$ times smaller in magnitude than the next smallest one. These need to be removed to understand the correct smallest 'effective' eigenvalue. And the correct way to compute $\Psi_\infty$ and $\Omega_\infty$ is with this smallest effective eigenvalue. This would also be the case with early stopping, where you remove eigenvalues smaller than the threshold for participation.\n", " **Comments on Weaknesses** (2/2)\n\n* *Line 119-124. I am actually confused, because it has been long known that a continuous SME approximates SGD, see [L 2017] and [H 2017].*\n\nFor space considerations in the original submission, we had to remove a longer discussion about these papers, but we certainly agree that they are a substantial and important theoretical contribution to the theory of diffusions in optimization/machine learning. If accepted, we will happily restore some of this discussion, which we have temporarily added to Appendix A.\nEssentially, there are a few critical differences between our setting and that of L2017 and H2017:\n\n1. L2017 and H2017 actually concern a different diffusion, the SME, which has a different covariance structure than H-SGD. Even in the least squares setting, this covariance structure is quite complicated. In particular, there is no analogue of the Volterra equation, and so it actually cannot be analyzed in the same way as H-SGD. 
The fact that the generalization dynamics of H-SGD do not have dimension dependence is the key difference.\n\n2. The time scale in L2017 and H2017, when transported to our setting, is a fixed number of _iterations_. On that time scale, when the dimensions $d \sim n$ are large, SGD does nothing. You need to run on the order of $n$ iterations. The theory from L2017 and H2017 no longer applies to this case. In contrast, note that in Theorem 1, the number of iterations over which SGD and H-SGD are close is the same magnitude as the problem size (there is an $n$ in the $x_{[nt]}$!)\n\n* *Line 148. I believe [21] is a wrong citation here. Could you point out where in [21] i.i.d. sub-Gaussian is assumed?*\n\nRather, our analysis holds with the sub-Gaussian assumption. [21] is related, but just some finite number of moments is needed.\n\n* *Line 158. The notation is a bit confusing, is $\gamma$ still referring to the stepsize?*\n\nYes, $\gamma$ is always the stepsize constant.\n\n* *Thm 1 is hard for me to interpret. First of all, how do you compare this result to the SME approximation proved by [L 2017] and [H 2017]? Secondly, could you provide some exemplar regime where the approximation error is small? It seems for the error to be small one needs $d$ to be large, but on the other hand that implies $n$ needs to be large. How is the error affected by stepsize?*\n\n(See above for the comparison to [L 2017] and [H 2017].) \n\nYour discussion is exactly correct. When the problem size is large ($d$ and $n$ are large), SGD can be compared to H-SGD. The philosophy here is to make approximations to the behavior of SGD for large-$d$ and large-$n$ problems. \n\nIf $d=n=10$, we have nothing to say. When $d$ and $n$ are $10^4$ or larger, then H-SGD and SGD will have a tiny difference. (Figure 3, e.g., is a good example of where the theory applies: here $n$ and $d$ vary, $T=100$, and the Volterra and SGD curves are on top of each other; HSGD would be too.) Even $d$ and $n$ in the hundreds show good empirical agreement. \n\nThe error has dependence on the stepsize through the constants, but the theorem actually covers non-convergent and convergent stepsizes alike (so SGD converges if and only if HSGD converges). The key is that you can get a meaningful statement by fixing a gamma and changing the dimension. When the data is standardized (row norms of $A$ are 1), the interval $[0,2)$ comprises the convergent values of gamma. So it's not necessary to put $n$-dependence into gamma.\n", " **Comments on Weaknesses** (1/2)\n\n* *I find the statement of related literature could be improved.*\n\nWe have included an extensive longer related work section in Appendix A of the new version of the supplementary material and we encourage the reviewer to look at the new Appendix A in the Supplement. If the reviewer has additional papers that we missed, we would greatly appreciate the references. Unfortunately, due to current space requirements, we could not add additional references to the main 9 pages, but we will utilize the extra page of the camera ready to do so, if the paper is accepted.\n\n* *Line 49, could have mentioned [55]. In particular, [55] showed a similar result that multi-pass SGD generalizes worse than GD in the linear regression setting. The authors should mention this when stating their contribution and in Sec 3.1.* \n\nOur original discussion of paper [55] was perhaps limited because that work appeared on the arXiv only two months before the submission deadline. 
Nevertheless, we agree that the results are relevant, and the new version of our paper comments on [55] in the main text and expands on the discussion in Appendix A. We invite the reviewer to look at the new Supplementary Material.\n\n* *Thm 1 and Thm 2 seem to be from [39]. The authors should explicitly mention [39] after their Thms 1 and 2. Moreover, could you please explain the delta of Thms 1 & 2 compared to that in [39]?*\n\nTheorem 1 is indeed from [39]; however, we would like to emphasize that [39] only appeared on the arXiv in May and that it is a much more mathematical paper with a different target audience. Theorem 2 is novel and does not appear in [39]. Theorem 2 extends prior work to cover more SLDs and in particular applies to settings such as streaming. We will clarify these points in the next revision.\n\n* *Thms 4 and 5 are from [36, 37, 38] (as explicitly mentioned in line 901 in the Appendix). In this sense, the authors should explicitly clarify this issue in their main text. The current writing caused a huge misunderstanding about its true contribution when I first went through the paper.*\n\nThanks for checking the proof, but actually Theorems 4 and 5 are novel and do not appear in [36, 37, 38], which do not deal at all with generalization properties of SGD. It is true that papers [36, 37, 38] contain similar results for the convergence rates of the empirical risk, but they do not discuss the population risk at all. Given the other results of this paper, particularly Theorems 1 and 2, Theorems 4 and 5 largely follow with small additional arguments from the results of [36, 37, 38]. The adaptation of the proofs is not complicated, because in both cases, in the least squares setup, the population risk can be expressed as a function of the empirical risk. As we have the behavior of the empirical risk, we can derive the corresponding asymptotic behavior of the population risk.\n\n* *Line 90. Could you explain why SGD with momentum degenerates to SGD?*\n\nThe citation has the theoretical argument. Heuristically, in high dimensions, gradient estimators can be expected to be orthogonal. If the momentum parameter is not sent to 1 with the dimensionality of the problem, then the lifetime of a single gradient estimate in the memory of the algorithm is shorter than the time it takes to interact with the other gradient estimates. So for a fixed momentum parameter $m$ (e.g., 0.9) and in high dimensions, single-batch SGD+momentum is actually pathwise-equivalent to SGD with effective learning rate $\gamma m/(1-m)$: this comes from summing all contributions of a single gradient estimator as though they all occurred in orthogonal spaces.\n\n* *Lemma 1. What is $\gamma$ here? Note that a somewhat related lemma has been shown in [55].*\n\nGamma is defined in (5). We should emphasize that Lemma 1 is not really claimed as a novelty: it is a direct consequence of Jensen's inequality and is a 1-line proof. We rather consider it a triviality but formulate it as a lemma to emphasize that in this case, implicit regularization cannot occur. Of course [55] is a very nice paper and we are happy to further our discussion of it.\n", " We thank the reviewers for their constructive feedback and careful reading of our paper. We have implemented changes in the **newly uploaded version of our paper and Supplementary Materials**, which we encourage the reviewers to look at. Space limitations prevented us from including all changes in the main text itself, but we will do so for the camera ready. The main changes include:\n\n1. 
Minor typos and fixes suggested by the reviewers\n2. Reference to [Zou et al 2022] in Lemma 1\n3. Expanded non-exhaustive "Related work" section which includes all the references suggested by the reviewers plus a few more in Appendix A of the Supplementary Material. Space constraints prevented us from including this in the main text.\n4. Expanded discussion of "Diffusion Approximations and SGD" that includes detailed differences and limitations between the SME [Li et al 2017] and its analysis [Hu et al 2017]. This is available in the new Supplementary Materials, Appendix A. (See more discussion below for individual questions.) \n5. Added background references and some basic convergence and limiting-behavior results for Volterra equations. This is available in the new Supplementary Materials, Appendix A. \n6. Added some additional experiments highlighting the role of ICR in the performance of SGD versus momentum ([see here](https://anonymous.4open.science/r/revisionstuff-C899/BetaSmallICR_overdetermined.pdf)). When the ICR goes below approximately 1, SGD begins to outperform momentum, as *exactly* predicted. The experiment was performed on a least-squares problem of fixed aspect ratio, where we can change the ICR by varying the power-law exponent of the covariance spectra.", " This work proposes HSGD as a continuous approximation to SGD (multi-pass version). It first shows an approximation theorem that justifies the closeness between the continuous HSGD and the discrete SGD. Then, by studying HSGD, it characterizes the limiting training and population risks through Volterra dynamics. Based on the theorems, the paper concludes that (1) multi-pass SGD does not have implicit bias over GD, at least in the setting of least squares, and (2) SGD has an implicit conditioning effect that accelerates convergence. # Strengths\n+ The implicit conditioning could be an interesting perspective to understand the effectiveness of SGD.\n\n# Weakness\n- I find the statement of related literature could be improved. \n\n- Please correct the bib format of reference [36].\n\n- Line 49, could have mentioned [55]. In particular, [55] showed a similar result that multi-pass SGD generalizes worse than GD in the linear regression setting. The authors should mention this when stating their contribution and in Sec 3.1.\n\n- Thm 1 and Thm 2 seem to be from [39]. The authors should explicitly mention [39] after their Thms 1 and 2. Moreover, could you please explain the delta of Thms 1 & 2 compared to that in [39]? \n\n- Thms 4 and 5 are from [36, 37, 38] (as explicitly mentioned in line 901 in the Appendix). In this sense, the authors should explicitly clarify this issue in their main text. The current writing caused a huge misunderstanding about its true contribution when I first went through the paper. \n\n- Line 90. Could you explain why SGD with momentum degenerates to SGD?\n\n- Lemma 1. What is $\gamma$ here? Note that a somewhat related lemma has been shown in [55].\n\n- Line 119-124. I am actually confused, because it has been long known that a continuous SME approximates SGD, see [L 2017] and [H 2017]. \n\n- Line 148. I believe [21] is a wrong citation here. Could you point out where in [21] i.i.d. sub-Gaussian is assumed? \n\n- Line 158. The notation is a bit confusing, is $\gamma$ still referring to the stepsize?\n\n- Thm 1 is hard for me to interpret. First of all, how do you compare this result to the SME approximation proved by [L 2017] and [H 2017]? 
\nSecondly, could you provide some exemplar regime where the approximation error is small? It seems for the error to be small one needs $d$ to be large, but on the other hand that implies $n$ needs to be large. How is the error affected by stepsize?\n\n- Thm 2 is also hard for me to interpret. \n\n[L 2017] Li Q, Tai C, Weinan E. Stochastic modified equations and adaptive stochastic gradient algorithms. In International Conference on Machine Learning, 2017 Jul 17 (pp. 2101-2110). PMLR.\n\n[H 2017] Hu W, Li CJ, Li L, Liu JG. On the diffusion approximation of nonconvex stochastic gradient descent. arXiv preprint arXiv:1705.07562. 2017 May 22. Please see above. Please see above. It seems most of the theorems are from existing works. Thus this particular work presents little delta to me. Another thing that bothers me is that the authors did not try to explicitly discuss this issue in their main text. ", " This paper studies the generalization ability of multi-pass SGD on high-dimensional convex quadratics by relating it to a stochastic differential equation called homogenized stochastic gradient descent (HSGD). The authors show that using HSGD, a precise risk trajectory of SGD can be established, which reveals the conditions on the data distribution under which SGD is more efficient than GD. The authors further extend the analysis to streaming SGD and show its inability to capture certain salient features compared to multi-pass SGD.\n\n Overall, the strengths of this paper include\n\n(1) establishing an approximation result between SGD and HSGD.\n(2) showing that SGD negatively impacts generalization performance.\n(3) showing how SGD accelerates convergence.\n(4) showing the inability of streaming SGD.\n\nWeaknesses are as follows:\n(1) Although the authors provide a precise characterization of the generalization error of SGD, the formulas for $\Psi_t$ and $\Omega_t$ are still difficult to follow. There lacks a good interpretation of the developed results, especially in the non-asymptotic setting where $t$ is not approaching infinity. Note that for many high-dimensional linear regression problems, GD/SGD with early stopping can give good generalizable solutions while overfitting will occur if $t \rightarrow \infty$.\n\n(2) Additionally, in some cases the condition number $\kappa$ could be extremely large (when the matrix $T$ or $A$ has a fast decaying eigenspectrum). Then the convergence rate to $\Psi_\infty$ and $\Omega_\infty$ may not be that interesting as they will be super slow and people tend to early stop the optimization algorithm. \n\n(3) In section 3.2, a detailed explanation about why SGD is more efficient than GD is missing. It seems that the authors only claim that SGD with a constant learning rate can match the convergence of GD, but how this leads to the argument that SGD is more efficient is missing. The authors may need to provide a rigorous comparison between the efficiency of SGD and GD/M-GD.\n\n(4) Lemma 1 is not new in the literature (at least in the case of $\delta=0$). The following work (see their Theorem 1) has shown that for any feasible learning rate, the excess risk of multi-pass SGD must be greater than or equal to GD; then taking the limit learning rate $\rightarrow 0$ can imply the results of the Lemma for the case of $\delta=0$. The authors may need to comment on this in the surrounding text.\n\nD. Zou, J. Wu, V. Braverman, Q. Gu, and S. M. Kakade. Risk Bounds of Multi-Pass SGD for Least Squares in the Interpolation Regime. 
arXiv preprint arXiv:2203.03159, 2022.\n\n Please refer to the weakness section. Please refer to the weakness section.", " The authors of the paper present the following contributions:\n- They show that, in the high-dimensional limit, multipass SGD behaves, in terms of empirical and population loss, as an SDE with a particular noise covariance.\n- They further show that, in the high-dimensional limit, this SDE converges to a deterministic Volterra dynamics.\n- They analyse these dynamics in comparison to those of the gradient flow, showing that noise negatively impacts the population loss but can accelerate convergence.\n ### **Strengths**\n\n- First I have to say that the paper is well written, pleasant to follow and well illustrated by the experiments.\n- Second and more importantly, I really like the results concerning the high-dimensional set-up: comparing the loss dynamics to those of an explicit SDE that can be studied is a good idea. And even though I had no time to check the proof, the result seems sound. Then, concentration to Lotka-Volterra dynamics is also a nice development of the analysis!\n\n\n### **Weaknesses**\n\n- Perhaps the first weakness is the limitation of the result: the authors are very precise in the specific setting of linear regression, provide some good material to analyse it, but it is hard to conclude anything deep from their analysis.\n- I know this is a large tendency in the ML community, but I think that overselling the results is counterproductive when writing a paper. A good example is the title of the paper, where the fact that the focus is on least squares should be written. Also, trying to systematically conclude that SGD is "better" or that this can explain the amazing performance of SGD in practice is a bit overselling. The results of the authors being already strong, there is no need to oversell the paper like this. \n- In the same direction, I find the introduction confusing and the references to the literature incomplete. I think that the authors should be more specific and really focus on the least squares literature, and not try to refer to the deep learning one (or maybe just at the end to motivate further investigations). In terms of literature, line 50, the multipass is not properly covered and [1,2] among others are worth mentioning. \n- The paragraph on **Diffusion approximations and homogenized SGD** should be rewritten. A lot of work has been done to model SGD as a diffusion. [3] properly explains how to model SGD with an SDE, [4] analyses it respecting the geometry of the noise, and the sentence "we lack a precise connection between a concrete SLD and a practical nonconvex learning problem" is simply wrong, as [5] studied exactly an SDE model in a non-convex setting. Finally, to be consistent with the rest of the paper, it would be great that $\gamma$ appears in the noise term of equation (5). Note also that in the isotropic case, if $\Sigma$ is proportional to the identity, the invariant measure is not proportional to $e^{-f}$ but to $e^{- C_{\gamma} f}$, with $C_{\gamma}>0$ some constant.\n\n\n[1] Junhong Lin and Lorenzo Rosasco. Optimal rates for multi-pass stochastic gradient methods. Journal of Machine Learning Research, 18(97):1–47, 2017.\n\n[2] Loucas Pillaud-Vivien, Alessandro Rudi, and Francis Bach. Statistical optimality of stochastic gradient descent on hard learning problems through multiple passes. In Advances in Neural Information Processing Systems, pages 8125–8135, 2018.\n\n[3] Qianxiao Li, Cheng Tai, and Weinan E. 
Stochastic modified equations and dynamics of stochastic gradient algorithms I: Mathematical foundations. Journal of Machine Learning Research, 20(40):1–47, 2019.\n \n[4] Stephan Wojtowytsch. Stochastic gradient descent with noise of machine learning type. Part II: Continuous time analysis, *Preprint*, 2021.\n\n[5] Scott Pesme, Loucas Pillaud-Vivien, and Nicolas Flammarion. Implicit bias of SGD for diagonal linear networks: a provable benefit of stochasticity. Advances in Neural Information Processing Systems, 34, 2021.\n\n\n### **Minor Flaws**\n- [36] is an empty reference\n- line 92: missing a $dt$ in equation (5) for the drift\n- line 166: explain why you take $tn$ as the time for SGD\n- line 250: parenthesis problems\n\nI have one question for the SDE limit of the model: could the authors explain why the quadratic case is special to derive the SDE limit? How could it be extended to a general convex loss? Or even to a non-convex setup?\n\n As I already discussed the limitations in the previous boxes, I'll put the conclusion of my review here. I truly think this is a nice paper, with solid results and an interesting high-dimensional limit dynamics. I now put a weak accept, but will be happy to raise my score if the authors temper a bit the overselling part and correct the minor flaws and the referencing.", " This paper uses an SDE (so-called HSGD) to study the optimization properties of both streaming and multi-pass SGD over quadratic functions. First, they show that the risk of SGD is no better than GD, showing that there is no implicit regularization in this setting. Next, they show the connection between SGD and HSGD, enabling them to use HSGD to study the properties of SGD. Their main contribution is to show that SGD enjoys a different condition number that sometimes leads to a better convergence rate over GD. Moreover, they also study streaming SGD using the same framework and relate the two to compare their generalization. ### Strengths\n* The most interesting result is to show that SGD has a different condition number for **convex quadratic functions**. In particular, SGD outperforms GD if the spectrum of the Hessian has some outliers, which is usual in realistic applications, see Figure 2.\n* The Volterra dynamics, though not first proposed by this paper, and their analysis might be of independent interest for studying the behavior of SGD.\n\n### Weaknesses\n* In Figure 2, they run random feature models for realistic applications, i.e., CIFAR-10, CIFAR-5m, showing that in these cases the Hessian is ill-conditioned, supporting their main arguments that SGD outperforms GD on these realistic applications. However, note that the random feature model is a convex optimization problem, which differs a lot from realistic neural networks. Hence, small ICR on the random feature model cannot be used to show that SGD outperforms GD on realistic optimization problems **directly**. Next, the author should provide the \"real\" convergence comparison between SGD and GD even for the random feature models to support their main observation. ICR is just an indicator for the comparison between SGD and GD. There might be an approximation error due to the usage of HSGD, and also the dimension and iteration time are finite. Hence, without this simulation, I am not convinced that SGD does outperform GD in terms of convergence speed.\n\n* The argument that there is no implicit regularization might be trivial. 
For convex optimization, **any** convergent algorithm should converge to the same function value (for strongly convex cases, it converges exactly to the same point). To the best of my knowledge, all the previous papers studying implicit regularization focus on nonconvex optimization.\n\n* The introduction of Volterra Dynamics is limited, which is very unfriendly to those unfamiliar with it. It would be good if additional background information could be provided.\n* Missing title in the reference [36]. Please see the weakness. Please see the weakness."
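A brief aside on the invariant-measure point raised in the review and rebuttal above: for a constant-coefficient isotropic SLD, the standard Gibbs computation makes the $C_{\gamma}$ dependence explicit. This is only a sketch under assumed dynamics; the noise scale $\gamma c$ below is a hypothetical stand-in for whatever stepsize-dependent constant the paper's equation (5) actually carries.

$$dX_t = -\nabla f(X_t)\,dt + \sqrt{\gamma c}\;dB_t \quad\Longrightarrow\quad \pi(x) \propto \exp\!\left(-\tfrac{2}{\gamma c}\,f(x)\right) = e^{-C_{\gamma} f(x)},\qquad C_{\gamma} = \tfrac{2}{\gamma c} > 0.$$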
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 2, 4, 4, 4 ]
[ "p-pZC3fgnvB", "8aeIVgnkdpI", "GoH2AGk-S5t", "m99_GzrFNgV", "UEkYuZs1qfg", "VZFxqPhjXHn", "jnzDMhrNDdj", "t6iXHfSOZXL8", "Qz48jbosjRh", "XKDJvTnLpq", "zzUb5uZUKMf", "nips_2022_pZsAwqUgnAs", "nips_2022_pZsAwqUgnAs", "nips_2022_pZsAwqUgnAs", "nips_2022_pZsAwqUgnAs", "nips_2022_pZsAwqUgnAs" ]
nips_2022_OtxyysUdBE
FedRolex: Model-Heterogeneous Federated Learning with Rolling Sub-Model Extraction
Most cross-device federated learning (FL) studies focus on the model-homogeneous setting where the global server model and local client models are identical. However, such a constraint not only excludes low-end clients who would otherwise make unique contributions to model training but also restrains clients from training large models due to on-device resource bottlenecks. In this work, we propose FedRolex, a partial training (PT)-based approach that enables model-heterogeneous FL and can train a global server model larger than the largest client model. At its core, FedRolex employs a rolling sub-model extraction scheme that allows different parts of the global server model to be evenly trained, which mitigates the client drift induced by the inconsistency between individual client models and server model architectures. Empirically, we show that FedRolex outperforms state-of-the-art PT-based model-heterogeneous FL methods (e.g., Federated Dropout) and reduces the gap between model-heterogeneous and model-homogeneous FL, especially under the large-model large-dataset regime. In addition, we provide theoretical statistical analysis on its advantage over Federated Dropout. Lastly, we evaluate FedRolex on an emulated real-world device distribution to show that FedRolex can enhance the inclusiveness of FL and boost the performance of low-end devices that would otherwise not benefit from FL. Our code is available at: \href{https://github.com/MSU-MLSys-Lab/FedRolex}{https://github.com/MSU-MLSys-Lab/FedRolex}.
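To make the rolling sub-model extraction described in the abstract concrete, here is a minimal illustrative sketch of the per-round index window for a single hidden layer. This is a reconstruction, not the paper's actual implementation: the function name, the assumption that the window advances by one position per round, and the layer sizes are all hypothetical.

```python
import numpy as np

def rolling_submodel_indices(round_idx: int, global_width: int, client_width: int) -> np.ndarray:
    # A window of client_width hidden units that advances each round and wraps
    # around, so every index of the global layer is selected equally often.
    start = round_idx % global_width
    return (start + np.arange(client_width)) % global_width

# Example: a global layer with 8 hidden units and a client that can host 4 of them.
for rnd in range(3):
    print(rnd, rolling_submodel_indices(rnd, global_width=8, client_width=4))
# 0 [0 1 2 3]
# 1 [1 2 3 4]
# 2 [2 3 4 5]
```

By contrast, a static scheme in the style of HeteroFL would always return the first client_width indices, and a random scheme in the style of Federated Dropout would resample them each round; the rolling variant is what lets every global parameter be trained on the full federation's data over time.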
Accept
To better handle the case where each client has heterogeneous device resources, this paper presents a model-heterogeneous federated learning algorithm, FedRolex. FedRolex rolls the sub-model window in each federated iteration, in order to train the parameters of the global model on the global data distribution. Experimental results show that FedRolex outperforms other model-heterogeneous baselines, and ablation studies on sub-model rolling show it is an effective technique. However, this paper suffers from several limitations. First, it remains unclear why FedRolex can significantly outperform Federated Dropout and HeteroFL, since they differ only in their sampling methods. Second, after federated learning, the low-end devices can still only use a sub-model; will they still benefit?
train
[ "8ntybHCno2Q", "w5tyCt5O-MZ", "FLVZlP3uMAY", "49l0mlTPQED", "7_K38_z_eer", "dnz8l6B8f_t", "w4MBf6NWmlI", "gSpOtYKurW5", "1jlXHIUVjro", "DbY_u0LWAzl", "E9XjD4CZ-Xe", "J2O0MWJUlHP", "MI1_Ziwe0gk", "gHtDiePuuys", "azZoU7_Lz70", "NYwVcjRkepO", "8uLTmCciM5g", "fjMYyofwZC", "bDnZPef0fJkL", "CQa_RCOBgUOa", "CznujbdzLMH", "8YlqFSRnPnX", "lHzfU7_OEtN", "MH5WBbl2PEc" ]
[ "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer 5GU1,\n\nWe have included the rebuttal results and discussion into the revision, and have uploaded the latest version. Thanks again for the time and valuable comments. We really appreciate it.", " Thank the authors for your response, which addressed my major concerns. I have improved my ratting.", " Dear Authors,\n\nMany thanks for your great efforts in the rebuttal, especially for the extra experiments. Hope all rebuttal results and discussion can be delivered in the revision. I will raise my score.\n\nBest wishes,\nReviewer 5GU1", " Dear Reviewer ic9g,\n\nWe want to thank you for taking the time to review our paper and provide valuable comments to help us clarify and improve our work. Feel free to let us know if we have answered your questions and addressed your concerns. We really appreciate this opportunity to communicate with you to improve our work. Thanks again for the time and valuable comments. We really appreciate it.\n\n", " Dear Reviewer 2zgM,\n\nWe want to thank you for taking the time to review our paper and provide valuable comments to help us clarify and improve our work. Feel free to let us know if we have answered your questions and addressed your concerns. We really appreciate this opportunity to communicate with you to improve our work. Thanks again for the time and valuable comments. We really appreciate it.\n\n", " Dear Reviewer 6GHV, \n\nWe want to thank you for taking the time to review our paper and provide valuable comments to help us clarify and improve our work. Feel free to let us know if we have answered your questions and addressed your concerns. We really appreciate this opportunity to communicate with you to improve our work. Thanks again for the time and valuable comments. We really appreciate it.\n", " Dear Reviewer 5GU1, \n\nWe want to thank you for taking the time to review our paper and provide valuable comments to help us clarify and improve our work. Feel free to let us know if we have answered your questions and addressed your concerns. We really appreciate this opportunity to communicate with you to improve our work. Thanks again for the time and valuable comments. We really appreciate it.\n\n\n\n", " (2/2) ``` The cohort size considered in the experiments is quite small. In practice, a much larger cohort size is normally incorporated in cross-device FL.```\n\n```Can the proposed technique generalize to a larger cohort size which is more practical in real-world deployments?```\n\nIn our paper, we used a 10% client participation rate by following works that are relevant to ours [1, 2, 3, 4]. To answer the reviewer’s question and examine the effect of client participation rate, we ran additional experiments with both lower (5%) and higher (20%) client participation rates using CIFAR10 as an example for FedRolex and both HeteroFL and Federated Dropout baselines. The results are summarized in the table below. \n| | | Sample Rate | | |\n| :------ | :---------------- | :---------------- | :--------------- | :----------------- |\n| | | 5% | 10% | 20% |\n| Cifar10 | HeteroFL | 48\\.43 (+/- 1.78) | 63\\.90 (+/-2.74) | 65\\.07 (+/- 2.17) |\n| | Federated Dropout | 42\\.06 (+/- 1.29) | 46\\.64 (+/-3.05) | 55\\.20 (+/- 4.64) |\n| | FedRolex | 57\\.90 (+/- 2.72) | 69\\.44 (+/-1.50) | 71\\.85 ( +/- 1.22) |\n\nAs shown, we can see that FedRolex consistently outperforms both Federated Dropout baselines under 5%, 10% and 20% client participation rates. 
\n\nWe will add the results of this experiment in the Appendix.\n\n[1] Diao, Enmao, Jie Ding, and Vahid Tarokh. \"HeteroFL: Computation and communication efficient federated learning for heterogeneous clients.\" arXiv preprint arXiv:2010.01264 (2020).\n\n[2] Horvath, Samuel, et al. \"Fjord: Fair and accurate federated learning under heterogeneous targets with ordered dropout.\" Advances in Neural Information Processing Systems 34 (2021): 12876-12889.\n\n[3] He, Chaoyang, Murali Annavaram, and Salman Avestimehr. \"Group knowledge transfer: Federated learning of large CNNs at the edge.\" Advances in Neural Information Processing Systems 33 (2020): 14068-14080.\n\n[4] Reddi, Sashank, et al. \"Adaptive federated optimization.\" arXiv preprint arXiv:2003.00295 (2020).\n", " We thank the reviewer for the thoughtful review. Here are our responses.\n\n(1/2) ```The experiments are limited to vision datasets (i.e., CIFAR-10, CIFAR-100). It is not clear whether the proposed technique can generalize to other data types such as languages.```\n\n```Can the proposed technique generalize to other data types such as languages and the corresponding language models?```\n\n\nOur proposed method can indeed train SOTA models beyond CNN-based models, such as Transformers. To demonstrate this, we applied our method to the Transformer model used for federated learning introduced in [1]. Specifically, the Transformer model includes 3 layers; the dimension of the token embeddings is 128; the hidden dimension of the feed-forward network (FFN) block is 2048; 8 heads are used for the multi-head attention, where each head is based on 12-dimensional (query, key, value) vectors; ReLU activation is used and the dropout rate is set to 0.1. To extract the submodels, we varied the width of the hidden layers in the Transformer heads to ½, ¼, ⅛, 1/16, and 1/32. \n\nIn fact, the PT-based methods we used as baselines in our paper can be used to train the Transformer model. Therefore, we compared our method against the PT-based baselines HeteroFL and Federated Dropout by training them using the Transformer model described above on the StackOverflow dataset, which consists of 342,477 clients. We followed [1] to sample 200 clients per round. We ran the experiments three times with different seeds and report the average and std of the global accuracy. The results are listed in the table below.\n\n| | **Method** | | **Global Accuracy** |\n| :-- | :------------------- | :-- | :-------------------- |\n| | HeteroFL | | 27\.21 (+/- 0.12) |\n| | Federated Dropout | | 23\.46 (+/- 0.12) |\n| | *FedRolex* | | *29\.22 (+/- 0.24)* | \n\nAs shown in the table, we can see that our proposed method outperforms both HeteroFL and Federated Dropout. \n\nWe realize that in Figure 1 we visualized our method using convolution, which has caused confusion. We will update Figure 1, where we plan to use more general neurons instead of convolution to indicate that our method can also apply to Transformers.\n\n[1] Wang, Jianyu, et al. \"A field guide to federated optimization.\" arXiv preprint arXiv:2107.06917 (2021).\n", " (2/2) ```Can the authors elaborate more on the concept of client drift to make the paper self-contained?```\n\nThe term 'Client Drift' was introduced in [1]. There, it was used to describe the error in gradient estimation due to the differences in data distributions between clients. The error in gradient estimation (or model update) can arise for several reasons in federated learning. 
With multiple local updates, each client implements (stochastic) gradient updates several times based on its own updated models. For model-heterogeneous federated learning, client drift will occur not only due to the differences in data distributions between the clients' private datasets but also due to the inconsistency between the clients' individual model architectures. \n\n[1] Wang, Jianyu, et al. \"A field guide to federated optimization.\" arXiv preprint arXiv:2107.06917 (2021).\n", " We thank the reviewer for the thoughtful review. Here are our responses.\n\n(1/2) \n``` The primary weakness of the work is that although the work claims that the proposed FedRolex could train large server models, the model studied in the current version is ResNet18. The work would be much more convincing if FedRolex could train modern large models such as Transformers.```\n\n```Would FedRolex work on modern large models such as Transformer?```\n\n\nOur proposed method can indeed train SOTA models beyond CNN-based models, such as Transformers. To demonstrate this, we applied our method to the Transformer model used for federated learning introduced in [1]. Specifically, the Transformer model includes 3 layers; the dimension of the token embeddings is 128; the hidden dimension of the feed-forward network (FFN) block is 2048; 8 heads are used for the multi-head attention, where each head is based on 12-dimensional (query, key, value) vectors; ReLU activation is used and the dropout rate is set to 0.1. To extract the submodels, we varied the width of the hidden layers in the Transformer heads to ½, ¼, ⅛, 1/16, and 1/32. \n\nIn fact, the PT-based methods we used as baselines in our paper can be used to train the Transformer model. Therefore, we compared our method against the PT-based baselines HeteroFL and Federated Dropout by training them using the Transformer model described above on the StackOverflow dataset, which consists of 342,477 clients. We followed [1] to sample 200 clients per round. We ran the experiments three times with different seeds and report the average and std of the global accuracy. The results are listed in the table below.\n\n| | **Method** | | **Global Accuracy** |\n| :-- | :------------------- | :-- | :-------------------- |\n| | HeteroFL | | 27\.21 (+/- 0.12) |\n| | Federated Dropout | | 23\.46 (+/- 0.12) |\n| | *FedRolex* | | *29\.22 (+/- 0.24)* | \n\nAs shown in the table, we can see that our proposed method outperforms both HeteroFL and Federated Dropout.\n \nWe realize that in Figure 1 we visualized our method using convolution, which has caused confusion. We will update Figure 1, where we plan to use more general neurons instead of convolution to indicate that our method can also apply to Transformers.\n\n[1] Wang, Jianyu, et al. \"A field guide to federated optimization.\" arXiv preprint arXiv:2107.06917 (2021).\n", " ```(6/6) Though sampling the sub-network in a rolling way outperforms baselines in the given experimental settings, it is not clear whether FedRolex outperforms baselines with a higher client participation rate.```\n\nIn our paper, we used a 10% client participation rate by following works that are relevant to ours [1, 2, 3, 4]. To answer the reviewer's question and examine the effect of the client participation rate, we ran additional experiments with both lower (5%) and higher (20%) client participation rates using CIFAR-10 as an example for FedRolex and both the HeteroFL and Federated Dropout baselines. The results are summarized in the table below. 
\n| | | Sample Rate | | |\n| :------ | :---------------- | :---------------- | :--------------- | :----------------- |\n| | | 5% | 10% | 20% |\n| CIFAR-10 | HeteroFL | 48\.43 (+/- 1.78) | 63\.90 (+/-2.74) | 65\.07 (+/- 2.17) |\n| | Federated Dropout | 42\.06 (+/- 1.29) | 46\.64 (+/-3.05) | 55\.20 (+/- 4.64) |\n| | FedRolex | 57\.90 (+/- 2.72) | 69\.44 (+/-1.50) | 71\.85 (+/- 1.22) |\n\nAs shown, we can see that FedRolex consistently outperforms both the HeteroFL and Federated Dropout baselines under 5%, 10% and 20% client participation rates. \n\nWe will add the results of this experiment in the Appendix.\n\n[1] Diao, Enmao, Jie Ding, and Vahid Tarokh. \"HeteroFL: Computation and communication efficient federated learning for heterogeneous clients.\" arXiv preprint arXiv:2010.01264 (2020).\n\n[2] Horvath, Samuel, et al. \"Fjord: Fair and accurate federated learning under heterogeneous targets with ordered dropout.\" Advances in Neural Information Processing Systems 34 (2021): 12876-12889.\n\n[3] He, Chaoyang, Murali Annavaram, and Salman Avestimehr. \"Group knowledge transfer: Federated learning of large CNNs at the edge.\" Advances in Neural Information Processing Systems 33 (2020): 14068-14080.\n\n[4] Reddi, Sashank, et al. \"Adaptive federated optimization.\" arXiv preprint arXiv:2003.00295 (2020).\n", " ``` (5/6) KD-based baselines perform much worse than PT-based methods on CIFAR-100, could the authors provide the details of the chosen proxy data?```\n\nTo clarify, since our method is PT-based rather than KD-based, we directly pulled the performance of the KD-based baselines from their original papers. For the PT-based baselines, we reproduced them on our side following the codebases provided by the original papers. \n\nTo answer the reviewer's question, we quote the original KD-based paper [1], which provides the details of the chosen proxy data: \"The public dataset is generated by applying a different data transformation to the data samples (non-overlapping with either the training or test dataset) to further differentiate it from the training dataset. For all datasets, non-overlapping users' data samples are used.\" \n\n[1] Cho, Yae Jee, et al. \"Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning.\" arXiv preprint arXiv:2204.12703 (2022). \n", " ```(4/6) The authors should provide the communication cost and computation overhead to reach the target accuracy.```\n\nTo calculate the communication cost, we use the average size of the models sent by all the participating clients per round as the metric. Similarly, to calculate the computation overhead, we calculate the FLOPs and numbers of parameters in the models of all the participating clients per round and take the average as the metric.\n\nTo put those calculated numbers into context, we also calculate the upper and lower bounds of the communication cost and computation overhead (i.e., all the clients using the same model, the largest and the smallest respectively). 
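A minimal sketch of how the per-round averages described above could be computed, assuming each participating client reports its sub-model's parameter count, FLOPs, and serialized size; the dictionary keys and helper name are hypothetical, not from the paper's codebase.

```python
def average_client_costs(participating_clients):
    # Per-round averages over the cohort: parameter count and FLOPs measure
    # computation overhead; serialized model size measures communication cost.
    n = len(participating_clients)
    avg_params = sum(c["num_params"] for c in participating_clients) / n
    avg_flops = sum(c["flops"] for c in participating_clients) / n
    avg_size_mb = sum(c["model_size_mb"] for c in participating_clients) / n
    return avg_params, avg_flops, avg_size_mb
```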
The results are listed in the table below.\n| | **Homogeneous (largest)** | **FedRolex** | **Homogeneous (smallest)** |\n| :---------------------------------------------------- | :----------------------- | :------------------- | :------------------------ |\n| **Average Number of Parameters per client (Million)** | 11\.1722 | 2\.9781232 | 0\.04451 |\n| **Average FLOPs per client (Million)** | 557\.656 | 149\.048384 | 2\.41318 |\n| **Average model size per client (MB)** | 42\.62 | 11\.36 | 0\.17 |\n\nAs shown, we can see that compared to the upper bound, FedRolex significantly reduces the communication cost and computation overhead, while being able to achieve comparable model accuracy (see Table 2 in the submitted paper). Compared to the lower bound, although FedRolex has higher communication cost and computation overhead, the model accuracy achieved is much higher (see Table 2 in the submitted paper). Based on these results, we can conclude that FedRolex is able to achieve comparably high model accuracy to the upper bound with much less communication cost and computation overhead.\n\nWe will add the results of this experiment in the Appendix.\n", " ```(3/6) The authors only provide the statistical analysis to prove that the index is chosen equally randomly; the convergence analysis is not provided.```\n\nSimilar to the SOTA works (PT-based methods [1] [2] [3] and KD-based methods [4] [5] [6] [7]) we cite and compare against in this paper, we focus on the empirical study of the model-heterogeneity problem in federated learning. We admit that the full convergence analysis is actually not trivial, and instead, we provided statistical analysis to help readers understand why our method is better than other PT-based methods. We will make it clear in the revision to emphasize that our paper's main contribution is from the empirical side, and state that the lack of a full convergence analysis is a limitation. We will pursue it in our future work.\n\n[1] Caldas, Sebastian, et al. \"Expanding the reach of federated learning by reducing client resource requirements.\" arXiv preprint arXiv:1812.07210 (2018).\n\n[2] Diao, Enmao, Jie Ding, and Vahid Tarokh. \"HeteroFL: Computation and communication efficient federated learning for heterogeneous clients.\" arXiv preprint arXiv:2010.01264 (2020).\n\n[3] Horvath, Samuel, et al. \"Fjord: Fair and accurate federated learning under heterogeneous targets with ordered dropout.\" Advances in Neural Information Processing Systems 34 (2021): 12876-12889.\n\n[4] Cho, Yae Jee, et al. \"Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning.\" arXiv preprint arXiv:2204.12703 (2022).\n\n[5] He, Chaoyang, Murali Annavaram, and Salman Avestimehr. \"Group knowledge transfer: Federated learning of large CNNs at the edge.\" Advances in Neural Information Processing Systems 33 (2020): 14068-14080.\n\n[6] Lin, Tao, et al. \"Ensemble distillation for robust model fusion in federated learning.\" Advances in Neural Information Processing Systems 33 (2020): 2351-2363.\n\n[7] Itahara, Sohei, et al. \"Distillation-based semi-supervised federated learning for communication-efficient collaborative training with non-iid private data.\" arXiv preprint arXiv:2008.06180 (2020).\n", " ```(2/6) The experimental results do not list the performance of the simplest model in FedRolex. 
For the heterogeneous methods, the performance of each group should be listed.```\n\nOur goal in this paper is to train a large global model using a federation of clients with heterogeneous on-device resources. The models trained on the clients are parts of the large global model (i.e., submodels). Therefore, the accuracies of the submodels do not reflect the accuracy of the global model. This is why our work, as well as the other PT-based SOTA works, only lists the accuracies of the global model. \n\nDetermining what models to deploy onto each heterogeneous client AFTER the global model has finished federated training is a separate task. This can be achieved by a few techniques, such as compressing, quantizing, or distilling the global model into smaller models to fit the resources of the target client. However, this is not the focus of this work and is a separate study in itself. We will pursue it in our future work. \n", " We thank the reviewer for the thoughtful review. Here are our responses.\n\n```(1/6) Compared with Federated Dropout and HeteroFL, FedRolex only modifies the sampling method for Federated Dropout. The idea is not novel enough.```\n\nWe understand the reviewer's perspective, as the idea presented in the main paper is focused on the simple rolling-based subnet extraction technique. In fact, besides rolling-based subnet extraction, we introduced two other techniques: (1) client weighing and (2) overlapping kernel extraction. The reason we only presented the rolling-based subnet extraction technique in the main paper is based on the findings of our rigorous ablation studies: we found that the core idea of rolling-based subnet extraction consistently outperforms SOTA baselines across multiple datasets and models. In contrast, as we have shown in our Appendix, our ablation studies on client weighing (Table 4) and overlapping kernel extraction (Figure 5) show that these techniques are not the main contributors to why we beat the SOTA baselines. We decided not to include those techniques in the main paper with the intention of not misguiding the readers and instead pinpointing the most important technique that is consistently effective. We argue that despite its simplicity, we showed that it has a high impact in the sense that it allows the model to achieve accuracy close to homogeneous models.\n", " ```(3/3) The computation resources in the rich-resource clients seem not to be fully exploited by the proposed method.```\n\nAgain, each client will load the corresponding submodel that fits its local resources. Therefore, for rich-resource clients, the submodels that they train can be as large as the global model. In fact, in Figure 3 of our paper, we showed that clients can use all their resources to train part of a large global model that they themselves cannot fully load.\n", " ```(2/3) The proposed method reduces computation costs for low-resource clients by only updating a small part of the model parameters in a training round. The proposed method still needs different clients to train an intelligent model with the same architecture. However, in some scenarios, it may be difficult for the low-resource clients to save and load the model that can be trained on the rich-resource clients. It seems that the proposed model cannot tackle such scenarios.```\n\nEach client only needs to load the corresponding submodel that fits its local resources. 
Therefore, for low-resource clients, the submodels that they need to save and load can be much smaller than the global model. \n", " We thank the reviewer for the thoughtful review. Here are our responses.\n\n```(1/3) The proposed method can only be used to train CNN-based models and is difficult to be applied to train other SOTA models, such as transformers.```\n\nOur proposed method can indeed train SOTA models beyond CNN-based models such as Transformers. To demonstrate this, we applied our method to the Transformer model used for federated learning introduced in [1]. Specifically, the Transformer model includes 3 layers; the dimension of the token embeddings was 128; the hidden dimension of the feed-forward network (FFN) block is 2048; 8 heads were used for the multi-head attention, where each head is based on 12-dimensional (query, key, value) vectors; ReLU activation was used and the dropout rate was set to 0.1. To extract the submodels, we varied the width of the hidden layers in the Transformer heads to ½, ¼, ⅛, 1/16, and 1/32. \n\nIn fact, the PT-based methods we used as baselines in our paper can be used to train the Transformer model. Therefore, we compare our method against PT-based baselines HeteroFL and Federated Dropout by training them using the Transformer model described above on the StackOverflow dataset, which consists of 342,477 clients. We followed [1] to sample 200 clients per round. We run the experiments three times with different seeds and report the average and std of the global accuracy. The results are listed in the table below.\n| | **Method** | | **Global Accuracy** |\n| :-- | :------------------- | :-- | :-------------------- |\n| | Heterofl | | 27\\.21 (+/- 0.12) |\n| | Federated Dropout | | 23\\.46 (+/- 0.12) |\n| | *FedRolex* | | *29\\.22 (+/- 0.24)* | \n\nAs shown in the table, we can see that our proposed method outperforms both HeteroFL and Federated Dropout. \n\nWe realize that in Figure 1 we visualized our method using convolution which has caused confusion. We will update Figure 1 where we plan to use more general neurons instead of convolution to indicate our method can also apply to Transformers.\n\n[1] Wang, Jianyu, et al. \"A field guide to federated optimization.\" arXiv preprint arXiv:2107.06917 (2021).\n", " The authors proposed FedRolex, a model-heterogeneous federated learning framework that breaks the constraint of standard federated learning based on homogeneous model. The authors proposed a rolling technique to extract heterogeneous submodels from a shared global model, which has been shown to achieve much better performance in comparison with other model-heterogeneous federated learning methods such as Federated Dropout and HeteroFL in both high and low data heterogeneity levels. Strengths:\n- The authors did a good job in reviewing the existing works, especially the ones that are recently published. \n- This is a timely and important topic to work on, and the authors have demonstrated that a simple submodel rotation technique can significantly bridge the gap between model homogeneous and model heterogeneous settings.\n- The work has a potential to greatly improve the fairness of federated learning. \n\nWeaknesses:\n- The primary weakness of the work is that although the work claims that the proposed FedRolex could train large server models, the model studied in the current version is ResNet18. 
The work would be much more convincing if FedRolex could train modern large models such as Transformers.\n -\tWould FedRolex work on modern large models such as Transformers?\n-\tCan the authors elaborate more on the concept of client drift to make the paper self-contained?\n The authors discussed the limitations in section 5.", " In this paper, the authors study the model heterogeneity problem in federated learning. The authors propose a simple but effective method, which adopts a rolling window to generate small sub-models for local training. Extensive experiments on real-world datasets verify the effectiveness of the proposed method. Strengths:\n1. The model heterogeneity problem studied in this paper is important for the applications of federated learning in real-world scenarios.\n2. The proposed method is simple to implement.\n3. This paper is well-written and easy to follow.\n\nWeakness:\n1. The proposed method can only be used to train CNN-based models and is difficult to be applied to train other SOTA models, such as transformers.\n2. The proposed method reduces computation costs for low-resource clients by only updating a small part of model parameters in a training round. The proposed method still needs different clients to train an intelligent model with the architecture. However, in some scenarios, it may be difficult for the low-resource clients to save and load the model that can be trained on the rich-resource clients. It seems that the proposed model cannot tackle such scenarios.\n3. The computation resources in the rich-resource clients seem not to be fully exploited by the proposed method.\n Refer to the weakness. Refer to the weakness.", " In this paper, the authors target an important problem in federated learning (FL) and propose a simple partial training (PT)-based technique called FedRolex to enable model-heterogeneous FL. Compared to existing methods, the core innovation in FedRolex is a rolling sub-model extraction scheme, where the sub-model is extracted from the global model using a rolling window that advances in each communication round. As such, parameters of the global model are trained evenly, rather than randomly or statically as in existing methods. This simple innovation has been demonstrated empirically to be the key factor that contributes to the superior performance of FedRolex over state-of-the-art methods. Strengths:\n- The review of the existing literature is solid. Table 1 summarizes the key differences between the proposed technique and existing works across a number of important dimensions. This is very helpful to understand the contributions of the paper.\n- The proposed technique is simple yet powerful. Such simplicity helps the readers to factor out tricks that are not essential. \n- The experiments are designed with careful thought and the conclusions drawn from the experimental results are quite exciting, especially enhancing the inclusiveness of FL in real-world distributions. \n\nWeaknesses:\n- The experiments are limited to vision datasets (i.e., CIFAR-10, CIFAR-100). It is not clear whether the proposed technique can generalize to other data types such as languages.\n- The cohort size considered in the experiments is quite small. 
In practice, a much larger cohort size is normally incorporated in cross-device FL.\n - Can the proposed technique generalize to other data types such as languages and the corresponding language models?\n- Can the proposed technique generalize to a larger cohort size which is more practical in real-world deployments?\n The authors discussed the limitations in section 5.", " In this paper, the authors propose to sample the sub-models in a rolling way and find the simple method improves the performance of the global model compared with baselines.\n Strengths:\n* Overall, the paper provides an interesting perspective for understanding the limitations of HeteroFL and Federated Dropout. \n* The authors not only evaluate the PT-based methods but also compare **FedRolex** with KD-based methods.\n* In the appendix, the authors give statistical analysis to further analyze the proposed method.\n\nWeakness:\n* Compared with Federated Dropout and HeteroFL, FedRolex only modifies the sampling method for Federated Dropout. The idea is not novel enough.\n* The experimental results do not list the performance of the simplest model in FedRolex. For the heterogeneous methods, the performance of each group should be listed.\n* The authors only provide the statistical analysis to prove that the index is chosen equally randomly, the convergence analysis is not provided. \n* The authors should provide the communication cost and computation overhead to reach the target accuracy. * KD-based baselines perform much worse than PT-based methods on CIFAR-100, could the authors provide the details of the chosen proxy data?\n* Though sampling the sub-network in a rolling way outperforms baselines in the given experimental settings, it is not clear whether FedRolex outperforms baselines with a higher client participation rate. \n This paper does not have a limitation subsection. But I don't think the paper will have potential negative societal impacts." ]
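The reviews and rebuttals above repeatedly refer to FedRolex's rolling sub-model extraction, where a capacity-limited client trains a window of the global model that advances every communication round. A minimal sketch of that idea follows; the function name, the advance step of one index per round, and the capacity fraction are illustrative assumptions here rather than the paper's exact implementation.

```python
def rolling_submodel_indices(round_idx: int, num_channels: int, capacity_frac: float):
    """Hypothetical sketch of FedRolex-style rolling extraction.

    A client whose budget is capacity_frac of the global layer width trains a
    contiguous (wrap-around) window of channel indices whose starting position
    advances with the communication round, so every global parameter gets
    trained evenly over time.
    """
    k = max(1, int(capacity_frac * num_channels))
    return [(round_idx + j) % num_channels for j in range(k)]

# A 1/4-capacity client on a layer with 8 output channels:
print(rolling_submodel_indices(0, 8, 0.25))  # [0, 1]
print(rolling_submodel_indices(1, 8, 0.25))  # [1, 2]
```

In this picture, HeteroFL would correspond to a window fixed at the same starting index in every round and Federated Dropout to a freshly random index set per round, which is the "statically or randomly" contrast drawn in the reviews above.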
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 5, 8, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 5, 4 ]
[ "FLVZlP3uMAY", "8YlqFSRnPnX", "E9XjD4CZ-Xe", "MH5WBbl2PEc", "lHzfU7_OEtN", "8YlqFSRnPnX", "CznujbdzLMH", "lHzfU7_OEtN", "lHzfU7_OEtN", "CznujbdzLMH", "CznujbdzLMH", "MH5WBbl2PEc", "MH5WBbl2PEc", "MH5WBbl2PEc", "MH5WBbl2PEc", "MH5WBbl2PEc", "MH5WBbl2PEc", "8YlqFSRnPnX", "8YlqFSRnPnX", "8YlqFSRnPnX", "nips_2022_OtxyysUdBE", "nips_2022_OtxyysUdBE", "nips_2022_OtxyysUdBE", "nips_2022_OtxyysUdBE" ]
nips_2022_ZVuzllOOHS
Differentially Private Covariance Revisited
In this paper, we present two new algorithms for covariance estimation under concentrated differential privacy (zCDP). The first algorithm achieves a Frobenius error of $\tilde{O}(d^{1/4}\sqrt{\mathrm{tr}}/\sqrt{n} + \sqrt{d}/n)$, where $\mathrm{tr}$ is the trace of the covariance matrix. By taking $\mathrm{tr}=1$, this also implies a worst-case error bound of $\tilde{O}(d^{1/4}/\sqrt{n})$, which improves the standard Gaussian mechanism's $\tilde{O}(d/n)$ for the regime $d>\widetilde{\Omega}(n^{2/3})$. Our second algorithm offers a tail-sensitive bound that could be much better on skewed data. The corresponding algorithms are also simple and efficient. Experimental results show that they offer significant improvements over prior work.
Accept
The reviewers all concurred that the main result of this paper is quite interesting. It privately estimates the covariance better than established methods in particular parameter regimes. Given the clear accept sentiments towards this paper, there was little additional discussion.
train
[ "Eo1OYFaFH4", "t-WwuMZbvD", "4BtpOocJBxg", "lMKnkDLSJfX", "y14RQa8c6Fu", "Xk_LWa8zMbS", "2ThMVST8Wq3", "abDYpy77f1M", "NE3Ccu3NM_", "fe0mhJOY9JZ", "HR9NEXNKvIu", "mC6-yuQ0c7-" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your helpful comments!\n\nQuestion 2: This is an interesting observation! Yes, we can use simhash to transform a vector from the unit sphere to one in $\\\\{-1/\\sqrt{d},1/\\sqrt{d}\\\\}^d$ and then apply the mechanism for the 2-way marginal problem. That does work but it is unclear what the error is: covariance is computed from the cross products of the vectors and it seems unclear how much their cross products differ since simhash does some ''rounding''. We will add this discussion in our paper.\n\nWeakness 2: We agree. We will discuss this a bit more and include the discussion from our previous response.\n\nAny further comments are welcomed!", " Thank you for your helpful comments and clarifications! Now, we understand your questions better and will follow your suggestions.\n\nQuestion 1: We agree an explicit theorem will be better for showing our result for the worst-case scenario and will add one in Section 1.1.\n\nLimitation 1: We agree. We should more explicitly define these terms in the paper by adding one line for each.\n\nLimitation 2: We will discuss the differences between the statistical setting (such as [KLSU19] and [a]) and empirical setting, as well as how statistical lower bounds imply empirical lower bounds. \n\nLimitation 3: We will point out that the trace-sensitive result can be obtained by setting $\\tau=1$ in the tail-sensitive bound, thus the former is strictly no better than the latter. Also, we will rephrase the example to provide more context, such as adding more descriptions on the example's data distribution and how the clipping works on it.\n\nAny further comments are welcomed!\n\n[a] Kamath, G., Mouzakis, A., & Singhal, V. (2022). New Lower Bounds for Private Estimation and a Generalized Fingerprinting Lemma. arXiv preprint arXiv:2205.08532.", " Thanks for the clarifications!\n\nRegarding question 2, it is unclear to me how much generality is lost by considering vectors from {$-1/\\sqrt{d}, 1/\\sqrt{d}$}$^d$. After all, we can use simhash to construct vectors in {$-1/\\sqrt{d}, 1/\\sqrt{d}$}$^d$ whose dot products are in a 1-1 correspondence with the dot products of the original, normalized vectors, so possibly there is an efficient reduction of the general case to 2-way marginals. Though the direct approach presented in this paper may be preferable, it would be good to discuss this alternative approach.\n\nIt makes sense to not include results against CoinPress, but again it would be good to discuss this a bit more.", " Thank you for your attempt to provide the clarifications!\n\nRegarding question 1: I understand the math that setting $tr=1$ would do the trick for the worst-case scenario. My point is that an explicit theorem/corollary about it somewhere would be nice because that's one of your main results. It's more of a question about your presentation than about the result itself.\n\nRegarding limitation 1: I don't think my point was clear here. I'm essentially asking the authors to define the terms \"worst-case\", \"trace-sensitive\", and \"tail-sensitive\" in the main body of the paper **explicitly**. What do those settings mean? Sure, I can read the math eventually, and figure out what is going on, but it is generally not a good idea to force the reader to search around in your paper for their meanings, especially when there is not explicit definition. Again, it is a comment about the presentation, and not about the results themselves. I hope that's clear now!\n\nRegarding limitation 2: I think what I'm saying is again not clear. 
The setting you have in your paper is where the rows (or the trace) have bounded norm of $1$. [KLSU19] does not have that assumption -- their rows (when scaling down the range-bounds on the covariance matrix of the Gaussian) will have similar, *high-probability* bounds on the norm. Sure, I understand there are standard reductions, but both these things need to be said somewhere. I mentioned this point as a \"limitation\", rather than as a \"weakness\", because your statement is very open to misinterpretation without any clarification, and has nothing to do with the quality of your own results.\n\nRegarding limitation 3: I'm not sure if it is okay to dismiss something as \"obvious\". If something is confusing to (or has the potential to confuse) the reader, then by definition, it is *not* obvious. It can never hurt to improve the presentation of the manuscript because the goal is not just to establish who did what first, but in the interest of science, it is also to spread and convey that knowledge successfully. Obviously, I'm not entitled to any improvement in your manuscript, but I'm just doing my job as a reviewer here. Please, see my comment about limitation 1 for this. In that whole example, I'm simply suggesting to give more context of the problem, as opposed to just stating numbers and expressions.\n\nI appreciate the authors responding to my comments/questions. I hope that they do make an attempt to understand my perspective here. Would be happy to clarify anything else.", " Weakness 1: Yes, we refer to mean estimation with l2 error and we will clarify it.\n\nWeakness 2: The dependency on the privacy parameters is explicitly given in the formal theorems, but omitted in the introduction for brevity.\n\nWeakness 3/Question 3: We use zCDP, whose parameter is $\\rho$ (we give the values of $\\rho$). As mentioned in the experiment section, this can be converted to $(\\varepsilon,\\delta)$-DP via standard formulas in line 122-123.\n\nWeakness 4: Yes, the error has a linear dependency on $R^2$ and this is optimal. When data is scaled by $R$, covariance is scaled by $R^2$. Suppose one can achieve a lower dependency, say $R^{1.5}$. Then, given a dataset in the unit ball, we scale it up to $R$, apply this algorithm, and scale the result down. This would improve the result for the original dataset by a factor $R^{-0.5}$.\n\nWeakness 5/Question 1: Thank you for your careful review! This is a good observation and is because AdaptiveCov chooses the better of GaussCov and SeparateCov, but this decision has to be done privately, thus incurring a constant-factor gap. For a few datasets, AdaptiveCov may choose the worse one (up to a constant factor).\n\nQuestion 2: All GaussCov, EMCov, SeparateCov, and AdaptiveCov have the $\\varepsilon^{-1}$ ($\\rho^{-1/2}$) dependency.\n\nQuestion 3: The values for $\\rho$ we use are in the range [1e-4,1e1], which covers the range [0.005, 0.5] used by CoinPress (NeurIPS'20), and roughly covers the range implied by the $\\varepsilon$'s used in experiments of EMCov (NeurIPS'19).", " Weakness 1/Question 2: Thank you for mentioning that the 2-way marginal problem is a special case of the covariance problem where the input vectors are from $\\\\{-1/\\sqrt{d},1/\\sqrt{d}\\\\}^d$. The best result for this problem achieves an l2 error of $\\tilde{O}(\\min(d/n, d^{1/4}/\\sqrt{n}))$ [b], which matches our worst-case result (i.e., the better of GaussCov and SeparateCov). 
The differences are that (1) their algorithm cannot handle vectors from the unit ball; (2) [b] has the computational complexity $O(nd^3)$ while ours is $O(nd^2+d^3)$; (3) there are no \"trace-sensitive\" or \"tail-sensitive\" algorithms for the 2-way marginal problem as all input vectors have the same norm. We should definitely include this comparison in the paper.\n\nWeakness 2: The two distance measures coincide when $\mathbf{\Sigma}$ is well-conditioned, but in this case, CoinPress degenerates into GaussCov. For other cases, we do have the experimental results for CoinPress as well, but just omitted them from the paper as CoinPress is significantly worse than the other methods.\n\nWeakness 3/Question 3: For evaluation of the algorithms, we choose two datasets that are different in nature (one containing image data, the other containing news data). We briefly described the potential use cases in lines 298-305.\n\nQuestion 1: Please note that we assume the vectors are in the unit ball, not necessarily of unit length. And for vectors with lengths in some fixed range $R$, they can be scaled down to have length at most $1$ and the error will depend on $R^2$.\n\nComment 2: Sorry for the typo, we mean the scientific notation 1e-10, which is equal to $10^{-10}$. Thanks for pointing it out!\n\n[b] Dwork, C., Nikolov, A., & Talwar, K. (2015). Efficient algorithms for privately releasing marginals via convex relaxations. Discrete & Computational Geometry, 53(3), 650-673.", " The paper considers the problem of privately estimating the covariance matrix that corresponds to a dataset. 
Two algorithms are given, with each of them attaining better performance depending on the regime. The first algorithm is *trace sensitive*, in the sense that the error rate depends on the trace of the sample covariance matrix, which is equal to $\\frac{1}{n} \\sum_{i = 1}^n ||X_i||^2$. This algorithm yields a new worst-case upper bound on the error rate of $\\frac{d^{\\frac{1}{4}}}{\\sqrt{n}}$ when $n^{\\frac{2}{3}} \\le d \\le n^2$. The second algorithm is a *tail-sensitive* algorithm, in the sense that the error rate becomes worse when we have more data-points whose $\\ell_2$-norm is close to $1$ (all points are assumed to exist in the unit ball). Strength: The problem considered in this work is central to private statistics. Out of the $2$ algorithms proposed in the paper, the one that I found of greater interest was the tail-sensitive one. I found the way SVT was employed in the context of the algorithm to be quite interesting.\n\nWeaknesses: I did not identify any obvious flaws or weaknesses. None for the time being. The paper is a theoretical work and has no obvious societal impact.", " This paper revisits the problem of differentially private (DP) covariance estimation. It provides bounds for three settings of the problem: (1) worst-case setting, (2) trace-sensitive setting, and (3) tail-sensitive setting. Several improvements are made for certain parameter regimes and types of DP notions over the prior work on this problem, and the algorithms provided are computationally efficient.\n\nIn the worst-case setting, in the high-dimensional case ($d>n^{2/3}$), a separation between the bounds is shown between pure DP and zCDP.\n\nIn the trace-sensitive setting, improvements in both pure DP and zCDP are shown via algorithms that are also more computationally efficient than those of the prior work.\n\nThe tail-sensitive setting also sees some improvements.\n\nBoth theoretical and experimental results are provided in this work, with the latter showing improvements on both synthetic and real-world datasets.\n\nEdit: Updated my score. Strengths:\n1. Improvements are made upon the prior work in all the three listed settings, which I would say is important. Polynomial improvements in the results over prior work in the listed parameter regimes look quite decent, I think.\n2. The separation in the worst-case setting for the high-dimensional setting between pure DP and zCDP regimes is somewhat significant. It means that for a decent range of parameters, there are newer and better tools at our disposal. I'm not fully convinced though that this regime is the most important one in terms of the significance of the contributions.\n3. The algorithms (SeparateCov and AdaptiveCov) look simple enough, and easy to understand. It is surprising that such simple algorithms are able to improve upon existing work.\n4. The experimental results are also in favour of their work for both synthetic and real-world datasets (MNIST and news commentary). In each, different variations are considered according to different parameters like $n$, $tr$, $d$, and $\\rho$. AdaptiveCov seems more consistent in the experiments on the latter maybe because it seems to be choosing the best of (truncated) SeparateCov and GaussCov.\n\nWeaknesses:\n1. My main complaint with this work is the writing quality. In many places, the notations are unclear, broad statements are made, and things are ill-defined. I will elaborate all of these in later sections of this review.\n2. 
I'm a little concerned that in the trace-sensitive setting, the SeparateCov algorithm is not able to improve upon the existing work for all settings. For lower trace, SeparateCov is better than GaussCov, but otherwise, it is not. So, it seems to be of limited utility to some extent.\n3. The above seems to be a general concern for me for the entire paper, to be honest. Yes, for a decent range, the worst-case bounds are nicer and give a separation between two forms of DP. I'm not sure how strong the results are owing to these constraints on the dimension. I do have a few questions.\n\n1. Where are the results for the worst-case setting? I would expect a theorem or something in the main body or somewhere about it. I read a subsection about it in the introduction, but that's pretty much it.\n2. Section 1.2, Line 39. What is $\\Sigma$? Are you defining a notation here, or referencing something else?\n3. Line 41. Is $A = (X_1,\\dots,X_d)$? Or is $A$ the normalised, empirical covariance matrix? Please, be clear about your notations! It's not the responsibility of the reader to infer them. 1. Please, explain what you mean by things like \"worst-case\", \"trace-sensitive\", and \"tail-sensitive\". These terms don't mean anything by themselves, as far as I know. It may be obvious to the authors, but even a one-liner for each in the main body would make it so much easier to distinguish among these settings. It is a little unpleasant as a reviewer and a reader, to be getting into the manuscript without knowing what the context is.\n2. The first part of Lines 29-30 is a bit misleading. The lower bound from [KLSU19] is under a distributional assumption of a Gaussian with an almost identity covariance. The setting in this paper seems more empirical and under bounded norm assumption for the data.\n3. In the second half of Section 1.3 (mostly from Line 75 onward), things don't make a lot of sense here without any context about what these settings are. The results are just stated without any meaning.\n4. In the first paragraph of Section 2, there are really broad statements about the prior work without stating any specific error metrics and range bounds for the data. I would be more specific here.\n5. In the related work about low-rank approximations under DP, I would also state the recent work of Singhal-Steinke 2021 on private subspace estimation.\n6. Also, for the citation of DP with Definition 1, along with [DR14], please use [DMNS06]. The latter is the most important citation for the subject.", " The paper gives improved error bounds for releasing an approximation of a covariance matrix $\\Sigma$ under approximate differential privacy/zCDP, breaking lower bounds that hold under pure DP for dimension d between n^2/3 and n^2, where n is the number of data points. (Privacy is relative to changing one data point, a d-dimensional *unit vector*.) The new bounds improve both existing worst-case bounds and bounds expressed in terms of the trace of the covariance matrix (generalizing the worst-case bounds) measured in terms of *Frobenius norm* of the error.\n\nThe approach is surprisingly simple, and also more computationally efficient than previous work: Release the eigenvalues with DP by adding noise to the exact eigenvalues. Estimate the eigenvectors by doing an eigendecomposition of the covariance matrix made private using the Gaussian mechanism. Then put everything together. 
(The paper also has a more sophisticated approach that improves on the simple method in a data-dependent way by carefully clipping data.)\n\nPracticality of the methods is investigated through experimental comparison with the Gaussian mechanism and EMCov (NeurIPS '19). Good improvements are demonstrated on synthetic data as well as two real-world datasets. Strengths\n* A polynomial improvement in error (for sufficiently large dimension) compared to EMCov (NeurIPS '19) as well as the Gaussian mechanism\n* The mechanism is very neat --- it is surprising that it pays off to estimate eigenvalues and eigenvectors separately in different ways\n* Experimental evidence of practicality\n\nWeaknesses\n* It is not clear how the bounds achieved relate to what can be achieved using general query release methods, for example simhash combined with methods for release of 2-way marginals\n* There is no experimental comparison to CoinPress (NeurIPS '20), since CoinPress optimizes a Mahalanobis distance and in general performs poorly in terms of Frobenius error; however there are covariance matrices for which these distance measures coincide, and where a comparison would make sense\n* Arguably, for the real-world data sets used in experiments it does not seem natural to be interested in computing the covariance matrix\n* You assume vectors are unit length --- can you say something about to what extent this is a limitation of the approach, or whether (say) you can generalize to vectors with lengths in some fixed range?\n* In the case of vectors from {$-1/\sqrt{d}, 1/\sqrt{d}$}$^d$ one can apply results on release of 2-way marginals to estimate the covariance matrix. How do your results compare to that approach?\n* Do you have a use case/justification why one might be interested in the covariance matrix for the data sets you study? If not, are there other data sets with a more compelling use case you could try?\n\nComments\n* It would be good to describe GaussCov together with Algorithm 1, rather than hidden in the preliminaries\n* Line 263: Do you mean $10^{-10}$ rather than $1e^{-10}$?\n I am satisfied with the paper discussion of limitations. Since this is a very general question it seems unlikely that there are negative societal impacts (and the authors do not discuss this).", " The authors consider the problem of releasing the empirical covariance matrix under concentrated differential privacy. Under a boundedness assumption on the l2 norms of the data points, they propose a new algorithm for the problem and show theoretically that it has a better error bound compared to the previous work in the regime where d is very large. They also give a trace-sensitive guarantee to exploit the situations where an average point has a small norm, as well as a tail-sensitive guarantee. \n\nThe first algorithm is a combination of learning the eigenvalues and running PCA. I believe both of these individual steps are known in the literature. The tail-sensitive result has more subtleties in order to pick the right threshold to clip the farther points. Strength:\n\n+ contributions to a well-defined and well-motivated theoretical problem in the DP literature\n+ simple and intuitive algorithm which is implemented by the authors\n+ empirical results demonstrating the effectiveness of the method\n\nWeaknesses:\n\n+ The problem of mean estimation has been mentioned throughout the text many times but the exact definition is missing. Do you mean estimation w.r.t. l2 error? 
It will help to define the problem of mean estimation and even covariance estimation formally.\n+ The error bounds are only stated in terms of dimension and number of samples. The dependence on the privacy parameter is not explicit in these results.\n+ As far as I can tell, the experimental results don't mention the value of epsilon either\n+ It is assumed that all data points lie in a unit l2 ball. If we relax this to a ball of radius R, then I believe the authors' error bound grows linearly with R. It is not clear if this linear dependence is optimal. Some discussion of the dependence on R will be useful. \n+ In some experiments the error increases with the number of samples, which seems odd.\n + It is somewhat strange that in some experiments the Frobenius error increases with n. Can you elaborate on that?\n\n+ How does your method compare to the previous work in terms of dependence on the privacy parameters?\n\n+ How did you pick epsilon in the experiments?\n\n N/A" ]
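The review above summarizes the paper's first algorithm in one line: privatize the eigenvalues by adding noise directly to them, take the eigenvectors from an eigendecomposition of a Gaussian-mechanism-perturbed covariance matrix, and recombine. A minimal numpy sketch of that structure is given below; the noise scales `sigma_eig` and `sigma_cov` are left as placeholders because their calibration to a zCDP budget (and the exact post-processing of noisy eigenvalues) is the part of the paper not reproduced here.

```python
import numpy as np

def separate_cov_sketch(X, sigma_eig, sigma_cov, rng=np.random.default_rng()):
    """Illustrative SeparateCov-style estimator (noise calibration omitted).

    X: (n, d) array whose rows are assumed to lie in the unit l2 ball.
    """
    n, d = X.shape
    S = X.T @ X / n                                   # empirical covariance
    lam = np.linalg.eigvalsh(S)                       # exact eigenvalues (ascending)
    lam_priv = lam + rng.normal(0.0, sigma_eig, d)    # noisy eigenvalue release
    E = rng.normal(0.0, sigma_cov, (d, d))
    E = (E + E.T) / 2                                 # symmetric Gaussian-mechanism noise
    _, P = np.linalg.eigh(S + E)                      # eigenvectors of the noisy matrix
    lam_use = np.sort(np.clip(lam_priv, 0.0, None))   # keep nonnegative, match ascending order
    return (P * lam_use) @ P.T                        # put everything together
```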
[ -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "4BtpOocJBxg", "lMKnkDLSJfX", "Xk_LWa8zMbS", "2ThMVST8Wq3", "mC6-yuQ0c7-", "HR9NEXNKvIu", "fe0mhJOY9JZ", "NE3Ccu3NM_", "nips_2022_ZVuzllOOHS", "nips_2022_ZVuzllOOHS", "nips_2022_ZVuzllOOHS", "nips_2022_ZVuzllOOHS" ]
nips_2022_tHK5ntjp-5K
LION: Latent Point Diffusion Models for 3D Shape Generation
Denoising diffusion models (DDMs) have shown promising results in 3D point cloud synthesis. To advance 3D DDMs and make them useful for digital artists, we require (i) high generation quality, (ii) flexibility for manipulation and applications such as conditional synthesis and shape interpolation, and (iii) the ability to output smooth surfaces or meshes. To this end, we introduce the hierarchical Latent Point Diffusion Model (LION) for 3D shape generation. LION is set up as a variational autoencoder (VAE) with a hierarchical latent space that combines a global shape latent representation with a point-structured latent space. For generation, we train two hierarchical DDMs in these latent spaces. The hierarchical VAE approach boosts performance compared to DDMs that operate on point clouds directly, while the point-structured latents are still ideally suited for DDM-based modeling. Experimentally, LION achieves state-of-the-art generation performance on multiple ShapeNet benchmarks. Furthermore, our VAE framework allows us to easily use LION for different relevant tasks: LION excels at multimodal shape denoising and voxel-conditioned synthesis, and it can be adapted for text- and image-driven 3D generation. We also demonstrate shape autoencoding and latent shape interpolation, and we augment LION with modern surface reconstruction techniques to generate smooth 3D meshes. We hope that LION provides a powerful tool for artists working with 3D shapes due to its high-quality generation, flexibility, and surface reconstruction. Project page and code: https://nv-tlabs.github.io/LION.
Accept
This paper proposes a latent point diffusion model, LION, for 3D shape generation. The model builds two denoising diffusion models in the latent spaces of a variational autoencoder. The latent spaces combine a global shape latent representation with a point-structured latent space. Comprehensive experiments are conducted to evaluate the performance of the proposed method. The authors address the major concerns of the reviewers and strengthen the paper by providing additional empirical results. After the rebuttal, all four reviewers reach an agreement on accepting the paper because of the novelty and the state-of-the-art performance. The AC agrees with the reviewers and recommends accepting the paper.
train
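The rebuttals below discuss LION's use of the discrete-time diffusion formalism, where the diffusion step is sampled from a discrete uniform distribution over the steps. For orientation, the standard simplified epsilon-prediction objective of discrete-time DDMs (Ho et al., 2020) that this refers to is sketched here; LION's exact weighting and parameterization for its two latent diffusion models may differ.

```latex
% Standard discrete-time denoising objective; simplified weighting assumed.
\[
\min_{\theta}\; \mathbb{E}_{t \sim U\{1,T\},\, x_0,\, \epsilon \sim \mathcal{N}(0,I)}
\left[\, \big\| \epsilon - \epsilon_{\theta}\big(\sqrt{\bar{\alpha}_t}\, x_0
+ \sqrt{1-\bar{\alpha}_t}\,\epsilon,\; t\big) \big\|_2^2 \,\right],
\qquad \bar{\alpha}_t = \prod_{s=1}^{t} (1-\beta_s).
\]
```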
[ "AGFCoCOUXbF", "Z-c0HikpD3", "1SnmvizqK5X", "fr8AbMwPJ_T", "vaoqfL-FGi", "42-dopnqLI", "efSATmyxu-m", "KVvxm0gGp7H", "dDQRjW9rV2N", "mqKwAsApmjT", "Ogc4trMz9lC", "zWJJutKKt4Z", "TsorGO7i9t", "YW6ROs7TV4", "eF1sj5tD47e", "FmxngAJUJTA", "UbrkT1OPUT", "Ev7OK1CGc8", "S_HLpYr_mOu" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " The rebuttal alleviates my concerns and also answers other reviewers' questions. So I keep my attitude as borderline accept. \n", " After reading the authors' clarifications, I think their comments have resolved most of my concerns. I'm inclined to accept this paper and thus keep my original rating.", " Thanks authors for the updates (particularly the experimental results on more classes), and the clarifications. I remain in favor of acceptance, and have upgraded my rating accordingly.", " Dear Reviewers,\n\nWe would like to kindly remind you that the author-reviewer discussion period ends in 3 days. It would be great if you could have a look at our replies and additional experiments and let us know what you think and whether our replies are satisfactory or whether there are any further follow-up questions. We would appreciate any feedback and would be happy to further discuss.\n\nThank you very much,\n\nThe authors of “LION: Latent Point Diffusion Models for 3D Shape Generation”", " Finally, we would like to emphasize again that we added several additional results to the paper to make it overall stronger. We would like to point the reviewer to the additional message/comment that we sent to all reviewers for a more detailed overview over these experiments. Here, we summarize the most interesting additional experiments:\n- **Appendix F.2:** We are now running LION also on all 55 ShapeNet classes jointly without any conditioning. The qualitative results demonstrate that LION can even be trained on such highly diverse and multimodal data and still generate high-quality outputs with excellent mode coverage. To the best of our knowledge, there is no previous 3D generative model that successfully trains on such diverse 3D data and generates reasonable outputs.\n- **Appendices F.3 and F.4:** We also trained LION on the mug and bottle ShapeNet classes (149 and 340 training shapes, respectively) as well as 400 animal assets from the Turbosquid (https://www.turbosquid.com/) dataset. In all cases, LION can reliably generate coherent shapes. The experiment demonstrates that LION can also be trained on very small datasets.\n- **Appendix F.5:** To demonstrate the value to artists of being able to synthesize meshes and not just point clouds, we consider a downstream application: We apply Text2Mesh [49] on generated meshes from LION to additionally synthesize textures in a text-driven pers-sample manner, leveraging CLIP. This is only possible because of our SAP-based mesh reconstruction.\n- **Appendix F.6:** To demonstrate LION’s extendibility to even more tasks, we also trained a LION model where its latent diffusion models are conditioned on CLIP image embeddings, inspired by CLIP-Forge [34]. This allows LION to generate shapes based on text prompts, leveraging CLIP’s text encoder. Using CLIP’s image encoder, this additionally allows LION to infer and reconstruct 3D shapes from images.\n\n*If our reply is satisfactory and the additional results are appealing, we would like to kindly ask the reviewer to consider raising their score accordingly. Otherwise, we will be happy to further discuss. Thank you!*\n", " - **No End-to-End Training:** We agree with the reviewer that it would be very interesting to explore end-to-end training of the entire model, potentially including both the two latent space diffusion models, and also the SAP shape reconstruction method. As previous work (LSGM) [58] has noted, however, such end-to-end training comes with additional challenges. 
In LSGM, one of the main reasons for their end-to-end training is improved performance. However, we already obtain state-of-the-art performance even with separate training stages. Hence, we do not consider this aspect of our model a limitation or weakness. Training the different components separately is simpler from the user perspective, and is less memory- and compute-expensive, which will hopefully encourage adoption of our method. That said, we agree with the reviewer. It would be interesting to explore this in future work and potentially improve performance even further. Similarly, it would also be interesting to explore training end-to-end with SAP. Note, though, that we are already fine-tuning SAP on LION, thereby achieving a tight coupling between the two models, even though they are trained one after another. We will add a brief discussion on this in the final version of the paper.\n- **Question about how computationally expensive the model is:** Generating a point cloud sample (with 2048 points) from LION takes 27.12 sec., where 4.04 sec. are used in the shape latent diffusion model and 23.05 sec. in the latent points diffusion model prior. Optionally running SAP for mesh reconstruction requires an additional 2.57 sec. For single-class LION models, the total training time is 550 GPU hours (110 GPU hours for training the backbone VAE; 440 GPU hours for training the two latent diffusion models). We added the numbers in Appendix F.10.\n- **Question about discussing discrete-time diffusion formalism:** We believe that there is a misunderstanding: In fact, in all our experiments, we use the discrete-time formalism both during training and sampling (for instance, in our training objectives, we sample from discrete uniform distributions U{0,T} over the steps, as indicated). There is only a single qualitative experiment, where we convert the model into a continuous time model, this is, when we perform shape interpolation, where we require deterministic generation paths based on the probability flow ODE. Other than that our paper entirely relies on the discrete-time formalism, which is why we focused on that in our background section. If the reviewer would be able to point out what specifically led to this misunderstanding, we will be happy to improve our presentation accordingly.\n- **Limitations:** We are happy to more extensively discuss our method’s limitations. For now, we added a discussion in Appendix F.11. In the final paper, when there is an additional page available for the main text, we may bring this into the main paper. In summary, additional limitations of LION beyond slow sampling are that it cannot directly generate textured shapes out of the box. Furthermore, it relies purely on geometry-based training and currently cannot profit from image-based training with differentiable rendering. Also, LION currently focuses on single object generation only. It would be interesting to extend it to full 3D scene generation (LION has these limitations in common with all the relevant baselines).", " We would like to thank the reviewer for their positive feedback and for appreciating our novel and sensibly-motivated method as well as both our qualitative and quantitative results. We are also glad that our writing came across as clear and that our detailed appendix is appreciated. 
We are happy the reviewer enjoyed reading our manuscript.\n\nBelow, we reply to the individual points and questions raised by the reviewer (citations correspond to paper bibliography):\n- **Technical Contributions:** It is correct that models with DDMs in latent space exist in the image-modeling literature, and that DDMs have generally been used for point cloud generation before. However, models with an architecture like ours with two complementary DDMs that both operate in latent space have not been used before, neither for images nor 3D shapes. Furthermore, the idea of latent DDMs has never been used in 3D generation at all, to the best of our knowledge, and it is generally not obvious how to extend it to 3D in the point cloud case and how to structure the latent space. To this end, we introduce the concept of latent points and combine it with a complementary vector-valued latent variable. Hence, we believe our architecture is novel and the experimental results clearly support that our design choices are crucial: We outperform all baselines, achieve state-of-the-art results, and demonstrate that with this architecture we can scale to extremely diverse shape datasets, like modeling 13 or even 55 (see below) ShapeNet categories jointly without conditioning. These cases represent highly complex and multimodal distributions that need to be learnt, thereby stress-testing LION’s scalability. In particular on these diverse tasks we outperform previous DDM-based models for point clouds [46,47] by huge margins (Table 16). This implies the importance and clear superiority of our novel design, rendering it a relevant technical contribution.\n\n Furthermore, even though we did not train Shape As Points (SAP) end-to-end with LION, we are still fine-tuning SAP on LION. In fact, another technical contribution of our work is to demonstrate how to best combine point cloud generative models with a mesh reconstruction method like Shape As Points (SAP). In particular, to be more robust against the specific kind of noise that is present during synthesis from LION we show how to fine-tune SAP on point clouds augmented with our proposed *diffuse-denoise* technique (this fine-tuning makes a significant difference; see newly added Fig. 14). Overall, we think that demonstrating how to combine SAP with LION is in fact a novel and relevant contribution, and this has never been done before for any point cloud generation method, to the best of our knowledge. Due to the simplicity and relevance (digital artists use meshes in practice, after all), we believe that this idea will be adopted by future works.\n\n- **Diversity in Experiments:** Our main experiments are run on three ShapeNet classes, simply because the majority of previous work focuses on these three individual classes and we wanted to compare to a broad set of previous methods. However, we also run experiments where we jointly model 13 ShapeNet classes without conditioning, which represents a challenging and diverse distribution (see Sec. 5.2). We did this to demonstrate LION’s scalability to much harder and more diverse shape datasets. Hence, we did on purpose not use class conditioning, to make the task difficult. We now added several more experiments: (i) One experiment where we even jointly model all 55 ShapeNet classes, again without class-conditioning (see Appendix F.2, samples in Fig. 33). We see that LION can generate good output samples even when trained on such complex and diverse data. 
To the best of our knowledge, no previous 3D generative model has demonstrated such scalability. Importantly, the model also still generates data from very rare categories, like the cap class, which contributed only 39 training samples. This validates that LION does not drop modes and faithfully learns the data distribution. (ii) Next, we train on 2 individual ShapeNet classes that contain only very few samples as training data. We train on the mug (149 train samples) and bottle (340 train samples) classes, and can still generate good outputs (see Appendix F.3). (iii) We also used 400 animal assets from the TurboSquid (https://www.turbosquid.com/) dataset and trained a model on those. We again can generate satisfactory outputs (Appendix F.4). These additional experiments validate that LION can be trained both on highly complex diverse data distributions and also on small-scale data.\n", " - **Numbers in ablation experiments not comparable to [46,47]:** We believe that our ablations are run in a fair manner. In our ablation experiments in Tables 10 and 11, we always used the same hyperparameters, compute budgets, etc., to be able to clearly filter out the effect of only removing individual model components. The 4th and 7th lines in Table 10 actually approximately correspond to PVD-like models [46] in terms of the overall architecture (i.e. no more latents). Our full model's improved performance compared to these lines can, therefore, only be due to the additional model components and architecture innovations, i.e. the shape latents and the point latents with their corresponding diffusion models. Furthermore, it is expected that these lines do not exactly correspond to performance values reported in [46,47], because the works may have been trained with differently tuned hyperparameters, different numbers of network parameters, for a longer time, etc. However, in these ablations we wanted to run fair internal comparisons for our model. Finally, comparing the performance of LION to [46,47], we would like to point to the results of the 13-class experiment (Table 16). We are outperforming both baselines by a huge margin here, and the main difference compared to PVD [46] in these experiments is essentially our hierarchical VAE structure, as discussed. Consequently, we conclude that it is this architectural novelty that is responsible for the much stronger state-of-the-art performance.\n- **Novelty regarding Shape-As-Points:** It is true that using Shape As Points (SAP) on LION's output can be considered a relatively simple trick to extract meshes from the generated point cloud. However, we believe that this is significant and impactful, precisely because it is simple, but also highly relevant for practitioners (digital artists use meshes in practice, after all). Moreover, we are in fact not naively applying SAP on the generated point clouds, but we propose to fine-tune SAP on the LION generations, which makes a significant difference (see newly added Fig. 14). In particular, to be more robust against the specific kind of noise that is present during synthesis from LION we show how to fine-tune SAP on point clouds augmented with our proposed *diffuse-denoise* process. Overall, we think that demonstrating this capability is in fact a novel and relevant contribution, and this has never been done before for any point cloud generation method, to the best of our knowledge. 
Due to the simplicity and relevance, we believe that this idea will be adopted by future works.\n\nWe will try to further clarify these points in the final version of the paper, when we have an additional page available for the main text.\n\nFinally, we would like to mention that we added several additional results to the paper to make it overall stronger. We would like to point the reviewer to the additional message/comment that we sent to all reviewers for a more detailed overview over these experiments. Here, we summarize the most interesting additional experiments:\n- **Appendix F.2:** We are now running LION also on all 55 ShapeNet classes jointly without any conditioning. The qualitative results demonstrate that LION can even be trained on such highly diverse and multimodal data and still generate high-quality outputs with excellent mode coverage. To the best of our knowledge, there is no previous 3D generative model that successfully trains on such diverse 3D data and generates reasonable outputs.\n- **Appendices F.3 and F.4:** We also trained LION on the mug and bottle ShapeNet classes (149 and 340 training shapes, respectively) as well as 400 animal assets from the Turbosquid (https://www.turbosquid.com/) dataset. In all cases, LION can reliably generate coherent shapes. The experiment demonstrates that LION can also be trained on very small datasets.\n- **Appendix F.5:** To demonstrate the value to artists of being able to synthesize meshes and not just point clouds, we consider a downstream application: We apply Text2Mesh [49] on generated meshes from LION to additionally synthesize textures in a text-driven pers-sample manner, leveraging CLIP. This is only possible because of our SAP-based mesh reconstruction.\n- **Appendix F.6:** To demonstrate LION’s extendibility to even more tasks, we also trained a LION model where its latent diffusion models are conditioned on CLIP image embeddings, inspired by CLIP-Forge [34]. This allows LION to generate shapes based on text prompts, leveraging CLIP’s text encoder. Using CLIP’s image encoder, this additionally allows LION to infer and reconstruct 3D shapes from images.\n\n*If our reply is satisfactory and the additional results are appealing, we would like to kindly ask the reviewer to consider raising their score accordingly. Otherwise, we will be happy to further discuss. Thank you!*", " We would like to thank the reviewer for their positive feedback and for appreciating our state-of-the-art experimental results, our exhaustive evaluation, and our exposition.\n\nBelow, we reply to the individual points raised by the reviewer (citations correspond to paper bibliography):\n- **Novelty:** First, we would like to point out the methodological novelty compared to PVD [46] and DPM [47]. PVD [46] trains a single diffusion model directly in the point cloud space. In contrast, we are training two diffusion models in a latent space in a hierarchical fashion. This has crucial advantages: As was shown in LSGM [58], it is easier to train diffusion models in smoothly regularized latent spaces together with additional encoder and decoder networks and this can translate to improved expressivity. But most importantly, our hierarchy is crucial: The vector-valued global shape latent essentially allows LION to switch between different modes of the data distribution, whose details are then modeled by the other point cloud latents. 
In that context, the architecture of how this is implemented is important, this is, the hierarchical conditioning via adaptive group normalization.\n\n Furthermore, [47] also employs a latent variable, but trains (i) a Normalizing Flow in this latent space instead of a diffusion model and then (ii) uses a very weak diffusion model-based decoder for the point cloud output. This weak diffusion model operates on a per-point basis, thereby behaving entirely differently compared to the diffusion models in LION and PVD [46]. It is ultimately our novel architecture that results in our state-of-the-art results (also see discussion below).\n- **Ablation Experiments:** We will be happy to move Tables 10 and 11 into the main paper in the final version, when we have an additional page available. We agree that these are important experiments. We would also like to point out that we accidentally copied incorrect results into Tables 10 and 11 for our full model, thereby contributing to the confusion. We have now updated these tables and the numbers for the full model are now the same ones as for our LION in the main results Table 13.\n- **Performance Boost by Shape Latents:** The ablation experiment in Table 10 was designed to unambiguously show the effects of the different model components, like the shape latents. Removing the shape latents (line 2), performance drops, even when increasing the number of overall network parameters to compensate and make the comparison fair (line 5). However, as pointed out above, the shape latents’ main job is to switch between modes in the data when training on highly diverse and multimodal shape data, like in our experiment where we jointly train on 13 different ShapeNet classes without conditioning. In that case, it becomes evident that the shape latents encode crucial global shape information (see Figs. 7 and 25, where we keep the shape latent constant). However, to save computational resources the ablation experiment of Table 10 was run with a single-class LION model where this mode-switching effect of the shape latents is not necessary and the shape latents’ effect is hence smaller. Nevertheless, a non-negligible performance boost is even obtained in that case. We believe that the results of this small-scale ablation together with the experiment on diverse data with 13 different shape categories clearly demonstrate that the shape latents encode relevant information and are therefore an important component of the model. Generally, note that LION has been designed in particular with scalability to modeling complex and diverse data in mind and that’s where the shape latents become particularly important.\n- **Latent Point Features D_h:** We believe that D_h is indeed less important and the results in Table 11 validate that, showing only a slightly advantageous performance for D_h=1. Adding these additional features to the latent points is flexibility the model has. However, we chose the value D_h=1 early on in our experiments and then kept using it. It is possible that tuning D_h might further improve our large-scale models that were trained on 13 and 55 (see below) ShapeNet classes, respectively, and that the model could benefit from the additional expressivity for these more challenging setups. We leave this for future research and did not explore this to save computational resources.", " Finally, we would like to mention that we added several additional results to the paper to make it overall stronger. 
We would like to point the reviewer to the additional message/comment that we sent to all reviewers for a more detailed overview over these experiments. Here, we summarize the most interesting additional experiments:\n- **Appendix F.2:** We are now running LION also on all 55 ShapeNet classes jointly without any conditioning. The qualitative results demonstrate that LION can even be trained on such highly diverse and multimodal data and still generate high-quality outputs with excellent mode coverage. To the best of our knowledge, there is no previous 3D generative model that successfully trains on such diverse 3D data and generates reasonable outputs.\n- **Appendices F.3 and F.4:** We also trained LION on the mug and bottle ShapeNet classes (149 and 340 training shapes, respectively) as well as 400 animal assets from the Turbosquid (https://www.turbosquid.com/) dataset. In all cases, LION can reliably generate coherent shapes. The experiment demonstrates that LION can also be trained on very small datasets.\n- **Appendix F.5:** To demonstrate the value to artists of being able to synthesize meshes and not just point clouds, we consider a downstream application: We apply Text2Mesh [49] on generated meshes from LION to additionally synthesize textures in a text-driven pers-sample manner, leveraging CLIP. This is only possible because of our SAP-based mesh reconstruction.\n- **Appendix F.6:** To demonstrate LION’s extendibility to even more tasks, we also trained a LION model where its latent diffusion models are conditioned on CLIP image embeddings, inspired by CLIP-Forge [34]. This allows LION to generate shapes based on text prompts, leveraging CLIP’s text encoder. Using CLIP’s image encoder, this additionally allows LION to infer and reconstruct 3D shapes from images.\n\n*If our reply is satisfactory and the additional results are appealing, we would like to kindly ask the reviewer to consider raising their score accordingly. Otherwise, we will be happy to further discuss. Thank you!*", " We would like to thank the reviewer for their positive feedback and for appreciating both our methodological ideas as well as the strong experimental results. \n\nBelow, we reply to the individual points raised by the reviewer (citations correspond to paper bibliography):\n- **Discussion of the papers DiffuseVAE [140] and Diffusion Autoencoders [60]:** We added a detailed discussion on both papers in Appendix F.9. Note that we put this into the Appendix for now due to the main text page limit. If our work is accepted for publication and an additional page for the main text is available, we will incorporate this discussion directly into the related work section in the main paper. We would like to mention that these works operate purely on images, in contrast to our work, which models 3D shapes and also follows an overall different architecture with both diffusion models in latent space. In fact, extending the latent diffusion model concept to 3D synthesis is not trivial. To this end, we proposed the concept of latent points, which is nicely complementary to the vector-valued shape latent. Note that Diffusion Autoencoders was cited already (citation [60] in the paper); however, we agree that a more detailed discussion is appropriate and thank the reviewer for pointing that out.\n- **Evaluation Metrics:** We agree that there are many metrics for quantifying 3D point cloud generation performance. 
For point cloud generative models, the most popular metrics are arguably 1-NNA, MMD and COV (see explanations in Appendix D.2). However, we did not choose 1-NNA simply because it provided the best results, but because it is the only metric among those three that reliably captures both shape diversity and generation quality. As was pointed out by previous work (PointFlow) [31], MMD and COV can measure diversity and detect mode collapse, but fail to reliably quantify quality (see discussion in section 6.1 in [31]). This is also supported considering that simply with CD-based vs. EMD-based evaluations, different methods rank differently for COV and MMD. Furthermore, the highly relevant work PVD [46] also follows this approach and primarily relies on 1-NNA. Consequently, we also choose 1-NNA as our primary evaluation metric, where we consistently outperform all baselines for all experiments and achieve state-of-the-art results. This is explained in Sec. 5.1 and Appendix D.2. Nevertheless, since the broader 3D generation community also often uses MMD and COV, we also reported those metrics in the Appendix. However, these results are not as easy to interpret, considering the nature of the problematic MMD and COV metrics. The fact that for MMD and COV different methods score best in different experiments, with only small gaps, essentially only implies that none of the more competitive methods suffers from significant mode collapse. But these metrics cannot be used to make definite conclusions about detailed shape quality, unlike 1-NNA. That being said, our results for these metrics are usually still on-par with the best baselines, or we still largely outperform the baselines as for the challenging unconditional 13-class generation task (see Table 16).\n- **Interpolation:** Yes, we can confirm that we simultaneously interpolate in both the shape latent and also the point latent space. This is discussed in detail in Appendix B.3.", "Finally, we would like to mention that we added several additional results to the paper to make it overall stronger. We would like to point the reviewer to the additional message/comment that we sent to all reviewers for a more detailed overview of these experiments. Here, we summarize the most interesting additional experiments:\n- **Appendix F.2:** We are now running LION also on all 55 ShapeNet classes jointly without any conditioning. The qualitative results demonstrate that LION can even be trained on such highly diverse and multimodal data and still generate high-quality outputs with excellent mode coverage. To the best of our knowledge, there is no previous 3D generative model that successfully trains on such diverse 3D data and generates reasonable outputs.\n- **Appendices F.3 and F.4:** We also trained LION on the mug and bottle ShapeNet classes (149 and 340 training shapes, respectively) as well as 400 animal assets from the Turbosquid (https://www.turbosquid.com/) dataset. In all cases, LION can reliably generate coherent shapes. The experiment demonstrates that LION can also be trained on very small datasets.\n- **Appendix F.5:** To demonstrate the value to artists of being able to synthesize meshes and not just point clouds, we consider a downstream application: We apply Text2Mesh [49] on generated meshes from LION to additionally synthesize textures in a text-driven per-sample manner, leveraging CLIP. 
This is only possible because of our SAP-based mesh reconstruction.\n- **Appendix F.6:** To demonstrate LION’s extendibility to even more tasks, we also trained a LION model where its latent diffusion models are conditioned on CLIP image embeddings, inspired by CLIP-Forge [34]. This allows LION to generate shapes based on text prompts, leveraging CLIP’s text encoder. Using CLIP’s image encoder, this additionally allows LION to infer and reconstruct 3D shapes from images.\n\n*If our reply is satisfactory and the additional results are appealing, we would like to kindly ask the reviewer to consider raising their score accordingly. Otherwise, we will be happy to further discuss. Thank you!*", " - **Question 1 - Why choose PVCNN as the feature extraction module?:** See above “Different backbone architectures”.\n- **Question 2 - Baselines for Shape Interpolations:** As suggested, we added visualizations of shape interpolations for our two main diffusion model-based baselines, PVD [1] and DPM [2], in Appendix F.8. While PVD and DPM can also generate shape interpolations, LION’s interpolations appear less noisy and more coherent along the interpolation path. In particular, PVD and DPM break down when interpolating with their more complex 13-class models. In that case, they tend to generate very noisy samples all along the interpolation path. LION generates high-quality interpolations even in that challenging setting.\n- **Question 3 - SVR from rgb or rgbd data as in PVD:** See above “Single View Reconstruction (SVR)”. Qualitatively, our results appear to be of similar quality as the results of PVD for that task, and at least as good or better than the results of AutoSDF. However, PVD requires rgb-d images, including depth, while LION can do SVR directly from rgb images without depth. Moreover, we would like the reviewer to keep in mind that LION was originally not designed for that task and that we outperform PVD by large margins on the other different unconditional generation tasks as well as on the controllable generation tasks (for instance, using voxel guidance). Furthermore, PVD does not incorporate mesh synthesis, which is highly useful in practice for artists.\n- **Question 4 - Surface Reconstruction:** The reviewer proposed to compare LION to AutoSDF [30] and Deep Marching Tetrahedra (DMTet) [41] for surface reconstruction. AutoSDF seems to be designed for tasks like single view reconstruction and has been discussed above already. In contrast, DMTet is rather designed to reconstruct surfaces from voxel inputs or noisy point clouds. Hence, we agree that for our voxel and noise guidance experiments, DMTet can be an additional baseline. In Appendix F.7, we now also quantitatively compare to DMTet and we achieve similar or slightly better reconstruction results. However, DMTet is purely designed for this specific surface reconstruction task and is not a generative model that can generate shapes from scratch, like our LION can do. Unlike LION, DMTet can not do multimodal surface reconstructions and thereby does not capture the ambiguity of the task. In contrast, LION is a full generative model and significantly more versatile. Nevertheless, we perform as good as DMTet on this specific task.\n- **Question 5 - Single-Class Unconditional Generation Performance:** There are multiple aspects: (i) As was pointed out in previous work (PointFlow) [31] and as we also discussed in Appendix D.2, COV and MMD are not reliable metrics to quantify generation quality. 
They can detect mode collapse, but beyond that are not fully reliable (this is also supported considering that simply with CD-based vs. EMD-based evaluations, different methods rank differently for COV and MMD). That is why we use 1-NNA as our primary evaluation metric, which reliably captures both quality and diversity, and here we clearly outperform all baselines in all experiments throughout the entire paper. Nevertheless, we also reported the COV and MMD results in the Appendix, as these metrics are still often used in the wider literature. The fact that for MMD and COV different methods score best in different experiments, with only small gaps, essentially only implies that none of the more competitive methods suffers from significant mode collapse. But these metrics cannot be used to make definite conclusions about detailed shape quality, unlike 1-NNA. (ii) Furthermore, LION was designed as a generative model that is also scalable to more challenging and diverse data. In fact, for the experiments where we jointly model 13 classes, we outperform previous methods on 1-NNA by large margins and are even mostly leading on MMD and COV (see Table 16). It is in these challenging multiclass generation tasks, where our novel hierarchical architecture with two diffusion models becomes most important.\n- **Limitations:** We are happy to more extensively discuss our method’s limitations. For now, we added a discussion in Appendix F.11. In the final paper, when there is an additional page available for the main text, we may bring this into the main paper. In summary, additional limitations of LION beyond slow sampling are that it cannot directly generate textured shapes out of the box. Furthermore, it relies purely on geometry-based training and currently cannot profit from image-based training with differentiable rendering. Also, LION currently focuses on single object generation only. It would be interesting to extend it to full 3D scene generation (LION has these limitations in common with all the relevant baselines).", " We would like to thank the reviewer for their positive feedback and for appreciating our extensive experiments and strong generation results. We are also happy that our paper came across as well written and easily understandable, and that our detailed related work discussion is welcomed.\n\nBelow, we reply to the individual points and questions raised by the reviewer (citations correspond to paper bibliography):\n- **Technical Contributions:** It is correct that a point cloud denoising diffusion model (DDM) based on PVCNN layers exists, Point Voxel Diffusion (PVD) [46]. However, we are the first to explore the training of multiple DDMs in a latent space. Moreover, the idea of latent DDMs has never been used in 3D generation at all, to the best of our knowledge, and it is generally not obvious how to extend it to 3D in the point cloud case and how to structure the latent space. To this end, we introduce the concept of latent points and furthermore combine it with a complementary vector-valued latent variable, leveraging an efficient coupling via adaptive group normalization. Hence, we believe our architecture is novel and the experimental results clearly support that our design choices are crucial: We outperform all baselines and demonstrate that with this architecture we can scale to extremely diverse shape datasets, like modeling 13 or even 55 (see below) ShapeNet categories jointly without conditioning. 
These cases represent highly complex and multimodal distributions that need to be learnt, thereby stress-testing LION’s scalability. In particular for these challenging tasks we outperform previous DDM-based models for point clouds [46,47] by a huge margin (Table 16). This implies the importance and clear superiority of our novel hierarchical design, rendering it a relevant technical contribution. This is further supported by our ablation experiments. Ultimately, it is this novel technical contribution that leads to LION’s state-of-the-art results. \n\n Furthermore, it is true that using Shape As Points (SAP) on LION’s output can be considered a relatively simple trick to extract meshes from the generated point cloud. However, we believe that this is significant and impactful, precisely because it is simple, but also highly relevant for practitioners (digital artists use meshes in practice, after all). That said, we are in fact not naively applying SAP on the generated point clouds, but we propose to fine-tune SAP on LION, which makes a significant difference (see newly added Fig. 14). In particular, to be more robust against the specific kind of noise that is present during synthesis from LION we show how to fine-tune SAP on point clouds augmented with our proposed diffuse-denoise process. Overall, we think that demonstrating how to combine SAP with LION is in fact a novel and relevant contribution, and this has never been done before for any point cloud generation method, to the best of our knowledge. Due to the simplicity and relevance, we believe that this idea will be adopted by future works.\n- **Different Backbone Architectures:** In fact, we did experiment with different architectures at the early stages of the project. We tried not only point cloud processing networks based on Point-Voxel CNN (PVCNN) [79], but also Dynamic Graph CNN (DGCNN) [134] and Point Transformers [135]. However, the latter two led to worse performance. Consequently, we quickly converged to PVCNNs for our neural networks. We now added an ablation study in Appendix F.1 (Tables 19 and 20).\n- **Single View Reconstruction (SVR):** As pointed out in the paper, we designed LION primarily as a tool for digital artists, focusing on generation quality, controllability, and meshed outputs, and not as a geometry reconstruction method from images. However, we appreciate the reviewer’s suggestion to also explore SVR. In fact, we can easily extend LION to also perform SVR. To this end, we rendered 2D images from the 3D ShapeNet shapes, extracted the images’ CLIP [137] image embeddings, and trained LION’s latent diffusion models while conditioning on the shapes’ CLIP image embeddings. Now, at test time, we can take a single view 2D image, extract the CLIP image embedding, and generate corresponding 3D shapes, thereby effectively performing SVR. We added those experiments in Appendix F.6 and we see that we can achieve realistic 3D reconstructions. This approach is inspired by CLIP-Forge [34]. We leave a more quantitative evaluation and more fine-tuning to future research, but we believe that these experiments demonstrate that LION can indeed be easily extended to and used for SVR. Note that using CLIP’s text encoder, our approach additionally allows for text-guided generation as we now also demonstrate in Appendix F.6. PVD [1] cannot do this. Moreover, we only use RGB images for SVR, whereas PVD also requires depth information.", " We thank all reviewers for engaging in the review process. 
In our individual replies, we attempted to address specific questions and comments as clearly and detailed as possible.\n\nMoreover, we added several additional results to the paper to make it overall stronger. These results have almost all been added into Appendix F. However, we will reorganize the structure for the final camera-ready version, if our paper is accepted, and potentially bring some of these results into the main text.\n\nHere, we briefly summarize these additional experiments (citations correspond to paper bibliography):\n- *Appendix F.1*: We added a further ablation experiment to study different point cloud processing neural network architectures, based on which LION is implemented. We find that the PVCNN architecture used in LION works best.\n- *Appendix F.2*: We are now running LION also on all 55 ShapeNet classes jointly without any conditioning (we on purpose avoid conditioning to make the task challenging and thereby test LION’s scalability to complex and diverse multimodal datasets). The qualitative results (see Fig. 33) demonstrate that LION can even be trained on such highly diverse and multimodal data and still generate coherent outputs. Moreover, even very rare classes with only 39 training samples are reproduced, thereby validating excellent mode coverage. To the best of our knowledge, there is no previous 3D generative model that successfully trains on such diverse 3D data without any conditioning information and still generates reasonable outputs.\n- *Appendix F.3*: To test whether LION can also be trained on very small data sets, we trained on ShapeNet’s bottle and mug classes with 148 and 340 shapes for training, respectively. As we see in Figs. 34 and 35, LION is also able to generate correct mugs and bottles in this very small training data set situation.\n- *Appendix F.4*: Similarly, we also trained LION on 400 animal assets from the Turbosquid (https://www.turbosquid.com/) dataset (see Fig. 36). LION can reliably generate coherent animal meshes.\n- *Appendix F.5*: To demonstrate the value to artists of being able to synthesize meshes and not just point clouds, we consider a downstream application: We apply Text2Mesh [49] on some generated meshes from LION to additionally synthesize textures in a per-sample text-driven manner, leveraging CLIP (see Figs. 37, 38, and 39). This is only possible because of our SAP-based mesh reconstruction.\n- *Appendix F.6*: To demonstrate LION’s extendibility to even more relevant tasks, we also trained a LION model where its latent diffusion models are conditioned on CLIP image embeddings, inspired by CLIP-Forge [34]. This allows LION to (i) generate shapes based on text prompts, leveraging CLIP’s text encoder (Fig. 41). Using CLIP’s image encoder, this additionally allows LION to (ii) infer and reconstruct 3D shapes from images (Fig. 40). Unlike PVD [46], LION can use RGB images directly and does not require depth information. Note that this is a simple qualitative demonstration of LION’s extendibility. We did not perform any hyperparameter tuning here and believe that these results could be further improved with more careful tuning and training.\n- *Appendix F.7*: We provide an additional comparison to Deep Marching Tetrahedra (DMTet)-based shape reconstruction for the experiment in which we synthesize shapes with voxel-based guidance. Quantitatively, LION marginally outperforms DMTet (Tab. 21). 
However, note that DMTet was specifically designed for such reconstruction tasks and is not a general generative model that could synthesize novel shapes from scratch without any guidance signal, unlike LION, which is a highly versatile general 3D generative model. Furthermore, as we demonstrated in the main paper, LION can generate multiple plausible de-voxelized shapes, while DMTet is fully deterministic and can only generate a single reconstruction. \n- *Appendix F.8*: We now also provide additional shape interpolation results with the PVD [46] and DPM [47] baselines (Figs. 42 and 43). We find that LION provides the most coherent and clean interpolation results (see Figs. 28, 29, 30, 31 for reference), in particular when we interpolate using the model that was trained over 13 classes jointly, where PVD and DPM break down and generate noisy interpolations.\n- For the 13-class ShapeNet experiment, we ran four additional competitive baselines (PointFlow, ShapeGF, SetVAE, PDGN), which we all outperform (see extended Tab. 16).\n\nWe hope that these additional results further strengthen LION’s position as state-of-the-art generative model of 3D shapes and demonstrate its flexibility and versatility.\n", "The paper proposes a hierarchical architecture for point cloud generation, which is composed of two diffusion models for shape latents and latent points. \nThe major difference between PVD and this paper is the hierarchical architecture and its ability to reconstruct corresponding meshes even though it comes from an off-the-shelf method SAP. \nBoth qualitative and quantitative experiments are performed to show the effectiveness.\n Strengths:\n- Extensive results and ablations are given to show the effectiveness of the proposed method (in both the paper and supp). \n\n- The paper is well written and easy to understand. It adequately discusses related works. \n\n- In terms of generation quality, it truly beats several SOTA methods. \n\nWeaknesses: \n\n- The proposed pipeline is not particularly novel. It is more like a combination of PVCNN and the diffusion model. The mesh reconstruction method is also off-the-shelf. The main contribution might be the hierarchical generation architecture. \n\n- The paper misses insights for choosing a specific backbone for feature extraction. There should be a comparison between different backbones. \n\n- SVR should be a common application for generative networks. However, it does not show any single-view reconstruction results. - Why choose PVCNN as the feature extraction module? The authors should include more insights and perform experimental results to show the reason. \n\n- For the shape interpolation application, the authors should include results of other baselines. \n\n- Most generation methods can do single-view reconstruction. The paper may miss a comparison on SVR from rgb or rgbd data like PVD does. \n\n- As the proposed method can reconstruct surfaces, I think it is possible to compare it with SOTA explicit/implicit surface-based methods like AutoSDF and Deep marching tetrahedra. \n\n- In 'Single-Class Unconditional Generation', I cannot see a clear improvement in quantitative results. What is the reason? I only saw one 'slow sampling' limitation in the paper. However, this limitation comes from the generative model they use. So I do not think they fully showed their method's limitations. I did not see any negative societal impact. \n", "This paper proposes the hierarchical Latent Point Diffusion Model (LION) for 3D shape generation. 
Different from previous works where denoising diffusion models (DDMs) operate on the point clouds directly, LION is based on a VAE structure with a hierarchical latent space that includes a global shape latent space and a local point-structured latent space. Meanwhile, two hierarchical DDMs operate on these two latent spaces instead. Exhaustive experiments demonstrate LION outperforms several state-of-the-art models on multiple ShapeNet benchmarks. Besides, owing to the adopted VAE structure, it is convenient to transfer the LION model to other relevant tasks, which only requires fine-tuning the encoder. Combining with a surface reconstruction model like SAP enables the generation of smooth 3D meshes. Pros:\n1. This paper proposes to employ denoising diffusion models (DDMs) on a hierarchical latent space, instead of raw point clouds. It's a pretty good idea and the effectiveness is supported by the impressive generation results and ablation study contained in the supp. material.\n2. The adopted VAE structure makes the LION model flexible to transfer to different relevant tasks without re-training the computationally intensive DDMs. Besides, the experiments on shape denoising and voxel-guided synthesis also demonstrate the power of the hierarchical latent space.\n3. The LION model achieves state-of-the-art results on both single-class 3D generation and many-class 3D generation tasks on widely used ShapeNet benchmarks.\n4. Combining with a surface reconstruction model like SAP enables the generation of smooth 3D meshes.\n\nCons:\n1. The idea of employing DDMs on latent space and incorporating VAE structure is not new. Actually, several relevant papers [1,2] are adopting similar ideas in other domains. However, these papers are not cited and discussed in this manuscript.\n2. The authors just select the most compelling results (on purpose) and put them in the main paper. As we all know, there are a lot of metrics for 3D shape generation evaluation. But the main paper only contains one metric, 1-NNA. Actually, the LION model doesn't always outperform baselines in terms of other metrics like MMD and COV according to Table 13 & 14 results in the supp. material.\n\n[1] DiffuseVAE: Efficient, Controllable and High-Fidelity Generation from Low-Dimensional Latents. Kushagra Pandey, et al.\n[2] Diffusion Autoencoders: Toward a Meaningful and Decodable Representation. Konpat Preechakul, et al. 1. For the shape interpolation, do you interpolate on both latent spaces, or just one? 1. A necessary discussion with some closely relevant papers is missing.\n2. The explanation of the modest performance on MMD and COV metrics should be provided.", "This paper targets 3D shape generation and manipulation. Similar to previous works PVD and DPM [46,47] it uses a denoising diffusion model on point clouds. My understanding is that the novelty compared to these works is:\nN1: using a global shape and a point feature (a.k.a. 
hierarchical latent space)\nN2: combining the network with a shape as point [67] network to obtain a surface from the output point cloud.\nThe other claimed contribution are essentially results, that go beyond [46,47] in terms of performances (clearly better numbers on ShapeNet dataset) and applications (shape interpolation, guided generation) This paper seems to clearly improve state of the art shape generation (note this is not my area of expertise, so I have to trust the authors the metrics and datasets are meaningful) the evaluation seems exhaustive and the exposition of the method is clear.\n\nWhat is less clear to me is what the novelty actually is compared to [46,47] and what explains the performance gap. To me, the ablation tables 10 and 11 from the supplementary material would be the most interesting to understand this, but are still unclear:\n- I think they should be moved to the main paper\n- I couldn't match the results from these two tables to any in the main paper: to which should they be compared? is there a change in the method? or is this the effect of the randomness of training? (if so, some variance really seems necessary)\n- having additional dimensions for the latent points seem to provide only a very small boost, I wonder if this isn't just noise that leads to this selection. Moreover, the fact that the selected value is D_h=1 (and using D_h=0 performs almost as well) is important for understanding why the method work and should be emphasized in the main paper\n- from table 10, using shape latents also provides only a small boost, this together with the previous point really makes me wonder what part of the performance improvement is really due to the proposed hierarchical architecture v.s. other changes\n- putting the two previous points together, I would really want to see numbers comparable to [46,47] in the ablation, to better understand the origin of the performance improvement, which is more unclear to me the more I look at the results.\n\nTo conclude, I am unsure about this paper, that falls slightly out of my expertise (not in diffusion models or generation). At first look it seems to really boost state of the art, but looking at the results and paper in more details, I couldn't figure out any clear, strong and well justified novelty (using shape as point is different but very low novelty, the hierarchical architecture seems different if not very original but in the end I am unsure which part of the improvement is really due to it) and I do not really understand where the gain in performance comes from (except the very large 300k hours of V100) see weaknesses (clearer ablation compared with SoA) yes", " Authors present a generative model of 3D point clouds. This is a latent-space diffusion model, with variational encoder and decoder mapping between point cloud and latent representation; the latent space itself is split into a 'global' shape embedding, and 'local' embeddings of details, the latter itself structured as a sparser, feature-valued point-cloud. The model is first trained as a 'simple' VAE with gaussian prior/posterior, and then the diffusion model is fitted post-hoc to the aggregate posterior. The model is demonstrated on various tasks using ShapeNet data: unconditional generation; sampling 'variations' on a coarse model; interpolating shapes; and sampling meshes by integration with a differentiable Poisson solver. 
It achieves qualitatively reasonable results on all tasks, and quantitatively out-performs various point-cloud-based baselines on generation tasks. Strengths:\n\n- the model is novel, and the approach sensibly motivated\n\n- quantitative results show significantly better performance than state-of-the-art baselines on unconditional (per-class) generation tasks\n\n- qualitative results are impressive – particularly the displayed meshes\n\n- there are very helpful and detailed descriptions of components (e.g. how interpolation is performed) in the supplementary; this mitigates the fact that the overall model is rather complex\n\n- there is an ablation study in the supplementary that provides some additional motivation for various design decisions\n\n- the writing is clear and easy-to-read throughout; overall I found the paper enjoyable and informative to read\n\nWeaknesses:\n\n- the technical contribution of the model itself is fairly small – DDMs on latent space have become fairly common, and even DDMs on point-clouds is not a new idea (though authors show that the proposed method outperforms other such approaches)\n\n- most experiments use only three shapenet classes; it would be nice to see more diversity, or discussion of why this is not possible (e.g. if there are too few shapes in some classes to train per-class models successfully -- this should be stated as a limitation of the method if so)\n\n- the mapping from point-clouds to meshes is responsible for the best-looking results in the paper (the meshes are significantly cleaner than the point-clouds); however this is apparently trained as a post-hoc stage rather than jointly with the generative model. It would be more elegant to train this end-to-end (the same could also be said for training the diffusion jointly with the encoder/decoder, however I appreciate that this is challenging)\n Exactly how computationally expensive is the model, e.g. to generate a single chair or train all parts of the corresponding model? This doesn't seem to be specified anywhere, only the total compute for the entire project.\n\nDiscussing only discrete-time diffusion in the paper (sec. 2) in spite of the model using exclusively continuous-time (for which full details are only in the supplementary) seems a little strange. It might be better to pull in a couple of paragraphs from the supplementary into the main text.\n There is very minimal discussion of limitations; this should be expanded. There is sufficient discussion of broader impact (which is negligible)."
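Because 1-NNA is the evaluation metric debated throughout the reviews and rebuttals above, a compact reference sketch of it follows. It is a hedged illustration: plain Euclidean distance on flattened arrays stands in for the Chamfer/EMD point-cloud distances the papers actually use.

```python
import numpy as np

def one_nna(gen: np.ndarray, ref: np.ndarray) -> float:
    """Leave-one-out 1-NN accuracy between generated and reference sets.

    Each sample is classified by its nearest neighbor among all other
    samples. A score near 0.5 is ideal (the two sets are indistinguishable);
    values near 1.0 indicate poor quality and/or poor coverage.
    """
    x = np.concatenate([gen, ref], axis=0)          # (n_g + n_r, d)
    labels = np.array([0] * len(gen) + [1] * len(ref))
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                     # leave-one-out
    nn_idx = d.argmin(axis=1)
    return float((labels[nn_idx] == labels).mean())

# Toy check: two draws from the same distribution should score near 0.5.
rng = np.random.default_rng(0)
print(one_nna(rng.normal(size=(200, 32)), rng.normal(size=(200, 32))))
```

A score near 0.5 means a 1-NN classifier cannot tell generated from reference samples, which is why this single metric captures quality and diversity jointly, unlike MMD or COV alone.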
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 5, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 3, 4 ]
[ "zWJJutKKt4Z", "Ogc4trMz9lC", "vaoqfL-FGi", "nips_2022_tHK5ntjp-5K", "42-dopnqLI", "efSATmyxu-m", "S_HLpYr_mOu", "dDQRjW9rV2N", "Ev7OK1CGc8", "Ogc4trMz9lC", "UbrkT1OPUT", "TsorGO7i9t", "YW6ROs7TV4", "FmxngAJUJTA", "nips_2022_tHK5ntjp-5K", "nips_2022_tHK5ntjp-5K", "nips_2022_tHK5ntjp-5K", "nips_2022_tHK5ntjp-5K", "nips_2022_tHK5ntjp-5K" ]
nips_2022_crFMP5irwzn
Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation
In the past few years, transformers have achieved promising performance on various computer vision tasks. Unfortunately, the immense inference overhead of most existing vision transformers prevents them from being deployed on edge devices such as cell phones and smart watches. Knowledge distillation is a widely used paradigm for compressing cumbersome architectures into compact students via transferring information. However, most existing distillation methods are designed for convolutional neural networks (CNNs) and do not fully investigate the characteristics of vision transformers. In this paper, we fully utilize the patch-level information and propose a fine-grained manifold distillation method for transformer-based networks. Specifically, we train a tiny student model to match a pre-trained teacher model in the patch-level manifold space. Then, we decouple the manifold matching loss into three terms with careful design to further reduce the computational costs for the patch relationship. Equipped with the proposed method, a DeiT-Tiny model containing 5M parameters achieves 76.5\% top-1 accuracy on ImageNet-1k, which is +2.0\% higher than previous distillation approaches. Transfer learning results on other classification benchmarks and downstream vision tasks also demonstrate the superiority of our method over the state-of-the-art algorithms.
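To make the abstract's central idea concrete, here is a hedged sketch of the basic patch-level manifold matching term (intra-image relation maps only; the paper's full loss decouples the relations into three terms and adds further design choices not shown here). Because the relation maps are Gram matrices of L2-normalized patch embeddings, mismatched student/teacher widths need no projection layer.

```python
import torch
import torch.nn.functional as F

def manifold_loss(f_s: torch.Tensor, f_t: torch.Tensor) -> torch.Tensor:
    """Match student/teacher patch-level manifold structure.

    f_s: (B, N, D_s) student patch embeddings; f_t: (B, N, D_t) teacher
    patch embeddings. The relation maps are (B, N, N) regardless of the
    embedding widths, so D_s != D_t is handled for free.
    """
    r_s = F.normalize(f_s, dim=-1) @ F.normalize(f_s, dim=-1).transpose(1, 2)
    r_t = F.normalize(f_t, dim=-1) @ F.normalize(f_t, dim=-1).transpose(1, 2)
    return F.mse_loss(r_s, r_t)

# Toy usage: a DeiT-Tiny-width student vs. a wider teacher, 196 patches each.
loss = manifold_loss(torch.randn(8, 196, 192), torch.randn(8, 196, 384))
print(loss.item())
```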
Accept
Four experts in the field reviewed the paper and recommended Borderline Accept, Weak Accept, Accept, and Borderline Accept. The reviewers generally liked the approach, though some commented that it is straightforward. The reviewers' questions about experiments and clarifications were well addressed by the rebuttal. Hence, the decision is to recommend the paper for acceptance. We encourage the authors to consider the reviewers' comments and make the necessary changes to the best of their ability. We congratulate the authors on the acceptance of their paper!
train
[ "7XoVK1fCr9C", "mkAA-JgBjbg", "TOEUKWhFKoq", "gpXagSTCwA", "k3dV-Qq6nwk", "mZay8T8hH81", "d-3-M9YMkdu", "xKDm2uFmb1", "SeRKYY242C", "AajF035iLPq", "XdOoL1Gvvqh", "yIH5fmXSF2K", "6iVBqGI4WD", "yhKmzpoc1pY", "1A_wLFZ8axy", "MfL3CtHRz5t", "ZIWxACZ1yGs" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer ipbi:\n\n\nThanks for your feedback!\n\n\nBest,\n\nPaper3797 Authors", " Thanks for the reviewers' rebuttal, which has solved most of my concerns.\n\nSo I would like to raise my rating from Borderline reject (4) to Borderline accept (6) \n\nBest,", " Dear Reviewer eVwU,\n\n\nThanks again for your constructive comments which will help improve the quality of our draft.\n\n\nBest,\n\nPaper3797 Authors.", " Thanks for the clarifications. I am satisfied with the responses and raise my score.", " Dear Reviewer ipbi:\n\nWe thank you for your time and the detailed and valuable reviews!\n\nWe would like to kindly remind you that the author-reviewer discussion period ends on Aug 09 ' 22 08:00 PM UTC. We have provided detailed replies and new experiments to your comments. We hope to have a further discussion with you to see if our response solves the concerns. Your support is very important to us and we greatly appreciate that.\n\nBest,\nPaper3797 Authors", " Dear Reviewer JpXV,\n\nThanks for your support and constructive comments. We have included the added experiments in the conclusion section.\n\nBest,\nPaper3797 Authors.", " Thank you for the detailed response. I appreciate the additional efforts regarding the new experiments and tables, which improve the overall quality of the paper. \n\nAs a reminder, please include the added experiments on semantic segmentation in the conclusion section. \n\nConsidering the work is technically sound, and experimentally satisfying, I have increased my initial score. ", " **Response to Q1:** Many thanks for your valuable suggestion, we replace the confusing notation in the revised paper.\n\n**Response to Q2:** Thanks for your suggestion. In the original design of vision transformers, outputs of a classification token is used as extracted features. The \"Hard\" method introduces an additional distillation token to learn from a CNN teacher, and uses averaged outputs of the two tokens as features. However, Swin Transformer model family adopts an average pooling operation on all image tokens to extract features, indicating that the adding of an additional distillation token is not allowed. Hence, we cannot use the \"Hard\" method to train a Swin Transformer student, and can only adopt the \"KD\" method in turn. For DeiT students, we design a simple experiment with a CaiT-small teacher and a DeiT-Tiny studnet to evaluate the \"KD\" method, and corresponding results are as follows:\n\n| method | Top-1(%) |\n| :-------: | :------: |\n| KD (soft) | $74.9$ |\n| Hard | $74.5$ |\n| Manifold | $76.5$ |\n\nThe results shows that \"KD\" method outperforms the \"Hard\" method, indicating that \"KD\" is a stronger baseline than \"Hard\" when both the teacher and the studnet are vision transformers. To strengthen our paper, We will complete all experiments of training DeiT students with the \"KD\" method and update the results in our final submission.\n\n**Response to Q3:** Thanks for your suggestion. Table 4 shows that selecting layers at the shallow/deep part of a model obtains the best result. Following this selecting strategy, we ablate the influence of the number of selected layers. We denote the indices of layers selected from a $L$-layer model as {$1, 2, ...,k,L-k+1,...,L-1, L$}, and conduct experiments with the CaiT-small teacher ($L=24$) and the DeiT-Tiny ($L=12$) student to compare different $k$ setting. 
The results are listed as follows:\n\n| $k$ | Top-1(%) |\n| :--: | :------: |\n| $1$ | $76.3$ |\n| $2$ | $76.4$ |\n| $4$ | $76.5$ |\n| $6$ | $76.5$ |\n\nWe can find that our method is robust to various layer numbers: selecting 4 layers ($k=2$) obtains 76.4 top-1 accuracy, and selecting 8 ($k=4$) and 12 ($k=6$) layers achieve 76.5 top-1 accuracy, respectively.\n\n**Response to Q4:** Thanks for pointing this out. We will add more discussions in our final version. In our experiments, the patch merging scheme is only adopted for Swin Transformer students, whose extremely small patch size setting results in an unaffordable GPU memory usage. By default, Swin Transformers use a patch size of $p=4$. When the input resolution is $224\times 224$, there are $56\times 56$ patches in the first model stage. Since the patch number in standard vision transformers is $14\times 14$, we adopt a patch merging setting of $(H'=14,W'=14)$, indicating that every $4\times 4$ adjacent patches are merged. For comparison, we conduct experiments with two other merging settings and report the corresponding results as follows:\n\n| setting of $(H',W')$ | size of merged patch | Top-1(%) |\n| :------------------: | :------------------: | :------: |\n| $(28,28)$ | $2\times 2$ | $82.27$ |\n| $(14,14)$ | $4\times 4$ | $82.24$ |\n| $(7,7)$ | $8\times 8$ | $82.12$ |\n\nWe can find that increasing or decreasing the patch merging setting $(H',W')$ has little effect on the Swin Transformer student performance. However, if the merged patch number is increased excessively, following our response to the first weakness proposed by reviewer *ipbi*, it greatly damages the student accuracy. We further conduct this experiment on a DeiT-Tiny student with a patch merging setting of $(H'=2,W'=2)$ here:\n\n| setting of $(H',W')$ | merge every | Top-1(%) |\n| :------------------: | :---------: | :------: |\n| $(2,2)$ | $7\times 7$ | $74.8$ |\n| None | None | $76.5$ |\n\n**Response to Q5:** When we train the student with the failed setting, a NaN loss occurs at the 126th training epoch (300 in total). Hence, we report the Top-1 accuracy of the model saved at epoch 125. When preparing this rebuttal, we speculated that the mixed precision training leads to the failure, and adopted a full precision setting to re-run the experiment, where the training converges and the student achieves $75.8\%$ Top-1 accuracy. In particular, we think that only transferring inter-image relation maps leads to an unstable training process, since these relations are highly related to randomly sampled image batches. However, mixed precision training cannot handle such unstable training, and finally corrupts the student model, leading to the NaN loss. We will report the new result in our revised paper.\n\n**Response to Q6:** Thank you very much for carefully reading our submission and pointing out the typos. We have revised them in our rebuttal revision.\n\n**Response to the limitation:** We will add the societal impact discussion in our final submission.", "Thanks for your valuable comments, we have updated the rebuttal revision accordingly.\n\n> **Weakness 1:** The contribution of the paper is limited. The proposed soft distillation has been proposed in [1]. Relying on the proposed patch-level distillation loss, the authors simply combine [1] to learn a compact vision transformer. 
Moreover, the proposed fixed depth seems to be an empirical trick to adjust the regularization strength.\n\n**Response to weakness 1:** Soft distillation has been proposed for several years. Our main contribution is not the soft distillation, but lies in the following two aspects. (i) We propose to leverage both batch- and patch-level information for knowledge distillation. Given input samples within a mini-batch, all patch embeddings depict the structure of a potential manifold space. We train the student to project inputs into a manifold space similar to the teacher's. (ii) We decouple the manifold map to simplify the computation. Since the structure of the manifold space is described by relations among all embeddings, the computational cost is usually unaffordable. To reduce the computational burden, we orthogonally decouple the manifold relations into intra-image relations and inter-image relations, and introduce relations among a randomly sampled subset of embeddings as an auxiliary means to capture both batch- and patch-level information.\n\nAs for the fixed depth setting, we acknowledge that it is an empirical trick and it is not our technical contribution. It can bring new insight for the research community: the regularization schemes should be carefully designed for the knowledge distillation task, while directly adopting the training skills that work well for teacher models may not be the optimal choice.\n\n> **Weakness 2:** The proposed method belongs to the relation-based knowledge distillation [2]. What is the difference between the proposed method and existing relation-based knowledge distillation methods [3] [4]? It would be better for the authors to provide more discussions.\n\n**Response to weakness 2:** Our method is quite different from the mentioned references [3] and [4]. [3] assumes the input and output shape of a model layer are $F_{in}\in R^{H\times W\times M}$ and $F_{out}\in R^{H\times W\times N}$; the transferred knowledge matrix of shape $N\times M$ is the product of the reshaped $F_{in}$ and $F_{out}$, where $H$, $W$, and $M/N$ are the height, width, and channel dimension of features, respectively. In reference [4], the authors process knowledge in a similar way, except for the use of a singular value decomposition (SVD) based feature compressing scheme and a radial basis function (RBF) based matching scheme. These two works focus on transferring the relation between different layers of a model, which describes the structure of the information flow in a model. However, our method targets transferring the structure of the manifold feature space. Moreover, our proposed manifold relation maps can transfer more generalized batch-level information and fine-grained patch-level information, which is impracticable with the two mentioned references. We will add these discussions and the referenced papers to our revision paper.\n\n\n**Reference:**\n\n[1] Touvron, Hugo, et al. 
\"Training data-efficient image transformers & distillation through attention.\" *International Conference on Machine Learning* (2021).", " Thanks for your valuable comments, we have updated the rebuttal revision accordingly.\n\n> **Weakness:** The efficientness of the proposed method is not well supported due to the lack of discussion/details/evaluation on common public benchmarks.\n\n**Response to the weakness:** Thanks for your advices, we add more details and discussion in Section 4.2 and 4.3, as shown in our \"Rebuttal Revision\".\n\n> **Questions:** Is there a trade-off between the scheme for reducing computational complexity and the performance on downstream tasks? More explanation and reasoning should be done here.\n\n**Response to the question:** Intuitively, we believe that there must be a trade-off between the computational complexity and the performance on downstream tasks, but the proposed method can greatly reduce the computational complexity with moderate performance degradation. In particular, the original manifold relation map describes relationships among all patches of a mini-batch samples, while the decoupled map only takes a subset into consideration. Such decoupling may lead to an imperfect imitation of the teacher model because of the loss of information. However, this trade-off is inevitable since most common hardwares cannot afford the computation of a complete manifold relation map. Take the object detection task as an example, the input image is of size $1333\\times 800$, and the patch size of a Swin-Transformer is $4$ at the first stage, thus there are $334\\times 200=66800$ patches in total. The original computation cost and memory usage are quadratic to the number of patches, which makes the computation extremely time-consuming and leads to the out-of-memory problem. To make the computation feasible, we have to decouple the manifold relation map. We speculate that patches in the same image or at the same position are related more closely and provide most of the useful information. Hence, only relation of patches in the same images and patches at the same position of different images is taken into consideration in the proposed scheme. We also randomly select a small set of patches as a complement of information. This scheme can reduce the resource requirement greatly (see our analysis in Section 3.2) while training a well-performed student.\n\n> **Limitation:** More quantitative evaluation and examples should be provided in the Complexity analysis section.\n\n**Response to the limitation:** Thanks for your valuable comments. The original computational complexity and memory usage of a manifold relation map are $\\mathcal{O}(B^2N^2D)$ and $B^2N^2$, respectively, where $B$ is the batch size, $N$ is the patch number, and $D$ is the embedding dimension. After decoupling, the two terms becomes $\\mathcal{O}(BN^2D+B^2ND+K^2D)$ and $BN^2+NB^2+K^2$, respectively. Ignoring the lower order terms, both the computational complexity and memory usage are reduced by nearly $BN/(B+N)$ times. We will add the quantitative evaluation in the revised paper.", " Thanks for your valuable comments, we have updated the rebuttal revision accordingly.\n\n> **Question 1:** Why does the Soft Distillation (3rd row) in Table 3 improve only 0.3% compared with the 2nd row?\n\n**Response to question 1:** We speculate that there may be some misunderstanding about notations of Table 3, and we will refine the expression to clarify them in our revision paper. 
Our baseline, the first line with three $\times$ marks, is the method proposed in [1], which uses hard labels from the teacher to perform knowledge distillation. The result in the original 2nd line with \"Soft distillation $\times$\" does not indicate that no distillation scheme is used, but that a hard logits label based knowledge distillation method is adopted. The $0.3\%$ improvement is achieved by comparing with the hard logits label distillation method. We update the notation from \"Soft distillation\" to \"Logits label\" in Table 3, and we further design two ablation experiments to evaluate the impact of the soft logits label (2nd & 5th line):\n\n| Manifold Distillation | Logits label | Fixed Depth | Top-1(%) |\n| :-------------------: | :----------: | :----------: | :------: |\n| $\times$ | Hard | $\times$ | $74.5$ |\n| $\times$ | Soft | $\times$ | $75.0$ |\n| $\times$ | Hard | $\checkmark$ | $75.5$ |\n| $\times$ | Soft | $\checkmark$ | $75.8$ |\n| $\checkmark$ | Hard | $\checkmark$ | $75.9$ |\n| $\checkmark$ | Soft | $\checkmark$ | $76.5$ |\n\nThe results in this new 2nd line show that the soft logits label itself brings a $0.5\%$ accuracy improvement compared with hard logits. Moreover, when working together with other modules, the soft logits label still helps improve the student accuracy.\n\n> **Question 2:** The proposed manifold-based KD still belongs to a subset of KD, but why the authors separate the proposed MD from the original KD, in the caption of Table 2?\n\n**Response to question 2:** Thanks for the valuable comments. The separation is just for clear notation. The proposed method surely belongs to a subset of KD. Actually, the method notated by \"Hard\" also belongs to this subset. The sign \"KD\" in Table 2 does not denote the generalized meaning of knowledge distillation, i.e., the model compression task, but refers in particular to the original distillation method proposed by Hinton *et al.* [2]. We will update the \"KD\" to \"Soft\" to eliminate the ambiguity.\n\n> **Question 3:** I appreciate the authors admit the failure of distillation with L_inter, but why do not you re-run the experiment and report a new number?\n\n**Response to question 3:** Thanks for your valuable comments. When preparing our submission, we used mixed precision training following the DeiT codebase and the experiment failed to converge (NaN at epoch 126). During the rebuttal, we adopted a full precision setting and re-ran the experiment, where the NaN loss problem is fixed, and the student achieves $75.9\%$ Top-1 accuracy. We speculate that only transferring inter-image relation maps leads to an unstable training process, since these relations are highly related to randomly sampled image batches. We will report the new result in our revised paper.\n\n> **Question 4:** Why missing semantic segmentation experiments as an extension of the downstream tasks for the proposed new KD?\n\n**Response to question 4:** Thanks for your valuable suggestion. We have designed experiments on the semantic segmentation task with the ADE20K dataset. 
The results are as follows:\n\n| model | #params (M) | FLOPs (G) | mIoU |\n| :------------------------------------- | :---------: | :-------: | :-----: |\n| (Teacher) Swin-Small+UPerNet | $81$ | $1038$ | $47.64$ |\n| (Student) Swin-Tiny+UPerNet | $60$ | $945$ | $44.51$ |\n| (FitNet Distilled) Swin-Tiny+UPerNet | $60$ | $945$ | $44.85$ |\n| (Manifold Distilled) Swin-Tiny+UPerNet | $60$ | $945$ | $45.66$ |\n\nFor more implementation details, please refer to Section 4.5 in our revised paper.\n\n> **Limitation:** The authors should add discussions about societal impact. For other limitations, please refer to the question part.\n\n**Response to the limitation:** Our work does not present any foreseeable societal consequence. We will add the societal impact discussion in our rebuttal revision.\n\n**Reference:**\n\n[1] Touvron, Hugo, et al. \"Training data-efficient image transformers & distillation through attention.\" *International Conference on Machine Learning* (2021).\n\n[2] Hinton, Geoffrey, et al. \"Distilling the knowledge in a neural network.\" *arXiv preprint arXiv:1503.02531* (2015).", "**Response to weakness 1:** Only using the patch merging strategy can indeed reduce the computational complexity, but it is not effective and will lead to a worse top-1 accuracy. Following the analysis in our paper, the computational complexity of the original manifold map is $\mathcal{O}(B^2N^2D)$, where $N=H\times W$ is the total number of image patches. If we divide the intermediate feature into $\frac{H}{n}\times \frac{W}{n}$ non-overlapping patches, and merge each patch of size $n\times n$, the complexity becomes $\mathcal{O}(\frac{B^2N^2D}{n^2})$. In our paper, we analyze the complexity on the DeiT model, and show that the decoupling method approximately reduces the computational complexity by a factor of $80\times$. To achieve a similar compression rate, the patch merging window-size $n$ should be set to $9$, which means that there will be only $4$ remaining patches after merging an image with $196$ patches. However, since we aim to utilize the fine-grained patch-level information, such an aggressive merging setting greatly ruins the fine-grained information. We compare the decoupling method and the patch merging method with a DeiT-Tiny student, and the results are listed below:\n\n| method | FLOPs of a single manifold map | Top-1(%) |\n| :------------------------ | :----------------------------: | :------: |\n| only patch merging, $n=7$ | $4.9$G | $74.8$ |\n| only decoupling | $3.0$G | $76.5$ |\n\n(We set $n=7$ here since $\sqrt{196}=14$.)\n\nIn our paper, the manifold map decoupling is designed to reduce the complexity of capturing information at both the batch and patch levels, and it is an intuitive implementation to capture the relations between patches in the same image or patches at the same position of multiple images. The patch merging is just an auxiliary strategy to deal with cases where the patch number is too large, e.g., the Swin Transformer, which leads to an unaffordable computation even after decoupling.\n\n**Response to weakness 2:** The randomly sampled relation map serves as complementary information at the batch and patch levels. Admittedly, the first two decoupled maps are enough for knowledge transfer, but these two maps still ignore relations across patches in different images and different positions. Although the ignored relations are less important intuitively, directly discarding all of them will cause a performance drop. 
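For intuition, a back-of-the-envelope check of the cost terms discussed in these responses follows. The values of B, N, D, and the sampled subset size K are illustrative assumptions in a DeiT-style setting, not the exact training configuration.

```python
# Toy sanity check of the complexity reduction claimed above.
B, N, D, K = 128, 196, 192, 192

full_map = B**2 * N**2 * D                           # O(B^2 N^2 D)
decoupled = B * N**2 * D + B**2 * N * D + K**2 * D   # O(BN^2D + B^2ND + K^2D)

print(f"full map:  {full_map:.3e} mult-adds")     # ~1.2e11
print(f"decoupled: {decoupled:.3e} mult-adds")    # ~1.6e9
print(f"reduction: {full_map / decoupled:.1f}x")  # ~77x
print(f"B*N/(B+N): {B * N / (B + N):.1f}")        # ~77.4, matching the estimate
```

With these assumed values the measured ratio lands close to the dominant-term estimate $BN/(B+N)$, consistent with the roughly $80\times$ reduction quoted for the DeiT setting.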
To strike the trade-off between the student performance and the computational cost, we randomly sample a subset of these relations to provide more structural information and imbue the student with this relation map. In Table 5, the results prove the effectiveness of the randomly sampled relation map.\n\n**Response to weakness 3:** The proposed method is robust to various hyper-parameters. In our experiments, we conduct an empirical study of hyper-parameters with a CaiT-Small teacher and a DeiT-Tiny student. We compare different hyper-parameter settings and provide the results in Table 6, where the results fluctuate moderately even when we enlarge or shrink one of the weights by two times. Then we adopt the same weight setting for all teacher-student pairs, and every student outperforms its counterpart trained with the vanilla KD method. In practice, users can directly adopt our empirical hyper-parameter setting, and this will guarantee a decent performance.\n\n**Response to weakness 4:** We carefully ablate the influences of different layer selecting schemes in Table 4, and we find that selecting layers from both the shallow and the deep part of a model achieves the best result. Therefore, given a model with $L$ layers in total, we can select layers at the shallow/deep part of the model indexed by {$1, 2, ...,k,L-k+1,...,L-1, L$}. This scheme is general and can be easily applied to any teacher-student pair, as long as the teacher and the student adopt the same $k$ setting. Take $k=3$ as an example: we select layers {1,2,3,22,23,24} from the CaiT-small teacher ($L=24$) and layers {1,2,3,10,11,12} from the DeiT-Tiny ($L=12$) student.\n\nTo evaluate the reliability of the proposed selecting scheme, we further conduct the ablation study on a Swin-Small teacher and a Swin-Tiny student; the corresponding results are as follows:\n\n| Scheme | Teacher layers | Student layers | Top-1(%) |\n| :----------: | :----------------------: | :-------------------: | :------: |\n| Shallow | {1, 2, 3, 4, 5, 6} | {1, 2, 3, 4, 5, 6} | $81.9$ |\n| Deep | {19, 20, 21, 22, 23, 24} | {7, 8, 9, 10, 11, 12} | $81.8$ |\n| Uniform | {4,8,12,16,20,24} | {2,4,6,8,10,12} | $82.1$ |\n| Shallow/Deep | {1,2,3,22,23,24} | {1,2,3,10,11,12} | $82.2$ |\n\nWe will add these results to the supplementary material due to the limited pages.", "Thanks for your valuable comments, we have updated the rebuttal revision accordingly.\n\n> **Overall comment:** the manifold map is like a kind of structure information in the pixel-level tasks.\n\n**Response to the overall comment:** We acknowledge that the manifold map is similar to a kind of structure information to some extent, and distilling structural information has always been an effective method in structured prediction problems such as detection and segmentation. However, our manifold map is different from conventional KD solutions for dense prediction in the following two aspects: (i) Utilizing information at both the batch and patch levels. Previous methods only consider the structural information in a patch-level-like way, but ignore the information at the batch level. For example, [1] proposes a \"pair-wise\" distillation to help the student preserve the affinity relation in a local region of size $\frac{H'\times W'}{\beta} \times \alpha$, and [2] proposes to distill on a $HW \times HW$ pixel-level relation map. Our proposed manifold map can fully exploit the information inside batch and patch simultaneously, which is able to transfer the teacher knowledge with a higher fidelity. 
(ii) Decoupling the manifold space to simplify the computational complexity. Directly leveraging previous \"pixel-wise\" distillation methods for each patch from a transformer brings a huge computational burden. Therefore, we propose to use three decoupled terms to describe the manifold space and achieve satisfactory results, which is rarely explored in previous structural information-based KD.\n\nIn addition, we present some practical tips for distilling transformer-based architectures, such as patch merging, fixed-depth, and layer selecting schemes.\n\n**Reference:**\n\n[1] Liu, Yifan, et al. \"Structured knowledge distillation for dense prediction.\" *IEEE transactions on pattern analysis and machine intelligence* (2020).\n\n[2] Shan, Yuhu. \"Distilling pixel-wise feature similarities for semantic segmentation.\" *arXiv preprint arXiv:1910.14226* (2019).", "The authors apply knowledge distillation on vision transformers to learn a more efficient one, so that it could be deployed on edge devices like cell phones and smart watches. \n\nConsidering that knowledge distillation has been fully investigated in neural networks, the authors utilize the patch-level information and propose a fine-grained manifold distillation method. \n\nTo achieve it, the authors use a kind of manifold learning-based KD method, which supports mismatched embedding dimensions. And to utilize the patch-level information in ViT, the authors propose a fine-grained manifold distillation method. The aim is to teach the student layer to output features having the same patch-level manifold structure as the teacher layer.\n\nAnd in order to simplify the computation, the authors decouple a manifold relation map into three parts, which are the intra-image relation map, the inter-image relation map and the randomly sampled relation map.\n\nTogether with the patch merging strategy, the computation complexity can be further reduced. First of all, the motivation of this paper is clear. The authors want to do knowledge distillation to learn a smaller and more efficient vision transformer. And as it is applied in the vision transformer architecture, the authors consider preserving the relationships among patches.\nSo, the afterwards proposed schemes are just reasonable. There are mainly two strategies, both of which are proposed to reduce the computation complexity. One is to use three kinds of maps to replace the manifold relation map. The other is to merge patches.\n\nSo here comes the weakness of this paper: it doesn't surprise me a lot. It just looks like a normal solution. In distillation, the manifold map is like a kind of structure information in the pixel-level tasks.\n\nTo be more specific, I wonder:\n1. If we only use the patch merging strategy, it could reduce the computation complexity a lot. So what is the necessity of uncoupling the map?\n2. And when uncoupling the map, what is the exact meaning of the randomly sampled relation map? I think that the first two kinds may work.\n3. And in Equation 10, three weights are introduced for balancing each term. When it is applied in each block, I think choosing the weights for each block is a complex task.\n4. In lines 120-121, the authors say the teacher-student layers are manually selected, is it reliable enough? see weakness Yes.", "This paper works on improving manifold learning-based knowledge distillation when using lightweight vision transformer models. 
The authors proposed a patch-level fine-grained manifold distillation method by matching the intermediate features between student and teacher in a manifold space, and further decoupled the matching loss to reduce the computational complexity.\n\nThe proposed approach is evaluated on the ImageNet-1K classification task and the COCO2017 object detection task and shows better performance than other baselines. Ablation studies are done on the four design components. Strengths: The evaluation and ablation study is relatively comprehensive, covering different downstream tasks and proposed design components.\n\nWeakness: The efficiency of the proposed method is not well supported due to the lack of discussion/details/evaluation on common public benchmarks. Is there a trade-off between the scheme for reducing computational complexity and the performance on downstream tasks? More explanation and reasoning should be done here. More quantitative evaluation and examples should be provided in the Complexity analysis section.", "The authors found that the existing vanilla distillation in ViT cannot work with different embedding dimensions between the teacher and student model. Moreover, the authors argue that the previous ViT distillation process is conducted at a coarse level.\nTo resolve the above issues, they propose a fine-grained patch-level manifold distillation method. \nBy decomposing the distillation process into inter and intra parts, the authors found the results have better accuracy with negligible computational overhead.
 1. originality:\nManifold learning-based KD, which does not require matching dimensions among teacher and student networks, was first implemented in ViTs.\nTo reduce the computational complexity, the relation maps are decomposed into three parts for efficiency.\n2. quality:\nThe paper is well written, with detailed experiments and ablation studies with other methods and the proposed variants.\n3. clarity:\nThe paper consists of an illustration of the decoupled manifold relation map and text explanations of the proposed methods.\n4. significance:\nThe paper solves the problem of distillation across different hidden dimensions in ViTs, reduces computational cost compared with its baseline, and improves the distilled accuracy of previous ViT+distillation methods. 1. Why does the Soft Distillation (3rd row) in Table 3 improve only 0.3% compared with the 2nd row?\n2. The proposed manifold-based KD still belongs to a subset of KD, but why do the authors separate the proposed MD from the original KD in the caption of Table 2?\n3. I appreciate that the authors admit the failure of distillation with L_inter, but why not re-run the experiment and report a new number? \n4. Why are semantic segmentation experiments missing as an extension of the downstream tasks for the proposed new KD? The authors should add discussions about societal impact. \nFor other limitations, please refer to the question part.", " This paper presents a fine-grained patch-level distillation method for vision transformers. Specifically, the proposed method matches the student and teacher models in the patch-level manifold space. The proposed method further decouples the relation map into three terms to solve the high computational complexity brought by the patch-level distillation. Experimental results on various vision tasks and benchmarks demonstrate the effectiveness of the proposed method. Strengths:\n1. The authors propose a manifold learning-based knowledge distillation method for vision transformers to match patch-level manifold information between teacher and student models.\n\n2. The authors decouple the relation map into three terms, reducing the computational complexity in computing the patch-level matching loss.\n\n3. Extensive experiments demonstrate the effectiveness of the proposed method.\n\nWeaknesses:\n1. The contribution of the paper is limited. The proposed soft distillation has been proposed in [1]. Relying on the proposed patch-level distillation loss, the authors simply combine [1] to learn a compact vision transformer. Moreover, the proposed fixed depth seems to be an empirical trick to adjust the regularization strength.\n\n2. The proposed method belongs to relation-based knowledge distillation [2]. What is the difference between the proposed method and existing relation-based knowledge distillation methods [3][4]? It would be better for the authors to provide more discussions.\n\nReference:\n[1] Distilling the knowledge in a neural network. arXiv 2015.\n[2] Knowledge Distillation: A Survey. IJCV 2021.\n[3] A gift from knowledge distillation: Fast optimization, network minimization and transfer learning. CVPR 2017.\n[4] Self-supervised knowledge distillation using singular value decomposition. ECCV 2018.\n 1. Some notations are confusing. For example, in Section 3.3, the authors use the same notation $F$ for the feature maps before and after reshaping/merging operations. Please make the notations more distinguishable.\n\n2. 
In Table 2, for different students, the authors only compare the proposed method with either the “Hard” or “KD”. It would be better for the authors to compare the proposed method with both “Hard” and “KD”, which would strengthen the paper.\n\n3. The authors manually select teacher-student layers as distillation pairs. However, it is unclear how the number of selected layers would affect the performance. Please provide more discussions and experimental results.\n\n4. The authors introduce a pair of hyper-parameters $(H^{\\prime}, W^{\\prime})$ for patch merging. However, the effect of different values of $(H^{\\prime}, W^{\\prime})$ is unclear. It would be better for the authors to provide more discussions and experiments, which would strengthen the paper.\n\n5. In Table 5, the authors state that training with only inter-image patch-level distillation loss fails because of a nan loss value. In this case, how to obtain the Top-1 accuracy? Why does the model fail to converge? It would be better for the authors to provide more explanations.\n\n6. Minor issues\n\n(1) In Line 3 of Page 1, “... their ...” should be “... them ...”.\n\n(2) In Line 42 of Page 2, “... result ...” should be “... results ...”.\n\n(3) In Line 84 of Page 3, “train ...” should be “trained ...”.\n\n(4) In Line 123, the notations for features of the student and teacher layer should be $F_S$ and $F_T$, respectively.\n\n(5) In Line 143, the dimension of $F_r$ should be $K \\times D$.\n\n(6) In Line 172 of Page 6, “... that and a ...” should be “... that a ...”.\n\n(7) In Line 198 of Page 6, “... model.” should be “... models.”.\n\n(8) In Line 208 of Page 7, “... grad ...” should be “... grid ...”.\n\n(9) In Line 228 of Page 8, “... demonstrates ...” should be “... demonstrate ...”.\n\n(10) In Line 232 of Page 8, “... result ...” should be “... results ...”.\n\n(11) In Page 8, the reference position of Tables 5 and 6 seems reversed. The authors do not provide the potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 6, 7, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3, 4 ]
[ "mkAA-JgBjbg", "yhKmzpoc1pY", "gpXagSTCwA", "xKDm2uFmb1", "yhKmzpoc1pY", "d-3-M9YMkdu", "XdOoL1Gvvqh", "SeRKYY242C", "ZIWxACZ1yGs", "1A_wLFZ8axy", "MfL3CtHRz5t", "6iVBqGI4WD", "yhKmzpoc1pY", "nips_2022_crFMP5irwzn", "nips_2022_crFMP5irwzn", "nips_2022_crFMP5irwzn", "nips_2022_crFMP5irwzn" ]
nips_2022_P_eBjUlzlV
On the Limitations of Stochastic Pre-processing Defenses
Defending against adversarial examples remains an open problem. A common belief is that randomness at inference increases the cost of finding adversarial inputs. An example of such a defense is to apply a random transformation to inputs prior to feeding them to the model. In this paper, we empirically and theoretically investigate such stochastic pre-processing defenses and demonstrate that they are flawed. First, we show that most stochastic defenses are weaker than previously thought; they lack sufficient randomness to withstand even standard attacks like projected gradient descent. This casts doubt on a long-held assumption that stochastic defenses invalidate attacks designed to evade deterministic defenses and force attackers to integrate the Expectation over Transformation (EOT) concept. Second, we show that stochastic defenses confront a trade-off between adversarial robustness and model invariance; they become less effective as the defended model acquires more invariance to their randomization. Future work will need to decouple these two effects. We also discuss implications and guidance for future research.
Accept
This paper considers the effectiveness of stochastic preprocessing methods at achieving adversarial robustness. It shows empirically that the common Expectation of Transformations attack is not necessary to break many such defenses, as these defenses are vulnerable to standard PGD attacks when the amount of randomization is small. In a specific setup, the authors prove a trade-off between the utility and robustness of randomization defenses, and demonstrate on real data sets that such a trade-off exists for two randomized defenses (Barrage of Random Transforms and randomized smoothing). Although there is concern about the lack of clear impact on the development of future defense schemes, the reviewers found the message and empirical results of the paper to be illuminating.
train
[ "-H0w6Wzd9pc", "O1YQn4q5sW", "tZrpTLf0rm6", "8Yj2uIrRjd0", "v68GFJVNQaB", "yXy_mfn7BnB", "VMMb05-yXFl", "8wpq_06uv_S", "k2srZV3h2EH", "0nagCvU9QZT", "p9jslyzMVGK", "63Cry6fZp8o", "tr7DOUtqQgX", "i1vRufZ61gt", "ubn_atE0RGr", "IuOL07UHv1R", "gUx6_iCNaeV", "nKRRBEEzeem" ]
[ "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We appreciate the reviewer's detailed review and positive comments. We will incorporate these discussions and additional experimental results into the main paper and provide more details in the appendix.", " We appreciate the reviewer's detailed review and positive comments. We will incorporate these discussions into the main paper and provide more details in the appendix.", " Thank the author for providing detailed rebuttals. My major concerns are addressed. I raise my score and tend to accept this paper.", " First, I would like to apologize for the delay in responding, and I thank the authors for their thoughtful responses.\n\nAfter having read the other reviews and the responses, I believe that my main and only concern still stands. However, the other questions I posted are reasonably addressed, or at least, acknowledged. I hope that the authors incorporate the explanations they provided into the final version of the paper. Overall, I believe that this paper meets the standard of NeurIPS publications and would benefit the overall community and other researchers. Therefore, I decided to increase my score from 5 to 6.", " We would like to thank you again for your detailed review and insightful comments. We hope our responses can adequately address your concerns and encourage you to reconsider the score. We sincerely look forward to further discussions if you have any questions.", " We would like to thank you again for your detailed review and insightful comments. We hope our responses can adequately address your concerns and encourage you to reconsider the score. We sincerely look forward to further discussions if you have any questions.", " Just for the record, I've done so.", " Dear Program Chairs, Area Chairs, and Reviewers,\n\nWe would like to thank all reviewers for their time and very much appreciate their assessment of our work as *\"interesting yet under-explored\"* (SAKU, G9xL, wZxu) with *\"rigorous and comprehensive results\"* (SAKU) that are *\"illuminating for the wider community\"* (G9xL). The concerns are mainly focused on \"the findings are expected\" and \"the final part questions\" of this paper. We summarize and highlight our responses to these concerns below.\n\n**Q1: Findings are expected.**\n\nReviewer wZxu raises a concern that our findings are \"expected.\" While this concern is valid, we want to kindly note that such \"expected findings\" have not been well recognized by the research community (Q2). In fact, it is precisely this *gap* between the community's effort and the reviewer's correct understanding (of \"expected\" findings) that our work aims to bridge. Hence, our main contribution is to *provide a theoretical underpinning for the \"expected ineffectiveness\" of stochastic pre-processing defenses.* This is a novel insight that has not been rigorously studied.\n\n**Q2: DiffPure at ICML this year.**\n\nReviewer G9xL brings up an insightful [discussion](https://openreview.net/forum?id=P_eBjUlzlV&noteId=tr7DOUtqQgX) about DifffPure, a new stochastic pre-processing defense published at ICML this year (after our submission). This new defense is strong evidence showing that the community continues improving defenses through more complicated randomized transformations. Fortunately, this new defense fits our model, so we can use our findings to identify concerns with the way its robustness is achieved without needing to design adaptive attacks. 
We hope this discussion can help solidify the novelty and impact of our work from a different perspective.\n\n**Q3: The final part questions.**\n\nWhile reviewers endorse our discussion section, they expect to see more implications for future research (SAKU) and systematic guidance to design attacks and defenses (wZxu). We understand that our discussion section was not as comprehensive as we hoped, and we very much appreciate the reviewers for bringing up these insightful questions. We have provided an in-depth discussion of these questions in our response to Reviewers [SAKU](https://openreview.net/forum?id=P_eBjUlzlV&noteId=i1vRufZ61gt) and [wZxu](https://openreview.net/forum?id=P_eBjUlzlV&noteId=63Cry6fZp8o).\n\nWe hope our responses can adequately address the reviewers' concerns and encourage them to reconsider their scores. We sincerely look forward to further discussions with the reviewers if they have any questions.\n\nBest wishes,\n\nAuthors of Submission 3796", " We appreciate you bringing up the insightful discussion as it is an excellent fit for our motivation. We will incorporate these discussions into the main paper and provide more details in the appendix. We hope that this adequately addresses your concerns and encourages you to raise the score further.", " Thank you for the additional explanations and sorry for springing this new information on you during the rebuttal phase. Including the provided discussion about DiffPure in a camera-ready version / revision would already be meaningful to me, even without additional experiments, and would raise my score of this submission further. \n\nOf course, additionally verifying that the proposed analysis correctly predicts that DiffPure is vulnerable would be even more impactful, but, I agree, this is tough to engineer within a short amount of time given the computational requirements of the defense.", " Thank you for your insightful comments. Our detailed response to your major concerns is below. \n\n**Q1: The findings are expected.**\n\nWhile our findings are expected, they have not been well recognized by the research community (detailed below). This problem is under-explored and lacks theoretical support. *Our main contribution lies in bridging this gap with a theoretical underpinning for the \"expected ineffectiveness\" of stochastic pre-processing defenses.* It provides the key insight that future research should (at least try to) abandon this assumption.\n\nBelow we highlight the significance of our contributions.\n\n**1. The \"expected\" findings have not been well recognized by the research community.**\n\nThe research community continues improving defenses through more complicated transformations. For example, there is a new stochastic pre-processing defense DiffPure [1] at ICML this year (published after our submission). This defense has a complicated solver of stochastic differential equations (SDE) and requires high-end GPUs with 32 GB of memory [2]. Our initial experiment shows that it takes several hours to attack even one batch of 8 CIFAR10 images on an Nvidia RTX 2080 Ti GPU with 11 GB of memory, and we received an out-of-memory error when attempting ImageNet with batch size 1. Because of these complications and computational costs, fully understanding DiffPure’s robustness requires substantially more effort than a previous stochastic pre-processing defense BaRT, which was only broken after 3 years of its publication.\n\n[1] Diffusion Models for Adversarial Purification. ICML 2022.\n\n[2] https://github.com/NVlabs/DiffPure\n\n**2. 
Our theoretical results have the potential to include newer defenses like DiffPure.**\n\nDiffPure has two consecutive steps:\n1. Forward SDE adds noise to the image to decrease invariance. The model becomes more robust (Eq. 5) due to shifted input distribution.\n2. Reverse SDE removes noise from the image to recover invariance. The model becomes less robust (Eq. 6) due to recovered input distribution.\n\nThese two steps are consistent with our characterization of stochastic pre-processing defenses in Section 5. While our submission mainly focused on trained invariance (through model fine-tuning), an auxiliary denoiser (like Reverse SDE) can achieve a similar notion of invariance. Hence, we expect our arguments about the robustness-invariance trade-off to hold here as well.\n\n**3. Our findings raise concerns with the way DiffPure claims to obtain robustness.**\n\nThe above discussion finds no evident difference between DiffPure and our model. When the Reverse SDE is perfect, we should achieve full invariance (Eq. 7) and expect no improved robustness — attacking the whole procedure is equivalent to attacking the original model (if non-differentiable and randomized components are handled correctly). Hence, our findings raise concerns with the way DiffPure claims to obtain robustness.\n\nDriven by these concerns, we carefully reviewed DiffPure’s evaluation and identified red flags.\n1. They only used 100 PGD steps and 20 EOT samples in AutoAttack. This setting is potentially inadequate based on our empirical results (e.g., Tab. 2). Even breaking a less complicated defense requires far more steps and samples.\n2. Previous purification defenses cannot prevent adversarial examples on the manifold of their underlying generative model or denoiser [3]. However, DiffPure did not discuss this attack, i.e., whether it is possible to find an adversarial example of the diffusion model such that it remains adversarial (to the classifier) after the diffusion process. This strategy is different from attacking the whole pipeline with BPDA and EOT.\n\nThese red flags suggest that there is still room for improving DiffPure’s evaluation.\n\n[3] Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML 2018.\n\n**4. Our findings help to mitigate the challenges of robustness evaluation.**\n\nWe cannot finish the evaluation of the above discussions within the short rebuttal period, mainly due to the complicated nature of stochastic pre-processing defenses and their high computational costs. However, this challenge is exactly what our work aims to mitigate — we can identify concerns with the way robustness is achieved without needing to design adaptive attacks, and our findings have motivated us to identify several red flags in their robustness evaluation.\n\nLastly, while the findings are expected, our theoretical results increase the confidence of future research towards understanding the robustness of defenses relying on a similar assumption. This problem is under-explored and lacks theoretical support. Our work bridges this gap and helps to mitigate the arms race between attacks and defenses.\n", " **Q2: Systematic guidance to design attacks and defenses regarding randomness.**\n\nThank you for recognizing the value of our discussion section. We also discussed implications for future research in our response to the first reviewer’s Q1. 
Below we elaborate on the specific questions you proposed; we hope these discussions can sharpen our contributions to the community. \n\nFirst, we want to kindly note that systematic guidance for designing attacks and defenses remains an open question, as evident from the difficulty of designing general adaptive attacks and robust defenses [3]. Still, our work suggests critical insights in this direction.\n\nGuidance for attacks:\n1. Attackers aiming to evaluate defenses (i.e., not merely breaking them) should start with standard attacks before resorting to more involved attack strategies like EOT. This helps form a better understanding of the defense’s fundamental weakness.\n2. Randomized pre-processors cannot provide inherent robustness, so an effective attack should exist. Although there has not been a systematic way to design such attacks, our work provides general guidelines to help with this task.\n3. Randomized pre-processors provide robustness by invariance, so attackers can examine the model invariance to check the room for improvements.\n\nGuidance for defenses:\n1. The current use of randomness is not promising. Defenses should decouple robustness and invariance; otherwise, future attacks may break them.\n2. Defenses should look for new ways of using randomness, such as those below or beyond the input space. Below-input randomness divides the input into orthogonal components, like modalities [4] and independent patches [5]. Beyond-input randomness routes the input to separate components, like non-transferable models [6].\n3. Randomness should force the attack to target all possible (independent) subproblems, where the model performs well on each (independent and) non-transferable subproblem. In this case, defenses can decouple robustness and invariance, hence avoiding the pitfall of previous randomized defenses.\n4. Randomness alone does not provide robustness. Defenses must combine randomness with other inherently robust concepts to improve robustness.\n\n[4] Defending Multimodal Fusion Models against Single-Source Adversaries. CVPR 2021.\n\n[5] Certified Robustness Against Physically-Realizable Patch Attacks via Randomized Cropping. ICML 2021 Workshop.\n\n[6] TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness. NeurIPS 2021.\n\n**Q3: Claims about the necessity of EOT are inappropriate.**\n\nThank you for pointing this out; we tried to be very careful about these claims. We have revised our claims to clarify that we do not intend to claim EOT as being unnecessary. Rather, we use this finding to show that adaptive attacks might overestimate the robustness of weak defenses (L51-53).\n\nDespite this, we kindly note that several adaptive attack papers have indeed claimed EOT as necessary (or at least standard). For example, AutoAttack [7] explicitly detects random components and enforces EOT, and other attacks [8] regard EOT as a \"standard technique for computing gradients of models with randomized components.\" Such default settings could worsen the above overestimation of the defense robustness.\n\n[7] Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-free Attacks. ICML 2020.\n\n[8] On Adaptive Attacks to Adversarial Example Defenses. NeurIPS 2020.\n\n**Q4: Visualization of adversarial noises.**\n\nAdversarial noises are smoother when created (1) with EOT or (2) on models with more randomness. 
This observation shows that (1) applying EOT and (2) making models invariant to more randomness both lead to more stable gradients (hence easier to attack and confirm our findings). Please find detailed results and discussions in the revised Appendix E.3.\n\n**Q5: Evaluation of CIFAR10.**\n\nWe test randomized smoothing on CIFAR10, and our observations hold. The model becomes less robust to the same attack after fine-tuning, as shown in the table below. Please find detailed results in the revised Appendix E.4.\n\n| Attack Success Rates | $\\sigma=0.12$ | $\\sigma=0.25$ | $\\sigma=0.50$ | $\\sigma=1.00$ |\n|-------------------:|:-------------:|:-------------:|:-------------:|:-------------:|\n| Before Fine-tuning | 52.1% | 1.1% | 0.0% | 0.0% |\n| After Fine-tuning | **63.1%** | **29.5%** | **18.1%** | **12.3%** |\n", " Thank you for your positive and insightful comments.\n\nThe framework of DiffPure [1] is consistent with our model. Hence, we expect our arguments about the robustness-invariance trade-off to hold here as well: Applying the diffusion preprocessor makes the model invariant to the noise and less robust. We also identify red flags in their robustness evaluation based on our findings. Please note that DiffPure was published after our submission. Our detailed response is below.\n\n**1. DiffPure is the defense that our work expects to avoid.**\n\nAs we indicated in the introduction, a thorough evaluation of stochastic pre-processing defenses typically requires significant modeling and computational efforts. DiffPure is a new example of such defenses — it has a complicated solver of stochastic differential equations (SDE) and requires high-end GPUs with 32 GB of memory [2]. Our initial experiment shows that it takes several hours to attack even one batch of 8 CIFAR10 images on an Nvidia RTX 2080 Ti GPU with 11 GB of memory, and we received an out-of-memory error when attempting ImageNet with batch size 1. Because of these complications and computational costs, fully understanding its robustness requires substantially more effort than a previous stochastic pre-processing defense BaRT, which was only broken after 3 years of its publication.\n\nGiven this challenging arms race between attacks and defenses, our work provides empirical and theoretical evidence to show that stochastic pre-processing defenses are fundamentally flawed. They cannot provide inherent robustness (like that from adversarial training) to prevent the existence of adversarial examples. Hence, future attacks may break it. As a result of these findings, future research should look for new ways of using randomness, such as those discussed in L322-328.\n\n[1] Diffusion Models for Adversarial Purification. ICML 2022.\n\n[2] https://github.com/NVlabs/DiffPure\n\n**2. DiffPure matches our theoretical model.**\n\nDiffPure has two consecutive steps:\n1. Forward SDE adds noise to the image to decrease invariance. The model becomes more robust (Eq. 5) due to shifted input distribution.\n2. Reverse SDE removes noise from the image to recover invariance. The model becomes less robust (Eq. 6) due to recovered input distribution.\n\nThese two steps are consistent with our characterization of stochastic pre-processing defenses in Section 5. While our submission mainly focused on trained invariance (through model fine-tuning), an auxiliary denoiser (like Reverse SDE) can achieve a similar notion of invariance. Hence, we expect our arguments about the robustness-invariance trade-off to hold here as well.\n\n**3. 
Our findings raise concerns with the way DiffPure claims to obtain robustness.**\n\nThe above discussion finds no evident difference between DiffPure and our model. When the Reverse SDE is perfect, we should achieve full invariance (Eq. 7) and expect no improved robustness — attacking the whole procedure is equivalent to attacking the original model (if non-differentiable and randomized components are handled correctly). Hence, our findings raise concerns with the way DiffPure claims to obtain robustness.\n\n**4. DiffPure’s robustness evaluation has red flags.**\n\nDriven by the above concerns, we carefully reviewed DiffPure’s evaluation and identified red flags.\n1. They only used 100 PGD steps and 20 EOT samples in AutoAttack. This setting is potentially inadequate based on our empirical results (e.g., Tab. 2). Even breaking a less complicated defense requires far more steps and samples.\n2. Previous purification defenses cannot prevent adversarial examples on the manifold of their underlying generative model or denoiser [3]. However, DiffPure did not discuss this attack, i.e., whether it is possible to find an adversarial example of the diffusion model such that it remains adversarial (to the classifier) after the diffusion process. This strategy is different from attacking the whole pipeline with BPDA and EOT.\n\nThese red flags suggest that there is still room for improving DiffPure’s evaluation.\n\n[3] Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. ICML 2018.\n\n**5. Summary.**\n\nDiffPure matches our theoretical characterization of previous stochastic pre-processing defense. Thus, we expect our findings to hold here as well. Unfortunately, we cannot finish the evaluation of the above discussions within the rebuttal period due to their high computational costs. If we are able to get results, we will add them during the discussion period.\n\nHowever, this challenge is exactly what our work aims to mitigate — we can identify concerns with the way robustness is achieved without needing to design adaptive attacks, and our findings have motivated us to identify red flags in their evaluation. We hope our work can increase the confidence of future research towards understanding the robustness of defenses sharing a similar assumption.\n", " Thank you for your detailed review and insightful comments. We will address the minor weaknesses and are glad to continue the discussion. Our detailed response to your major concerns is below. \n\n**Q1: The paper seems to miss the final part.**\n\nWe briefly covered a few questions in the discussion section, such as how defenses should utilize randomness and the implications for adaptive attackers. Below we elaborate on the specific questions you proposed; we hope these discussions can sharpen our contributions.\n\n> *What's next? What does this mean to future research?*\n\nOur work suggests that future defenses should decouple robustness and invariance; that is, avoid providing robustness by introducing variance to the added randomness. Otherwise, defenses that shift the input distribution will result in errors, and the observed \"robustness\" is only a result of these errors. These findings imply that future research should (at least try to) abandon this assumption.\n\nThis implication is crucial as the research community continues improving defenses through more complicated transformations. 
For example, there is a new stochastic pre-processing defense DiffPure [1] at ICML this year (published after our submission). This defense has a complicated solver of stochastic differential equations (SDE) and requires high-end GPUs with 32 GB of memory [2]. Our initial experiment shows that it takes several hours to attack even one batch of 8 CIFAR10 images on an Nvidia RTX 2080 Ti GPU with 11 GB of memory, and we received an out-of-memory error on ImageNet with batch size 1. Because of such complications and high computational costs, fully understanding DiffPure’s robustness requires substantially more effort than a previous stochastic pre-processing defense BaRT, which was only broken after 3 years of its publication.\n\n[1] Diffusion Models for Adversarial Purification. ICML 2022.\n\n[2] https://github.com/NVlabs/DiffPure\n\n> *How do we improve defenses? Should we abandon randomized defenses?*\n\nWe should not abandon randomized defenses but utilize randomness in new ways. One promising approach is dividing the problem into orthogonal subproblems. For example, some speech problems (such as keyword spotting) are inherently divisible in the spectrum space, and vision tasks are divisible by introducing different modalities [3], independency [4], or orthogonality [5]. In such cases, randomization forces the attack to target all possible (independent) subproblems, where the model performs well on each (independent and) non-transferable subproblem. As a result, defenses can decouple robustness and invariance, hence avoiding the pitfall of previous randomized defenses.\n\n[3] Defending Multimodal Fusion Models against Single-Source Adversaries. CVPR 2021.\n\n[4] Certified Robustness Against Physically-Realizable Patch Attacks via Randomized Cropping. ICML 2021 Workshop.\n\n[5] TRS: Transferability Reduced Ensemble via Promoting Gradient Diversity and Model Smoothness. NeurIPS 2021.\n\n> *What are concrete settings that this defense works?*\n\nRandomized defenses make the attack harder in the black-box setting (L315-321). However, we cannot find evidence that stochastic pre-processing defenses work in the white-box setting. Other forms of randomness discussed above are more promising. The only exception is randomized smoothing, which remains an effective tool to certify the inherent robustness of a given decision. \n\n**Q2: Theory cannot exclude other forms of randomness.**\n\nWe clarify that we are not making a statement about stochastic defenses in general. Instead, we focus on a popular subclass that leverages randomness to affect the model’s invariance. Other forms of randomness that our theory cannot exclude are exactly what we advocate for future research, like those discussed in Q1.\n\nHowever, we can use our results to argue about newer defenses. For example, DiffPure has two consecutive steps:\n1. Forward SDE adds noise to the image to decrease invariance.\n2. Reverse SDE removes noise from the image to recover invariance.\n\nThese two steps correspond to our characterization of stochastic pre-processing defenses in Section 5. While our submission mainly focused on trained invariance (through model fine-tuning), an auxiliary denoiser (like Reverse SDE) can achieve a similar notion of invariance. 
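Schematically, this two-step structure could look like the illustrative toy sketch below (not DiffPure's actual SDE solver; the model, shapes, and oracle denoiser are assumptions made for demonstration only):

```python
import torch
import torch.nn as nn

def defended_forward(x, base_model, denoiser, sigma=0.25):
    """Two-step stochastic pre-processing: a forward step adds noise
    (decreasing invariance), a reverse step tries to remove it
    (recovering invariance)."""
    x_noisy = x + sigma * torch.randn_like(x)  # forward step
    return base_model(denoiser(x_noisy))       # reverse step, then classify

# Oracle "perfect" reverse step, for illustration only: it restores the
# clean input exactly, so the defended pipeline is fully invariant and
# equivalent to the undefended base model; i.e., the added randomness
# by itself contributes no inherent robustness.
base = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(4, 3, 32, 32)
oracle_denoiser = lambda x_noisy: x  # assumes access to the clean x
assert torch.allclose(defended_forward(x, base, oracle_denoiser), base(x))
```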
Hence, we expect our arguments about the robustness-invariance trade-off to hold here as well.\n\n**Q3: A stronger claim about which defenses might look more promising.**\n\nWe clarify that the main purpose of our evaluation is *not* to find defenses with a better robustness-invariance trade-off, but to show that they do have this trade-off. Such a trade-off implies that the defense uses randomness to affect the invariance and thus cannot provide inherent robustness (like that from adversarial training). A stronger (negative) claim based on our empirical results would be: Any defenses facing the robustness-invariance trade-off are not promising.\n", " **Q4: The robustness-invariance trade-off may be less important in the targeted setting.**\n\nThe reviewer is correct that the defender needs no invariance for the wrong class, and it is hard for our theoretical model (in the binary regime) to cover the multi-class targeted setting. However, our experiments already covered the class-dependent invariance, as our training loss does not penalize the consistent wrong prediction $F_\\theta(x)=F(x)\\neq y$.\n\nFigures 4a and 4c show that the robustness-invariance trade-off does exist in the targeted setting and is more critical than in the untargeted setting. While untargeted attacks obtain ~20% higher success rates after model fine-tuning, the success rates of targeted attacks increase from ~0% to 50-80%. This is because we still need to preserve invariance (or utility) in the targeted setting, and the low invariance is more challenging for targeted attacks. As a result, model fine-tuning makes targeted attacks more effective than untargeted attacks.\n\n**Q5: The trade-off somewhat disappears in Figures 4a and 4c.**\n\nThe reviewer is correct that the attack is suboptimal for large noise. However, we choose this setting *on purpose* to compare the robustness under the *same* attack before and after fine-tuning. Under this setting, we can observe that the same attack (regardless of its strength) that hardly works for the defense (before fine-tuning & low invariance) now becomes more effective (after fine-tuning & high invariance). We can surely run each attack for more iterations and samples, but the current setting suffices to show that the defense provides robustness by explicitly reducing invariance.\n\n**Q6: EOT being unnecessary could be an effect of the adaptive step size from AutoPGD.**\n\nWe are also cautious about this setting. For all motivation experiments in Section 4 and main experiments in Section 6.2, we only use standard PGD with constant step size and no random restarts.\n", "
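For concreteness, here is a minimal sketch of the attack loop referenced above: standard PGD with a constant step size, plus an optional EOT inner loop (`eot_samples=1` recovers standard PGD). This is an illustrative re-implementation under simplified assumptions, not the paper's exact attack code.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=100, eot_samples=1):
    """L_inf PGD with constant step size and no random restarts.
    With eot_samples > 1, gradients are averaged over the model's internal
    randomness (EOT); the total gradient budget is steps * eot_samples, so
    PGD(n*m) and PGD(n)+EOT(m) cost the same number of gradient computations.
    """
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x_adv)
        for _ in range(eot_samples):  # each forward pass resamples the randomness
            loss = F.cross_entropy(model(x_adv), y)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascent step
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project to L_inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                # keep a valid image
    return x_adv.detach()
```

Under a fixed budget, increasing `steps` (more PGD iterations) and increasing `eot_samples` (lower-variance gradient estimates) trade off against each other, which is the $k$ versus $m$ comparison raised in the reviews below.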
This paper contains three main contributions. First, this paper shows that using EoT ($m>1$) is not always necessary to break randomized defenses, i.e., it does not necessarily save the number of gradient computations. Second, it shows a theoretical result in a toyish setting that the robustness of randomized models is at odds with their utility (accuracy). This is not an impossibility result; rather, the authors propose some settings where such a trade-off exists. Lastly, the paper supports the second, previously mentioned theoretical result with empirical ones on two types of defenses (BaRT and randomized smoothing). # Strengths\n1. The paper addresses an interesting yet under-explored problem. Rigorous studies are lacking on randomized models/defenses.\n2. While the setup for the theoretical part is toyish, the results are comprehensive. I would have missed the more interesting results, in my opinion, in the appendix if I didn't check out Appendix C.2. The paper might benefit from bringing up or summarizing some of these results in the main paper.\n3. Experiment setup seems mostly rigorous and up to date with recent works. \n4. The presentation is good. The main messages are all easy to understand, and the results mostly support the claims well.\n\n# Weaknesses\n1. The main and only major weakness is that the overall contribution is slightly limited (justified in a. and b. below). Personally, I find the two main observations in this paper interesting. However, the paper seems to miss the final part, i.e., I find the following questions unanswered: What's next? What does this mean to future research? How do we improve defenses? Should we abandon randomized defenses? What are concrete settings in which this defense works (with some supporting results)? \n a. *Theory*: As mentioned in the summary, there is no guarantee that the trade-off between utility, i.e., invariance, and robustness must hold. The results do cover a reasonable set of settings used by the proposed defenses, but none really exclude the possibility of using other forms of randomness or aggregation methods that may not conform to this trade-off or in general achieve a better trade-off. \n b. *Empirical*: I don't believe that the empirical contribution quite makes up for the limitation in the theoretical part. Additional experiments on more models and defenses might be good. Surely, all defenses are not made equal and so they should not be on the same trade-off curve. Could the authors make a stronger claim about which defense might look more promising? Or maybe a heuristic to properly choose $m$ and $k$ without just trying out a lot of them?\n\nNote that I'm not set in stone about this. The authors and the other reviewers may convince me otherwise.\n\n# Minor Weaknesses\nI think of these as suggestions or room for improvement rather than things I penalize this paper for.\n1. I believe that Pinot et al. [2021] (https://arxiv.org/abs/2102.10875) might be related to this work, at least remotely. This work is not peer-reviewed as far as I can find, so I'm not imposing any penalty. However, I'd recommend citing and discussing their result as well since they also show many solid theoretical results on randomized defenses.\n2. Maybe the comparison between the benefits of the number of PGD steps ($m$) and the number of EoT samples ($k$) can be put in the perspective of stochastic gradient descent. For instance, from the central limit theorem, we know that variance decreases by $1/k$ and $1/\\sqrt{k}$ for standard deviation. We also know that the variance of the gradient estimates affects the convergence rate of SGD (see this [paper](https://cpn-us-w2.wpmucdn.com/sites.gatech.edu/dist/f/330/files/2016/02/NonconvexSA-Revision-9-25-13.pdf)). This might allow one to predict, in theory, the optimal value of $k$ and $m$ given $\\sigma$ and $k \\times m$. \n3. Presentation of Figure 3 and 4: I don't think the epoch views are necessary. I would also recommend plotting the attack success rate or robust accuracy vs clean accuracy to show the trade-off better. \n\n# Very Trivial Weaknesses\n1. Reference [10] and [11] are duplicates.\n2. It's a bit difficult to compare a fixed number of gradient computations ($k \\times m$) in Figure 1 and 2. It just takes another little step to multiply the numbers. 
I don't have a good suggestion for this without significantly changing the figure (e.g., x-axis is just $k \\times m$). 1. What about the robustness-invariance trade-off in the targeted setting? The invariance is needed for accuracy when it represents $F_\\theta(x) = y$, i.e., \"prediction has to be correct after the random transform.\" However, the defender needs *no* invariance for the wrong class, correct? We don't care, almost in all cases, that $F_\\theta(x) = F(x)$ when they are both wrong. The defense can surely use this to its advantage. I expect that the robustness-invariance trade-off is far less important, if it exists at all, in the targeted setting. \n2. In Figure 4a and 4c, we can see that the red and the green solid lines start to diverge at large $\\sigma$. This means that the trade-off somewhat disappears here, i.e., you gain more robustness while the accuracy does not go down. Why does this happen on randomized smoothing and not BaRT? Why is this the case? Could it be that the attack is sub-optimal with large $\\sigma$? It might be good to show that the attack does indeed converge, i.e., that no larger $m$ and $k$ could possibly increase the attack success rate further.\n3. Do all attacks in the paper use AutoPGD? If so, did the authors take into account the observation that EoT being unnecessary could be, at least partially, an effect of the adaptive step size from AutoPGD? This is adequately discussed in Appendix F.", " \"On the Limitations of Stochastic Pre-processing Defenses\" discusses strategies for the adequate evaluation of adversarial attacks against systems defended by stochastic components. Previous work generally considers expectation-over-transformations to be necessary to attack these defenses, but the submission shows that merely running standard attacks with larger step budgets can often be sufficient in overcoming these defenses. The submission supplements this finding with further analysis on the interactions of robustness and invariance with the stochastic nature of these defenses. I like the clarity of writing and presentation of the findings in this work. The grouping of stochastic defenses into defenses with and without meaningful stochasticity is nicely illustrated and exemplified.\n\nThe experimental evaluation underpins these findings for two defenses, barrage-of-transformations and randomized smoothing. Overall I don't have a lot to say here: I liked the evaluations done by the authors and find the results interesting. The overall topic is not groundbreaking, but reinvestigating the properties of stochastic preprocessing is, in my opinion, illuminating for the wider community. \n\nThe only minor weakness I see is that the evaluations center on two classic stochastic defenses. Recently there has been a lot of renewed interest in stochastic defenses, especially based on diffusion processes (for example, Nie et al., \"Diffusion Models for Adversarial Purification\" at ICML this year). I think the impact of this submission could be improved by also considering one such \"newer\" attempted adversarial defense: where would this defense fall in the categorization of stochastic defenses proposed in this work? As described above, how would a stochastic defense based on diffusion processes be categorized? Do the findings of this work for older defenses also hold for these newer stochastic defenses? n.a.", " This work investigates stochastic pre-processing defenses both empirically and theoretically. 
Specifically, the authors first revisit previous stochastic pre-processing defenses and explain why such defenses are broken. Then, the authors study recent stochastic defenses that exhibit more randomness and show that they also face key limitations.\n\n--post rebuttal--\nI thank the authors for providing detailed rebuttals. My major concerns are addressed. I raise my score and tend to accept this paper. Strengths:\n1. This work focuses on an interesting topic, namely, the effectiveness of stochastic defenses.\n2. In general, the paper is well organized.\n\nWeaknesses:\n1. My major concern is the novelty of the findings. The finding that applying EOT is only beneficial when the defense is sufficiently randomized is trivial to me. The other finding is the trade-off between the model’s robustness provided by the defense and its invariance to the applied defense, which is also expected. When the model becomes invariant to the applied randomness, the function modeled by the model is more ‘deterministic’. The randomness is expected to bring less robustness to the model.\n\n2. Some claims made in this paper are inappropriate, to my knowledge, e.g., in Line 46-47. The work [2] shows that EOT can be used to break stochastic defenses. However, the necessity of EOT is not claimed since they mainly aim to show that stochastic defenses can be easily broken with EOT.\n\n3. Besides the success rate, an analysis of created adversarial noises should be presented. What is the difference between the adversarial noises created by PGD(n*m) or PGD(n)+EOT(m)? Similarly, what is the difference between the adversarial noises created on models with different degrees of randomness? The visualization of the noises might reveal more interesting conclusions.\n\n4. It is great that the main experiments are conducted on the ImageNet dataset. It will be better if the experiments on a smaller dataset, e.g., CIFAR10, can be provided. It is not clear if the experimental observation still holds when the input space is smaller.\n I expected systematic guidance to design attacks and defenses regarding randomness from this paper. It will bring more value to the community if the authors can provide one. The one provided in the Discussion section is a good starting point.\n\nI also encourage the authors to provide the expected experiments listed in weaknesses.\n This work provides an analysis of the existing techniques, namely, stochastic defense. A new technique is not the focus of this paper. The only limitation of this work is its novelty." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4 ]
[ "tZrpTLf0rm6", "8Yj2uIrRjd0", "63Cry6fZp8o", "IuOL07UHv1R", "nKRRBEEzeem", "IuOL07UHv1R", "k2srZV3h2EH", "nips_2022_P_eBjUlzlV", "0nagCvU9QZT", "tr7DOUtqQgX", "nKRRBEEzeem", "nKRRBEEzeem", "gUx6_iCNaeV", "IuOL07UHv1R", "IuOL07UHv1R", "nips_2022_P_eBjUlzlV", "nips_2022_P_eBjUlzlV", "nips_2022_P_eBjUlzlV" ]
nips_2022_caH1x1ZBLDR
Distributionally Robust Optimization with Data Geometry
Distributionally Robust Optimization (DRO) serves as a robust alternative to empirical risk minimization (ERM), which optimizes the worst-case distribution in an uncertainty set typically specified by distance metrics including $f$-divergence and the Wasserstein distance. The metrics defined in the ostensible high dimensional space lead to exceedingly large uncertainty sets, resulting in the underperformance of most existing DRO methods. It has been well documented that high dimensional data approximately resides on low dimensional manifolds. In this work, to further constrain the uncertainty set, we incorporate data geometric properties into the design of distance metrics, obtaining our novel Geometric Wasserstein DRO (GDRO). Empowered by Gradient Flow, we derive a generically applicable approximate algorithm for the optimization of GDRO, and provide the bounded error rate of the approximation as well as the convergence rate of our algorithm. We also theoretically characterize the edge cases where certain existing DRO methods are the degeneracy of GDRO. Extensive experiments justify the superiority of our GDRO to existing DRO methods in multiple settings with strong distributional shifts, and confirm that the uncertainty set of GDRO adapts to data geometry.
Accept
The paper proposes a novel distributionally robust optimization formulation leveraging data geometry to construct the uncertainty set. After a lengthy discussion and revision process, the reviewers have reached a consensus acceptance recommendation, which I support. Currently the reproducibility checklist (part 3a) states that the authors submitted code to reproduce their results along with the paper, but I do not see it as part of the supplementary material or as a link. Please provide code with the camera ready submission, or correct the checklist.
train
[ "FgSmng8wexw", "tIp2EvGvYrU", "2LZNJECPt-", "-6QNXzgSTQR", "FXP0PnY_HAj", "1fzyh4tziQB", "ad-x1n85qcr", "mEiDfafqZas", "cG4DwX-5rW4I", "KOmpJ99kv2VW", "VopZSZL8CUS", "5bkYZVz4OAQV", "Tl61RZpCF7_", "tC6kwUS7blv", "PC-pJQudCXr", "uWnaXbdNT87", "3aYAIlBue7c", "vtV6uKnSq8z", "2JH1tYRRgVt", "mf0GwNSyrtL", "DktFov9iZyV", "CZRnOA9M12S", "DPIGA9XSRlf", "eSyFkO2mEPI", "a9oyQCNiri" ]
[ "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to raise my score to 7.", " Thank you for your support! \nWe appreciate your efforts in all the constructive suggestions and discussions that help to improve this paper.", " Thank you for your support! \nThanks to your suggestions, several parts have been improved in the rebuttal revision:\n* We demonstrate the training efficiency in the `Implementation Details` in Section 4 in the main body.\n* We add some discussions on related areas of OOD generalization in Appendix A.1.\n* We also discuss the out-of-manifold generalization in Appendix A.8.", " I would like to thank the authors for the paper revision and the discussions. For the revised version, I would like to raise my rating to accept.", " Thank the authors for their detailed response. All my concerns have been addressed. I confirm my positive feedback on this paper and recommend acceptance.", " Thank you for your support! It’s our pleasure to work with you during the rebuttal.", " We would like to thank the reviewer for your prompt reply and constructive suggestions for the improvement of this paper. **We revise our paper according to your suggestions**, especially the experiment section (we highlight the parts revised in the main body).\nWe summarize our revision as:\n\n* For `Section 4. Experiments`:\n\n **(1)** In Section 4, we record implementation details to clarify the settings of experiments, including the construction of KNN from source data only, loss functions for different tasks, hyperparameters during training, and technical designs to support DNN and GPU parallelization.\n\n \n **(2)** We abandon the toy example as suggested in the 2nd comment so that we focus on inside manifold generalization to clarify the setting (all experiments are the inside manifold setting). Also, half of the experiments are established in a high-dimensional setting ($d \\geq 300$). \n\n **(3)** We put the Selection Bias as the first simulated experiment as suggested in the 2nd comment. We **rewrite** the `Data Generation` of the selection bias experiment from the perspective of sub-population shift to improve readability, and we clarify the `Simulation Settings` of this experiment.\n\n **(4)** In Section 4.1, we add one paragraph (`Discussion on kNN`) to discuss the results of kNN in detail, as suggested in the 3rd comment. We showcase the *visualization results* of kNN in the Selection Bias experiment in **Figure 2**, discuss its accuracy and stability w.r.t. $k$, and present an extreme *failure case* of kNN. And we add the results of GDRO under different choices of $k$ in **Table 1**. \n\n **(5)** In Section 4.1, to support the motivation of this paper: data lying in a low-dimensional manifold is embedded in high-dimensional space, we add one high-dimensional ($d=300$) simulated experiment as suggested by the reviewer, where high-dimensional data lie in a low-dimensional manifold. We report the results of all methods in **Table 2**.\n\n **(6)** In Section 4.2, to improve the readability, we add more details and explanations of the problem setting of our real-world experiments (Ionosphere data and Colored MNIST data).\n\n* For `Section 3. Proposed Method`:\n\n (1) In Section 3.1, we add one paragraph to clarify `how is $G_0$ estimated` to avoid any ambiguity on this.\n\n (2) In Section 3.1~3.2, we carefully refine our writings to avoid the ambiguity caused by notations.\n\n* For `Section 2. 
Preliminaries`: we extensively discuss more related DRO works and their relationship with GDRO.\n\n\nAs for the generalization to data out of the manifold, this work focuses on the inside manifold generalization, which corresponds to our intuitions. The toy example setting assumes some specific invariance structures in it, which is not a general setting to test the out-of-manifold generalization and is low-dimensional. Thus, we agree with the reviewer's suggestion that we *completely abandon the toy example* to avoid confusion, and *replace it with the high-dimensional simulation as suggested*. In this revised main body, **all experiments are the inside manifold setting**. Further, we also discuss the out-of-manifold generalization in Appendix A.8 (due to the space limits).\n ", " Thank the authors for the detailed clarification. My concerns about this work have been well addressed. I am happy to vote for its acceptance. ", " I want to thank the authors for the new response. I create a new post for this reply so that we don't need to click through all the links to show all the replies. I want to summarize my suggestions for improving this paper as follows:\n\n1. Based on the new response, as I understand, the authors want to show two things: (i) GDRO does well inside the manifold (like alpha_V = 1 or -0.1 in the first experiment); (ii) GDRO can even do well outside the manifold, i.e. has good OOD generalization (like alpha_V = -3, -2, -1, 2, 3). These are two very different arguments. Argument (i) corresponds to subpopulation shift where the data support does not change. Argument (ii) corresponds to OOD generalization where the data supports of train and test are different. One of the main reasons why the current experiment section is so confusing is that the authors do not distinguish between these two, so there seems to be a huge gap between the experiments and the authors' motivation. Therefore, I suggest the authors make a clear distinction in the new version.\n\n2. It is probably a good idea to completely abandon the first experiment as it is very confusing because: (i) The same as point 1; (ii) It is low-dimensional (even with 3-D). I think the authors can make Selection Bias their first experiment, which can probably make a better demonstration, but the description of this experiment still needs a lot more improvement.\n\n3. The description of kNN in the authors' new response is good, so I suggest the authors include it together with the visualization in the main body of the paper, because this is an important component of their method.\n\n4. As other reviewers have also pointed out, the writing of this paper really needs huge improvement. I am very familiar with DRO, but I still have quite a few difficulties understanding the paper, and people who are not so familiar with DRO could have even more difficulties.\n\nI am happy to further discuss with the authors. Though I still think that the current manuscript needs huge improvement, I am willing to raise my rating if the authors could update the paper as suggested.", "
We feel sorry for the ambiguity in our formulation, and here we make more demonstrations.\n\nFirstly, we characterize the data distribution in a more clear way, which is strictly equivalent to the original selection mechnism ($P(x_i,y_i)=|r|^{-5*|y_i-\\text{sign}(r)v_i^b|}$).\n$$\nS\\sim \\mathcal{N}(0,2\\mathbb{I}_5),\\quad Y=f(S)+\\mathcal{N}(0,5),\\quad V \\sim \\text{Laplace}(\\text{sign}(r)\\cdot Y, \\ \\ \\frac{1}{5\\ln |r|})\n$$\nNotably that this data generation is exactly equivalent to the original one. And here we can see that $\\text{sign}(r)\\cdot Y$ is the location parameter, and $\\frac{1}{5\\ln |r|}$ is the scale parameter. Therefore, in the training data, we let $r_1=1.7$ and $r_2=-1.3$, which makes $V \\sim \\text{Laplace}(+Y, \\ \\ \\frac{1}{5\\ln 1.7})$ in the major group and $V \\sim \\text{Laplace}(-Y, \\ \\ \\frac{1}{5\\ln 1.3})$ in the minor group. And the manifold structure is similar to the original toy example in that the training data is the union of two sub-spaces. \n\nSecondly, the sign of $r$ controls the center of $V$, but the main difference with the original toy example lies in the source of distributional shifts. The noise scales of $V$ vary between source and target: the center of $V$ does not change, since it remains $+Y$ or $-Y$. And $|r|$ controls how close $V$ is to the center. \nAs for the testing data where $r=-3.0$, we have $V \\sim \\text{Laplace}(-Y, \\ \\ \\frac{1}{5\\ln 3.0})$, which locates at $-Y$ even more closely. The main difference with the original toy example is that the center of $V$ does not change, since it remains $+Y$ or $-Y$. \n", " Here we address the reviewer's remaining concerns over the two simulation experiments (toy example and selection bias) respectively.\n\n#### **1. Toy Example**\n\nIn this experiment, $S$ is the stable feature with a stable relationship with $Y$, and $V$ is the unstable feature whose relationship with $Y$ changes among sub-populations. \n\n**Firstly**, we have to clarify that distributional shifts inside the source manifold is the focus of this paper, and all the other experiments are established in such a setting. Still, to avoid confusion, we supplement the results on the two sub-populations strictly constrained in the manifold of source data ($\\alpha_V=1.0$ and $\\alpha_V=-0.1$) in the following.\n\n| RMSE | Major Group ($\\alpha_V=1.0$) | Test: Minor Group ($\\alpha_V=-0.1$) |\n| ------------ | ---------------------------- | ----------------------------------- |\n| ERM | 0.992 | 3.197 |\n| WDRO | 0.956 | 2.563 |\n| KL-DRO | 1.374 | 1.971 |\n| $\\chi^2$-DRO | 1.365 | 1.981 |\n| GDRO (K=5) | 1.655 | 1.666 |\n\n\n\n**Secondly**, in the original testing scenarios with $\\alpha_V=\\{-3,-2,-1,2,3\\}$, as for the whole data $X=[S,V]^T$, we admit that the testing data falls out of the estimated manifold. However, we perceive it as a more challenging setting where the data generation mechanism of the spurious variable $V$ changes (this makes the testing data fall out of the source manifold), but the stable one still remains the same as training. This setting is a typical setting in OOD generalization [1]. And from the results, we find that our GDRO could deal with this challenging setting. \nAnd we will investigate what will happen if the stable variable $S$ also falls out of the manifold in the added experiment below.\n\n**Thirdly**, we would like to demonstrate why GDRO could deal with the setting where $V$ changes a lot (even when it makes the data out of the source manifold). 
From the results in Table 1 (Est Error), we find that the estimation error of GDRO is quite low, which validates that GDRO could \"differentiate\" the stable variables from the unstable ones. And the resultant prediction model mainly uses the stable variable $S$ for prediction. Therefore, changing $V$ would not affect much. \n\n\n\n**Further**, we admit that for the simplicity of the experimental setting, we make the stable feature $S$ one-dimensional, which does not form a typical manifold structure. Therefore, we revised the toy example into a 3-D version with a simple extension:\n$$\nS_1 \\sim \\mathcal{N}(0,1),\\quad S_2 = S_1 + \\mathcal{N}(0,0.1),\\quad Y=\\alpha_S(S_1+S_2)+S_1^2+\\mathcal{N}(0,0.1),\\quad V=\\alpha_V Y +\\mathcal{N}(0,1).\n$$\nAs in the 2-D case, we set $\\alpha_V$ in the training set as:\n$$\n\\alpha_V=1, \\text{with probability }1-r;\\quad \\alpha_V=-0.1,\\text{with probability }r.\n$$\nand $r$ is set to 0.05.\nIn this setting, the whole manifold of data $X=[S_1,S_2,V]^T$ with arbitrary $\\alpha_V \\in \\mathbb R$ is approximately the plane $S_1=S_2$. The RMSE of different methods are shown in the following table:\n\n| RMSE | Major Group ($\\alpha_V=1.0$) | Minor Group ($\\alpha_V=-0.1$) | $V$ Shift ($\\alpha_V=-1.0$) | $S$ Shift ($\\alpha_V=-0.1$) |\n| ------------ | :--------------------------: | :---------------------------: | :-------------------------: | :-------: |\n| ERM | 1.081 | 3.392 | 6.136 | 3.789 |\n| WDRO | 1.041 | 2.817 | 4.989 | 3.098 |\n| KL-DRO | 1.441 | 2.114 | 3.280 | 4.036 |\n| $\\chi^2$-DRO | 1.409 | 2.196 | 3.536 | 2.450 |\n| GDRO | 1.709 | 1.762 | 1.832 | 2.386 |\n\nAnd in this new setting, we could investigate the aforementioned two kinds of 'out-of-manifold' respectively induced by stable variable $S$ and unstable variable $V$:\n\n* The correlation between stable variables shifts so that data is out of the source: we set $S_2=-S_1+\\mathcal{N}(0,1)$ (named by \"$S$ Shift\");\n* The correlation between label and the unstable variable shifts so that data is out of the source: we set $\\alpha_V=-1.0$ (named by \"$V$ Shift\").\n\nFrom the results, we can see that our GDRO could to some extent deal with the \"unstable variable shift\" setting, while its performance drops a lot when the stable variables fall out of the estimated manifold.\n\n\n[1] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization.", " We would like to thank the reviewer for your prompt reply and constructive suggestions for the improvement of this paper. It's a pleasure for us to further discuss the experimental details with you. \n\n### **Q1 & Q2: KNN's estimation of the source manifold.**\n\n#### **1. How does kNN work**\n**Firstly**, the k-nearest neighbor graph is a fundamental and basic method to represent the data structure. $K$-nearest neighbors graph is constructed by connecting each sample $x_0$ in the source data with another $K$ samples closest to $x_0$. The resulting KNN graph $G_0$ serves as an approximation for the source manifold by capturing local structures within neighborhoods [1]. As demonstrated in Lemma 1 in [1], it follows that one can approximate geodesic distance from $x_i$ to its neighbors by normalizing distances with respect to the distance to the $k$th nearest neighbor of $x_i$, which shows that kNN is able to capture the manifold. And kNN is also used in many fields to model the data manifold. 
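For illustration, such a graph $G_0$ can be built with a few lines of scikit-learn. This is a minimal sketch of the standard exact-search construction, not the implementation used in the paper (which uses exact search only for small datasets and NN-Descent for large ones), and the data matrix here is a placeholder.\n\n```python\nimport numpy as np\nfrom sklearn.neighbors import kneighbors_graph\n\nX = np.random.default_rng(0).normal(size=(1000, 2))  # placeholder source samples\nA = kneighbors_graph(X, n_neighbors=5)               # sparse 0/1 adjacency, K = 5\nG0 = A.maximum(A.T)                                  # symmetrize into an undirected KNN graph\n```\n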
For example, [2] uses kNN to model the single-cell data and achieves good performance in analyzing the single-cell RNA data. \n\n\n**Secondly**, as for our toy example, in Figure 5 in the revised supplementary material, we showcase the visualized examples of $G_0$, where source data lies on two crossing parabolas. It could be observed that kNN consistently manages to distinguish the two sub-populations until $K=500$. Further, empirical results in the table below prove that with $K<500$ GDRO performs stably better than the baseline with small and moderate $K$, except that smaller $K$ leads to slower convergence since sparse graphs restrain the flow of probability weights. In this experiment, we choose $K=5$ with moderate computing resources. \n\n| Method | Test RMSE ($\\alpha_V=-0.1$) |\n| ------------ | --------------------------- |\n| GDRO (K=5) | 1.666 |\n| GDRO (K=10) | 1.665 |\n| GDRO (K=20) | 1.678 |\n| GDRO (K=50) | 1.756 |\n| GDRO (K=100) | 1.812 |\n| GDRO (K=500) | 1.926 |\n| KL-DRO | 1.971 |\n\n**Thirdly**, manifold learning is an area with intensive research. We have to clarify that manifold learning is not the focus of this paper, and GDRO is compatible with any manifold learning method. We do believe that a more accurate estimated data structure with advanced manifold learning algorithms will further boost the performance of GDRO, and we leave it to future work.\n\n[1] McInnes, L., Healy, J., & Melville, J. (2018). Umap: Uniform manifold approximation and projection for dimension reduction. \n\n[2] Dann, E., Henderson, N. C., Teichmann, S. A., Morgan, M. D., & Marioni, J. C. (2022). Differential abundance testing on single-cell data using k-nearest neighbor graphs. Nature Biotechnology, 40(2), 245-253.\n\n\n#### **2. A failure case**\n\nHere we present an extreme case where KNN achieves poor approximation of the data manifold. When $K$ increases to an extremely large number as $K=500$ in the Toy Example, the neighborhood of KNN diffuses and two manifolds start to merge on the graph, in which case GDRO could not distinguish between two sub-populations and its performance degrades as shown in the table above. Actually in Theorem 3.4 of this paper, we have proved that with an infinitely large $K$ GDRO could be reduced to one of the baseline DROs: KL-DRO, which completely ignores data geometry. Still, we have to clarify that KNN and GDRO perform stably well for a large range of $K$.", " First of all, I want to thank the authors for this very detailed response. It is very impressive that the authors write such a detailed response in just one week. Based on this response, I would like to have further discussions with the authors:\n\nQ1-Q2. One of my main concerns is that GDRO could completely fail if the target distribution falls out of the manifold esimated by kNN, and I believe that the accuracy of kNN is crucial for GDRO. Thus, I would suggest the authors (i) use a separate paragraph to describe how kNN works, give some examples of the graph G_0, and even better provide some visualizations of the estimated manifold (e.g. for the 2-D case). (ii) Discuss possible failure cases: what could happen if the manifold is not well estimated? (iii) (Optional) Even better, talk about the sample complexity of manifold estimation by kNN. For example, how many training samples are required to estimate a 20-dimensional manifold in a 2000-dimensional space?\n\nQ3. I still have questions about the experiments. 
For the Simulation Data experiment, yes the training data lies on two 1-D curves (alpha_V = 1 or -0.1). However, as the authors wrote in lines 243-244, for testing data alpha_V is chosen from {-3,-2,-1,1,2,3}. This leads to two questions:\n\n(1) Could the authors clarify what is the estimated manifold in this case? If kNN can estimate the manifold very accurately as the authors claim, then the estimated manifold should be the union of the two 1-D curves, with alpha_V = 1 or -0.1. However, for the testing data, alpha_V can be 3 or -3, which falls out of the estimated manifold, so why does GDRO work? Similarly, for the Selection Bias experiment, why does GDRO work for r = -3.0 while the manifold only contains r = 1.9 and -1.3? (By the way, the setting of Selection Bias is also hard to understand)\n\n(2) Does the model know these six values of alpha_V for testing? If the model only sees the training data, it should not know the alpha_V for the test data, which means that alpha_V can be an arbitrary real number. Thus, I am curious about: (i) Does the model perform well for all alpha_V in [-3,3], such as alpha_V = 2.5? If it does, then this is still a 2-D manifold, not a low-dimensional manifold. (ii) Does it work for very large alpha_V such as alpha_V = 10?", " The reviewer suggests investigating the effect of hyperparameter $K$ in the KNN algorithm. Even though graph learning is not the focus of this paper, we show by experiments that the performances of our GDRO remain stable within a quite broad range of $K$. As is shown in the table below, the accuracy of GDRO remains strong with increasing $K$ until $K \\geq 200$ when GDRO experiences performance decay but still beats the second strongest baseline $\\mathcal{X}_2$-DRO. The experimental setting is the same as the manifold experiment introduced in Q4 and the reported metric is accuracy for testing data.\n\n| Accuracy | $K=10$ | $K=20$ | $K=50$ | $K=200$ | $K=500$ | $\\mathcal{X}_2$-DRO |\n| -------- | ---------------- | ---------------- | ---------------- | ---------------- | ---------------- | ------------------- |\n| $r_1=0.5$ | 0.758$\\pm 0.006$ | 0.768$\\pm 0.005$ | 0.767$\\pm 0.005$ | 0.761$\\pm 0.008$ | 0.758$\\pm 0.010$ | 0.734$\\pm 0.022$ |\n| $r_2=0.0$ | 0.753$\\pm 0.036$ | 0.767$\\pm 0.035$ | 0.756$\\pm 0.035$ | 0.690$\\pm 0.044$ | 0.669$\\pm 0.047$ | 0.644$\\pm 0.025$ |\n\n* Recall that in the training data, $r=0.85$, which means the spurious attribute $A$ is positively correlated with $Y$. And in testing, $r_1=0.5$ means $A\\perp Y$, and $r_2=0.0$ means $A$ is negatively correlated with $Y$, which introduces strong distributional shifts.\n* The reported testing accuracies are averaged over 10 repeated experiments.\n\nThus, the hyperparameter of KNN does not require careful tuning, and any $K$ that is not excessively large suffices. Further, there exist many works analyzing how to select a proper $K$ for KNN [1,2]. In principle, KNN is designed to capture the local structure of the manifold and the neighborhood size $K$ shall not exceed the range where the local Euclidean metric holds. \n\n[1] Zhang, S., Li, X., Zong, M., Zhu, X., & Cheng, D. (2017). Learning k for knn classification.\n\n[2] Barrash, S., Shen, Y., & Giannakis, G. B. (2019, December). Scalable and adaptive KNN for regression over graphs.\n\n\n### **Q6 Performance on Retiring Adults dataset**\nWhen the ratio of the minor group is too low, the performance of GDRO will drop. 
Here we add some results on the Health Insurance Coverage task for the Retiring Adults data.\n\n| Minor Group Ratio | 0% | 1% | 2.5% |\n|---------------------|-------|-------|-------|\n| ERM | 0.485 | 0.490 | 0.490 |\n| WDRO | 0.475 | 0.477 | 0.477 |\n| KL-DRO | 0.485 | 0.496 | 0.498 |\n| $\\chi^2$-DRO | 0.490 | 0.497 | 0.498 |\n| GDRO | 0.476 | 0.503 | 0.527 |", " Thanks for your suggestions.\n\n#### 1. We would like to demonstrate that our Colored MNIST experiment exactly corresponds to the suggested valid setting, because\n* the data is high-dimensional (i.e., 2352-dimensional).\n* the data lie on a low-dimensional manifold, a well-accepted assumption in computer vision. \n\n#### 2. We add one high-dimensional simulated experiment following your advice ($X_{high}\\in\\mathbb{R}^{300}$).\n\nThe data generation process is similar to [1], which is a typical classification setting in OOD generalization. In this setting, we introduce a spurious correlation between the label $Y=\\{+1,-1\\}$ and $A=\\{+1,-1\\}$. We first generate low-dimensional data $X_{low}=[S,V]^T \\in \\mathbb{R}^{10}$ as:\n$$\nS \\sim \\mathcal{N}(Y{\\bf 1}, \\sigma_s^2\\mathbb{I}_5), V \\sim \\mathcal{N}(A{\\bf 1}, \\sigma_v^2\\mathbb{I}_5)\n$$\nand\n$$\nA = Y, \\text{with probability }r; \\quad\nA = -Y, \\text{with probability }1-r\n$$\n\nIntuitively, $r$ controls the spurious correlation between $A$ and $Y$. When $r>0.5$, the spurious attribute $A$ is positively correlated with $Y$, and when $r<0.5$, the spurious correlation becomes negative. Larger $|r-0.5|$ results in a stronger spurious correlation between $A$ and $Y$.\nThen, as suggested, we convert this low-dimensional data to a **high-dimensional** space via:\n$$\nX_{high} = (H X_{low}) \\in \\mathbb{R}^{300}\n$$\nwhere $H\\in\\mathbb{R}^{300\\times 10}$ and the columns of $H$ are linearly independent ($H$ has full column rank). We randomly choose such an $H$ in each run to introduce some randomness.\n\nFor both the training and testing data, we set $\\sigma_s^2=1.0$ and $\\sigma_v^2=0.3$.\nIn training, we set $r=0.85$ (here $A$ is positively correlated with $Y$). \nIn testing, we design two testing environments with $r_1=0.5$ (here $A\\perp Y$) and $r_2=0.0$ (here $A$ is negatively correlated with $Y$) to introduce distributional shifts.\nWe run the experiments 10 times, each time with a new random matrix $H$. We report the testing accuracies of the two testing environments for each method.\n\n| Without label noises | $r_1=0.5$ | $r_2=0.0$ |\n|---------------------|--------------------|---------|\n| ERM | 0.573($\\pm 0.006$) | 0.153($\\pm 0.011$) |\n| WDRO | 0.576($\\pm 0.006$) | 0.159($\\pm 0.012$) |\n| KL-DRO | 0.654($\\pm 0.008$) | 0.340($\\pm 0.015$) |\n| $\\chi^2$-DRO | 0.734($\\pm 0.022$) | 0.644($\\pm 0.025$) |\n| GDRO |**0.768($\\pm 0.005$)** | **0.767($\\pm 0.035$)** |\n\nFurther, as done in our simulated experiments, we add label noises to the training data. Specifically, we randomly sample 200 data points (4% of the training data) and flip their labels. 
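For reference, a minimal NumPy sketch of this generation process is given below. It is our own reconstruction rather than the exact experimental code: the class prior of $Y$ is not stated, so a balanced $Y$ is assumed, and $n=5000$ training points is inferred from the 200 flipped labels being 4% of the training data.\n\n```python\nimport numpy as np\n\ndef sample_env(n, r, h, var_s=1.0, var_v=0.3, flip=0.0, seed=None):\n    rng = np.random.default_rng(seed)\n    y = rng.choice([-1.0, 1.0], size=n)                        # assumption: P(Y=+1) = 0.5\n    a = np.where(rng.random(n) < r, y, -y)                     # A = Y w.p. r, else A = -Y\n    s = y[:, None] + np.sqrt(var_s) * rng.normal(size=(n, 5))  # S ~ N(Y*1, var_s*I_5)\n    v = a[:, None] + np.sqrt(var_v) * rng.normal(size=(n, 5))  # V ~ N(A*1, var_v*I_5)\n    x_high = np.hstack([s, v]) @ h.T                           # lift X_low to R^300 via H\n    if flip > 0:                                               # label noise on a fraction of points\n        idx = rng.choice(n, size=int(flip * n), replace=False)\n        y[idx] *= -1.0\n    return x_high, y\n\nh = np.random.default_rng(42).normal(size=(300, 10))  # one random H per run (full column rank a.s.)\nX_tr, y_tr = sample_env(5_000, r=0.85, h=h, flip=0.04, seed=0)  # training environment\nX_te, y_te = sample_env(5_000, r=0.0, h=h, seed=1)              # reversed-correlation test (r_2)\n```\n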
The results over 10 runs are as follows:\n\n| With label noises | $r_1=0.5$ | $r_2=0.0$ |\n|---------------------|--------------------|---------|\n| ERM | 0.573($\\pm 0.006$) | 0.152($\\pm 0.013$) |\n| WDRO | 0.575($\\pm 0.008$) | 0.157($\\pm 0.017$) |\n| KL-DRO | 0.625($\\pm 0.007$) | 0.269($\\pm 0.015$) |\n| $\\chi^2$-DRO | 0.666($\\pm 0.037$) | 0.554($\\pm 0.153$) |\n| GDRO | **0.760($\\pm 0.010$)** | **0.703($\\pm 0.061$)** |\n\nFrom the results, our GDRO outperforms all the baselines and the performance is consistent with that under the original simulated settings.\nThe results demonstrate that our GDRO could utilize the data geometric information to build a practical uncertainty set.\nFurther, the second experiment with label noises also validates our analysis that GDRO could to some extent resist the influence of label noises.\n\n\n\n\n[1] Sagawa, S., Raghunathan, A., Koh, P. W., & Liang, P. (2020, November). An investigation of why overparameterization exacerbates spurious correlations.", " The reviewer perceives that the settings of simulated experiments do not support the motivation of this paper which the reviewer summarizes as *high dimensional data approximately resides on low dimensional manifolds*. We would like to clarify that the manifold assumption implies that the data is situated on a hypersurface in the feature space such that it does not symmetrically stretch in each dimension, the latter of which is assumed by the uncertainty set of past DROs who ignore this geometric structure and turns out over-pessimistic. However, the manifold assumption does not enforce an absolutely high dimension of the feature space. \n\nIn fact, we introduce the toy example in order to showcase the failure of geometric-unaware DROs in manifold data, with a low-dimensional setting for a straightforward demonstration. The description of the setting might be confusing, so we have a more detailed introduction in the revised experiment section in Appendix B. Here we would like to briefly review the setting. The feature space is spanned by $(S, V)$, where $S$ serves as a stable(causal) feature such that the label $Y$ is generated from $S$ such that:\n$$\nY = \\alpha_SS + S^2 + \\mathcal{N}(0,0.1),\\quad V = \\alpha_V Y +\\mathcal{N}(0,1)\n$$\nNote that $\\alpha_S$ is a fixed coefficient but unknown to the algorithm, while $\\alpha_V$ changes across different environments, making $V$ a spurious (anti-causal) feature because $V$ is generated by the label $Y$ in an unstable way.\n\nIn training, $\\alpha_V$ is chosen according to:\n$$\n\\alpha_V = 1, \\text{with probability }1-r;\\quad \\alpha_V = -0.1, \\text{with probability }r\n$$\n\nNote that $\\alpha_V$ is the coefficient but unknown to the algorithm, rather than randomly distributed in an interval as the reviewer supposed (sorry for the ambiguity). Thus, the data is situated on two branches in the space spanned by $(S, V)$. The first branch corresponding to Equation 2 is centered around the parabola $V = \\alpha_SS + S^2$ with Gaussian noises; the second branch corresponding to Equation 3 is centered around the parabola $V = -0.1(\\alpha_SS + S^2)$. Thus, the data is approximately combined with two sub-manifolds in the feature space. 
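For concreteness, a minimal NumPy sketch of this two-branch generation is given below. It is our own reconstruction, not the paper's code: the text does not pin down the distribution of $S$, the value of $\\alpha_S$, or the training value of $r$, so $S\\sim\\mathcal{N}(0,1)$, $\\alpha_S=1$, and $r=0.05$ are assumptions here, and $\\mathcal{N}(0,0.1)$ is read as having variance $0.1$.\n\n```python\nimport numpy as np\n\ndef sample_toy(n, r, alpha_s=1.0, seed=None):\n    rng = np.random.default_rng(seed)\n    s = rng.normal(0.0, 1.0, size=n)                  # stable (causal) feature, assumed N(0, 1)\n    y = alpha_s * s + s**2 + rng.normal(0.0, np.sqrt(0.1), size=n)\n    alpha_v = np.where(rng.random(n) < r, -0.1, 1.0)  # alpha_V = 1 w.p. 1-r, -0.1 w.p. r\n    v = alpha_v * y + rng.normal(0.0, 1.0, size=n)    # unstable (anti-causal) feature\n    return np.stack([s, v], axis=1), y\n\nX_train, y_train = sample_toy(10_000, r=0.05, seed=0)  # small r: mostly the first branch\n```\n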
The visualization of the manifold structure, which is similar to a 'cross shape', could be found in Figure 1 of Appendix B.\n\nIn the source data where $r$ is a small positive number, such that most samples come from the first branch, a linear model trained by ERM objective tends to predict $Y$ based on $V$ to exploit the simple linear mapping in Equation 2. However, the target data is mostly sampled from the second branch where the mapping between $V$ and $Y$ significantly shifts, leading to the failure of ERM. A DRO method is supposed to capture the minority branch in the training process and to predict based on the stable feature $S$ instead of the unstable $V$. Actually, this setting is typically adopted in the literature on OOD generalization [1,2], which contains domain shifts caused by unstable features and spurious correlation. \n\n\nExisting DRO methods either create invalid samples out of the data manifold (WDRO, see Figure 2(a)) or completely ignore the geometric structure and focus on noises instead ($f$-DRO, see Figure 2(a)), both of which fail in this toy experiment (see Table 1). In contrast, our GDRO manages to capture the minority branch for being aware of the manifold structure.\n\nStill, it could be true that the manifold assumption is more easily satisfied in high-dimensional data. Therefore, we *follow the reviewer's suggestion to include more high-dimensional experiments in the revised version*. One of them is presented below.\n\n[1] Arjovsky, M., Bottou, L., Gulrajani, I., & Lopez-Paz, D. (2019). Invariant risk minimization. \n\n[2] Kuang, K., Xiong, R., Cui, P., Athey, S., & Li, B. (2020, April). Stable prediction with model misspecification and agnostic distribution shift. ", " We sincerely appreciate your approval of the idea as well as the theoretical analysis of this work and thank you for the suggestions on experiment design. Our rebuttal consists of two parts: \n* We have revised the experiment section in Appendix B based on your suggestions, and the paper will be updated accordingly if the revision turns out satisfactory;\n* We would like to address your concerns proposed in your review, which we have summarized into **6 questions**.\n\n\n### **Q1. Source of input graph $G_0$** \nThe reviewer is confused about the source of $G_0$ which is an input graph for the algorithm and raises concerns over potential information leakage of target data. \n\nWe would like to **clarify** that for all experiments in this paper, $G_0$ is constructed as a k-nearest neighbor graph from the *training data only*. Thus, testing information is inaccessible to the training procedure in any form. Specifically, we adopt NN-Descent to estimate the k-nearest neighbor graph for large-scale data while performing an exact search for k-nearest neighbors in the case of small datasets. We feel sorry to have caused misunderstandings and have included detailed descriptions of implementation in the revised supplementary material in Appendix B. \n\nTherefore, the performance gain of GDRO in the shown experiments is credited to the algorithm's robustness to distributional shifts rather than its perception of target information.\n\n### **Q2. 
Estimation accuracy of the data manifold**\n\nThe reviewer is concerned about the estimation accuracy of the manifold with KNN and NN-Descent.\n\nWe first provide numerical results of NN-Descent's accuracy for the reviewer's information: *NN-Descent typically converges to above 90% recall with each point comparing only to several percent of the whole dataset on average* [3]. As for the k-nearest neighbor graph which is either estimated by NN-Descent or exactly searched for, it is shown to have well approximated the geodesic distance within local structures on the manifold [1,2]. \n\nStill, the k-nearest neighbor graph is a fundamental and basic method to represent the data structure, and manifold learning is an area with intensive research. We have to clarify that manifold learning is *not* the focus of this paper, which takes the data structure $G_0$ as input and aims at constructing a DRO objective and optimization algorithm that manages to incorporate data geometric information. Notably, since the manifold structure has long been overlooked in past DRO designs, our GDRO achieves significant performance in the experiments even with the simple KNN representation of data structure. It proves that the direction for geometric-aware DROs which we propose is valuable and promising, the uncertainty set constrained with geometric structure is reasonable, and our proposed DRO optimization method has efficiently captured the geometric information encoded in the input graph (note that no target information is leaked into $G_0$). \n\nActually, GDRO is compatible with any manifold learning method. We do believe that a more accurate estimated data structure with advanced manifold learning algorithms will further boost the performance of GDRO, and we leave it to future work.\n\n[1] McInnes, L., Healy, J., & Melville, J. (2018). Umap: Uniform manifold approximation and projection for dimension reduction. \n\n[2] Dann, E., Henderson, N. C., Teichmann, S. A., Morgan, M. D., & Marioni, J. C. (2022). Differential abundance testing on single-cell data using k-nearest neighbor graphs.\n\n[3] Dong, W., Moses, C., & Li, K. (2011, March). Efficient k-nearest neighbor graph construction for generic similarity measures. In *Proceedings of the 20th international conference on World wide web* (pp. 577-586).", " We sincerely appreciate your approval of the novelty and the theoretical contributions of this work and thank you for the recommendation of the related papers. And we address your concerns in the following:\n\n#### 1. Comprehensive discussions about the related work\nThanks for recommending us these papers. \n[1] studies the DRO problem for data generated by a time-homogeneous, ergodic finite-state Markov chain and derives the optimization algorithm for it. [2] belongs to the subtype of DRO which constrains the uncertainty set via shape constraints. As for shape constraints, one commonly used is unimodality, and [2] proposes to replace it with the orthounimodality to constitute the uncertainty set for DRO for multivariate extreme event analysis. Compared with the shape constraints, our proposed GDRO intrinsically considers the data geometric properties via Geometric Wasserstein distance, and the geometric property is learned in a data-driven way which is compatible with more complicated machine learning methods, including manifold learning and graph learning methods. Besides, we have added a more thorough discussion of the related works in the revised supplementary material (Appendix A.1).\n\n(ps. 
sorry that we have not found the third paper named \"Distributionally robust shape and topology optimization.\")\n\n[1] Distributionally Robust Optimization with Markovian Data. \n\n[2] Orthounimodal Distributionally Robust Optimization. \n\n[3] Distributionally robust shape and topology optimization.\n\n\n#### 2. Apply GDRO to deep neural networks\n* Firstly, GDRO is compatible with any parameterized model including DNN. Compared with ERM, the extra cost of GDRO consists of: 1) The construction of a k-nearest neighbor graph at the initialization step which is built once and for all. 2) The solution of sample weights by gradient flow which is implemented in a way similar to message propagation (since the samples weights are transferred through the edges), which scales *linearly* with sample size and accommodates parallelization by GPU (see implementation details in Appendix B in the revised supplementary material). 3) Model's parameters are updated with a weighted loss by vanilla gradient descent or stochastic gradient descent through batch training, as is usual for DNN.\n* Secondly, we have adopted MLP in our experiments on Colored MNIST, HIV, and Ionosphere data, which empirically demonstrates the adaptability of our model to DNNs.\n\n#### 3. Non-convex loss function\nWe have proved that Theorem 3.1 also holds for non-convex loss functions. And we have updated the proof in the revised supplementary material (Appendix A.3).\n\n\n#### 4. Limitations of GDRO\nThis work focuses on incorporating data geometric properties into the DRO framework and deriving the corresponding optimization algorithm, and we use the simple k-nearest-neighbor algorithm to learn the graph $G_0$. For more complicated data, the k-nearest-neighbor algorithm may be ineffective and we may need more advanced manifold learning or graph learning methods. Our GDRO framework is compatible with any of these methods, and we leave this for future work. ", " We sincerely appreciate your approval of both the motivation and novelty of this work and thank you for the advice to compare GDRO with general OOD methods beyond DRO. We would like to address your remaining concerns over this paper.\n\n1. **Support shift**: GDRO handles unseen distributions with various categories of distributional shifts as stated in the experiment section, including domain shifts, label shifts, and sub-population shifts. However, generalization to target data entirely falling out of the training data's manifold, known as the support shift or non-overlapping support, is not the focus of this work, since it is intrinsically hard to achieve without additional data or assumptions. Firstly, in standard supervised learning, covariate shift with arbitrary support is known to be intractable [1]. Some domain adaptation methods, such as invariant representation learning [2], could empirically validate the effectiveness out of support, but they still require the target distribution to be close to the source's and make strong assumptions about the data. And such methods have also utilized unlabeled target data from unseen support. Secondly, there are some DRO methods allowing for distributions with different support from the original training distribution, such as Wasserstein DRO. However, as mentioned in our paper, extending the support is quite hard in general. 
For example, in our selection bias simulated experiment, we visualize the results of WDRO in Figure 2 (a), which shows that although WDRO 'creates' some data points, it introduces much more label noises (red points in the third subfigure in Figure 2 (a)). Therefore, this is temporarily not the focus of this work, and we leave it for future work.\n\n2. **Training efficiency**: Training efficiency is completely not a concern for GDRO. Firstly, the graph is constructed **once and for all** at the initialization step and is fed as input to the algorithm. Secondly, we adopt NN-Descent to construct the k-nearest neighbor graph with an almost linear complexity of $O(n^{1.14})$ for large-scale datasets. Furthermore, since the sample weights are transferred along the edges of the graph, the simulation of gradient flow can be implemented in a way similar to message propagation, which scales linearly with sample size. The implementation above ensures the adaptability of GDRO to large-scale data. More detailed descriptions of algorithm implementation could be found in the revised supplementary material (Appendix B). \n\n3. **Compare with other general OOD generalization methods**: Thanks for your advice. Since our GDRO does not utilize additional data or make strong assumptions about the data, we find most of the general OOD generalization methods not suitable for comparing with the proposed GDRO. For example, most of the invariant learning and domain generalization methods require data from multiple environments in training; domain adaptation methods require additional information on the target domain. Therefore, for a fair comparison, we choose the `Environment Inference for Invariant Learning (EIIL)[3]` as another baseline from other branches of OOD generalization, and we test its performance on the Colored MNSIT data. The results are as follows:\n\n| Colored MNIST | Train | Test |\n|---------------|-------|-------|\n| ERM | 0.867 | 0.116 |\n| WDRO | 1.000 | 0.335 |\n| KL-DRO | 1.000 | 0.287 |\n| $\\chi^2$-DRO | 0.839 | 0.420 |\n| EIIL | 0.740 | 0.596 |\n| GDRO | 0.717 | 0.696 |\n\nAnd we would add this baseline for all experiments in the final version.\n\n[1] David, S. B., Lu, T., Luu, T., & Pál, D. (2010). Impossibility theorems for domain adaptation. In *Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics* (pp. 129-136). JMLR Workshop and Conference Proceedings.\n\n[2] Zhao, H., Des Combes, R. T., Zhang, K., & Gordon, G. (2019). On learning invariant representations for domain adaptation. In *International Conference on Machine Learning* (pp. 7523-7532). PMLR.\n\n[3] Creager, E., Jacobsen, J. H., & Zemel, R. (2021). Environment inference for invariant learning. In *International Conference on Machine Learning* (pp. 2189-2200). PMLR.", " ### 5. The novelty of this paper\nWe disagree with the comments that theoretical results appear to be very trivial.\n\nThe novelty of this paper includes: (1) We firstly incorporate the data geometric properties into the DRO framework in a data-driven way to address the over-pessimism problem in DRO. Our method is data-driven and friendly to deep neural networks. This idea has hardly been explored, yet can largely overcome the over-pessimism problem. Further, it opens up a promising new avenue for DRO to naturally incorporate manifold learning and graph learning methods. (2) We derive the approximate optimization algorithm for the newly-proposed objective and characterize its error rate as well as the convergence rate. 
All the other reviewers acknowledged our contributions, especially the reviewer aH5S who perceived that *a lot of credit should be given to the authors for the theory part: nicely formulated GDRO objective and convergence result.*\n\nAt last, we welcome any **technical** advice for our proposed method and insights into the underlying theory. And we're ready to address your concern if you still find the paper hard to follow.\n", " ### 1. Language and formatting issues\nThanks for your suggestions on the language and formatting issues in this paper. Most typos are immediately fixed in the rebuttal revision. While a few of them remain unchanged for the following considerations:\n\n- Line 146: The empirical *marginal* distribution of $X$ is formulated as $\\sum_{i=1}^n\\delta(x_i)$ instead of $\\sum_{i=1}^n\\delta(x_i, y_i)$.\n- Line 204: $\\mathcal R(p)$ is stated as the overall *objective function* of this paper, and its definition can be traced back to Equation 4. We have claimed $\\theta$ and $n$ as constants in the context, and to avoid any ambiguity, we explicitly specify $\\mathcal R(p)$ as the abbreviation for $\\mathcal R_n(\\theta, p)$ in the rebuttal revision.\n\nAdditionally, we perform an exhaustive re-examination of the paper to ensure that the revision is free of similar typos. Apart from the suggestions on language or style, we would really appreciate if you could further offer detailed comments on the *technical* content. \n\n### 2. Explanation of notations and claims\nRegarding the obstacles you encountered while going through the paper, we provide some detailed explanations below:\n\n- The notation $p$ is consistent through the article, representing a continuously differentiable curve $p(t) : [0, 1] → \\mathscr P_0 (G_0 )$ which describes the transformation of a measure on the vertex space of graph $G_0$ (see Definition 3.1). And the probability weight of the $i$-th vertex at time $t$ is abbreviated as $p_i(t)$ (see Line 165, clarification of notations). When $t$ is clear from the context, we denote the probability weight of the $i$-th vertex by $p_i$. We discuss WDRO in section 2 (related work), which considers the transformation of a measure in the Euclidean space instead of the discrete space of GDRO. In this case, $p(t) : [0, 1] → \\mathscr P_0 (\\mathbb R^n)$ naturally induces the transport plan $p(t,x):([0,1],\\mathbb R^n) → \\mathbb R$ which is the probability density of the point $x$ at time $t$ (see Line 92). The notations are common in optimal transportation theory based on which GDRO is developed (see reference [21]: *Topics in optimal transportation*). \n\n- The claim in Line 151 that the objective of GWRO (Equation 4) degenerates to a $f$-DRO as $\\beta → \\infty$ is straightforward from the fact that both $f$-DRO and GWDRO are equivalent to ERM with infinitely large $\\beta$. We've included the simple proof in the revised supplementary material.\n\n### 3. Details of experiments\n- The OOD setting is emphasized for each experiment. Further implementation details are elaborated in the supplementary material. Below is a table from the Appendix summarizing the various distributional shifts that Distributionally Robust Optimization shall handle for each experiment. We take the real-world datasets for example: the Retiring Adults dataset is targeted at subpopulation shift; Colored MNIST is targeted at concept shift, and the IonoSphere and HIV datasets are targeted at label shift. 
We comprehensively discuss the sources, intensity, and mechanics of each distributional shift for each experiment in section 4. In terms of loss functions, we adopt MSE for the regression task and cross-entropy for the classification task. We omit the specification of the trivial ERM objective in the original paper. Further implementation details are elaborated in the revised supplementary material, which will be added to the main body when accepted.\n\n\n| Data | Toy Example | Selection Bias | Colored MNIST | Retiring Adults | HIV | Ionosphere | Added Simulation Data |\n|---------------|:------------:|:--------------:|:-------------:|:---------------:|:----------:|:----------:|:---------------------:|\n| Kind | Simulation | Simulation | Real | Real | Real | Real | Simulation |\n| Dimension | 2 | 10 | 2352 | 10~19 | 160 | 34 | 300 |\n| Shift Pattern | Sub-population | Domain Shift | Domain Shift | Sub-population | Label Shift | Label Shift | Domain Shift |\n| Model | Linear | Linear | MLP | Linear | MLP | MLP | Linear |\n\n\n\n### 4. Related works\nIn terms of related work, Section 2 reviews the literature on DRO. Since DRO methods are characterized by the uncertainty set they adopt for robust optimization. We classify the most relevant DRO literature into two categories based on the distance metric to specify the uncertainty set: $f$-divergence and Wasserstein distance. More detailed discussions on extensive DRO methods and other general OOD works are included in the revised supplementary material.", " In this paper, the authors study DRO with data geometry considered in the uncertainty set, making use of the so-called Geometric Wasserstein distance. The authors derive an approximate algorithm for the proposed GDRO and prove its convergence. Numerical experiments are performed to demonstrate the proposed GDRO framework over ERM and other DRO frameworks. Strengths: \n- An interesting framework proposed using the Geometric Wasserstein distance\n- Extensive experiments\n\nWeaknesses: \n- The whole paper is quite hard to follow due to: \n - the lack of self-containedness—some rather non-standard notions are not well-defined or well-explained, or very hard to understand\n - unsatisfactory English writing (wrong grammar, choice of words, etc.)\n - inaccuracy in mathematics exposition, the lack of mathematical rigor and lots of abuse of notation\n- Related works are not discussed in detail, e.g., DRO\n- A lot of typos, typesetting and formatting issues \n- The settings and descriptions of the experiments are unclear\n\n\n=============================\nPost-rebuttal: Thanks the authors for their effort. The revised version has addressed my concerns and I have raised my score. Although it is allowed in this NeurIPS submission cycle, I think the authors should have provided sufficient experimental details in their initial submission. A lot of new content have been added in the revised version of the paper, including supplementary material, say for the experimental details. I feel it is quite unfair to assess the merits of the paper according to this updated version, comparing to other paper submissions. It appears to me that the authors was submitting unfinished work on the submission deadline and take advantage of the rebuttal phase. The mathematics of this article is very hard to follow due to abuse of notation. For example, $p$ has been used to represent a lot of different notions. Various claims are also stated without proof or sufficient explanations (e.g., line 151). 
The use of English language should also be largely improved. The theoretical results also appear to be very trivial. The description of the experiments must be improved—we don’t even know what loss functions are used in the experiments. The exact DRO problems solved in the experiments, especially for the real-world data ones, are completely unstated. The authors also have to rectify the following issues: \n\n- Typesetting or formatting issues (non-exhaustive): \n - You have to add spaces before all parentheses and citations \n - Add punctuations at the end of display style equations\n - Format tables according to the instructions in the NeurIPS paper template—no vertical lines! \n - Use italics instead of bold to emphasize words\n\n- Typos or writing issues (non-exhaustive):\n - Line 42: $\\mathcal{X}_2$—do you mean $\\chi^2$? \n - Line 60: $O(1/\\sqrt{T})$\n - Line 109: large-scale~~d~~\n - Line 128: ~~cross section~~ cross-sectional\n - Line 129: ~~algorithmic~~ arithmetic\n - Line 137: Brenier\n - Line 146: $\\delta(x_i)$ vs $\\delta(x_i, y_i)$\n - Line 159: ~~alternate~~ alternating\n - Line 161: ascent~~s~~\n - Line 193: $\\mathcal{R}_n$ vs $R_n$\n - Line 200: $\\mathcal{GW}$ vs $GW$\n - Line 204: what is $\\mathcal{R}(p)$? Not previously defined. \n - Table 1: why adding underscores between “mean” and “error”? Use a dot for abbreviations instead of an underscore. \n Relevant discussion of limitations might appear in the paper but is hard to find. \n", " This paper proposed a novel Geometric Wasserstein DRO (GDRO) method by exploiting the discrete Geometric Wasserstein distance. A generically applicable approximate algorithm is derived for model optimization. Extensive experiments on both simulation and real-world datasets demonstrate its effectiveness. Pros:\n1. The proposed method is well motivated and reasonable. This paper studied an important problem of DRO: the uncertainty set is too over-flexible such that it may include implausible worst-case distributions. To address this issue, the authors proposed to use Discrete Geometric Wasserstein distance to construct the uncertainty set, in order to constrain the uncertainty set within the data manifold. The method is somewhat novel and interesting. \n\n2. Both convergence rate and the bounded error rate are provided. And the superiority of the proposed method is also empirically demonstrated through experiments on both simulation and real-world datasets.\n\nCons:\n1. Data from unseen distributions may fall out of the manifold constructed by training data. In this case, simply constraining the uncertainty set may not be helpful for OOD generalization.\n\n2. Training efficiency. The authors use a graph to represent the manifold structure. It may be problematic for large-scale datasets since the graph needs to be estimated at every iteration.\n\n3. In the experiments, the authors only compare with ERM and DRO-based methods. It would be a bonus if some general methods for OOD generalization can be included. \n 1. Since the manifold is constructed by the training set, is it still applicable for unseen distributions? Data from unseen distributions may fall out of the data manifold. \n2. Does the graph need to be updated at every iteration? If so, it would be time-consuming to estimate the manifold for large-scale datasets. \n Yes.", " This paper considers the data geometry in the distributionally robust optimization (DRO) problems and proposes a novel framework called Geometric Wasserstein DRO (GDRO) to achieve their goal. 
The authors also provide some theoretical analyses, such as approximated optimization error and convergence rate, to theoretically show the strengths of GDRO. Finally, the experimental results show the effectiveness of the proposed approach. Strengths:\n- The motivation and contributions are good. This work attacks an overlooked issue in the DRO community and proposes a reasonable method to alleviate this. This would raise much more research attention in this direction.\n- This paper presents some roughly decent theoretical guarantees, which support the effectiveness theoretically.\n\nWeaknesses:\n- This paper lacks comprehensive discussions about the related work. As we all know, DRO has attracted tremendous research interest in the machine learning community, and there are many studies about DRO.\n- The proposed GDRO may has somewhat limitations, since it is not easy to apply to the deep neural networks (DNNs) (Perhaps I am wrong, but at least the authors do not mention it).\n\nAs a whole, despite some limitations, I believe the proposed method is a qualified work and could bring some new insights into the DRO community. Therefore, I tend to accept this paper. My major concerns are listed below:\n- The authors should provide more discussions related to their work, including but not limited to the following concepts:\na)\tDistributionally Robust Optimization with Markovian Data.\nb)\tOrthounimodal Distributionally Robust Optimization.\nc)\tDistributionally robust shape and topology optimization.\n- Can the GDRO apply to large DNNs?\n- In Thm.3.1, $\\ell(\\theta)$ is assumed to be convex. What about the non-convex?\n\n==================================\n\nMy concerns about this work have been well addressed. I am happy to vote for its acceptance. None\n", " This work aims to solve the problem that DRO is too “pessimistic” (the uncertainty set is too large) and often leads to poor results in practice. The motivation is that “high dimensional data approximately reside on low dimensional manifolds” (lines 6-7), so this work tries to constrain the uncertainty set on this low dimensional manifold. To do this, this work (i) uses the NN-Descent method to estimate the low dimensional manifold; (ii) formulates the GDRO objective and optimizes it with alternating optimization and proves that it converges. The authors then conduct a series of experiments on synthetic and real datasets and claim that the proposed method is better than existing DRO methods. I really like the high-level idea of this paper. “DRO is too pessimistic” is a well-known open problem in this field, and this work tries to solve this problem by constraining the uncertainty set on a low dimensional manifold, which makes lots of sense. I also think that a lot of credit should be given to the authors for the theory part: The GDRO objective is nicely formulated, and optimizing this objective with alternating optimization makes sense. The convergence result is also very nice.\n\nThe major weaknesses of this work, however, come from the experiment section. Recall that the core motivation of this work is that “high dimensional data approximately reside on low dimensional manifolds” (lines 6-7), whereas there is a huge gap between the motivation and the experiments, which makes the experiments very confusing and unconvincing.\n\nTake the first experiment in Section 4.1 as an example. First of all, the experimental setting is very confusing. I suppose that the task is to infer Y from (S, V). 
I also couldn’t find the definition of alpha_V, so I suppose that it is used to define V. Thus, in this task the input data is 2-dimensional, and (S, V) seems to also reside on a 2-dimensional manifold, which is the union of a number of 1-dimensional curves (if alpha_V is in [a, b] for some a and b). I don’t think this could be called “high dimensional data approximately residing on low dimensional manifolds”. \n\nThen the authors claim that GDRO is much better than other DRO on this task. I don’t know how the graph G_0 is estimated. I suppose that the authors just simply provide G_0 to the algorithm because NN-Descent is only used “for large-scaled datasets” (lines 109-110). So it seems to me that the reason why GDRO is so good is that G_0 leaks some additional information about the target distribution to it but not to other methods, not because it leverages the fact that the data resides on a low dimensional manifold.\n\nOf course, it is nice that GDRO could utilize this additional leaked information from G_0. The question is: How to get this G_0 in practice? The authors propose to estimate G_0 with NN-Descent, but they don’t demonstrate how well NN-Descent can estimate G_0 on realistic tasks. If G_0 is not well estimated, and the target distribution is outside the estimated manifold, then I imagine that GDRO could completely fail.\n\nMoreover, most of the tasks in the experiments are not really high-dimensional (<50 dimensions), and all tasks seem to follow some simple, unrealistic structures, which make it easier for GDRO to achieve high performances. It is questionable whether these good performances are transferable to real-world applications with realistic distribution shifts.\n\nA valid experimental setting I would suggest the authors try is the following: The input data comes from a low dimensional manifold in a high dimensional space (at least 200 dimensions), but the structure of this manifold is unknown (for instance, introduce randomness into the manifold structure), so GDRO must first estimate G_0 by itself. This setting is closer to the authors’ motivation that “high dimensional data approximately reside on low dimensional manifolds”. Otherwise, it is always questionable whether the performance gain of GDRO comes from the information leakage from the provided G_0, rather than its ability to estimate and utilize the low-dimensional manifold.\n\nIn summary, I really like the high-level idea and the theory part of this paper, but the experiment section does require a lot of improvement. Currently, there is a huge gap between the authors’ motivation and the experiments, making the main conclusion of this paper highly debatable. For this reason, I recommend rejecting this paper for this time, and hope that the authors could resubmit after rewriting the experiment section.\n\n****** Post Rebuttal ******\nThe authors have revised the paper as suggested, so I would like to raise my rating to accept.\n 1. I suggest the authors provide some numerical results to demonstrate how well NN-Descent can estimate the low-dimensional manifold. This is very important, because if the manifold is not well estimated and the target distribution is outside the estimated manifold, then GDRO could completely fail. Moreover, since the authors are using kNN, studying the effect of k is also important. If the method works for some k but not for others, then the authors need to elaborate on how to select a proper k with the training samples alone.\n\n2. 
Could the authors clarify for which of the experiments is the graph G_0 directly provided to GDRO, and for which of them is the G_0 estimated by NN-Descent?\n\n3. In Figure 3, on the Retiring Adults dataset, GDRO seems to maintain the same high performance regardless of the minor ratio. This is not a good signal. When does the performance of GDRO start to drop? For instance, does GDRO still have a very high performance when the minor ratio is 0.01? What about when the ratio is 0? If GDRO always has such a high performance, then I believe that either G_0 provides too much information to GDRO (for instance, the target distribution could be directly obtained from G_0), or there is a bug in the code.\n The limitations are not sufficiently addressed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "2LZNJECPt-", "-6QNXzgSTQR", "FXP0PnY_HAj", "ad-x1n85qcr", "2JH1tYRRgVt", "mEiDfafqZas", "cG4DwX-5rW4I", "vtV6uKnSq8z", "a9oyQCNiri", "VopZSZL8CUS", "5bkYZVz4OAQV", "Tl61RZpCF7_", "tC6kwUS7blv", "PC-pJQudCXr", "uWnaXbdNT87", "3aYAIlBue7c", "a9oyQCNiri", "eSyFkO2mEPI", "DPIGA9XSRlf", "DktFov9iZyV", "CZRnOA9M12S", "nips_2022_caH1x1ZBLDR", "nips_2022_caH1x1ZBLDR", "nips_2022_caH1x1ZBLDR", "nips_2022_caH1x1ZBLDR" ]
nips_2022_AIqC7F7xV-d
Learning Unified Representations for Multi-Resolution Face Recognition
In this work, we propose the Branch-to-Trunk network (BTNet), a novel representation learning method for multi-resolution face recognition. It consists of a trunk network (TNet), namely a unified encoder, and multiple branch networks (BNets), namely resolution adapters. Depending on the input resolution, a resolution-specific BNet is used, and its output is implanted as feature maps in the feature pyramid of TNet, at a layer with the same resolution. The discriminability of tiny faces is significantly improved, as the interpolation error introduced by rescaling, especially up-sampling, is mitigated on the inputs. With branch distillation and backward-compatible training, BTNet transfers discriminative high-resolution information to multiple branches while guaranteeing representation compatibility. Our experiments demonstrate strong performance on face recognition benchmarks, both for multi-resolution identity matching and feature aggregation, with much lower computational cost and parameter storage. We establish a new state of the art on the challenging QMUL-SurvFace 1:N face identification task.
Reject
This paper proposes a Branch-to-Trunk network with multiple independent branch networks and a shared trunk network for multi-resolution face recognition. The paper received three detailed reviews. While there are some merits in this work, the reviewers raised many concerns, including 1) inadequate experiments to demonstrate the superiority of the proposed method, 2) missing ablations, and 3) an unclear source for the improvement achieved by BTNet. After reading the reviews, rebuttals, and the paper, the AC concurs with the reviewers’ comments and feels that the concerns outweigh the strengths. Therefore, a rejection is recommended.
train
[ "Qk5G9ocMjJG", "cw6QAf2dLG", "bSuN2m4dPS_", "gFd1m3SQTQ", "IXcuCGpkQod", "3MU94jlfhe", "Vk9dTtzXeHp", "lclDQG6XxEC" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I appreciate the authors for the responses to my concerns. I still have some further concerns regarding to the response.\n\n**Q3.** It is unclear about the setup of $\\varphi_{mm}$. \nAs $\\varphi_{r}$ has the issue of representation compatibility, it is not suitable for cross-resolution recognition (i.e. see Lines 258-259 of the submission). Table 1 also confirms the lower accuracy of $\\varphi_{mm}$ in comparison to other methods. \nHowever, in Table 3, $\\varphi_{mm}$ is better than $\\varphi_{mr}$. Is there any inconsistency in the benchmarks?\n\n**Q4**. I understand the authors' viewpoint. However, I believe the comparison to SOTA approaches should be employed so that we can see the potential of using the proposed approach in practice.\nFrom my viewpoint, the results are not necessary to beat all SOTA approaches. A comparable result with an advantage of multi-resolution robustness is also a valuable point. ", " We would like to thank the reviewer for the careful reading and valuable comments. We address the questions and clarify the issues accordingly as described below.\n\n**Q0: Technical novelty of our paper**\n\n**[Reply]** We agree that most of the building blocks were introduced in previous works. However, the key technical contribution of the paper is a general network structure based on Branch-to-Trunk and effective training strategies based on BCT and branch distillation. Since multi-resolution face recognition is dominated by super-resolution and projection methods, our method is the first attempt to decouple the information flow conditioned on the input resolution. Therefore, we believe our method is technically novel.\n\n**Q1: Analysis of improvement achieved by our method**\n\n**[Reply]** Actually, $L_{cur}$ is an implementation of $L_{influence}$, which mainly aims at ensuring the feature compatibility instead of improving the discriminability. In our experiments, the same $L_{cur}$ is adopted for BTNet and $\\varphi_{mr}$. Though BTNet and $\\varphi_{mr}$ both improve the robustness against the resolution-variance, BTNet leaves a large gap in terms of operating schemes. Specifically, instead of rescaling the input images to a canonical scale before learning to be robust against introduced noise, BTNet operates them on different scales separately and then transforms them to a uniform scale. It makes up for the sacrificed discriminability of $\\varphi_{mr}$, which is in exchange for the adaptability for resolution-variance.\n\n**Q2: More baselines**\n\n**[Reply]** Many thanks for your suggestions. We agree that $\\varphi_{mr}$ seems naive as HR and LR images share similar weights during training and we did some tests for other alternatives. To fairly compare with the baselines, we trained the models in the same settings. For $\\varphi_{mr}$, we trained the model with a size in the candidate set [7,14,28,56,112] with equal probability of being chosen. For baseline A, we chose in the candidate set [7,14,28,56,112] with unequal probability of being chosen [0.3 0.25 0.2 0.15 0.1]. For baseline B, we randomly down-sampled the images to [4,112]. For baseline C, we finetuned $\\varphi_{hr}$ by using L2-loss to enforce the features of LR images closer to the HR ones. Higher weight for LR images (Baseline A) results in worse performance as it is difficult for the model to learn to extract discriminative features from a limited number of HR images; lower weight for LR images (Baseline B) still leads to limited robustness against low-resolution images. 
Directly enforcing closer features in the output feature space (Baseline C) would interfere with the supervision from the class label, largely destroying the discriminative power of the model. We see that our design is critical to achieving higher accuracy. We will add more complete baselines in our revised paper.\n\n||112&14 Acc.(%)|14&14 Acc.(%)|\n|:-|:-|:-|\n|Baseline A| 87.13 | 80.22 |\n|Baseline B|88.13|80.55|\n|Baseline C| - |-|\n|Ours|94.08|90.90|\n\n**Q3: Analysis of cross-resolution experiments**\n\n**[Reply]** In the experiments corresponding to Table 3, the cross-resolution features are concatenated to form the representation of an identity, i.e., one person ID. Though the features within an identity are not compatible, the components of the embeddings are still decoupled, and thus the similarity calculation, i.e., the dot product of two vectors, operates on the different components separately. From the experimental result that $\\varphi_{hr}$ performs well, we can view the HR component as sufficiently discriminative information and the LR component as weak interference. Since $\\varphi_{mm}$ can extract more informative HR features than $\\varphi_{mr}$, it is reasonable for it to have higher accuracy. As $\\varphi_{mm}$ can also extract more informative LR features than $\\varphi_{mr}$, the interference is much more intense for $\\varphi_{mr}$ as the resolution of the LR component decreases (i.e., from 112 to 7), and thus the gap becomes larger.\n\n**Q4: Comparison on IJB-C**\n\n**[Reply]** In fact, our proposed BTNet is a general network architecture for representing multi-resolution information that can improve the multi-resolution representation capability of recent methods (e.g., more powerful hierarchical DNNs with improved marginal losses). We built a test benchmark for multi-resolution feature aggregation by modifying the official one of the IJB-C dataset. It aims to verify the relative improvement brought by our operating scheme to existing face recognition methods, rather than the optimality of the absolute metrics.\n", " We would like to thank the reviewer for the careful reading and valuable comments. We address the questions and clarify the issues accordingly as described below.\n\n**Q1: The presentation of the results**\n\n**[Reply]** Many thanks for your suggestions. We will add more detailed explanations, especially for the counter-intuitive results, in the revised paper.\n\n**Q2: The branch selection strategy**\n\n**[Reply]** We performed some preliminary tests on the strategy and found that a low-quality image may possess an underlying optical resolution significantly lower than its size, due to degradation caused by noise, blur, occlusion, etc. Thus, there exists a mismatch between the underlying optical resolution of native face images and that of a branch. To avoid introducing extra large-scale parameters for predicting image quality, the heuristic selection strategy is used in our paper. We agree that this requires more in-depth understanding and will investigate it further in the future.\n\n**Q3: SOTA performance on QMUL-SurvFace**\n\n**[Reply]** For the six verification benchmarks (e.g., LFW, AgeDB-30) and the IJB-C dataset, there are no official protocols for validating multi-resolution face recognition and no corresponding SOTA methods. Without loss of generality, we build new protocols to validate the relative gain of our method over baselines. 
Different from the above datasets, QMUL-SurvFace is dedicated to the surveillance face recognition challenge, which has standard testing protocols and a reported SOTA method. The wide spatial resolution distribution of QMUL-SurvFace enables a more comprehensive evaluation of performance on multi-resolution face recognition.\n\n**Q4: Paper writing**\n \n**[Reply]** Many thanks for your advice. We will improve the paper writing and carefully check and correct typos in the revised paper.\n\n**Q5: Overview figure** \n\n**[Reply]** An overview of the proposed network is illustrated in Figure 2 in the paper, which shows the basic ideas as well. For the detailed architecture of an instantiation of our network, please refer to Figure 1 in the Appendix.\n", " We would like to thank the reviewer for the careful reading and valuable comments. We address the questions and clarify the issues below.\n\n**Q1: General face recognition tasks**\n\n**[Reply]** Generally, the testing protocols of face recognition can be categorized into (1) face verification: the model needs to output whether the input images are of the same person or whether the input image is that of the claimed person (1:1 problem); and (2) face identification: the model needs to output the ID if the image is any of the K persons from a given database (1:N problem). We evaluate not only the face verification task on six verification benchmarks (e.g., LFW, AgeDB-30) and the IJB-C dataset, but also the face identification task on the IJB-C and QMUL-SurvFace datasets. Results show that our method consistently performs well on both tasks. \n\n**Q2: Comparison on QMUL-SurvFace**\n\n**[Reply]** In fact, our proposed BTNet is a general network architecture for representing multi-resolution information that can improve the multi-resolution representation capability of recent methods (e.g., more powerful hierarchical DNNs with improved marginal losses). We show that AdaFace [a], even with additional subtle strategies, performs better than the best method reported in [b] but still underperforms our method. As stated above, we can implement BTNet based on AdaFace and potentially obtain further gains. We will add more complete results in the revised paper.\n\n|TPIR20(%)@FPIR|0.3|0.2|0.1|0.01|AUC|\n|:-|:-|:-|:-|:-|:-|\n|RAN [b]|26.5|21.6|14.9|3.8|32.3|\n|AdaFace [a]|28.3|23.6|16.5|2.6|32.6|\n|Ours|31.2|26.9|20.6|2.5|35.4|\n\n**Q3: Explanation for influence loss**\n\n**[Reply]** Introducing the influence loss mainly aims to ensure representation compatibility, and any classification loss can be used for its implementation. The comparison results demonstrate that there is no significant difference among different implementations of the influence loss. Moreover, in Appendix A.3, we did an ablation study on the effects of different training method alternatives, showing how each strategy (e.g., back-compatible training, branch distillation) contributes to the effectiveness of BTNet. A sketch of one instantiation is given after the table.\n\n|Implementation of influence loss|112&14 Acc.(%)|14&14 Acc.(%)|\n|:-|:-|:-|\n|CosFace|94.10|90.78|\n|ArcFace|94.17|90.88|\n|CurricularFace|94.08|90.90|
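To illustrate what "any classification loss" means concretely, here is a minimal sketch of the CosFace instantiation from the table (our own code, not the authors' implementation; `weight` is assumed to be the (num_ids x feat_dim) classifier matrix, and s, m are the usual scale and margin):

```python
import torch.nn.functional as F

def cosface_influence_loss(feats, weight, labels, s=64.0, m=0.35):
    # Cosine similarities between L2-normalized features and class centers.
    cos = F.linear(F.normalize(feats), F.normalize(weight))  # (B, num_ids)
    # Subtract the margin m only at each sample's ground-truth class.
    margin = F.one_hot(labels, num_classes=cos.size(1)).float() * m
    return F.cross_entropy(s * (cos - margin), labels)
```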
**Q4: Qualitative results of successful and failure cases**\n\n**[Reply]** Thanks for your advice. Since we cannot post images in the OpenReview system, we will add visualization and analysis of both successful and failure cases in the revised paper.\n\n[a] AdaFace: Quality Adaptive Margin for Face Recognition\n\n[b] Generate to Adapt: Resolution Adaption Network for Surveillance Face Recognition\n", " Thanks for all reviewers' careful and valuable comments. To better understand our work, we would like to give a brief recap here.\n\n**(1) What is our goal?** Matching images with arbitrary resolutions (i.e., high-resolution, cross-resolution and low-resolution) effectively and efficiently, which is quite different from the traditional face recognition task. \n\n**(2) What is the core idea of our method?** Building unified (i.e., compatible and discriminative) representations for multi-resolution images without introducing erroneous information.\n\n**(3) How to achieve our goal via our method?** \n\n| | Compatibility | Discriminability |\n| :- |:- |:- |\n|Input preprocessing|- |w/o rescaling to a canonical size |\n|Network structure |TNet (unified encoder) |BNets (resolution adapters) |\n|Training strategy |Back-compatible training (influence loss)|Branch distillation |", " The paper proposes a Branch-to-Trunk network (BTNet) with multiple independent branch networks (BNets) and a shared trunk network (TNet) to extract feature representations for multi-resolution face recognition. The CurricularFace loss is refined as an influence loss, and a branch distillation loss is included during training to ensure discriminative and compatible representations. The experiments conducted on both 1:1 and 1:N verification seem to validate the efficacy of the proposed method. ### Strengths:\n\n- The paper is well organized and easy to follow.\n\n- It is an interesting idea to decouple a face image into branches for discriminative representation learning and then couple them in the trunk for compatible representation learning. \n\n- The performance on the tasks of both cross-resolution and same-resolution identity matching looks very promising. \n\n\n### Weaknesses:\n\n- The authors claim the proposed BTNet is able to learn unified representations for face recognition. However, the experimental validation is only conducted on the face verification task. Can the learned unified representation also be beneficial for general face recognition tasks?\n\n- The comparisons are inadequate to demonstrate the superiority of the proposed BTNet well. All the baselines listed in Table 2 are at least two years old. The authors should compare the proposed BTNet with some recent face identification methods. \n\n- The necessary ablation studies are missing. In the current presentation of the experimental results, we can only see the overall performance gain. It is hard to guess which parts contribute more and which parts contribute less. If the main performance gain is due to the introduction of the CurricularFace loss as the influence loss, then the main technical contribution of BTNet may be in doubt.\n\n- The necessary qualitative results showing both successful and failure cases are lacking. See the above \"weakness\". Yes.", " In this paper, the authors introduce a Branch-to-Trunk Network (BTNet) for multi-resolution Face Recognition. \nParticularly, BTNet consists of two main components: BNet and TNet. 
While TNet aims at maintaining the main feature flow (from high- to low-resolution), BNet is introduced to map inputs to intermediate features of the same resolution and to maintain feature compatibility when injecting them into the main flow of TNet.\nThe proposed BTNet is validated on QMUL-SurvFace and IJB-C protocols. STRENGTH\n- The paper is well-motivated.\n- The writing is easy to follow.\n- The idea of a multi-branch network to overcome the limitation on the amount of information that can be extracted from low-resolution input faces is interesting.\n- Experimental results show improvements in comparison to baselines and prior works.\n\n\nWEAKNESS\nGenerally, the novelty of the paper is limited, as most of the building blocks (i.e. the loss from CurricularFace [3] and a common distillation loss) were introduced in previous works.\nAlthough the idea of a multi-branch network to improve robustness against low-resolution images is interesting, there are several concerns about this design. Please see the Question Section for further comments and questions. 1. The improvement achieved by BTNet is unclear: \nBTNet aims at training TNet for the main embedding flow, while BNet aims at adapting the feature maps extracted directly from low-res images to make their features compatible with the feature flow of TNet. Moreover, only L_cur is adopted for feature discriminability, as in [3]. \nHow can BTNet achieve improvements in comparison to prior works? How can it leave a large gap in comparison to phi_{mr} when the same L_cur is adopted?\n\n2. Some baselines are missing:\n- In most experiments, phi_{hr}, phi_{mm}, and phi_{mr} are employed. However, in my opinion, these settings are quite “naive”, i.e. phi_{mr} is simply augmentation with multi-resolution images with similar weights for both HR and LR images. \nThey are acceptable as some basic baselines. The authors are recommended to add more baselines, such as a lower weight for LR images during training or enforcing the features of LR images to be closer to the HR ones. \nThen, the baselines would be more complete.\n\n3. In Table 3, why is the accuracy of phi_{mm} higher than phi_{mr} in cross-resolution experiments? How about the compatibility of phi_{mm} in cross-resolution recognition? \n\n4. The comparisons in Tables 3 and 4 lack prior works on IJB-C. Yes", " This paper proposes a representation learning method, called Branch-to-Trunk network (BTNet), for multi-resolution face recognition. It consists of a trunk network (TNet) given in the form of a unified encoder, and multiple branch networks (BNets) that are used for resolution adaptation. A resolution-specific BNet is used on the input, and its output is implanted as feature maps in the feature pyramid of TNet, at a layer with the same resolution. Using branch distillation and backward-compatible training, BTNet transfers high-resolution information to multiple branches while keeping representation compatibility. Experiments show effective performance on face recognition benchmarks, both for multi-resolution identity matching and feature aggregation. Strengths\n- Code has been included in the submission supplementary files.\n- Experiments have been provided on several datasets. \n\nWeaknesses\nThe presentation of the results should be improved; the authors should better comment on the different cases and propose explanations for specific situations where results are not intuitive in the tables.\n\nThe selection strategy for the optimal branch is not fully investigated. 
The authors acknowledged this while discussing the limitations of their work, but I think this is highly relevant to the method and requires more in-depth understanding.\n\nPerformance outperforms the state of the art on one dataset, but this does not occur on others. The authors should explain why this happens. Which dataset characteristics best explain this fact?\n\nThe paper writing needs some improvement. There are also some typos to check. See for example:\n- page 3, line 111: \"(3) This also effectively reduce\" --> reduces\n\n============= POST REBUTTAL ================\nThe rebuttal partially answered my questions. I prefer to change my score to borderline accept. \n Did the authors perform experiments on the branch selection process? Even preliminary tests would be useful.\n\nA figure giving an overview of the proposed network would have been useful.\n\nSee also weaknesses above. Limitations have been discussed by the authors.\n\nNo ethical issues have been reported. \n" ]
[ -1, -1, -1, -1, -1, 4, 3, 5 ]
[ -1, -1, -1, -1, -1, 5, 5, 3 ]
[ "cw6QAf2dLG", "Vk9dTtzXeHp", "lclDQG6XxEC", "3MU94jlfhe", "nips_2022_AIqC7F7xV-d", "nips_2022_AIqC7F7xV-d", "nips_2022_AIqC7F7xV-d", "nips_2022_AIqC7F7xV-d" ]
nips_2022_OmLNqwnZwmY
Falsification before Extrapolation in Causal Effect Estimation
Randomized Controlled Trials (RCTs) represent a gold standard when developing policy guidelines. However, RCTs are often narrow, and lack data on broader populations of interest. Causal effects in these populations are often estimated using observational datasets, which may suffer from unobserved confounding and selection bias. Given a set of observational estimates (e.g., from multiple studies), we propose a meta-algorithm that attempts to reject observational estimates that are biased. We do so using validation effects, causal effects that can be inferred from both RCT and observational data. After rejecting estimators that do not pass this test, we generate conservative confidence intervals on the extrapolated causal effects for subgroups not observed in the RCT. Under the assumption that at least one observational estimator is asymptotically normal and consistent for both the validation and extrapolated effects, we provide guarantees on the coverage probability of the intervals output by our algorithm. To facilitate hypothesis testing in settings where causal effect transportation across datasets is necessary, we give conditions under which a doubly-robust estimator of group average treatment effects is asymptotically normal, even when flexible machine learning methods are used for estimation of nuisance parameters. We illustrate the properties of our approach on semi-synthetic experiments based on the IHDP dataset, and show that it compares favorably to standard meta-analysis techniques.
Accept
The authors propose an approach for estimating causal effects when both observational and limited experimental data exist. The authors propose falsifying effect estimates from observational data before using the effect estimate on other populations. This is an important idea that may improve the reliability of causal inference. The authors provide confidence intervals for the proposed procedure. The considered problem is of clear importance, and the simplicity of its approach is appealing (cPQd). There have been some concerns about the limited empirical evaluation (icYd). The authors provided additional numerical evidence during the rebuttal period. This evidence should be added to the appendix for the camera-ready version. Note: The reviewer most critical of the paper (rating 4, icYd) does not seem to have updated their score post-rebuttal.
train
[ "7U099W4cs28", "gqjCwBWwCX2", "cFX4Q5vxHEF", "2svv-nQqFB9", "yUKXXe4iSi4A", "7x_oORmX3VA", "R-UtuE6ltxa", "uEwhJBvrtJm", "CH8opepDqJP", "1Rr0rxBMIdG", "Opof4xl0Bi", "jLTqN9O9lGLk", "cL3_3qwEs64", "H11Et8n7KxM", "kBrayJWsRg0", "8CQqW8IxvKU" ]
[ "official_reviewer", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your response to address my concerns. I adjusted the score accordingly. ", " Thank you again for your review! One of your major concerns appeared to be the lack of a \"real data\" experiment, which we have now provided.\n\nCould you please let us know if this experiment (and our other clarifications below) has helped address your concerns? We'd be glad to provide any additional clarifications in the remaining time available for discussion.", " Thank you again for your review! One of your major concerns appeared to be the lack of a \"real data\" experiment, which we have now provided. \n\nCould you please let us know if this experiment (and our other clarifications below) has helped address your concerns? We'd be glad to provide any additional clarifications in the remaining time available for discussion.", " I sincerely thank the authors for the answers. The additional real-world experiment is really helpful. I maintain my current score.", " Thank you for your helpful comments and feedback. Please see the main comment for empirical results on a real-world dataset and thoughts on the utility and practicality of our framing and setup. \n\n**This work provides a meta-algorithm that can report unbiased causal effects outside of the support of RCT with confidence intervals. This work's only potential negative social impact might happen when they falsely accept the biased estimator from an observational study that is not consistent with the estimator of RCT. The author may provide more examples and cases of what would happen when this case is true.**\n\nThank you for this insightful comment. What you pose as a concern is only a problem if we **only** accept biased estimators. However, as long as we also do not reject the unbiased estimator, the resulting interval will have the appropriate coverage of the true causal effect (since we take a union over all the non-rejected intervals). Indeed, in the above real-world experiment, we find that we include the unbiased observational study in >98% of the trials. \n", " Thank you for your feedback. We address your first point in our general comment, and focus here on the remaining points.\n\n**The proposed method (Algorithm 1) is not very novel and a bit trivial, which includes some standard techniques we often do in practice, e.g. constructing asymptotic normal estimators and doing a t-test. The author does not do much to make the method powerful, i.e., less conservative. I feel all the theoretical results are expected and follow from the DML literature.**\n\nWhile we appreciate the perspective, we see the simplicity of the algorithm as a positive factor, not a negative one. Likewise, from our perspective, using standard statistical tests (in the context of a broader algorithm) does not render the entire algorithm “not novel”, particularly given the lack of other algorithms that solve the same problem with similar guarantees.\n\nThe main insight of Algorithm 1 lies not in the construction of new hypothesis tests (as you correctly point out, the tests we use are fairly standard), but rather the conservative strategy for combining intervals, which leverages the guarantees of the underlying tests.\n\n**The method looks quite conservative in Figures 3 and 4. 
**The method looks quite conservative in Figures 3 and 4. The interval width is similar to the simple Union, which reports the union bounds of the confidence intervals of all observational studies, with no falsification procedure.**\n\nThe difference is more striking in the real-data experiment (see our general comment, Table 1), where the length of the intervals output by ExPCS is ~20% of the length of those output by the simple union.\n\nIn general, this difference is problem-dependent, depending on the bias of individual estimators and the sample size: when it is easier to reject observational estimators, the intervals of ExPCS will tend to be shorter. Regarding sample size, we demonstrate in Figure 3 (right) that ExPCS tends to have shorter intervals, relative to the simple union, as the sample size increases. In general, ExPCS will also perform better in situations where some observational studies are more severely biased. \n", " Thank you for your feedback. We address your questions (1) and (2) in our general comment, and focus here on addressing the remaining questions.\n\n**Relating methods used for effect estimation in observational data to quality of falsification. This is clear to me based on the main assumption of asymptotic normality, it is however unclear if this is the only assumption needed? I am assuming all results use the DML estimators since they satisfy the assumption?**\n\n*Regarding “unclear if this is the only assumption needed?”*: The stated assumptions (2.1-2.4) are sufficient for the stated results (e.g., Theorem 3.1), as spelled out in the theorem statement. If there is some other point of confusion regarding the necessity of the assumptions, please feel free to raise this during discussion.\n\n*Regarding the use of DML estimators*: a range of estimators beyond DML-style estimators would also satisfy the asymptotic normality conditions. For instance, well-specified parametric models fit by maximum likelihood would also satisfy this condition, as discussed in Example 2.2 on lines 167-170.\n\n**Based on falsification, why can't a stronger statement be made about an estimator that aggregates the extrapolated effects?**\n\nA stronger statement on the performance of the ExPCS algorithm (e.g., the probability of correctly rejecting all observational estimators that are not consistent for the true causal effect) would require additional assumptions regarding the power of each test, which depends on the level of bias and the variance of the estimators themselves. However, such a result would not be difficult to derive: Equation (4) already provides the asymptotic probability of correctly rejecting an observational estimator. \n", " \nWe thank the reviewer for their helpful comments and feedback. We will respond to each point in turn. \n\n**The authors refer to observational estimates as \"valid\" or \"invalid\"... more precise language would focus on the bias and variance of a given estimator...**\n\nWhile we appreciate the feedback, we do give precise definitions in terms of asymptotic consistency, the property that the estimate converges to the true effect as the sample size becomes large.\n\n**The experiments in Section 4 are more of a demonstration than a convincing empirical evaluation of the proposed approach.**\n\nSee our general comment about the real-data experiment. \n\n**The homogeneity of causal effects is a standard (though highly suspect) assumption of most methods for estimating average treatment effect from observational studies**\n\nTo clarify, we are not claiming homogeneity of treatment effects within subgroups. 
Subgroups serve as a means of defining a reference covariate distribution, where individuals within that subgroup can still have heterogeneous treatment effects.\n\n**RCTs and observational studies often differ in ways that go far beyond the randomization of treatment assignment**\n\nThe reviewer makes the point that RCTs and observational studies differ in ways beyond randomization of treatment assignment, and that RCTs often “involve other artificial conditions that are not replicated in observational studies.” We would like to point out that there is a wealth of literature on RCT emulation using observational studies; see [2-5] for examples. Furthermore, papers on trial emulation show that when construction of the observational cohort data respects the design of the RCTs, the effect estimates of the RCTs can actually be replicated [4,5]. In our own empirical results above on the WHI dataset, we are able to replicate the RCT results (i.e. the ATE estimates for multiple outcomes) reported in [1] after careful construction of the observational cohort to match the inclusion criteria of the RCT and aligning the OS with the RCT in terms of follow-up. \n\n**The authors’ use of the term \"meta-algorithm\" seems unnecessary.**\n\n\nWe used the term originally to mean “an algorithm that takes other algorithms (i.e., estimators) as input”, where the particular estimators for GATE can be flexibly chosen, as long as they satisfy Assumptions 2.3 and 2.4. However, we can see why this might lead to some confusion, and will remove this language from the revision.\n\n**For most of the paper, the authors oddly refrain from naming the proposed algorithm.**\n\nThank you for pointing this out; we will name the algorithm earlier in the revised version.\n\n**The use of hypothesis tests, particularly given the large literature on the limitations of this basic framework, is open to criticism.** \n\nWhile we appreciate the concern, hypothesis testing (and construction of confidence intervals based on asymptotics) is nonetheless widely used in medicine and other fields (e.g., A/B testing) where we see our method being applied.\n\n\n*[1] Rossouw, Jacques E., et al. \"Risks and benefits of estrogen plus progestin in healthy postmenopausal women: principal results from the Women's Health Initiative randomized controlled trial.\" Jama 288.3 (2002): 321-333.*\n\n*[2] Franklin, Jessica M., et al. \"Emulating randomized clinical trials with nonrandomized real-world evidence studies: first results from the RCT DUPLICATE initiative.\" Circulation 143.10 (2021): 1002-1013.*\n\n*[3] Hernán, Miguel A., et al. \"Specifying a target trial prevents immortal time bias and other self-inflicted injuries in observational analyses.\" Journal of clinical epidemiology 79 (2016): 70-75.*\n\n*[4] García-Albéniz, Xabier, John Hsu, and Miguel A. Hernán. \"The value of explicitly emulating a target trial when using real world evidence: an application to colorectal cancer screening.\" European journal of epidemiology 32.6 (2017): 495-500.*\n\n*[5] Dickerman, Barbra A., et al. \"Avoidable flaws in observational analyses: an application to statins and cancer.\" Nature medicine 25.10 (2019): 1601-1606.*", " \n### Results \n\nWe report, in Table 1, the above metrics averaged across all extrapolated groups. 
\n\n| Method | Coverage | Length | Unbiased OBS Percentage |\n|--------------|----------|--------|-------------------------|\n| *Oracle* | 0.44 | 0.068 | - |\n| ExPCS (ours) | 0.45 | 0.081 | 0.988 |\n| ExOCS | 0.28 | 0.058 | - |\n| Meta | 0.03 | 0.260 | - |\n| Simple | 0.39 | 0.416 | - |\n**Table 1** \n\nWe can glean a few high-level takeaways from these results: \n\n**Compared to the “simple” baseline, our approach has better coverage with much shorter confidence intervals.** Recall that the simple baseline takes a union over all intervals estimated from each observational dataset. Thus, this result indicates that our falsification procedure is important for obtaining tighter intervals while still, by and large, retaining the unbiased observational study, which is key to getting an unbiased estimate.\n\n**Compared to the Meta and ExOCS baselines, we get comparable (or much better, in the case of Meta) length with substantially better coverage.** In particular, compared to meta-analysis, which is a standard procedure in the biostatistics and epidemiology communities as discussed in the main paper’s Related Work, we get much tighter intervals and also cover the RCT estimate with higher frequency. This result is intuitive, since one will get a biased estimate if biased observational studies are included in the meta-analysis. \n\n**We get comparable coverage and interval lengths to the oracle method.** Our coverage rate (0.45) is nearly identical to that of the oracle method (0.44), with intervals that are marginally wider (0.081 vs. 0.068). Note that our slightly improved coverage is possible due to the wider intervals. \n\nNote that our measure of “coverage” may be pessimistic, because we track coverage of the RCT point estimate, as opposed to the true causal effect (which is unknown), and the confidence intervals are designed to cover the latter, not the former.\n\nOverall, we find that our real-world results are reassuring and suggest that our method of falsification followed by a combination of intervals may be useful for how biostatisticians and clinicians do meta-analyses of observational studies. \n", " ### Experimental Setup\n\nIn this analysis, we aim to show the effectiveness of our approach in a real-world setting compared to the baselines (i.e. Simple, Meta-analysis, ExOCS). We detail how we evaluate our method below.\n\nOur experimental workflow consists of the following steps: \n\n**Step 0**: Replicate the principal results from the PHT trial, given in Table 2 of [1], using the WHI OS data. In this step, we fit a doubly robust estimator of the style given in Appendix C of the supplement.\n\n**Step 1**: While treating the WHI OS dataset as the “unbiased” observational dataset (hence the need for Step 0), simulate additional “biased” observational datasets by inducing bias into the WHI OS. We construct four additional “biased” datasets (for a total of five observational datasets, including the WHI OS dataset), using the following procedure to induce selection bias: of the people who were not exposed to the treatment and did not end up getting the event, we drop each person with some probability, $p$. We set $p = [0.1, 0.3, 0.5, 0.7]$ to get the four additional observational datasets. A sketch of this biasing procedure is given below.\n\nThis type of selection bias may reflect the following clinical scenario: consider a relatively healthy patient who does not end up taking any hormone therapy. This patient might enroll initially in the OS, but may drop out or stop responding to the surveys. If the committee running the study does not explicitly account for this drop-out rate, then the resultant study will suffer from selection bias.
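For concreteness, here is a minimal sketch of the biasing procedure (our own illustration; `os_df` stands for the WHI OS cohort, and the binary `treatment` and `event` column names are our assumptions):

```python
import numpy as np
import pandas as pd

def induce_selection_bias(df: pd.DataFrame, p: float, seed: int = 0) -> pd.DataFrame:
    """Drop each untreated, event-free participant with probability p."""
    rng = np.random.default_rng(seed)
    at_risk = (df["treatment"] == 0) & (df["event"] == 0)
    drop = at_risk & (rng.random(len(df)) < p)
    return df[~drop].reset_index(drop=True)

# One biased copy of the observational study per drop probability.
biased_datasets = [induce_selection_bias(os_df, p) for p in [0.1, 0.3, 0.5, 0.7]]
```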
[2] detail additional examples of selection bias that can occur in observational studies. \n\nImportantly, this is the only part of our setup that involves any simulation. However, in order to properly evaluate our method, we need to know which datasets in our set are biased and which are unbiased. Thus, we opt to simulate the bias. \n\n**Step 2**: In this step, we wish to run our procedure over “multiple trials,” generating confidence intervals on the treatment effect for different subgroups. To do so, we compile a list of covariates, taking covariates both from [3] and from those with high feature importance in both the propensity score model and the response surface model from the estimator in Step 0. We generate all pairs from this list and use each pair to generate four subgroups. We treat two of the subgroups as validation subgroups and two as extrapolated subgroups, in that we “hide” the RCT data in those subgroups when fitting our doubly robust transported estimator. (This gives us the benefit of knowing the RCT result for the extrapolated subgroups, which is useful in evaluation.) Pairs that don’t have enough support (threshold of 400 observations) in each group are removed. The total number of “trials” (or covariate pairs) that we have is 592 (and therefore 2368 subgroups).\n\n**Step 3**: For each of the covariate pairs, we evaluate ExPCS (our method), ExOCS, Simple, and Meta. Additionally, we evaluate an “oracle” method, which always selects only the original observational study (i.e. the base WHI OS to which we have not added any selection bias) and reports the interval estimate computed on this study. To evaluate these methods, we treat the RCT point estimates as “correct.” For each, we compute the following metrics: \n\n* **Length** – length of the confidence interval for the subgroup\n* **Coverage** – percentage of trials for which the method’s interval covers the RCT point estimate \n \nAdditionally, we report, across all trials, the percentage of trials in which our approach retains the unbiased study after the falsification step (we call this metric **Unbiased OBS Percentage**). \n\nNote that we utilize sample splitting when running the above procedure. Namely, we use 50% of the data as a “training” set, where we experiment with different classes of covariates and different types of bias, and then reserve 50% of the data as a “testing” set, on which we do the final run of the analysis and report results. All nuisance functions in the OBS doubly robust estimator are fit with a Gradient Boosting Classifier with significant regularization. In practice, we found that any highly regularized tree-based model works well.
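For reference, a generic sketch of the doubly robust (AIPW) form such an estimator takes once the nuisances are fit (this is the textbook version, not the transported estimator of Appendix C; the variable names are ours):

```python
import numpy as np

def aipw_ate(y, a, e_hat, mu1_hat, mu0_hat):
    """AIPW ATE from fitted nuisances: e_hat = P(A=1|X),
    mu1_hat / mu0_hat = E[Y|A=1,X] / E[Y|A=0,X]; inputs are numpy arrays."""
    psi = (mu1_hat - mu0_hat
           + a * (y - mu1_hat) / e_hat
           - (1 - a) * (y - mu0_hat) / (1 - e_hat))
    return psi.mean(), psi.std(ddof=1) / np.sqrt(len(psi))  # estimate, SE
```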
\"Effects of calcium, vitamin D, and hormone therapy on cardiovascular disease risk factors in the Women's Health Initiative: a randomized controlled trial.\" Obstetrics and gynecology 129.1 (2017): 121.*", " ## Details of WHI experiment \n\nWe begin by addressing concerns with our empirical evaluation by assessing our algorithm on clinical trial data and observational data available from the Women’s Health Initiative (WHI). The RCTs were run by the WHI via 40 US clinical centers from 1993-2005 (1993-1998: enrollment + randomization; 2005: end of follow-up) on postmenopausal women aged 50-79 years, and the observational dataset was designed and run in parallel on a similar population. Note that this data is publicly available to researchers and requires only an application on BIOLINCC (https://biolincc.nhlbi.nih.gov/studies/whi_ctos/).\n\n### Data \n\nWHI RCT — There are three clinical trials associated with the WHI. The RCT that we will be leveraging in this set of experiments is the Postmenopausal Hormone Therapy (PHT) trial, which was run on postmenopausal women aged 50-79 years who had an intact uterus. This trial included a total of $N_{HT} = 16608$ patients. The intervention of interest was a hormone combination therapy of estrogen and progesterone. Specifically, post-randomization, the treatment group was given 2.5mg of medroxyprogesterone as well as 0.625mg of estrogen a day. The control group was given a placebo. Finally, there are several outcomes that were tracked and studied in the principal analysis done on this trial [1]. These outcomes are of three broad categories: a) cardiovascular events, including coronary heart disease, which served as a primary endpoint b) cancer (e.g. endometrial, breast, colorectal, etc.), and c) fractures. \n\nWHI OS — The observational study component of the WHI tracked the medical events and health habits of $N = 93676$ women. Recruitment for the study began in 1994 and participants were followed until 2005, i.e. a similar follow-up to the RCT. Follow-up was done in a similar fashion as in the RCT (i.e. patients would have annual visits, in addition to a “screening” visit, where they would be given survey forms to fill out to track any events/outcomes). Thus, the same outcomes, including cancers, fractures, and cardiovascular events, are tracked in the observational study. \n\n### Outcome \n\nThe outcome of interest in our analysis is a “global index”, which is a summary statistic of several outcomes, including coronary heart disease, stroke, pulmonary embolism, endometrial cancer, colorectal cancer, hip fracture, and death due to other causes. Events or outcomes are tracked for each patient, and are recorded as “day of event/outcome” in the data, where the initial time-point for follow-up is the same for both the RCT and OS. At a high level, the “global index” is essentially the minimum “event day” when considering all the previously mentioned events.\n\nWe binarize the “global index,” by choosing a time point, $t$, before the end of follow-up and letting $Y=1$ if the observed event day is before $t$ and $Y=0$ otherwise. Thus, we are looking at whether the patient will experience the event within some particular period of time or not. We set $t = 7$ years. Note that we sidestep censorship of a patient before the threshold by defining the outcomes in the following way: $Y=1$ indicates that a patient is observed to have the event before the threshold, and $Y=0$ indicates that a patient is not observed to have the event before the threshold. 
We apply this binarization in the same way for both the RCT and OS. Extending our method to a survival analysis framing is beyond the scope of this paper, but it is an interesting direction for future work.\n\n### Intervention\n\nRecall from above that the intervention studied in the RCT was 2.5mg of medroxyprogesterone + 0.625mg of estrogen, and the control was a placebo pill. The RCT was run as an “intention-to-treat” trial. To establish “treatment” and “control” groups in the OS, we leverage the annual survey data collected from patients and assign a patient to the treatment group if she confirms usage of both estrogen and progesterone in the first three years. A patient is assigned to the control group if she denies usage of both estrogen and progesterone in the first three years. We exclude a patient from the analysis if she confirms usage of one and not the other, if the field in the survey is missing, or if she takes some other hormone therapy. We end up with a total of $N_{obs} = 33511$ patients. A sketch of this assignment rule is given below.
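As an illustration of the assignment rule (our own paraphrase; `estrogen` and `progesterone` are hypothetical True/False/None indicators of confirmed use in years 1-3, and `other_ht` flags other hormone therapy):

```python
def assign_arm(row):
    """Return 1 (treatment), 0 (control), or None (excluded)."""
    e, p = row["estrogen"], row["progesterone"]
    if e is None or p is None or row["other_ht"]:
        return None      # excluded: missing field or other hormone therapy
    if e and p:
        return 1         # treatment: confirms both
    if not e and not p:
        return 0         # control: denies both
    return None          # excluded: one but not the other

# e.g., arms = os_df.apply(assign_arm, axis=1)
```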
### Data Processing + Covariates\n\nWe use only covariates that are measured in both the RCT and the OS to simplify the analysis. Because this information is gathered via the same set of questionnaires, the same covariates indicate the same quantities; in other words, there is consistency of meaning for the shared covariates between the RCT and the OS. We end up with a total of 1576 covariates.\n", " We thank you for your constructive feedback on our manuscript. In this rebuttal, we will address two points that the majority of the reviewers have brought up as weaknesses of the work: **1] concerns with respect to the “practicality” of our setup, namely how often RCTs provide consistent, group-level effect estimates** and **2] limited empirical evaluation of our approach, particularly with a “real-world” dataset**. We will address these points as a general comment to all reviewers and then provide individualized responses in additional comments below. \n\n# The practicality of our setup\n\nIn our setup, we assume that average treatment effect estimates for pre-specified subgroups $I_R$ are available for the RCT data. Here, we do allow the number of subgroups in $I_R$ to be 1, so our meta-algorithm is still valid even if **only** the average treatment effect (ATE) estimate is available for the RCT. In the case where there is more than one subgroup, the group average treatment effect (GATE) estimates can come from a prespecified subgroup analysis on the RCT data, which is common for large-scale clinical trials (see e.g., [1-3]). By using only pre-specified subgroup analyses and not post-hoc subgroup analyses, we avoid possible bias of the GATE estimates from selective reporting, which appeared to be a concern of reviewer cPqd.\n\nWe do ***not*** require that these subgroups capture all effect heterogeneity (e.g., that effects are homogeneous within pre-specified subgroups), which appeared to be a concern of reviewer Gg3m. If they do, the chances of falsification by testing GATEs may be improved, but the meta-algorithm is valid under any subgroup stratification. Additionally, as reviewer hHvj has mentioned, the grouping G should align with the RCT and observational studies. Practically, the groupings $G \\in I_R$ are determined by existing subgroup analyses in the literature. In the case where we have individual data for the observational studies, it is straightforward to produce GATE estimates for $G \\in I_R$ and to construct $G \\in I_O$, shared among the observational studies, for extrapolation and combination. \n\n*[1] Pavord, I. D., et al. (2020). Predictive value of blood eosinophils and exhaled nitric oxide in adults with mild asthma: a prespecified subgroup analysis of an open-label, parallel-group, randomized controlled trial. The Lancet Respiratory Medicine, 8(7), 671-680.*\n\n*[2] Oskarsson, P., et al. (2018). Impact of flash glucose monitoring on hypoglycaemia in adults with type 1 diabetes managed with multiple daily injection therapy: a pre-specified subgroup analysis of the IMPACT randomized controlled trial. Diabetologia, 61(3), 539-550.*\n\n*[3] Solomon, A., et al. (2018). Effect of the apolipoprotein E genotype on cognitive change during a multidomain lifestyle intervention: a subgroup analysis of a randomized clinical trial. JAMA neurology, 75(4), 462-470.*\n\n# Summary of new experimental results\n\nIn order to assess our approach on real clinical data, we use RCT + observational data available from the Women's Health Initiative (WHI). Here, each subpopulation is covered by both RCT and observational data, which is useful for evaluation; we can “hide” a subpopulation in the RCT, estimate a confidence interval using our algorithm applied to the remaining RCT + observational data, and compare the result to the hidden RCT estimate. \n\nWe do this over a large set of possible “held-out” subgroups, yielding >2000 different scenarios on which to test our approach. Because the original observational datasets replicate the RCT results fairly well using standard methods, we create additional “biased” datasets by sub-selecting the original observational dataset in a way that induces selection bias. We evaluate each method, for each held-out subgroup, according to the length of the intervals as well as coverage of the RCT point estimates. \n\nIn the aggregate, we observe (see Table 1, copied below for convenience) that our approach:\n\n* Has much shorter intervals (~20% as large) than the “Simple Union” approach, with comparable coverage of the RCT effect estimates.\n* Has much shorter intervals (~30% as large) than the Meta baseline, and superior coverage of the RCT effect estimates.\n* Has comparable intervals to the ExOCS baseline, but with superior coverage of the RCT effect estimates.\n* Has comparable coverage and intervals compared to an oracle method that always selects only the estimator from the original observational study (excluding all estimators from the biased datasets) in the falsification step.\n\n| Method | Coverage | Length |\n|--------------|----------|--------|\n| *Oracle* | 0.44 | 0.068 |\n| ExPCS (ours) | 0.45 | 0.081 |\n| ExOCS | 0.28 | 0.058 |\n| Meta | 0.03 | 0.260 |\n| Simple | 0.39 | 0.416 |\n \nIf accepted, we will include these results in the camera-ready version, where there is space for an additional page of content.\n", " The authors propose an approach to estimating causal effects for populations for which only observational data exists, given that experimental data exists for a related population. 
**Strengths**\n\n* The authors focus on how to exploit situations in which multiple data sets (both experimental and observational) are given, an increasingly realistic situation for problems of interest in social science, medicine, and other areas.\n\n* The authors focus not just on obtaining point estimates, but on obtaining confidence intervals on those effects.\n\n* The paper is well-grounded theoretically but also features an empirical demonstration of the approach on a well-known data set.\n\n* The paper is clear about its assumptions.\n\n**Weaknesses**\n\n* The authors refer to observational estimates as \"valid\" or \"invalid\". Any estimate will have error due to bias and variance. More precise language would focus on the bias and variance of a given estimator, rather than a binary determination of \"valid\" or \"invalid\".\n\n* The experiments in Section 4 are more of a demonstration than a convincing empirical evaluation of the proposed approach. The experiments employ only a single data set (IHDP), a practice which recently has been strongly critiqued (e.g., Curth et al. 2021). The approach used by the authors on IHDP could be applied to other RCT data (see Gentzel et al. 2021), but is not. The result is that readers are left with little empirical evidence that the method works in practice, and a theoretical treatment that makes a large set of assumptions that may (or may not) be valid.\n\n* The homogeneity of causal effects is a standard (though highly suspect) assumption of most methods for estimating average treatment effect from observational studies. The authors assume that the average treatment effect varies among subgroups, but that enough homogeneity exists that valid extrapolations can be made. However, RCTs and observational studies often differ in ways that go far beyond the randomization of treatment assignment. Randomized experiments often involve other artificial conditions that are not replicated in observational studies. Meanwhile, observational studies are often substantially different from each other (and very different from experimental settings).\n\n* For most of the paper, the authors oddly refrain from naming the proposed algorithm. This leads to odd linguistic constructions such as section headers that read \"Implementation and Evaluation of Meta-Algorithm\" and \"Meta-algorithm produces confidence intervals that cover the true GATE with nominal probability\". Then, in section 4.2, the authors name the algorithm (Extrapolated Pessimistic Confidence Sets (ExPCS)). Use the name early and throughout the paper.\n\n* The authors' use of the term \"meta-algorithm\" seems unnecessary.\n\n* The use of hypothesis tests, particularly given the large literature on the limitations of this basic framework, is open to criticism. Why is it more useful to describe the proposed algorithm as a \"meta-algorithm\" rather than just a simple \"algorithm\"? Is the \"meta\" prefix intended to indicate that the procedure that you describe \"is an algorithm about algorithms\", that it is \"an algorithm about meta-analysis\", or something else? To their credit, the authors are very clear about many of the limitations of their proposed approach. Clearly, the authors' approach requires that both experimental and observational data are available for a given (super) population of interest. The authors also assume that at least one observational estimate (among several) is \"valid\" across all subpopulations. 
This is a large assumption, given that any given estimate may have high bias or variance for any given subpopulation, and that those error properties are likely to vary substantially across different subpopulations. Finally, the authors assume that \"every observational dataset has support for all groups.\" This seems unlikely. Furthermore, as the authors state clearly: \"...we may reject an observational estimator due to failures in transportability, even if it yields unbiased estimates of the extrapolated effects.\"\n\nAgain, the authors are fairly clear about the limitations and assumptions of their proposed approach. This makes an extensive empirical evaluation all the more important as a demonstration that, even with these assumptions and limitations, the approach can produce accurate estimates of causal effect. Unfortunately, the empirical evaluation is limited to a single data set with known issues.", " This paper proposes a meta-analysis for reliably extrapolating group-level causal effects from multiple observational datasets when experimental data is available for some subgroups. The first part involves falsifying effect estimates from observational data that are biased. This is based on hypothesis testing, where the statistic developed compares effect estimates from observational data to those of the RCT for groups that have experimental data. Under the assumption that at least one observational dataset providing a consistent estimator exists for each group (and assuming that all estimates are pointwise asymptotically normal), the statistic allows biased estimates to be rejected efficiently. Following this, confidence intervals are generated using a simple algorithm that conservatively estimates the intervals based on the intervals of the observational data. The proposed method is evaluated on semi-synthetic IHDP data and compared to simple meta-analysis and baselines that do not use falsification. \n\n------------------------------------ Post rebuttal update ---------------------------------------------------------------------------------------------\n\nI have read the full author response and I believe they address my major concerns with the paper. I have updated the score based on the response. Strengths:\n1. The proposed falsification method is interesting, although it feels impractical due to the concerns/clarifications I mention below.\n\n2. The paper is well written, and the assumptions are clearly stated.\n\n3. The simplicity of the approach is appealing.\n\n4. Interesting experimental results.\n\nWeaknesses:\n1. It is really unclear how often RCT data that could provide consistent group-level effect estimates would be available. This implies that the design of the RCT itself needs to be explicitly targeted at estimating group effects. Hence I am not sure how practical this approach really is.\n\n2. Although the IHDP results are interesting, and the evaluation specific to the data is thorough, I believe just one semi-synthetic evaluation is too limited to be convincing. 1. Regarding my concern on practicality, can the authors suggest situations/RCTs that are explicitly designed to provide group-level effects (this is different from subgroup analysis, which is not something one should do either way)?\n\n2. I would strongly encourage adding additional experiments; I have briefly seen the code, and it will genuinely strengthen the paper.\n\n3. Relating methods used for effect estimation in observational data to quality of falsification. 
This is clear to me based on the main assumption of asymptotic normality; it is, however, unclear whether this is the only assumption needed. I am assuming all results use the DML estimators, since they satisfy the assumption? \n\n4. Conceptual clarification: Based on falsification, why can't a stronger statement be made about an estimator that aggregates the extrapolated effects?\n\n5. Not sure why, but some proofs in the appendix are made unnecessarily concise. Please expand and make everything clear. I believe the authors have adequately described the limitations of their work. Based on my clarification questions, please consider adding comments on practicality in the limitations section. ", " Randomized controlled trials maintain high standards, with inclusion criteria for recruiting patients, but may fail to include some heterogeneous patient subgroups in the full population. On the contrary, large-scale observational studies are likely to contain more diverse patient subgroups, but they can be invalid due to hidden bias from unmeasured confounders. The idea of this paper is to first validate the estimators from observational studies by comparing them with the estimator based on RCTs. This comparison is made on the patient subgroups that are observed in the RCTs. This step filters out the observational studies and their estimators that are inconsistent with the RCTs. After that, the authors use the non-rejected estimators and construct conservative confidence intervals to extrapolate the treatment effects for the subgroups that are not observed in the RCT. Strengths\n1. The paper recognizes the advantages and disadvantages of RCTs and observational studies and lets them complement each other in the method proposed to estimate group average treatment effects. The idea is interesting and well motivated!\n2. I am convinced by Assumption 2.3, that at least one observational estimator is asymptotically normal and consistent for both the validation and extrapolated effects, and by Assumption 2.2, which says that the RCT estimator is also consistent. I think the assumptions are necessary so that, at least in large samples, we can pick out the consistent observational estimator to estimate both effects.\n3. The experiments uncover both the power and conservativeness of the proposed method. I still think the proposed idea is useful in practice.\n\nWeaknesses\n1. The groups $I_R$ and $I_O$ are given instead of being learnt from the data. I don't think this is often the case in practice. We may know something about the effect heterogeneity, but it is too strong to assume our knowledge is close to the ground truth.\n2. The proposed method (Algorithm 1) is not very novel and a bit trivial, as it includes some standard techniques we often use in practice, e.g. constructing asymptotically normal estimators and doing a t-test. The authors do not do much to make the method powerful, i.e., less conservative. I feel all the theoretical results are expected and follow from the DML literature.\n3. The method looks quite conservative in Figures 3 and 4. The interval width is similar to the simple Union, which reports the union bounds of the confidence intervals of all observational studies, with no falsification procedure. Why only consider group average treatment effects? Could we extend the method to other causal parameters? 
N/A", " Randomized controlled trials (RCTs) are considered the gold standard for studying the causal relationship between treatments and outcomes, and Clinical Practice Guideline (CPG) policy recommendations are based on experimental results from RCTs. However, due to the cost (time and money) and ethical and methodological considerations, the populations in RCTs are narrow. Hence, the treatment effects outside of the support are missing. An alternative approach is to estimate treatment effects using observational data. However, the estimated treatment effects from these historical data might be biased due to the failure to control confounding effects or the existence of selection bias in the data. This work considers the problem of providing unbiased treatment effects, along with the confidence intervals, outside of the support of RCTs from the existing observational datasets. Given RCT data and multiple observational studies, this work first provides a hypothesis-testing technique to remove the observational estimates that are not consistent with the estimate of RCT. Under the assumption (Assumption 2.3) that there is a least one observational estimator that is a consistent estimator of RCT, i.e., the strong ignorability assumption holds, and the confidence intervals on the extrapolated treatment effects outside of the support of RCT is then can be provided. Throughout the experimental validation of the IHDP dataset, this work shows that their meta-algorithm can provide the confidence interval that covers the true GATE and with narrow width. Strengths:\n\n1. The paper is very written: Problem formulation, notations, assumptions, and derivations are clearly provided.\n2. Empirical results are compared with existing meta-analysis and the comparison is comprehensive.\n\nWeakness:\n\n1. A lack of a real-world dataset is provided. \nThis work provides a very solid meta-algorithm that is able to provide unbiased causal effects with confidence intervals from RCT and multiple observational datasets. However, the major question that comes to my mind is whether the use case of this approach is limited or not. I am not an expert in this area. Is it the case that many of the observational studies (or RCTs ) have a function mapping that $$ G: \\mathcal{X} \\mapsto \\{1,\\dots, I\\}.$$ and the function mappings across studies are consistent so that the subsequent analysis can be provided from the meta-algorithm developed by the authors. \n\nPerhaps it is challenging, but would it be possible to provide a real-world experimental validation using the publicly available clinical database, such as the MIMIV-IV database.\n This work provides a meta-algorithm that can report unbiased causal effects outside of the support of RCT with confidence intervals. This work's only potential negative social impact might happen when they falsely accept the biased estimator from an observational study that is not consistent with the estimator of RCT. The author may provide more examples and cases of what would happen when this case is true. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 4, 3 ]
[ "7x_oORmX3VA", "H11Et8n7KxM", "cL3_3qwEs64", "yUKXXe4iSi4A", "8CQqW8IxvKU", "kBrayJWsRg0", "H11Et8n7KxM", "cL3_3qwEs64", "1Rr0rxBMIdG", "Opof4xl0Bi", "jLTqN9O9lGLk", "nips_2022_OmLNqwnZwmY", "nips_2022_OmLNqwnZwmY", "nips_2022_OmLNqwnZwmY", "nips_2022_OmLNqwnZwmY", "nips_2022_OmLNqwnZwmY" ]
nips_2022_Yay6tHq1Nw
Improving Policy Learning via Language Dynamics Distillation
Recent work has shown that augmenting environments with language descriptions improves policy learning. However, for environments with complex language abstractions, learning how to ground language to observations is difficult due to sparse, delayed rewards. We propose Language Dynamics Distillation (LDD), which pretrains a model to predict environment dynamics given demonstrations with language descriptions, and then fine-tunes these language-aware pretrained representations via reinforcement learning (RL). In this way, the model is trained to both maximize expected reward and retain knowledge about how language relates to environment dynamics. On SILG, a benchmark of five tasks with language descriptions that evaluate distinct generalization challenges on unseen environments (NetHack, ALFWorld, RTFM, Messenger, and Touchdown), LDD outperforms tabula-rasa RL, VAE pretraining, and methods that learn from unlabeled demonstrations in inverse RL and reward shaping with pretrained experts. In our analyses, we show that language descriptions in demonstrations improve sample-efficiency and generalization across environments, and that dynamics modeling with expert demonstrations is more effective than with non-experts.
Accept
This work proposes to learn better representations for language description-based tasks like navigation, by first pretraining a dynamics model on sequences of observations without action labels and using this model to aid RL-based policy learning. Good empirical results are presented and the work has been well-received. The learned dynamics model helps policy learning, especially with longer horizons. The authors are encouraged to take the reviewer feedback into account, especially the VAE experiments, and to add discussion.
val
[ "mERFQJcXVbT", "EbjB3ZJQOXI", "mRtx-RQFC46", "qil9cB4a-1B", "qiKVHTuPjI", "3wp-sh0P6J6", "YWW3pjDZcy8", "mQ67FiBRN3C", "nZ0e8CYTJkV", "IgzpDcb5aYR", "81utqpezyf3", "fiQYxzkEttj", "u5Ww-BUGjn_" ]
[ "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " As suggested by ZHyY, we implemented a VAE pretraining baseline, a standard representation learning method for RL, and added its results for Messenger to Figure 6 and RTFM to Figure 5. The intermediate variable for the VAE here is the representation just before the policy head. We found that VAE pretraining underperforms LDD (after considering the standard deviation). The other environments require substantially more time to run and we will not be able to finish the experiments by the end of the discussion period. We will add a detailed discussion of VAE results. Initial evidence from RTFM and Messenger suggests that LDD significantly outperforms VAE, especially when evaluation requires generalization to significantly different environment dynamics via reading (RTFM). Please let us know if you have any questions. Thank you again!", " As suggested, we implemented VAE pretraining and added its results for Messenger to Figure 6 and RTFM to Figure 5. The intermediate variable for the VAE here is the representation just before the policy head. We found that VAE pretraining underperforms LDD (after considering the standard deviation). The other environments require substantially more time to run and we will not be able to finish the experiments by the end of the discussion period. Initial evidence from RTFM and Messenger suggests that LDD significantly outperforms VAE, especially when evaluation requires generalization to significantly different environment dynamics via reading (RTFM). Please let us know if you have any questions. Thank you again!", " Thank you for your prompt response!\n\n## Having a representation learning comparison\nWe will add an observation representation baseline that does not use language to this work.\n\n## Posting a summary of changes made\nWe have posted a summary of our changes as well as the corresponding manuscript locations.\n\n## $\zeta$ isn't described\nWe have defined $\zeta$.\n\nThanks again for your help!", " We thank all reviewers for their detailed responses. In summary, we are glad that the reviewers found our work clearly written (W5e2, cNeZ) and novel (W5e2, ZHyY), that it investigates an important question of how language can help decision making (W5e2), that it evaluates a variety of environments and tasks (W5e2, cNeZ, xVZE), and that it makes significant improvements (ZHyY). Please find our point-by-point responses in individual replies. Below is a list of major changes we have made to the draft as suggested by the initial round of reviews:\n\n1. Added a hyperparameter table to Appendix D\n2. Modified Related works (section 2) paragraph 3 to frame LDD in prior work on representation learning\n3. Modified Results (section 4.3) paragraph 3 to discuss results from the removal of language\n4. Added a note to RTFM (Figure 5) explaining the higher variance that results from averaging partially complete strategies\n5. Added a description of the environment setup to Experiments (section 4) paragraph 1\n6. Added a VAE baseline for representation learning. Initial results for Messenger are shown in Figure 6 and for RTFM in Figure 5. Experiments for other environments are ongoing.\n\nWe have also revised the draft to clarify several comments, such as explaining $s_t$ and $\zeta$. We do not have time to run additional baselines during the response period because experiments take weeks to run; however, we will add representation learning baselines (W5e2) and perform qualitative analyses (cNeZ) as suggested by the reviewers.\n\nThank you all again for your detailed feedback. 
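P.S. For reproducibility, the VAE pretraining baseline added in change 6 can be summarized by the following minimal sketch (module and variable names are ours for illustration and do not come from our released code; the latent is taken from the representation just before the policy head, as described above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEPretrainer(nn.Module):
    """Pretrain the shared policy trunk as a VAE encoder (illustrative sketch)."""
    def __init__(self, trunk: nn.Module, latent_dim: int, obs_dim: int):
        super().__init__()
        self.trunk = trunk                      # shared with the RL policy
        self.to_mu = nn.Linear(latent_dim, latent_dim)
        self.to_logvar = nn.Linear(latent_dim, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, obs_dim))

    def loss(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)                     # representation just before the policy head
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        rec = F.mse_loss(self.decoder(z), obs)  # reconstruction term
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + kl
```

The trunk pretrained with this loss initializes the policy before RL, mirroring how the LDD dynamics model is used.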
Please engage with us during the discussion period as we strive to answer questions and clarify our presentation. We greatly value your help in improving this work!", " Thanks for the rebuttal response and making some of the proposed changes.\n\nI think you have addressed my concerns in writing. However, after reading the other reviews, I do agree with Reviewer W5e2 on having a representation learning comparison. The authors should compare against an observation representation learning method (or anything that doesn't focus on language) using the same dataset and also contextualize this paper with the representation learning for control field.\n\nThis could, for example, be a VAE training objective for the representation model on reconstructing those discrete objects in the scene or a contrastive objective on observations being in the same trajectory or being augmented versions of the same data (like CURL). \n\nFurthermore, I think the authors should post a summary of changes made (and references to the new updated manuscript locations) that addresses all reviews, so that we all can easily see the concerns and corresponding changes the authors made in response to all reviews. \n\nI will consider changing my score if a proper representation learning baseline is included and some discussion on that is added; thanks again for the response. \n\nMinor point:\nOne thing I noticed in the newest draft is that $\zeta$ isn't described in section 3.2, despite it being in the equation. The authors should clarify this.", " We thank the reviewer for their detailed feedback. We are glad that the reviewer finds the motivations and methods clearly presented and that the method is novel yet simple. We are also glad that the reviewer agrees that the improvements LDD provides are significant. Please find below our point-by-point response, followed by a summary.\n\n## Writing\nThank you, we made the suggested changes.\n\n## Ablation to show the effect of language on downstream RL\nIn the second paragraph of 4.3, we do have experiments showing that LDD performance drops when language is removed from observations during pretraining. In these two settings, the language descriptions are removed from unlabeled demonstrations. LDD is trained to predict just the observations, as the reviewer suggested. As the reviewer hypothesized, this results in worse downstream RL performance on Touchdown and on NetHack. These are the only two environments where we can remove language guidance and still have the task remain solvable. For the other environments, removing language results in missing instructions or role assignments necessary for solving the task. We're happy to discuss whether we should include this more aggressive ablation, but as the reviewer states, removing key language required to do the task would fundamentally limit the agent's ability to generalize to new evaluation environments.\n\n## Other baselines that use the dataset\nTo clarify: the two comparison methods do use the same datasets. In reward shaping, the difference between the expert observations and agent observations is used as a negative reward. In inverse RL, an inverse model is learned to label the unlabeled dataset for imitation learning. 
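For concreteness, here is a minimal sketch of the reward-shaping penalty (the names are ours for illustration; `expert_dynamics` stands for the dynamics model pretrained on the unlabeled expert demonstrations, and the difference is the mean position-wise symbolic ID agreement):

```python
import numpy as np

def shaped_reward(env_reward, prev_obs, agent_obs, expert_dynamics, coef=0.1):
    """Penalize the agent when its observation diverges from what the expert
    dynamics model predicts from the previous observation (illustrative sketch)."""
    predicted_obs = expert_dynamics.predict(prev_obs)  # expert's expected next observation
    agreement = np.mean(predicted_obs == agent_obs)    # mean position-wise symbol match
    return env_reward - coef * (1.0 - agreement)       # disagreement acts as a penalty
```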
We thank the reviewer for catching this, and will clarify this section in the writing.\n\n## Hyperparameters\nThank you, we will add a table for the hyperparameters.\n\nIn summary, we hope we have addressed the reviewer's comments, specifically clarifying how the other two baselines use the same dataset of unlabeled demonstrations, and how our ablation experiment answers the question raised by the reviewer. We have made the suggested changes to the writing, as well as added a hyperparameter table to Appendix D. Given the positive qualities the reviewer highlighted, and our clarifications and changes, would the reviewer please look over the challenges, and if satisfied, consider increasing their score to 7?", " We thank the reviewer for their detailed feedback. We are glad that the reviewer finds our problem setting realistic. Please find below our point-by-point response, followed by a summary.\n\n## Clear explanation of the problem setting\nWe initially deferred the environment description to the SILG paper, but we will describe it in more detail in the paper. We thank the reviewer for helping us improve this work. For reference, the environments studied in our experiments involve a situated interactive agent which observes visuals rendered symbolically (RTFM, Messenger, NetHack, Touchdown) or as prose (ALFWorld), along with language goals for game instances. In addition, the model observes text manuals describing instance-agnostic environment rules. The learning challenge is to learn a reading agent that generalizes to new environments with different environment rules (e.g. new entity-team associations, new parts of the map). \n\n## Comparison to exploration methods\nLDD is not an exploration method. It can be used in conjunction with exploration methods such as RND (which is not applicable in positionless environments such as Touchdown and ALFWorld) and ICM. Follow-on work may seek to experiment with the complementary gains provided by combining these methods, but this is out of scope for this paper. \n\nThe distillation in LDD comes not from intrinsic predictiveness of how agent actions affect the environment, but from how agent observations differ from expert observations computed by the dynamics model. LDD emphasizes similarity to an expert, whereas intrinsic exploration emphasizes novelty relative to prior experience; hence these methods are complementary. We thank you for bringing up this point of discussion and will aim to make this even clearer in the revised draft.\n\n## Does s_t include the language description?\nYes, it does. Thank you for raising this question. We will make this clear in the writing.\n\n## How was reward shaping with an expert applied?\nWe train a dynamics model on unlabeled expert demonstrations. Each time the agent acts, we predict what should happen according to the expert dynamics model. We use the difference between the predicted observations and the observations from following the agent policy as a penalty to the agent. That is, the reward function encourages the agent to obtain observations similar to what an expert would obtain given the same state. The difference is taken as the mean position-wise symbolic ID accuracy, which may correspond to symbols, words, or object classes depending on the environment.\n\nIn summary, we hope we have addressed the reviewer's comments, specifically clarifying the exact problem setup and the differences between exploration methods and LDD. We also thank the reviewer for the clarification questions, which help improve the quality of this work. 
Our core contribution is a simple yet effective method to improve policy learning using unlabeled demonstrations. This method is complementary to exploration methods. We show that our method improves performance across seeds in five unique environments. We trust that the reviewer will agree that this is publishable given our clarifications and responses, and consider adjusting their score correspondingly.\n", " We thank the reviewer for their detailed feedback. We are glad that the reviewer finds our work clearly written and our evaluations appropriate. Please find below our point-by-point response, followed by a summary.\n\n## Just fine-tuning\nTo clarify: in addition to initialization with the dynamics model, we also use it to distill during RL, which we show improves results. We want to emphasize that the main contribution of this work is not to engineer a state-of-the-art method, but to answer the research question of how language, as a part of unlabeled demonstrations, can be used to improve learning.\n\n## Simplicity\nWe think that the fact that LDD is conceptually simple is a benefit. We ablated its components clearly across seeds on five environments and show that their combination improves results. Positive contributions in science and engineering are not required to be complex. Simple ideas that work are valuable, and many ideas seem simple once explained and demonstrated. \n\n## Figure 7 gains\nThe gains in Figure 7 should not be considered minimal. In our draft, we note that gains in dynamics modelling performance are large for environments with sparse rewards such as NetHack and Touchdown. We see that the presentation requires clarification. We thank the reviewer for their help in improving this work.\n\n## Figure 5 variance\nThe variance in Figure 5 should not be considered a weakness for any method. For some environments, the policy can get stuck on a local optimum. For example, in Figure 5 RTFM (and Figure 6 Messenger), understanding monster assignments results in a 25% win rate. Further understanding that items need to be obtained results in a 50% win rate. Further obtaining the correct item results in a 100% win rate. Because different seeds find these local optima at different times, averaging them results in large variance across time. We see that this needs to be clarified in our draft. We thank the reviewer for their help in improving this work.\n\n## Significance of results\nOur results show that LDD consistently outperforms all methods on 4/5 environments across multiple seeds, with the exception of reward shaping on Touchdown. We argue that despite this single negative result, our experiments show that LDD is effective. We will enhance our writing to better scope our results. We thank the reviewer for their help in improving this work.\n\n## Qualitative analysis\nOur ablation results show quantitative differences between LDD and non-LDD. We agree with the suggestion of adding additional qualitative analysis. We will examine the agent behaviour with and without LDD given the same environment state, and characterize the states where the policies differ. We are open to other qualitative analyses the reviewer suggests.\n\nIn summary, we hope we have addressed the reviewer's comments, specifically clarifying misunderstandings regarding distillation, variance, and the significance of results. We are also happy to discuss potential additional qualitative analysis. We hope that the reviewer considers increasing their score. 
\n", " We thank the reviewer for their detailed feedback. We are glad that the reviewer found our work clearly written and novel. We are also glad that the reviewer agrees that our work investigates an important question of how language can help decision making, and evaluates a variety of environments and tasks. Please find below our point-by-point response, followed by a summary.\n\n## Not contextualized well enough for prior works in representation learning for RL\nWe understand that the reviewer's primary concern has to do with comparison to other approaches for representation learning for RL. However, the reviewer does not point out potential comparisons - what other techniques did the reviewer have in mind? We are happy to discuss potential experiments in detail.\n\nAt a high level, it is difficult to disentangle language from representation learning in language grounding environments because language is necessary for solving the task. For example, the agent must understand the instructions and the language specifications in the manuals in order to figure out policies that generalize to new environments during evaluation. Consequently, we do compare LDD to representation learning for RL methods such as reward shaping and inverse RL, because these methods all seek to learn better language-grounded representations. Moreover, for two environments where we can control for the presence of language, we show that describing demos with language improves performance (4.3, second paragraph).\n\nFinally, reward shaping with a dynamics model is a state-of-the-art technique for representation learning for RL. For example, see work concurrent with ours: MineDojo (https://arxiv.org/abs/2206.08853) and VPT (https://openai.com/blog/vpt). In our case, we lack millions of videos and annotated text, hence we rely on smaller dynamics models.\n\nWe will add a detailed discussion of how to frame LDD within the larger landscape of representation learning for RL. Again, we are happy to discuss potential comparisons that the reviewer has in mind.\n\n## Figure 1 and 6\nWe will update the figures in the paper. Thank you for pointing this out.\n\n## Limitations\nRegarding the limitations section: we're happy to discuss how to improve it. What improvements does the reviewer suggest?\n\nIn summary, we hope we have addressed the reviewer's comments, specifically with regards to how to frame LDD within work on representation learning for RL. We are also happy to discuss potential additional comparisons as well as improvements to the limitations section. We hope that the reviewer considers increasing their score. \n", " This paper proposes to learn a language-conditioned dynamics model (without action labels), and shows that the representation learned by this dynamics model can improve downstream policy training when combined with necessary techniques such as distillation. The method is evaluated on the SILG benchmark, which includes several distinct environments. Compared to baselines such as pure RL, inverse RL, and reward-shaped RL, the proposed method is shown to outperform or be on par with the best of all baselines across the benchmark. It is also ablated across several variants to show that dynamics modeling with language outperforms modeling without language, and that using expert trajectories for dynamics modeling is important.\n\n--------------\nPost Author Response Update\n\nThe authors have addressed my major concern with the paper. I'm updating my review score to 6. Strengths:\n\n1. 
The paper text is written with clarity.\n2. The method appears novel to the reviewer.\n3. The paper investigates an important question in the field: how can language modeling help decision-making problems?\n4. The evaluations include several distinct environments and tasks.\n\nWeaknesses:\n\nMy concern is mostly around the significance of the paper. While it's shown to be better than various baselines, the paper is not contextualized well enough against prior works in representation learning for RL. While many prior works in this domain may not involve language modeling, given the nature of the proposed method, it should be discussed how it differs from other representation learning methods for RL, and the paper should be positioned from this perspective. Moreover, while the paper shows that the proposed method outperforms various baselines, it is plausible that the improvement mostly comes from the good representation modeling. Therefore, this can only be a meaningful contribution if the paper could show comparisons to prior works in representation learning for RL. For example, what if the expert trajectories are used to train a representation module that is not a dynamics model, using some other objectives? Would this representation work well in these tasks? If not, is it because they don't model language?\n\nMinor improvements include:\n\n1. While the text is relatively clear in the paper, Figure 1 is poorly made, with blurry environment pictures and unclear method visualization. A potential improvement, for instance, can be separating the “traditional training loop” from the proposed method.\n2. Fig 6b seems to be missing several learning curves. Please refer to the comments in the weakness section.\n\nTo summarize:\n\n1. It is suggested to better contextualize the work around representation learning for RL, while advocating the importance of language modeling from this perspective.\n2. In this regard, any comparison to a well-known or state-of-the-art method in “representation for RL” could be very useful to evaluate the significance of the contribution of this work. The authors did not adequately address the limitations of this work; it is only stated that the limitation is that the environments are simulated. This is an insignificant limitation and not useful to evaluate the significance of the contribution of this work.", " This paper proposes to ground policy representations on representations learned from language-conditioned dynamics modeling. The paper argues that this is a cheap approach to obtain language grounding from observations, and illustrates across a set of different textual domains how such an approach can lead to improved RL performance. Strengths:\n\nThe paper is relatively clear to read, and the evaluation makes sense to me.\n\nWeaknesses:\n\nThe underlying proposed method seems a bit trivial to me -- the authors simply proposed to finetune on a dynamics model.\n\nThe benefit of using expert data to train the dynamics model compared to random data in Figure 7 seems to be minimal.\n\nThe reward curves of different models have very high variance. For example, see the ablations in Figure 5.\n\nThe performance of the proposed method also appears to be similar to other methods (see Figure 3).\n\nIntuitively -- why does such a dynamics modeling objective lead to grounded language representations? Can the authors illustrate this more convincingly with more qualitative visualizations or quantitative grounding results? 
Also, illustrating this qualitative difference using random frames would be helpful. See above Yes", " This paper presents a new method, Language Dynamics Distillation (LDD), which pre-trains the model to predict environment dynamics given demonstrations and language descriptions, then fine-tunes the final policy via reinforcement learning on the pre-trained representations. The main contribution claimed by the authors is that this method can learn the RL policy more efficiently. It is especially effective when the reward is sparse or delayed, and it is a realistic setting because it assumes only cheap additional expert demonstrations (e.g. video) without action labeling. Strengths\n- It considers the realistic setting that assumes only cheap additional expert demonstrations without action labeling.\n\nWeaknesses\n- It seems that the paper does not provide a clear explanation of the problem setting and the details of the proposed algorithm.\n- It seems that the baseline algorithms provided in the experiment are not sufficient. It would be better if it were compared with several algorithms proposed to address the settings of sparse and delayed rewards (ex. RND [1], ICM [2], …).\n\n[1] Yuri Burda et al, Exploration by random network distillation, ICLR 2019\n\n[2] Deepak Pathak et al, Curiosity-driven Exploration by Self-supervised Prediction, ICML 2017\n - I think the authors should have clearly defined and explained the problem setting (ex. about the language description) in Chapter 3. Does the defined s_t include a language description? 
For example, perhaps an LDD type of model that is pretrained with contrastive learning or some other representation learning objective (predict whether observations are from the same trajectory given the language description), to verify that the dynamics loss is the best commonly used objective to apply here.\n\n**Minor issues:** \n\n- L220: “expert policy instead of for”\n- Hyperparams: It’s useful for papers to at least have a small table/description of how hyperparameters were selected for all baselines and the main method, and how much tuning was done. None None" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 4, 4, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 3, 4 ]
[ "IgzpDcb5aYR", "qiKVHTuPjI", "qiKVHTuPjI", "nips_2022_Yay6tHq1Nw", "3wp-sh0P6J6", "u5Ww-BUGjn_", "fiQYxzkEttj", "81utqpezyf3", "IgzpDcb5aYR", "nips_2022_Yay6tHq1Nw", "nips_2022_Yay6tHq1Nw", "nips_2022_Yay6tHq1Nw", "nips_2022_Yay6tHq1Nw" ]
nips_2022_8UUtKmSRkXE
On Gap-dependent Bounds for Offline Reinforcement Learning
This paper presents a systematic study on gap-dependent sample complexity in offline reinforcement learning. Prior works showed that when the density ratio between an optimal policy and the behavior policy is upper bounded (single policy coverage), the agent can achieve an $O\left(\frac{1}{\epsilon^2}\right)$ rate, which is also minimax optimal. We show that under the same single policy coverage assumption, the rate can be improved to $O\left(\frac{1}{\epsilon}\right)$ when there is a gap in the optimal $Q$-function. Furthermore, we show that under a stronger uniform single policy coverage assumption, the sample complexity can be further improved to $O(1)$. Lastly, we also present nearly-matching lower bounds to complement our gap-dependent upper bounds.
Accept
This paper studies gap-dependent sample complexity in offline tabular RL. The authors show that when there is a gap in the optimal Q-function (and the density ratio between optimal and behavior policies is upper-bounded), the sample complexity can be improved from $O(1/\epsilon^2)$ to $O(1/\epsilon)$ using a pessimistic algorithm. The authors also provide gap-dependent lower bounds. The work seems correct and well-executed. It is of somewhat limited impact given the strong assumptions (tabular MDP, coverage) but fills a gap (pun intended) in the literature and the results are interesting enough to warrant acceptance.
train
[ "QSFW8RJ1Hk", "_N7SEGEXUN2", "i5LCVbNggMP", "-CGVS00YFg5", "BYXG-NPaSSL", "idZ45IIjh7", "96dA-OGy95", "xVQ8uPQ7yA-", "MRsv7RzUXN8", "5BB7NRm5kPC", "e5_vU4JJaen", "dbbMVyVLn_g", "ddUlDxa_Y9e" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your reply and for adjusting your score!\n\n+ **Setting $\epsilon = \mathrm{gap}_\min$ implies finding the optimal policy:** Sorry about the confusion. We will emphasize the difference between identifying an optimal policy and a near-optimal policy in the final version.\n+ **Significance of Upper Bounds:** Thanks for your suggestion! We will include these discussions in the final version. \n+ **Lower Bounds:** Thanks for your suggestion! It is unclear if our lower bounds can be extended to any $\mu$ and the worst $\mathcal{M}$. In our current proof, we construct a specific $(\mu,\mathcal{M})$ pair to achieve the lower bound. If $\mu$ is given, then one probably needs to design a hard instance with a more sophisticated transition kernel and reward functions. We think this could be an interesting future direction.", " Thanks for answering my questions. Most of my concerns have been addressed. I have adjusted my score accordingly :) \n\n**Setting $\epsilon=\mathrm{gap}_\min$ implies finding the optimal policy**\n\nSorry that I was missing the definition of the optimal policy in Section 4. I think what confused me before is that in Section 3 it seems that the goal is to identify a near-optimal policy, but Section 4 actually considers identifying the optimal policy. Maybe it's helpful to include both objectives in the problem setup? \n\n**Upper bounds and instance-dependent optimality**\n\nThanks for the clarification. I think it would be helpful to include those discussions in the paper. \n\n**Lower bound**\n\nThanks for providing a better bound. \n\nSure, I understand those minimax results consider the complexity given the worst $(\mu, \mathcal{M})$. But maybe a more interesting result would be that for any $\mu$, what's the sample complexity given the worst $\mathcal{M}$. Any comment on this?\n", " Thanks again for your review. We hope our answers could increase your confidence. As the discussion period is close to the end and we have not yet heard back from you, we would be glad to know whether our rebuttal response has addressed your questions/concerns.\nWe are more than happy to discuss further; if you have any remaining concerns or issues, please kindly let us know your feedback. Thank you for your time and help!", " Thanks again for your review. We hope our answers could increase your confidence. As the discussion period is close to the end and we have not yet heard back from you, we would be glad to know whether our rebuttal response has addressed your questions/concerns.\nWe are more than happy to discuss further; if you have any remaining concerns or issues, please kindly let us know your feedback. Thank you for your time and help!", " Thanks again for your review. We hope our answers could increase your confidence. As the discussion period is close to the end and we have not yet heard back from you, we would be glad to know whether our rebuttal response has addressed your questions/concerns.\nWe are more than happy to discuss further; if you have any remaining concerns or issues, please kindly let us know your feedback. Thank you for your time and help!", " Thank you for your appreciation! We'll keep modifying the details to make our work clearer and more complete in the final version.\n+ The difference between Subsampled VI-LCB and naive VI-LCB is that Subsampled VI-LCB has an extra subsampling step over the dataset to remove the dependence within one trajectory. This technique reduces an order of $H$ in the sample complexity. 
\n+ Actually, our analysis technique can also be applied to other LCB-style algorithms, including naive VI-LCB and PEVI-ADV [Xie et al., 2021]. The reason why we choose Subsampled VI-LCB is that it is currently one of the optimal offline learning algorithms and is relatively simple to understand.", " Thank you for your appreciation! We'll keep modifying the details to make our work clearer and more complete in the final version.\n+ **Independence Assumption:** The independence assumption on the episodes of the offline data is widely adopted in previous papers [Rashidinejad et al., 2021, Yin and Wang, 2021, Xie and Jiang, 2021, Jin et al., 2021, Uehara and Sun, 2021, Uehara et al., 2021, Zanette et al., 2021]. On the other hand, it would be an interesting future direction to consider correlated datasets. \n+ **Proof Location:** Sorry for the inconsistency of the same lemma between the appendix and the main paper. We decompose Lemma 6.1 into Definition A.5 and Theorem A.1 in the appendix. We have added explanations in the revised version.\n+ **Subsampling:** Because the analysis of Subsampled VI-LCB is not the focus of this paper and the role of subsampling has been carefully discussed in Li et al. [2022], we just quote the lemmas without an intuitive explanation. The role of subsampling is to remove the dependence between data points within one trajectory and avoid one extra dependence on $H$ in the final complexity. We did not emphasize this because our analysis method can be applied to all LCB-style offline learning algorithms. Here, Subsampled VI-LCB is just chosen as one of the LCB-style algorithms with the best performance in minimax sample complexity.\n", " Thanks for your careful reading! We address **all of your concerns** below. We hope you may reevaluate your rating.\n\nUnless specified, the line numbers refer to the ones of the original submission.\n\n+ **Setting $\epsilon = \mathrm{gap}_\min$ implies finding the optimal policy:** This is **NOT** correct. \n - We first give a counterexample: Consider a contextual bandit with two states $s_1,s_2$ of equal probability, and two actions $a_1, a_2$. Let the rewards for $a_1,a_2$ be $(1,1-2\mathrm{gap}_\min)$ and $(1, 1-\mathrm{gap}_\min)$, respectively, in the two states. Then a $\mathrm{gap}_\min$-optimal policy can be one that chooses $a_2$ in $s_1$ and $a_1$ in $s_2$. But such a policy is not an exact optimal policy.\n - The key difference between minimax sample complexity and gap-dependent sample complexity in MDPs is that we need the suboptimality of every action at each time-state pair $(h,s)$ reachable by an optimal policy to be bounded by $\mathrm{gap}_\min$, while the previous result only cares about the suboptimality at the **initial state**. This is also one of the main difficulties of generalizing bounds in bandits to bounds in MDPs.\n - Although the statement is true for multi-armed bandits, simply setting $\epsilon < \mathrm{gap}_\min$ does not imply identifying the optimal policy in an MDP, as we have explained in lines 160-161. In addition, $\pi$ can also be suboptimal in the states visited at later timesteps. \n+ **Clip Technique:** To the best of our knowledge, this is the first work that tries to translate the online gap-dependent analysis techniques to offline RL settings. The clipping with variance technique can be of great interest as it can be used to incorporate the Bernstein bonus [Simchowitz and Jamieson, 2019]. 
We believe that this technique can potentially improve the dependency on $H$ in gap-dependent bounds for online MDPs.\n+ **Significance of Upper Bounds:** \n - All of our sample complexity upper bounds are for the same algorithm, so it can directly adapt to different regions of $\epsilon$. Therefore, the true upper bound would be the minimum of all the bounds provided, including the previous $O(\epsilon^{-2})$ result. \n - As we have explained in the first question, $\epsilon<\mathrm{gap}_\min$ does not guarantee identifying an optimal policy. Therefore, $\epsilon<\mathrm{gap}_\min$ is actually a non-trivial region.\n+ **Lower Bounds (Theorem 7.1):** Thank you for your insightful comment on our lower bounds.\n - We do have a stronger version of Theorem 7.1, which introduces a new variable to remove the relationship between $P$, $C^*$ and $A$, and this version of the theorem is provided in the revised submission.\n - On the other hand, the original version is already strong enough. In lines 124-129, we repeat the definition of offline learning, an instance of which is determined by a data collection policy (behavior policy) $\mu$ and an MDP $\mathcal{M}$ together. So the gap-dependent minimax lower bound refers to the complexity under the worst $(\mu,\mathcal{M})$ pair with the given $P,C^*,S,H,\mathrm{gap}_\min$. Similar minimax lower bounds have been proposed in Dann et al. [2017], Yin et al. [2021a], Xie et al. [2021].\n+ **Instance-dependent Optimal Algorithm:** An \"instance-optimal algorithm\" (Xiao et al. [1]) is **different** from a \"gap-dependent optimal algorithm\". \n - \"Instance-optimal\" refers to optimality for a **specific problem instance** within a reasonable algorithm family. In [1], they proved that in the minimax optimal algorithm family, no single algorithm can be optimal up to a constant factor in all problem instances. \n - A \"gap-dependent bound\" in our paper refers to the **minimax optimality** over an **instance class** characterized by a set of parameters: $P,C^*,S,H,\mathrm{gap}_\min$. Our results actually complement the hardness result in [1]. Although no reasonable algorithm can be optimal for all individual instances, we show that our algorithm is nearly optimal for each instance class specified by $(P,C^*,S,H,\mathrm{gap}_\min)$. \n+ **Difference from Bandit Gap-Dependent Bounds**:\n - As we have explained in the first point, in the MDP setting, $\epsilon<\mathrm{gap}_\min$ does not imply optimality. \n - As a result, the clip technique is used in the RL setting to achieve the gap-dependent bounds.\n```\n[1] Xiao, Chenjun, et al. \"On the optimality of batch policy optimization algorithms.\" International Conference on Machine Learning. PMLR, 2021.\n[2] Lattimore, T. and Szepesvári, C., 2020. Bandit algorithms. Cambridge University Press.\n```", " Thanks for your appreciation. We address your questions below. \n+ **Assumption 3.1:** Sorry for the confusion in Assumption 3.1. $\pi^*$ is an arbitrary (possibly stochastic) optimal policy. Note that states that are reachable by stochastic optimal policies are also reachable by deterministic optimal policies, so constraining $\pi^*$ to a deterministic optimal policy works as well. We have fixed the expression. \n+ **Dependence on $\log(1/\delta)$:** The log terms in Theorem 4.1, Theorem 5.1 and Theorem 5.2 were all hidden in the $\widetilde{O}$ for simplicity. The log terms are of power 1 in all three theorems. 
The full analysis without hiding the log terms is in Appendix B.3, B.4, B.5. We have included the $\delta$ term in the revised version.\n+ **Missing definition of ALG:** Thank you for pointing this out! ALG is defined in lines 684 and 685 in the appendix (original version). We have put it in the main paper (lines 251 to 253 in the revised version).\n+ **Expected Lower Bound:** \n - First, it is common to provide expected lower bounds, e.g., in Xie et al. [2021] and Rashidinejad et al. [2021]. \n - Second, an expected upper bound follows directly from the PAC upper bound. Simple calculations show that the maximum expectation of suboptimality is $\epsilon +\delta H \mathrm{gap}_\mathrm{min}$. Taking $\delta=\frac{\epsilon}{H\mathrm{gap}_{\mathrm{min}}}$ makes it an $O(\epsilon)$ suboptimality.\n - Third, the hard instance we constructed also permits the same lower bound with constant probability, which is another commonly used form of lower bound. The intuition is that in the constructed MDP family (see C.2.1 in the appendix for details), the suboptimality of any policy is upper bounded by $\lambda H\tau$, so $\hat{\pi}$ must suffer an $\Omega(\lambda H\tau)$ suboptimality with some constant probability when the expectation of suboptimality is $\Omega(\lambda H\tau)$. \n - Thank you for reminding us of the inconsistency between the upper bounds and lower bounds. We have added a brief proof for the constant-probability lower bound in the revised version (Appendix C.2.3).", " This paper studies the gap-dependent sample complexity of obtaining the optimal policy in episodic RL with offline data. The authors first define the notions of the uniform optimal policy coverage coefficient and the relative optimal policy coverage. The uniform optimal policy coverage characterizes how \"explorative\" the offline sampling policy is, and the relative optimal policy coverage characterizes how different the offline sampling policy and the optimal policy are. Further, the authors establish an upper bound on the sample complexity of offline RL using the optimal coverage coefficient. Next, the authors argue that the optimal coverage coefficient might be too small. Hence, they provide a sample complexity based on the relative optimal policy coverage coefficient. Next, the authors provide the main proof techniques in their results. Finally, the authors provide a gap-dependent lower bound on the sample complexity of their algorithm. The main strength of the paper is a thorough study of offline RL in the episodic case. In particular, the authors provide both an upper bound and a lower bound, and they claim that they match up to some polynomial of the horizon length H. \nThe main weakness is the appearance of gap_m in the sample complexity. Although I understand this is the main motivation of the paper from the beginning, this quantity can be arbitrarily small. The authors might claim that gap_m also appears in the lower bound, and hence their bound is tight. But as I explained before, I am not yet convinced by the lower bound, since it is in terms of expectation rather than high probability. \nAnother issue is the definition of ALG in the lower bound. This notion is not defined anywhere in the paper. This is critical, since it tells us about the kind of lower bound that the authors are claiming. - In Assumption 3.1, are you implicitly assuming only deterministic optimal policies? 
The reason is that you are denoting \pi^*(s), which is the action taken at state s in the deterministic policy \pi^*.\n- In the result of Theorem 4.1, why do you hide the dependency on log(1/\delta)? I believe it is more informative if you somehow represent that dependency here.\n- In Theorem 7.1 you introduce the notion of ALG, but you have not defined this notion.\n- The lower bounds in Theorem 7.1 and Corollaries 7.1 and 7.2 are in terms of expectation. But the upper bounds in Theorems 5.1 and 5.2 are in high probability. How do you relate these two? As mentioned earlier, the upper bound in the paper is stated in terms of high probability, but the lower bound is in terms of expectation. If the authors address this concern of mine, I am willing to increase my score.", " This paper provides an analysis of gap-dependent sample complexity in offline reinforcement learning. The main contribution is to show that under the optimal policy coverage assumption, the sample complexity can be improved to $O(1/\epsilon)$, compared to $O(1/\epsilon^2)$ bounds in previous analyses. Nearly matching lower bounds are also provided by the authors. Strength:\n\nThe paper is well-written and technically sound. Studying instance-dependent analysis for offline RL is a very important topic. \n\n\nWeakness:\n\nThe proof technique is not very novel. The contributions might have limited impact. \n This paper is well written and technically sound. I tend to vote for reject due to the following concerns and questions. Please address these in the rebuttal and correct me if I misunderstand some critical part of the work. \n\n1. I am wondering why Theorem 4.1 cannot be implied by previous results by setting $\mathrm{gap}_{\min}=\epsilon$. If the output policy of an algorithm is $\epsilon$-optimal for some $\epsilon$ smaller than the gap, then we know that the algorithm can identify the optimal policy. In fact, if I understand correctly, the proof of Theorem 4.1 indeed tries to argue this. \n\n2. I like the analysis in general. By applying the clip technique, the authors can first upper bound the regret by considering a not-so-pessimistic MDP, and then apply existing techniques to continue the analysis. This makes the proof very clear and easy to follow for readers that are familiar with the literature. My main concern is that the main idea is from existing techniques in the literature. Although the paper argues that the novelty of the analysis is to consider the variance during clipping, this seems to be a marginal contribution to me. \n\n3. Theorem 5.1 shows the $\tilde{O}(1/\epsilon\mathrm{gap})$ upper bound. The authors argue that this significantly improves the sample complexity when $\epsilon$ is very small compared to the gap. I think this is a very nice contribution as it provides a sharper characterization of the sample complexity. However, I am wondering: for $\epsilon>\mathrm{gap}$, does this also mean the proposed bound is actually worse than the existing results? Also, is $\epsilon << \mathrm{gap}$ really interesting? Again, if an algorithm can be $\mathrm{gap}$-optimal, we know it can identify the optimal policy. \n\n4. I have a question related to the lower bound (Theorem 7.1). In the construction, C^* is fixed to be A. Does that mean this lower bound is only correct for some data collection policies? \n\n\n5. One reason we prefer instance (or gap) dependent analysis over minimax analysis is that it can tell us if an algorithm is adaptively optimal. 
That is, if the algorithm is optimal for both hard and easy problem instances. However, as a recent paper pointed out, no algorithm can be instance-dependent optimal in offline RL (Xiao et al., 2021). This makes the contribution of this paper less interesting. Also, I think the paper should give some discussion about how the gap-dependent analysis considered in the paper is different from the instance-dependent analysis used in the literature (Lattimore and Szepesvári, 2020).\n\nXiao, C., Wu, Y., Mei, J., Dai, B., Lattimore, T., Li, L., Szepesvari, C. and Schuurmans, D., 2021, July. On the optimality of batch policy optimization algorithms. In International Conference on Machine Learning (pp. 11362-11371). PMLR.\n\nLattimore, T. and Szepesvári, C., 2020. Bandit algorithms. Cambridge University Press.\n\n Please see the previous section. ", " This paper studies gap-dependent bounds for offline tabular RL. In particular, they show that under the optimal policy coverage assumption and a minimum positive gap, pessimistic algorithms achieve the rate of $1/N$. Under the additional condition that the density of the behavior policy is uniformly lower bounded at the reachable states of an optimal policy, pessimistic algorithms precisely identify an optimal policy using finite samples. They also accompany their upper bounds with nearly-matching lower bounds for each case. The paper also proposes the so-called deficit thresholding technique, based on the technique in an online counterpart, to analyze offline tabular RL with gap information. ## Strengths: \n- the problem is novel and interesting \n- the theoretical claims are sound and interesting to the offline RL community \n- the analysis is quite general and neat, with a new technique called deficit thresholding that seems interesting \n- every upper bound is backed up by nearly-matching lower bounds \n\n## Weaknesses: \n- The results and the techniques are limited to tabular representations and require an independence assumption on the episodes of the offline data. \n- It is sometimes not trivial to locate the proof of the main result in the paper - Lemma 6.1 is particularly interesting as it gives a stronger upper bound on the sub-optimality when pessimism and gap information are used than when the gap information is ignored. However, I seem not to be able to locate the proof of Lemma 6.1 in the appendix. Do I miss something?\n- Intuitively, what is the role of subsampling in Algorithm 2 and how does it contribute to the final bound? See the weakness section. ", " In this paper, the authors consider the offline MDP problem in an episodic setting. For this type of problem, existing works provide a minimax optimal sample complexity of $O(H^3 S C^* \epsilon^{-2})$, given a finite single policy concentrability coefficient $C^*$. In this paper, the authors further extend the discussion of the problem based on additional structure: a lower bounded Q-function gap or a lower bounded visitation probability of the behavioral policy on the support of the optimal policy. Based on these additional structures, $O(\frac{C^*}{\epsilon \cdot gap_{min}})$ or $O(\frac{1}{P \cdot gap_{min}^2})$ sample complexities are derived. Lower bounds are established and almost match the provided upper bounds except for a factor of $H^2$. \n\nOverall, the reviewer thinks this is a good submission that completes a missing part of offline RL when the problem is simple (in terms of well lower bounded gaps). The discussion is complete and the presentation is clear. 
Strength #1. This paper completes a missing part of offline RL for the case where the problem has strictly lower bounded gaps in the optimal Q-function or the behavioral policy. \n\nStrength #2. The discussion is complete in the sense that both lower and upper bounds are provided. \n\nStrength #3. The presentation is clear and easy to follow. \n\nWeakness: no obvious weakness found from the reviewer's point of view. The authors may need to give some more explanation of what the subsampled VI-LCB (Algorithm 2) is doing. In particular, it will be helpful if the authors could add some intuitive discussion on why it is needed instead of simply applying the VI-LCB. NA. This is a purely theoretical result. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "_N7SEGEXUN2", "-CGVS00YFg5", "ddUlDxa_Y9e", "e5_vU4JJaen", "5BB7NRm5kPC", "ddUlDxa_Y9e", "dbbMVyVLn_g", "e5_vU4JJaen", "5BB7NRm5kPC", "nips_2022_8UUtKmSRkXE", "nips_2022_8UUtKmSRkXE", "nips_2022_8UUtKmSRkXE", "nips_2022_8UUtKmSRkXE" ]
nips_2022_bdnZ_1qHLCW
ResQ: A Residual Q Function-based Approach for Multi-Agent Reinforcement Learning Value Factorization
The factorization of state-action value functions for Multi-Agent Reinforcement Learning (MARL) is important. Existing studies are limited by their representation capability, sample efficiency, and approximation error. To address these challenges, we propose ResQ, a MARL value function factorization method, which can find the optimal joint policy for any state-action value function through residual functions. ResQ masks some state-action value pairs from a joint state-action value function, which is then transformed into the sum of a main function and a residual function. ResQ can be used with mean-value and stochastic-value RL. We theoretically show that ResQ can satisfy both the individual global max (IGM) and the distributional IGM principle without representation limitations. Through experiments on matrix games, the predator-prey, and StarCraft benchmarks, we show that ResQ can obtain better results than multiple expected/stochastic value factorization methods.
Accept
The paper is for the most part well written and contains both theoretical analyses and a comprehensive empirical study. One of the main initial concerns brought up by various reviewers is that the relation between the proposed method, ResQ, and the closely related existing methods QTran and QPlex is not 100% clear. The authors addressed this point extensively in the rebuttal. However, the theoretical advantages of ResQ over QTran remain unclear even after the rebuttal phase for one of the reviewers (despite promising empirical results). Overall, I believe the paper's strengths make up for this potential weakness and recommend acceptance. I do want to recommend that the authors take a careful look at reviewer dP9n's comments and clarify any points of confusion in the final version of this paper.
train
[ "J2Hjnr9KT8g", "te3HHzAEBvpL", "wIva2OZRj5", "k5W0XDKU-j", "sBpzRPg1c1o", "apRy92Ww3qa", "RqpRI3Z83L", "VSWMsUFYuL1", "WBhl6i7hR8C", "-ono5mY3eRI", "7hgCd9_QgaL", "8b1jKJkLpUS", "UyH7MSB-QY", "E_mg3UDz7uP", "12bUQHN_nmJ", "i4Rba3HUE5w", "K4HgraAXO6P" ]
[ "author", "author", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We would like to express our gratitude to the reviewer for these comments, which help us improve the quality of this work. \n\nWe have changed the sentence *\"Achieving the IGM and the DIGM principles with low approximation errors and high sample efficiency remains an open challenge\"* to *\"Achieving the IGM and the DIGM principles without representation limitations remains an open challenge\"*. \n\nWe have updated Figure 1 (from 7.999 to 7.9) and updated Table 2 (from 20,000 to 50,000 steps); we have updated the distributional matrix (from 7.999 to 7.9 as well) and present the $Q_{jt}$ of all the methods. \n\nWe have added one more table in the experimental section to describe the $Q_{tot}$, $Q_{jt}$, $Q_i$, $Q_{tran}$ and $Q_r$ of the different methods. \n\nWe have changed the description regarding QTran by removing *\"$Q_{tran}$ over-estimates sub-optimal actions\"* from the paper. ", " The $Q_{jt}$ of ResQ is $Q_{tot} + w_r Q_r$, which is used to recover the optimal actions. The $Q_{jt}$ of QTran (we will write it as $Q_{tran}$ following the definition of the QTran++ paper) is defined as $\sum_{i=1}^n Q_i + V(\tau)$; it is used to recover the true optimal action as well. We will make this clear in the new version of this work to avoid confusion. The matrices shown in the paper are used to demonstrate the ability to recover optimal policies rather than the quality of reconstructing Q values. \n\n$Q_{tran}$ of QTran is not used to approximate the true value function, and it is stated in the QTran paper that *\"We found that condition (4b) is often too loose, leading the neural networks to fail their mission of constructing the correct factors of Qjt\"*. \n\nBoth ResQ and QTran use the unrestricted value function $Q_{appro}$ to approximate $Q$, and then they learn $Q_{jt}$ to approximate the optimal policy of $Q_{appro}$. We agree with the reviewer that fitting $Q_{appro}$ to $Q$ is a simple regression problem. $Q_{appro}$ approximates $Q$ well for both ResQ and QTran. Due to limited page space, we do not show the matrix of $Q_{appro}$ in the submitted paper. We have run ResQ for longer (from 20,000 to 50,000 timesteps); the $Q_{jt}$ of ResQ approximates $Q$ well, but $Q_{tran}$ does not.\n\nWe will list $Q_{tot}$, $Q_r$, $Q_{tran}$, $Q_{appro}$, and $Q_i$ in the appendix of the paper. ", " To avoid misunderstanding, let us explain the over-estimation issue of QTran further. Consider the non-monotonic payoff value function $Q$ presented in the QTran paper, listed as follows.\n\n 8 | -12 | -12 \n \n -12 | 0 | 0 \n \n -12 | 0 | 0 \n\nFirst, QTran learns a value function $Q_{appro}$ to approximate the true value function $Q$. Then QTran learns a function $Q_{tran}$ to approximate $Q_{appro}$, where $Q_{tran}=\sum_{i=1}^n Q_i(\tau_i, u_i) + V(\tau)$ (we call it $Q_{tran}$ following the definition in the QTran++ paper by the same authors; it was called $Q_{jt}$ in QTran).\n\nWe have trained QTran on $Q$ for 50,000 steps. The $Q_{appro}$ learned by QTran to approximate $Q$ is listed as follows.\n\n 8.00 | -12.01 | -12.01 \n \n -12.01 | -0.00 | -0.00 \n \n -12.01 | -0.00 | -0.00 \n\n$Q_{appro}$ approximates $Q$ well. This matches the result shown in Table 1 (c) of the QTran paper. However, $Q_{tran}$ **does not** approximate $Q_{appro}$ and the true value function $Q$ well. 
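To make the reported matrices easy to cross-check, note that $Q_{tran}$ can be reconstructed from the learned components with a few lines of numpy (a minimal illustrative sketch; the component values are the ones we report below, and small last-digit differences come from rounding the reported components):

```python
import numpy as np

# Learned components reported below (3x3 matrix game, 2 agents).
Q1 = np.array([3.31, 0.69, 0.62])  # utility of agent 1
Q2 = np.array([4.49, 0.93, 0.90])  # utility of agent 2
V = 0.20                           # learned state value V(tau)

# QTran's approximation: Q_tran(u1, u2) = Q1(u1) + Q2(u2) + V(tau).
Q_tran = Q1[:, None] + Q2[None, :] + V
print(np.round(Q_tran, 2))
# [[8.   4.44 4.41]
#  [5.38 1.82 1.79]
#  [5.31 1.75 1.72]]
```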
The $Q_{tran}$ learned by QTran is listed as follows.\n\n 8.00 | 4.44 | 4.41 \n \n 5.38 | 1.83 | 1.80 \n \n 5.32 | 1.76 | 1.73 \n\nThe $Q_i$ learned by QTran are $Q_1 = [3.31, 0.69, 0.62]$ and $Q_2 = [4.49, 0.93, 0.90]$, and $V(\tau)$ is 0.20.\n\nAs we can observe from the matrix, $Q_{tran}$ **does over-estimate** the true value function $Q$ for all sub-optimal actions. $Q_{tran}$ can be written as $Q_{tran} = Q_{jt}' + V$, where $Q_{jt}' = \sum_{i=1}^n Q_i$. Table 1 (b) of the QTran paper confirms that $Q_{jt}'$ over-estimates values as well. \n\nThe value function $Q_{appro}$ learned by ResQ to approximate the true value function $Q$ is listed as follows. \n\n 8.01 | -11.92 | -11.92 \n \n -11.91 | 0.01 | 0.01 \n \n -11.91 | 0.01 | 0.01 \n\n$Q_{appro}$ is very close to the true value function. \n\nThe $Q_{jt}$ (also called $Q_{ResQ}$) learned by ResQ to approximate $Q_{appro}$ is listed as follows.\n\n 8.00 | -12.00 | -12.00 \n \n -12.00 | -0.00 | -0.00 \n \n -12.00 | -0.00 | -0.00 \n\n$Q_{ResQ}$ approximates $Q_{appro}$ and $Q$ well, and it does not over-estimate values for sub-optimal actions. \nThe utility functions learned by ResQ are $Q_1 = [1.43, -2.14, -2.11]$ and $Q_2=[1.42, -2.14, -2.11]$.\n\nQPlex does not learn a surrogate approximation function $Q_{appro}$ to approximate the true value $Q$; it learns $Q_{plex}$ to approximate the true value $Q$ directly. $Q_{plex}$ is listed as follows.\n\n 26.57 | 6.33 | 6.73 \n \n 8.25 | 19.05 | 18.95 \n \n 8.28 | -0.00 | 19.09 \n\n$Q_{plex}$ is quite different from $Q$, and it is not stable: $Q_{plex}$ can change quickly every 1,000 training steps. \n$Q_{plex} = V + Adv$, where $V = \max_u Q$ and $Adv = Q - V$. During learning, $Q_{plex}(u, v)$ tries to match $r + \gamma \max_u Q_{plex}(u, v) = r + \gamma \max_u(\max_u Q + Adv)$. As there are two max operators during the update of QPlex, we think that this causes its learning instability. \n", " Thanks for the reviewer's quick response and insightful comments. As it is written in the reviewer's comment *\"ResQ uses $Q_r$ to compensate the difference between the true Q-value and $Q_{tot}$ while Qtran ignores this part.\"*, we want to thank the reviewer for their agreement that the residual function can help ResQ model the true Q better than QTran. We believe we have addressed the reviewer's initial questions *\"what is the benefit of learning the residual function\"*, *\"better theoretical guarantees than Qtran\"*, and *\"why ResQ perform better\"*.\n\nWe would like to thank the reviewer for not disagreeing that ResQ is a generalization of QTran, QPlex, and CW/OW QMIX. This is a contribution of ResQ, as different designs of the main function, mask functions, and residual functions could be developed by using the idea of residual functions. We believe that using the concept of residual functions will be beneficial to MARL, and we expect that more value factorization methods using residual functions could be developed in the future. \n\nWe would like to thank the reviewer for not disagreeing that using residual functions can satisfy the IGM and the DIGM theorems without representation limitations. QTran, QPlex, and CW/OW QMIX deal with the IGM theorem only, and DMIX satisfies the DIGM theorem with representation limitations. \n\nThere may be some misunderstandings regarding QTran and ResQ. QTran learns a value function $Q_{tran}$ to approximate the true value function $Q$. 
$Q_{tran}$ does over-estimate values for sub-optimal actions, but QTran does not over-estimate the true value function $Q$ (it is defined as $Q_{jt}$ in QTran). In ResQ, we learn $Q_{ResQ}$ (denoted as $Q_{jt}$ in ResQ) to approximate the true value function $Q$. As shown in Theorem 2, $Q_{ResQ}$ does not over-estimate values for sub-optimal actions. \n\nTo discuss the relationship with QTran, we have copied Theorem 1 of QTran as follows (adapted to suit the OpenReview input format). \n\n**Theorem 1 of QTran** \nA factorizable joint action-value function $Q_{jt}(\tau,u)$ is factorized by $[Q_i(\tau_i , u_i)]$, if \n$\sum_{i=1}^N Q_i(\tau_i, u_i) - Q_{jt}(\boldsymbol{\tau}, \boldsymbol{u}) + V_{jt}(\boldsymbol{\tau}) = 0 \quad \boldsymbol{u}=\bar{\boldsymbol{u}}, \quad (4a)$ and $\sum_{i=1}^N Q_i(\tau_i, u_i) - Q_{jt}(\boldsymbol{\tau}, \boldsymbol{u}) + V_{jt}(\boldsymbol{\tau}) \geq 0 \quad \boldsymbol{u} \neq \bar{\boldsymbol{u}}, \quad (4b)$\nwhere $V_{jt} = \max_{\boldsymbol{u}} Q_{jt}(\boldsymbol{\tau}, \boldsymbol{u}) - \sum_{i=1}^N Q_i(\tau_i, \bar{u}_i).$\n\nIn this theorem, $Q_{jt}(\boldsymbol{\tau}, \boldsymbol{u})$ is the true value function, and QTran does not over-estimate it. The approximated value function $Q_{tran}$ (following the definition in QTran++, written by the same authors) is defined as $Q_{tran}(\boldsymbol{\tau}, \boldsymbol{u})= \sum_{i=1}^N Q_i(\tau_i, u_i) + V_{jt}(\boldsymbol{\tau})$. From (4b) of this theorem, $\sum_{i=1}^N Q_i(\tau_i, u_i) - Q_{jt}(\boldsymbol{\tau}, \boldsymbol{u}) + V_{jt}(\boldsymbol{\tau}) \geq 0 \quad \boldsymbol{u} \neq \bar{\boldsymbol{u}}$. That is, $Q_{tran}(\boldsymbol{\tau}, \boldsymbol{u}) \geq Q_{jt}(\boldsymbol{\tau}, \boldsymbol{u}) \quad \boldsymbol{u} \neq \bar{\boldsymbol{u}}$. $Q_{tran}$ is the function learned by QTran to approximate $Q_{jt}$. Clearly, $Q_{tran}(\boldsymbol{\tau}, \boldsymbol{u})$ can over-estimate the true value function $Q_{jt}(\boldsymbol{\tau}, \boldsymbol{u})$ for sub-optimal actions. In QTran, $Q_{jt}' = \sum_i^N Q_i(\tau_i, u_i)$ is called the ``transformed joint-action value function''. It is analogous to $Q_{tot}$ in ResQ. We will improve the writing of this work to avoid confusion.\n\nResQ, QTran, QPlex, and CW/OW QMIX are value factorization methods within the centralized training with decentralized execution regime. They use $[Q_i]$ for execution, and they suffer from the same theoretical limitation of $Q_i$. However, these $[Q_i]$ are trained with the help of value factorization methods (mixer functions). Better mixer functions can lead to better $[Q_i]$ for execution. \n\nWe agree with the reviewer that the upper bound for ResQ, QTran, QPlex, and CW/OW QMIX is to find the optimal policies that satisfy the IGM theorem. As we use neural network approximators for the functions in ResQ, we will soften the claim to state that ResQ finds policies that approximately satisfy the IGM and the DIGM theorems, and that the experimental results are promising. ", " I would like to thank the authors for their feedback. However, it does not address my major concerns, which I raised following the Main Response.\n\nFurthermore, in the matrix game, I’m still puzzled as to why the $Q_{jt}$ of ResQ and Qtran have such a large gap to the payoff matrix. They are both trained using $MSE(r, Q_{jt})$ with sufficient exploration and no representation limitation. Therefore, it is a very simple regression problem, and the $Q_{jt}(u_1,u_2)$ should be very close to $r(u_1, u_2)$ for any action pair. 
Is there something I’m missing?\n", " According to the authors, ResQ can model suboptimal state-action pairs better than Qtran, resulting in better performance. In general, I disagree with this claim. The $Q_{jt}$ in ResQ can model the true Q-value well, or the $Q_r$ can well compensate the difference between the true Q-value and $Q_{tot}$. It’s true. But the $Q_{jt}$ of Qtran can also model the true Q-value well since it has no representation limitations. \n\nWhat really matters are the properties of the $[Q_i]$s, which are directly related to the action selection. $Q_i$, on the other hand, cannot capture both optimal and sub-optimal state-action pairs simultaneously. For example, in a tabular setting, representing the Q-value requires $|S||A|^n$ parameters, whereas the $[Q_i]$s only contain $n|S||A|$ parameters. As a result, $Q_i$ must sacrifice some suboptimal actions in order to fit the optimal actions. In this way, the sum of $Q_i$ (i.e. $Q_{jt}'$) in Qtran will overestimate some sub-optimal actions, not the overestimation of $Q_{jt}$ itself. Similarly, the sum (or mix) of $Q_i$ in ResQ, i.e. the $Q_{tot}$, will also overestimate some sub-optimal actions. The difference is that ResQ uses $Q_r$ to compensate the difference between the true Q-value and $Q_{tot}$ while Qtran ignores this part. The authors mistakenly compare the $Q_{jt}$ in ResQ with the $Q_{jt}'$ in Qtran and conclude that ResQ matches the sub-optimal actions better.\n\nIn summary, both Qtran’s and ResQ’s $Q_{jt}$ can model the true Q-value well, and Qtran’s $Q_{jt}'$ and ResQ’s $Q_{tot}$ will overestimate some sub-optimal actions. The theoretical property of $Q_i$ is essential for maintaining optimal consistency. However, ResQ does not have a better guarantee than Qtran. Similarly for Qplex, whether “ResQ can use more expressive mixers” or “places fewer restrictions” on the residual function does not really matter, because the theoretical property of $Q_i$ is the same. With the same guarantee of optimality, one could even argue that a more restricted architecture has a smaller parameter space and is thus easier to learn.\n", " *Reply to Q2*\n\nResQ is not significantly affected by inequality violations. The residual function uses the negative absolute function to ensure that $Q_r \leq 0$ (please refer to lines 216-217 of rest\_q\_learner\_central.py). We have evaluated the performance of a variant of ResQ, ResQ-MSE, which uses an MSE loss to implement $Q_r \leq 0$. We have studied the performance of ResQ-MSE in the MMM2, MMM, and 8m\_vs\_9m scenarios. The performance of ResQ-MSE and ResQ is similar in these three scenarios. This indicates that the violation of inequality conditions does not affect the performance significantly. \n\nWe agree with the reviewer that using the MSE loss in QTran to ensure the inequality constraints could lead to violations of the IGM principle. We have studied QTran's MSE loss caused by inequality violations in the MMM2 scenario. The MSE loss is 2e-3, which suggests that only a few inequality violations exist. \n\n*Reply to Q3*\n\nFor Table 2, the optimal value estimated by ResQ is 8.10, which differs by 0.1 from 8.00, the true optimal value. ResQ has the best approximation quality regarding the optimal action. QTran and QPlex fail to find the optimal action. The wrongly estimated optimal value of QPlex is 7.49. Its gap to the optimal value (0.51) is much larger than the gap of ResQ (0.1). For sub-optimal action values in Table 2, QPlex indeed recovers its values more closely than ResQ and QTran. 
However, for the matrix listed in Table 1 of the appendix, QPLEX does not model the sub-optimal actions more closely than ResQ/ResZ and QTran. \n\nIn general, QPlex does not perform well when recovering the optimal actions for difficult pay-off matrices. We have studied three more matrices to evaluate all the studied algorithms. For the matrix [[2.5, 0, -100],[0, 2, 0],[-100, -100, 3]], only ResQ and QTran can recover the optimal action; the other algorithms cannot. For the matrix [[8, -12, -12],[-12, 0, 0],[-12, 0, 7.9]], only ResQ, CW QMIX, and QTran can recover the optimal action. QPlex does not model the sub-optimal actions better than ResQ and QTran. For the matrix [[8, -12, -120], [-12, 7.8, 7.7], [-120, -130, 7.9]], ResQ recovers the optimal action, and the other algorithms do not.\n\nWe chose 7.999 to emphasize that QTran is likely to over-estimate sub-optimal state-action pairs. QTran wrongly estimated the optimal action and enlarged the value gap. The gap estimated by QTran between the first and the second largest value is 0.02, which is larger than the gap (0.001) between 8 and 7.999. We agree with the reviewer that a 0.001 difference may be too small for RL algorithms due to their stochastic nature. We will replace 7.999 with 7.9, and add all the new experimental results to the paper.", " We thank the reviewer for your time and effort in reviewing this submission. The valuable comments can help us significantly improve the quality of this work. We have added more experimental results to address the concerns of the reviewer in the appendix of the new version of this work, and we will answer the reviewer's comments as follows.\n\n**Comparison to QTran and QPLEX**\nResQ can be viewed as a generalization of QTran, QPLEX, and CW/OW QMIX, and it has a better theoretical guarantee than QTran, QPlex, and CW/OW QMIX. ResZ extends ResQ from expectation-based RL to distributional RL. It can be viewed as a generalization of DMIX and DDN. We show in this work that using the residual function for factorization can satisfy both the IGM and the DIGM theorem. Formula (8) in Theorem 2 ensures that ResQ can find a state-action value function which models all sub-optimal state-action pairs well. This property enables ResQ to achieve a lower approximation error than QTran. Please refer to the Main Response for a more comprehensive comparison to QTran, QPlex, and CW/OW QMIX.\n\n**Why does ResQ perform better**\nTo answer this question, we have evaluated the performance of 10 variants of ResQ/ResZ. We have studied why ResQ performs better in Figure 5 of the original submission. ResQ is a combination of the main function, the mask, and the residual function. By default, ResQ uses QMIX as the main function. It can be viewed as the combination of QMIX with the mask and the residual function. As shown in Figure 5 (a) and (b), ResQ performs much better than QMIX. Further, we have studied two variants of ResQ: Qtot-Atten and Qtot-VDN. They use QAtten and VDN as the main function instead of QMIX. The experimental results in Figure 5 (a) and (b) show that Qtot-Atten and Qtot-VDN perform better than QAtten and VDN, respectively. This indicates that with the use of the residual function and the mask function, the performance of the original value factorization methods (VDN, QAtten, QMIX) can be improved. \n\nWe have studied why ResZ performs better in the original submission in Figure 5 (c). ResZ can be viewed as the combination of DAtten with the mask and the residual part. 
DAtten is a stochastic variant of QAtten. We find that ResZ performs better than DAtten for the MMM2 scenario. We have also studied two variants of ResZ: Ztot-DDN and Ztot-DMIX. They use DDN and DMIX as their main function, respectively. We find that Ztot-DDN and Ztot-DMIX perform better than DDN and DMIX, respectively. This indicates that for the stochastic case, using the residual functions and the mask can improve the original value factorization methods. \n\nFigure 5 (a) (b) (c) shows that using the residual functions and the masks can improve the performance of existing value factorization methods. We study whether the neural networks of the residual functions affect ResQ significantly. The implementations of the residual functions we consider are VDN, QMIX, QAtten, feed-forward, DDN, and DMIX. As depicted in Figure 5 (d) and (e), for the MMM2 and the 8m\_vs\_9m scenarios, using a simple feed-forward (FF) network is enough for expectation-based RL. For the stochastic case, using DAtten is preferred.\n\n**Questions**\n\n*Reply to Q1*\n\nTo study the reason why ResQ performs better than QTran, we use two variants of ResQ: Qtot-VDN and Qtot-VDN-MSE. Qtot-VDN uses VDN as the main function, the same as QTran. The differences between Qtot-VDN and QTran are the residual function, the mask, and the inequality conditions. Qtot-VDN requires $Q_r \leq 0$, whereas QTran requires that $Q_{tran}(\tau, u) \ge Q(\tau, u) \quad \forall u \neq \bar{u}$. Qtot-VDN-MSE uses an MSE loss to implement $Q_r \leq 0$, and it uses VDN as its main function. For MMM2, Qtot-VDN can obtain a win rate of 0.6 (see Figure 5 (a)). The win rate for QTran is 0 (see Figure 3 (a)). For the 8m\_vs\_9m scenario, Qtot-VDN can obtain a win rate of 0.8 (see Figure 5 (h)), while QTran obtains a win rate of 0.4. The performance gap does not come from the implementation of the main function; it comes from the residual function, the mask function, and the inequality conditions. Further, Qtot-VDN-MSE performs similarly to Qtot-VDN, and it performs better than QTran. This suggests that the performance gap between ResQ and QTran does not come from the implementation of the inequality conditions of ResQ.\n", " We faithfully thank all reviewers for their insightful comments and valuable feedback. The reviewers acknowledge the novelty and originality (R1, R2, R3), the significance (R1, R2, R3), good writing quality (R1, R2, R4), promising experimental results (R1, R2, R3, R4), and the theoretical contribution (R1, R2). We will incorporate the suggestions and address the concerns in the new version of this work. We have conducted 11 more experiments to address the comments, and their results are included in the appendix of the new version. We will answer two common questions raised by the reviewers in this main response and answer other questions in a dedicated response to each reviewer.\n\n**Comparison to QTran, QPlex, and CW/OW QMIX** \nBesides satisfying the IGM and the DIGM theorem, ResQ can be viewed as a generalization of QTran, QPlex, and CW/OW QMIX, and ResQ has a theoretical advantage over them.\n\nWe can reformulate QTran in the form of ResQ. QTran approximates the true value function $Q(\tau, u)$ via $Q_{tran}(\tau, u)$, with $Q_{tran}(\tau, u) = Q_{tot}(\tau, u) + w_r(\tau, u)V(\tau)$, where $Q_{tot}(\tau, u) = \sum_i Q_i(\tau_i, u_i)$ and $w_r(\tau, u)=1$. 
It must also satisfy the inequality condition from (4b) of Theorem 1 of QTran, which requires that $Q_{tran}(\tau, u) \ge Q(\tau, u)$ for all non-optimal actions. Thus, QTran could over-estimate state-action value pairs. \n\nResQ learns $Q_{jt}(\tau, u)$ to approximate the state-action value function $Q(\tau, u)$. Theorem 1 in ResQ shows that the state-action value function $Q_{jt}(\tau, u)$ satisfies $Q_{jt}(\tau, \bar{u}) \ge Q_{jt}(\tau, u)$, where $\bar{u} = [\bar{u_i}]^n_{i=1}$ and $\bar{u_i}= \arg\max_{u_i}Q_i(\tau_i, u_i)$. Further, as shown in Formula (8) of Theorem 2 in ResQ, $Q_{jt}(\tau, u) = Q(\tau, u) \quad \forall u \neq \bar{u}$, so ResQ can find a $Q_{jt}(\tau, u)$ that matches the sub-optimal state-actions of $Q(\tau, u)$ closely. QTran cannot guarantee that the learned approximated function $Q_{tran}$ satisfies this property. Further, ResQ uses $Q_r(\tau, u)$ to model the residual part; it has more input and is more flexible than the residual part $V(\tau)$ of QTran. \n\nResQ can be viewed as a generalization of QPlex as well. QPlex learns a function $Q_{plex}(\tau, u)$ to approximate the state-action value function $Q(\tau, u)$, with $Q_{plex}(\tau, u) = V(\tau) + Adv(\tau, u)$, where $V(\tau) = \max_u Q_{plex}(\tau,u)$. According to Formula (11) of the QPlex paper, we can rewrite $Q_{plex}$ in the form of ResQ as $Q_{plex}(\tau, u) = Q_{tot}(\tau, u) + w_r(\tau, u) Q_r(\tau, u)$, where $w_r(\tau, u) = 1$, $Q_{tot}(\tau, u)=\sum_i Q_i(\tau, u_i)$ and $Q_r(\tau, u) = \sum_i(\lambda_i - 1) A_i(\tau, u_i)$. QPlex places the restriction $A_i(\tau, u_i) = Q_i(\tau, u_i) - \max_{u_i}Q_i(\tau, u_i)$ between the main and the residual functions, and this restriction implies that $A_i \leq 0$. In contrast, ResQ directly requires $Q_r(\tau, u) \leq 0$. Further, ResQ can use more expressive mixers (such as QMIX) to model $Q_{tot}$ than QPLEX (which uses VDN for $Q_{tot}$). \n\nIn summary, ResQ places fewer restrictions on the relationship between the main and the residual functions than QPLEX, and ResQ can use a more expressive neural network to model the main function than QPLEX. Thus, ResQ can model the Q value function in a better and more flexible way than QPLEX.\n\nCW/OW QMIX can be viewed as variants of ResQ. CW/OW QMIX learns $Q_{wqmix} = w_{tot}(\tau, u) Q_{tot}(\tau, u) + (1-w_{tot}(\tau, u)) Q_r(\tau, u)$ to approximate the true value function, where $w_{tot}(\tau, \bar{u}) = 1$ and $w_{tot}(\tau, u) = 0 \quad \forall u \neq \bar{u}$. They assign high learning priority to $Q_{tot}$, which hampers the learning of the sub-optimal state-action pairs in $Q_r$ and $Q_{wqmix}$.\n\n**Limitation**\nSince knowing the optimal action over $Q$ is computationally intractable, we make approximations to derive a practical algorithm. It is difficult for ResQ to find the optimal actions for a scenario requiring highly-coordinated agent exploration. The approximated mask function will fail if the approximated optimal action differs significantly from the optimal action. In ResQ, we assume that the argmax operator of $Q_{tot}$ can lead to correct optimal actions. However, this assumption does not always hold. For a discrete-action environment with a tabular Q value, we think that all state-action values can be factorized using ResQ if the masking function can find the correct optimal action (a minimal constructive sketch follows below). 
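To make the tabular claim above concrete, here is a minimal, hypothetical NumPy sketch (our own illustration, not code from the paper): it uses a VDN-style sum as the main function and checks that $Q_{jt} = Q_{tot} + w_r Q_r$ reproduces an arbitrary non-monotonic pay-off matrix with $Q_r \leq 0$ and IGM-consistent per-agent utilities. The tie-breaking bonus `eps` is an assumption we introduce.

```python
import numpy as np

# Hypothetical constructive check (not the learned networks from the paper).
Q = np.array([[8.0, -12.0, -12.0],
              [-12.0, 0.0, 0.0],
              [-12.0, 0.0, 7.9]])
u_bar = np.unravel_index(Q.argmax(), Q.shape)   # assumed-correct optimal action
q_max = Q[u_bar]

# Constant per-agent utilities plus a small bonus at the optimal individual
# actions keep each agent's greedy action consistent with u_bar (IGM).
eps = 1e-3
Q1 = np.full(Q.shape[0], q_max / 2); Q1[u_bar[0]] += eps
Q2 = np.full(Q.shape[1], q_max / 2); Q2[u_bar[1]] += eps
Q_tot = Q1[:, None] + Q2[None, :]               # >= q_max >= Q off the optimum

w_r = np.ones_like(Q); w_r[u_bar] = 0.0         # mask out the optimal action
Q_r = np.minimum(Q - Q_tot, 0.0)                # residual is non-positive
Q_jt = Q_tot + w_r * Q_r

assert np.unravel_index(Q_jt.argmax(), Q.shape) == u_bar
assert np.allclose(Q_jt[w_r == 1.0], Q[w_r == 1.0])  # matches all sub-optima
```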
As we use neural networks to represent states and actions, the approximation error of the neural networks can interact with the error of the approximated argmax operator. This may make ResQ fail. We think that combining efficient-exploration approaches (e.g., MAVEN) with ResQ can make the mask function more robust than using ResQ alone. ", " **Minors**\n1. Yes, it is a typo.\n2. The symbol \"*\" represents 0 to T, where T denotes the time step. * means that the history could be short or long. For example, $\tau_1$ could $\in O^1_1\times U^1_1\times O^2_1\times U^2_1$ or $\in O^1_1\times U^1_1$, where the superscript represents time, and the subscript is the index of an agent.\n3. Yes, it is a typo; we will fix it.\n4. Yes, we will make all the symbols more consistent.\n5. We would like to thank the reviewer for the effort spent reviewing this work. Yes, we will make all the symbols more consistent.\n6. We will replace “histories” with “history”.\n7. Yes.\n8. Thanks for your suggestion. We will state in this work ``we assume the argmax operator is unique; the action with the smallest index is selected to break ties if a tie exists''. \n9. Yes, the symbol n represents the index of an agent.\n10. Thanks, we will use $\mathcal{N}$ to represent the set of agents in a more consistent manner. \n11. We have rephrased it as ``Distributional RL models the full return distribution $Z(\tau, u)$ instead of $Q(\tau,u)$. IQN defines a distributional Bellman operator and uses it to update its $Z(\tau,u)$. After applying the distributional Bellman operator on $Z(\tau,u)$, the resulting $Z(\tau',u')$ remains in the same distribution as $Z(\tau,u)$.''\n12. Yes, we have fixed the typo in the new version. \n13. We have made “hard-to-factorize” consistent over the new version.\n14. Yes, it should be “joint” rather than “jointed”.\n15. Yes, it is an action-observation history. \n16. Yes, it should be Table 2 a).\n17. Yes, the “from (2)” in formula (4) should be that $\bar{u}$ maximizes $Q_{tot}$, and “from (2)” should be after (5). \n18. Yes, it should be $Z_r(\tau, u)$ instead of $Q_r(\tau, u)$.\n19. Yes, it should be “matrix”.", " We want to express our sincere gratitude to the reviewer. Such a thoughtful and in-depth review can help us greatly improve the quality of this work. We will improve this work based on the comments.\n\n**Weakness**\nThe proof of Theorem 1 has some similarities with QTran, but ResQ has a better theoretical guarantee than QTran. The proof of Theorem 2 is not similar to QTran. The proofs of Theorems 3 and 4 have some similarities with QTran, but they extend the idea of using residual functions from expectation-based RL to distributional RL. \n\n**Major comments**\n\n1. We have compared ResQ with QTran and QPlex in the main response.\n\n2. We have answered why ResQ can track values better than QTran and QPLEX in the main response. CW QMIX and OW QMIX focus on modeling the value of the optimal actions but put less effort into modeling the value of the sub-optimal actions. Thus, they do not model the value of sub-optimal actions well. We will soften the claim to \"Compared to QTran, QPLEX, and weighted QMIX, ResQ could track joint state-action value pairs more closely.\"\n\n3. ResQ works in the discrete-action domain. For the continuous-action domain, a new theorem analogous to the IGM theorem should be developed, and the mask function should be improved to mask out ranges of actions rather than a set of actions. 
We think it is promising to combine ResQ with MADDPG for the continuous-action domain.\n\n4. In a two-agent non-monotonic matrix, if one agent selects different actions, the direction of reward increment becomes different for the other agent. Let's take the matrix [[8, -12, -12], [-12, 0, 0], [-12, 0, 7.999]] as an example. If agent 1 chooses the first action, the reward vector for agent 2 becomes [8, -12, -12]. The reward for agent 2 monotonically increases from the right to the left. If agent 1 chooses the third action, the reward vector for agent 2 becomes [-12, 0, 7.999]. It monotonically increases from the left to the right.\n\n5. Thanks, we will discuss them in the context of this work.\n\n6. We have explained them in the main response.\n\n7. Sorry for the mistake. It should be Theorem 1 of [8]. Theorem 1 of [8] shows that Weighted QMIX can always find a monotonically increasing function $Q_{tot}$ that shares the same optimal policy as the true value function $Q(s,u)$. This indicates that for any $Q(s, u)$, there exists a monotonically increasing function $Q_{tot}(s, u)$ that shares the same optimal policy as $Q(s, u)$. We will improve the description of this lemma in the new version.\n\n8. The input to the residual mixer consists of the state, per-agent utilities, and actions. \n\n9. Yes, the performance of the approximated argmax operator depends on the initialization. The error induced by the approximation gradually reduces during training. For example, the error of the approximated argmax for the pay-off matrices in Table 2 and Table 1 (in the appendix) reduces to 0 in the end. However, for complex tasks such as the MMM2 scenario in the SMAC benchmark, we have observed from the game video that agents choose the wrong actions during testing, which leads to a game loss. \n\n10. According to the reviewer's suggestion, we have run the three stochastic RL algorithms ResZ, DMIX, and DDN on the predator-prey benchmark. For the scenario without punishment, these three algorithms perform similarly. ResZ is more stable than DMIX. However, for scenarios with punishment, none of the algorithms can obtain a decent policy that achieves positive rewards. These distributional RL algorithms are susceptible to the relative over-generalization pathology. However, the expectation-based version, ResQ, does not suffer from this problem.\n\n11. OW QMIX performed the best in the p=4 case, but it performed the worst in the p=0 case, and the second worst in the p=2 case. CW QMIX learned slightly faster than ResQ in the p=4 case. ResQ is the third best-performing algorithm in the p=4 case, and its performance is among the best-performing algorithms in the p=0 and p=2 cases. ResQ performs better than OW QMIX in the SMAC benchmark and matrix games all the time. We think that because of the randomness of these RL algorithms and the environments, ResQ does not learn faster than OW QMIX in the p=4 case.\n\n12. Figure 5 (a) (b) (c) shows that using the residual functions and the masks can improve the performance of the existing value factorization methods. The input to the residual function consists of the state, per-agent utilities, and actions, and it outputs a scalar value. A value factorization function (mixer function) takes the state, per-agent utilities, and actions as input, and it outputs a scalar value too. We implement the residual function using these mixer functions to build on previous well-designed mixer functions. 
And then, we study the performance of different residual functions in the ablation study.", " We want to thank the reviewer for the effort and time spent in reviewing this paper. We will take your valuable feedback into account to improve our work. \n\n**Non-monotonic matrix**\nThe matrix shown in Figure 1 is a complicated pay-off matrix (denoted as Matrix A). It is more difficult than the popular non-monotonic pay-off matrix (denoted as Matrix B). Matrix B was proposed in QTran, and its array form is [[8, -12, -12], [-12, 0, 0], [-12, 0, 0]]. This is a non-monotonic matrix. In Matrix B, there are two agents: agent 1 and agent 2. The optimal action is choosing the first action for both agents. If agent 2 chooses the first action, the reward vector for agent 1 becomes [8, -12, -12]. For agent 1, the reward monotonically increases from the bottom to the top (the value increases from -12 to 8). If agent 2 chooses the second/third action, the rewards for agent 1 become [-12, 0, 0]. The reward monotonically increases from the top to the bottom. Note that the direction of monotonic increment differs between the first action and the second/third actions. QMIX, VDN, DMIX, and DDN cannot learn such a non-monotonic pay-off matrix due to their representation limitations. \n\nMatrix A (the one depicted in Figure 1 of this work) is a more difficult non-monotonic matrix than Matrix B. The array form of Matrix A is [[8, -12, -12], [-12, 0, 0], [-12, 0, 7.999]]. For Matrix A, if agent 1 chooses the first action, the reward vector for agent 2 becomes [8, -12, -12]. It monotonically increases from the right to the left. If agent 1 chooses the second/third action, the rewards ([-12, 0, 0] and [-12, 0, 7.999]) for agent 2 monotonically increase from the left to the right. The direction of increment differs between the first and the second/third action. If agent 2 chooses the first action, the rewards for agent 1 become [8, -12, -12]. They increase from the bottom to the top. If agent 2 chooses the second action, the rewards for agent 1 become [-12, 0, 0]. The reward increases from the top to the bottom. If agent 2 chooses the third action, the rewards for agent 1 become [-12, 0, 7.999]. The reward does not increase from the bottom to the top. Matrix A is a more challenging pay-off matrix for MARL value factorization than Matrix B. QTran, QPLEX, CW QMIX, and OW QMIX cannot model this matrix well.\n\n**Limitations**\nIn this work, we assume that the approximated argmax operator can find the optimal actions; however, this is not always true. For scenarios with multiple agents and a large action space, the probability of executing the correct optimal action decreases exponentially with the number of agents. In these scenarios, the argmax operator may fail. We have shown that with a perfect argmax operator, ResQ can find the optimal policy for tabular state-action values. If we approximate the state-action values and states with neural networks, the neural network approximation error could interact with the error of the argmax operator. In these scenarios, ResQ may fail.\n\n**Question**\n1. The concurrent work, Residual-Q-Network (RQN), deals with MARL value factorization too. It extends QTran by adding an individual correction factor for each utility function to compute an adjusted utility function (individual Q value), and the sum of the adjusted utility functions is used as the transformed Q value $Q_{tran}$ defined in QTran. RQN can be viewed as a special case of ResQ. 
It uses the sum of per-agent utilities as the main function $Q_{tot}$, and the sum of individual correction factors as the residual function $Q_r$. In RQN, $Q_{tran}$ over-estimates the value of sub-optimal actions (see (6b) of the Theorem in the RQN paper). RQN uses the word “residual” because it implements the adjusted utility function in a way similar to ResNet (Deep Residual Learning for Image Recognition, CVPR 2015). ResQ uses the word “residual” to indicate the masked-out values from the Q function. We will compare ResQ and RQN in the related work. \n\n2. \nThe bold symbols are used in Section 2; they are used to represent vectors. We will change all the notation of vectors to use bold symbols to make the presentation of this work better. $Q_{jt}(\boldsymbol{\tau}, \boldsymbol{u})$ takes a vector of actions (i.e., $\boldsymbol{u}$) as its input, and $Q_i(\tau_i, u_i)$ takes the action $u_i$ as input. \n\n3. \nBy modifying the dimension of the last layer of the hypernet inside DMIX, we have increased the number of parameters of DMIX from 85K to 350K, which is bigger than that of ResQ (319K) and that of ResZ (316K). Let's denote DMIX with more parameters as DMIX-larger, and test its performance in the MMM2, the MMM, and the 3s\_vs\_5z scenarios. DMIX-larger performs slightly better than DMIX in the MMM2 scenario but worse than DMIX in the MMM and the 3s\_vs\_5z scenarios. ResQ and ResZ perform better than DMIX and DMIX-larger in these scenarios. This means that the performance improvement of ResQ over DMIX does not come from the use of more parameters.", " We agree with the reviewer that learning a residual function indeed incurs more computation than QMIX and VDN, but it requires a similar cost to QTran and QPlex. If we implement the residual function the same as the main function, the inference/back-propagation cost for the whole neural network could be twice the cost of using the main function alone during the training of the value function. After the value function is trained, each agent executes its action greedily according to its utility function. During agent execution, the inference cost for all the algorithms is the same. We will add a discussion of the computation cost of ResQ into the new version of this work.\n\nResQ needs a similar computational cost for inference/back-propagation as QTran and QPlex. QTran and QPlex learn $V$ functions to assist the learning of the state-action value function. The $V$ function is used to assist the implementation of the IGM principle, and its output is a scalar value. The input of QTran's $V$ consists of the state only, and the input of the $V$ function of QPlex consists of the state and utility values. The input of ResQ's $Q_r$ is the same as QPlex's. From the view of the input and the output of these functions ($V$ and residual functions), they need a similar computational cost for neural network inference and back-propagation. \n\n1. Reply to Q1. Regarding the complexity, we have discussed the time complexity of ResQ. For the space complexity, ResQ does not require many more parameters than other algorithms. For the MMM2 scenario of the SMAC benchmark, the numbers of neural network parameters of the value functions for ResQ, ResZ, ResZ-DMIX, QPlex, QMIX, and DMIX are 319K, 316K, 331K, 342K, 85K, and 85K, respectively. ResQ requires a similar parameter size to other algorithms that do not suffer from representation limitation issues. Albeit consuming fewer neural network parameters, QMIX and DMIX have representation limitation issues. 
For implementation complexity, we use a simple feed-forward network to implement the residual function, which is quite straightforward. \n\n2. Reply to Q2. We believe that the reviewer is referring to the paper “Mahajan et al. MAVEN: Multi-Agent Variational Exploration. NeurIPS 2019”. MAVEN deals with the inefficient exploration problem in MARL. Inefficient exploration problems could interact with the representation limitation problems. As stated in the MAVEN paper, this may push the algorithm towards sub-optimal policies. In MAVEN, value-based agents condition their behaviour on a shared latent variable, and the latent variable is controlled by a hierarchical policy. MAVEN relies on the underlying value factorization method to decompose Q values, and it adopts QMIX as its value factorization method. ResQ can be used in MAVEN as its factorization method. We will compare ResQ with MAVEN in the related work in the new version of this work.\n\n3. Reply to Q3. Since knowing the optimal action over Q is computationally intractable, we make approximations to derive a practical algorithm. For a scenario requiring highly-coordinated agent exploration, it is difficult for ResQ to find the optimal actions. The approximated mask function will fail if the approximated optimal action differs significantly from the optimal action. In ResQ, we assume that the approximated argmax of $Q_{tot}$ can lead to correct optimal actions. However, this assumption does not always hold. We think that combining MAVEN with ResQ can make the mask function more robust than using ResQ alone. We will discuss the limitations of this work in a discussion section of the new version.", " Post rebuttal:\n> I am updating my score post rebuttal based on the responses provided by the authors. They have answered almost all my questions satisfactorily. My increased score is contingent on the authors making the appropriate modifications to their paper as mentioned in their rebuttal.\n\n\nThis paper extends the action-value function factorization based approaches developed for centralized training and decentralized execution in cooperative multi-agent reinforcement learning problems. The main contribution is decomposing the action-value function into a factorizable component and a residual (formed by masking some state-action value pairs), such that the factorizable component alone is sufficient for each agent to independently choose their optimal actions, which also leads to joint optimality for all agents. The authors develop this method for both expected value and stochastic value (distributional) RL. The main challenges addressed by this method are improved representation capability, sample efficiency, and approximation error. The authors analyze their method theoretically and empirically through experiments on matrix games, predator-prey and StarCraft benchmarks. Strengths:\n1. The paper is well written. The original contributions are highlighted clearly.\n2. The *ResQ* insight is a neat and useful one and I think it extends the applicability of factorization based approaches to a larger set of problems.\n3. The authors provide theoretical analyses and also perform a detailed experimental study establishing the utility of their approach and improvement over the current best approaches.\n\nWeaknesses:\n1. Most of the theoretical concepts used in the paper such as IGM, DIGM have been already established in literature (and duly acknowledged by the authors). 
Furthermore, as far as I understand, the residual decomposition technique is very similar to QTRAN, QPLEX and the main difference, as far as I can see, is that it generalizes the decomposition proposed in QTRAN, QPLEX. As a result, most of the proofs in the current paper are also similar to QTRAN. Is my understanding correct? Major comments:\n1. Can the authors provide a detailed comparison with QTRAN and QPLEX which also appear to have a residual decomposition, where the residual is a value function? Is *ResQ* a generalization of QTRAN and QPLEX? What are the benefits of *ResQ* over these algorithms other than generalization?\n2. I did not understand why *ResQ* can track joint state-action value pairs more closely than other algorithms. Can the authors please elaborate this point?\n3. The authors mention that they restrict attention to discrete actions. Is this a fundamental limitation of the approach? What needs to be done to extend this work to continuous action spaces?\n4. Can the authors explain why the joint state-action value function of the task shown in Fig.1 cannot be expressed well by monotonic increasing functions (Ref. Line 81, Lines 133-134)? This might be obvious, but one or two sentences explaining this would be helpful.\n5. In the Related Work section, especially in the first paragraph, the authors list several related works. I think, it would be more useful to present these works in context to the work of the present paper instead of just listing them.\n6. In Lines 124-126, the authors state: \"QTRAN over-estimates sub-optimal actions, which may lead to non-optimal decisions, whereas ResQ can estimate correctly non-optimal state-action values.\" Isn't it shown in the QTRAN paper that under appropriate factorization, QTRAN recovers the joint optimal policy, despite this over-estimation? Can the authors clarify and elaborate on this comment in the paper? This is again mentioned in Lines 135-137, but a more detailed explanation would be helpful.\n7. In Line 459, in the Appendix, the authors write that \"Proof. This Lemma was proved as Theorem 4 of [8].\" for the Proof of Lemma 1. I could not find this theorem in [8]. As this Lemma is crucial for proving Theorem 2, I could not follow that proof as well. Can the authors provide an updated reference or am I missing something here?\n8. In Fig.2, what is the input to the residual mixer in part (a)?\n9. Doesn't approximating the mask function with the current best action ($\\tilde u$) instead of the best action ($\\bar u$) make the algorithm dependent on the initialization? Does the error introduced by this approximation go to 0 as the iterations progress? Is there any theoretical or empirical evidence for this?\n10. Why ResZ was not tried for the predator-prey example?\n11. Why is ResQ slower to learn in the p=-4 case in the predator prey example shown in Fig. 4?\n12. What is the motivation behind factorizing the residual function $Q_r$ in the ablation study in Sec 5.5?\n\nMinor comments:\n1. In Line 67, the equation is written as $s_{t+1} \\sim P(\\cdot|s^t, \\mathbfit{s^t})$. Shouldn't this be $s_{t+1} \\sim P(\\cdot|s^t, \\mathbfit{u^t})$?\n2. Can the $*$ superscript be defined in Line 70 for ease of comprehension for readers?\n3. Shouldn't the policy in Line 70 be $\\pi_i(u_i|\\tau_i)$ instead of $\\pi_i(u|\\tau_i)$?\n4. In line 71, shouldn't the joint policy be $\\pi = \\langle \\pi_1, \\dots, \\pi_n \\rangle$ instead of $\\pi = \\langle \\pi_1, \\dots, \\pi_N \\rangle$?\n5. 
The joint action-observation history is defined as $\\tau^N$ in line 70 and as $\\mathcal{T}^N$ in line 75. Can this be made consistent throughout? \n6. In line 76, consider replacing 'histories' with 'history'.\n7. In Line 76, shouldn't it be $Q_i: \\mathcal{T}_i \\times \\mathcal{U}_i \\to \\mathbb{R}$ rather than $Q_i: \\mathcal{T} \\times \\mathcal{U} \\to \\mathbb{R}$?\n8. In general *argmax* is a set operation and hence the output is a set of candidates. It would be good if the authors add a line stating that in this paper, they assume *argmax* is either unique or the ties are broken in a consistent manner so that equation (1) holds.\n9. In equation (1), consider replacing $N$ with $n$ for consistency. I think $n$ is the index of the last agent, and $N$ is a superscript that denotes the set of all agents. I think this confusion appears in multiple locations in the paper (such as lines 78, 180, 181, 484 etc.). It would be good to rectify this.\n10. It would be good to define $\\mathcal{N}$ or $N$ as the set of agents as it is used explicitly in equation (2), and then this notation can be used consistently in the paper.\n11. I could not understand this sentence in lines 88-90: \"IQN updates its value distribution through a distributional Bellman operator, which ensures the value distribution follow the same distribution.\" Can this be rephrased?\n12. I think the symbols should be $w_i, w_j$ instead of $w_1, w_2$ in Line 91.\n13. In Line 134, it should be \"factorize\" instead of \"factorized\".\n14. In Lines 147, 148, it should be \"joint\" instead of \"jointed\".\n15. In Line 151, $\\tau$ is referred to as an observation, where in fact, it is a joint action-observation history.\n16. In Line 237, shouldn't it be \"Table 2\" instead of \"Table 1a\"?\n17. In Line 454, shouldn't \"From (2)\" be after equation (5) and not equation (4)? And the reason for (4) should be $\\bar u$ maximizes $Q_{tot}$?\n18. $Q_r(\\tau, u)$ is written in Line 497. Shouldn't it be $Z_r(\\tau, u)$?\n19. In Line 524, there is a typo: \"matrxi\" -> \"matrix\".\n\n\n\n\n\n\n I think the authors need to add more discussion around the limitation of their approach. Specifically, can all action-value functions be factorized using the *ResQ* approach? They address this partially via Theorem 2, which seems to suggest that all action-value functions can be factorized using *ResQ*. If not, under what conditions does it fail? A discussion on this point would be useful.", " Authors introduce a constraint-free residual function into the state-action value function factorization construct, thereby allowing the joint (main, residual) function to be able to express a larger class of functions than with the main functions alone. The proposed algorithm is tested against a rich set of baselines and benchmarks and shows substantial improvements. The paper is easy to follow, and the claims made in the abstract are adequately addressed, with relevant theoretical setup, literature review, and experiment results analyses. Findings from the experiments indicate that the proposed algorithm works well and explainable MARL settings such as the matrix game make the better performance attributable indeed back to the motivating example.\n\nOne potential shortcoming of the paper could be the lack of insights provided on the computational costs incurred in calculating the residual functions. 
As ResQ requires fully connected neural networks to estimate the residual functions, one training step of ResQ would probably be more expensive than that of other baselines. Some analyses into this matter would be a nice addition to the draft. Q1: continued from the Strengths and Weaknesses: how much complexity has been added with the ResQ function approximator NN?\n\nQ2: How would ResQ be positioned in the literature against, for example, MAVEN? MAVEN had its take on the different classes of state-action joint value function factorization with the corresponding algorithms published at the time. How would ResQ's take differ in terms of representational complexity?\n\nQ3: Under what circumstances do the approximations for the mask functions fail? What assumptions, if any, should hold in order for the approximations to remain valid? How strong would you say are those assumptions? Please do include some analyses on the computational costs of ResQ", " This paper extends distributional QMIX with a residual value function. Distributional QMIX (DMIX) an multi-agent reinforcement learning (MARL) algorithm using distributional Q-function to tackle stochasticity in MARL. DMIX decomposes the joint Q-value function of multiple agents into several sub value functions and aggregate them with a so-called mixer network. The limitation of DMIX is that each sub Q-function needs to be monotonically increasing with respect to the joint Q-function (i.e., the derivative of the joint Q-value w.r.t., the sub Q-value must be non-negative. This paper removes this limitation by introducing a residual Q-function in the factorization. The experimental results show that the proposed method improves the performance over the baselines. Originality: The use of residual decomposition is not new, yet this is the first time being used in factorizing value function in MARL. As such, I acknowledge the originality. \n\nQuality and clarity:\nThe figure 1 keeps being cited in the paper to illustrate the representation limitations and difficulty of modeling the value function. Yet, I don't understand how this makes value function representations difficult. I would like to see further explanation from the author. \n\nSignificance:\nThis work lifted the monotonicity assumption in QMIX framework. It is remarkable to my knowledge since non-monotonic payoff functions could exist in some applications. - Could the author comment on a concurrent work [1]?\n- Do the boldsymbols stand for vectors? If so, why $Q_{jt}$ takes vector of actions as inputs in Equation (1) while taking scalar actions as inputs in Equation (4)?\n- As the proposed method uses more parameters (because of residual network) than DMIX, did the author try comparing with DMIX in a larger architecture?\n\n[1] Residual Q-Networks for Value Function Factorizing in Multi-Agent Reinforcement Learning, https://arxiv.org/abs/2205.15245 I encourage the author to think of the limitation of representation limits of residual decomposition. For example, could this residual decomposition be able to account all kinds of payoff functions?", " The paper proposed ResQ, a value factorization method for converting the joint Q function to a monotonic and residual part. Theoretical results show that the proposed method can satisfy the IGM condition, and experimental results show that it performs well in the matrix game, SMAC, and Predator Prey. The paper is written clearly in general, but it could be better organized to bridge the expectational and distributional parts. 
The paper investigates an important issue of value factorization methods, and the experiments appear to be adequate. The main weakness of this paper, in my opinion, is that the factorization and loss, both of which are similar to Qtran, have no better theoretical guarantees than Qtran and Qplex. The paper does not go into detail on how ResQ achieves \"low approximation errors and high sample efficiency.\" It is unclear why ResQ performs better. - ResQ factorizes the joint Q into a monotonic part and a residual part, whereas Qtran factorizes the joint Q into a monotonic part and a residual part as well. The difference between ResQ and Qtran is that Qtran does not explicitly learn a residual function; instead, it employs an equality and an inequality which work similarly to the coefficient w in ResQ. So, what is the benefit of learning the residual function? Does it aid in the reduction of approximation errors or the improvement of sample efficiency?\n- Both Qtran and ResQ learn a joint Q network and train individual Qs through another MSE loss. One of the main issues of Qtran is that the IGM is not guaranteed when the loss is not well minimized, which may lead to poor results in complex tasks. Is ResQ affected by the same problem? By the way, in the submitted code, Q_r is trained using the loss max(Q_r, 0), which differs from that described in the paper. \n- In the matrix game, why are the Q_jt of ResQ (Table 2(b)) and Qtran (Table 2(f)) so different from the payoff matrix? They are expected to be the same as the payoff matrix (like the Q_jt in Qplex) because they are both trained using MSE(reward, Q_jt) with no expressive constraints. Moreover, I believe that training the network to distinguish 0.001 is inappropriate. Because of the learning rate and stochastic gradient descent, the values are likely to differ by more than 0.001 each iteration after convergence.\n N/A" ]
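For readers following the matrix-game debate in the thread above, here is a compact, hypothetical replication sketch (our own reconstruction, not the authors' released code): it trains a ResQ-style decomposition on the 3x3 payoff with the full-batch $MSE(r, Q_{jt})$ loss and the negative-absolute-value trick for $Q_r \leq 0$ mentioned in the rebuttal; the seed, learning rate, and step count are assumptions.

```python
import torch

torch.manual_seed(0)
R = torch.tensor([[8.0, -12.0, -12.0],
                  [-12.0, 0.0, 0.0],
                  [-12.0, 0.0, 7.9]])
q1 = torch.zeros(3, requires_grad=True)         # per-agent utilities Q_1, Q_2
q2 = torch.zeros(3, requires_grad=True)
theta = torch.randn(3, 3, requires_grad=True)   # raw residual parameters
opt = torch.optim.Adam([q1, q2, theta], lr=0.05)

for step in range(5000):
    q_tot = q1[:, None] + q2[None, :]           # VDN-style main function
    q_r = -theta.abs()                          # enforce Q_r <= 0
    u_bar = (q_tot.argmax() // 3, q_tot.argmax() % 3)
    w_r = torch.ones(3, 3)
    w_r[u_bar] = 0.0                            # approximated mask from argmax
    q_jt = q_tot + w_r * q_r
    loss = ((q_jt - R) ** 2).mean()             # MSE(r, Q_jt) over all pairs
    opt.zero_grad(); loss.backward(); opt.step()

# Whether the per-agent greedy pair recovers (0, 0) is exactly the empirical
# question debated in this thread.
print(q1.argmax().item(), q2.argmax().item())
```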
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 2, 4 ]
[ "sBpzRPg1c1o", "sBpzRPg1c1o", "k5W0XDKU-j", "apRy92Ww3qa", "RqpRI3Z83L", "WBhl6i7hR8C", "VSWMsUFYuL1", "K4HgraAXO6P", "nips_2022_bdnZ_1qHLCW", "7hgCd9_QgaL", "E_mg3UDz7uP", "i4Rba3HUE5w", "12bUQHN_nmJ", "nips_2022_bdnZ_1qHLCW", "nips_2022_bdnZ_1qHLCW", "nips_2022_bdnZ_1qHLCW", "nips_2022_bdnZ_1qHLCW" ]
nips_2022_u4KagP_FjB
Spartan: Differentiable Sparsity via Regularized Transportation
We present Spartan, a method for training sparse neural network models with a predetermined level of sparsity. Spartan is based on a combination of two techniques: (1) soft top-k masking of low-magnitude parameters via a regularized optimal transportation problem and (2) dual averaging-based parameter updates with hard sparsification in the forward pass. This scheme realizes an exploration-exploitation tradeoff: early in training, the learner is able to explore various sparsity patterns, and as the soft top-k approximation is gradually sharpened over the course of training, the balance shifts towards parameter optimization with respect to a fixed sparsity mask. Spartan is sufficiently flexible to accommodate a variety of sparsity allocation policies, including both unstructured and block-structured sparsity, global and per-layer sparsity budgets, as well as general cost-sensitive sparsity allocation mediated by linear models of per-parameter costs. On ImageNet-1K classification, we demonstrate that training with Spartan yields 95% sparse ResNet-50 models and 90% block sparse ViT-B/16 models while incurring absolute top-1 accuracy losses of less than 1% compared to fully dense training.
Accept
All reviewers agree that the paper is clearly written and proposes an algorithm which is both novel and efficient. The rebuttal has clarified a number of points, and thereby addressed most of the concerns of the reviewers. The authors are thus strongly encouraged to take into account the comments of the reviewers and to add some of the clarifications that they provided in this discussion in the paper and supplementary materials.
train
[ "iUrKZjrcdRh", "Eqz86P6xjH", "zutGpJVkTK", "SX9vDO5EGaD", "n34LPZ4HZo", "NvwEkfz3V7F", "3EAoE7X3Z2Nk", "CHQhN9GdzAag", "5tw4Zgoe69w", "MpniuY1LsAb", "0z-SzKhcPQF", "AKg-6wulFw", "cxZrhQIVPMB", "sfGhMusYpZT", "mU3xynnpOKx", "yhLedqzE9YA" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I keep my score and vote for weak acceptance.", " We greatly appreciate this and would like to gently remind the reviewer to actually raise the rating to 6 as it still seems to be set at 3.", " I thank the authors for their response.\n\nIt is all clear to me now. The authors have addressed all my concerns. I believe that these clarification and details should be included in the final version, if the paper is accepted. \n\nI am happy to raise the score to 6.", " On scaling: all we are saying here is that it is not necessary to normalize the marginals to sum to 1 (as long as the marginals have strictly positive entries, the entropy is well-defined). Therefore, we can simply set $s=1$ to obtain the stated Sinkhorn updates.\n\nHere is a derivation of the update using the notation from the paper:\n\nThe cost matrix is $C = \\left[ -\\frac{v}{c} , 0 \\right] \\in \\mathbb{R}^{d\\times 2}$, the row marginals are $c$ and the column marginals are $[k, 1_d^Tc - k ]$. From [11, Lemma 2], we know that the optimal solution to Problem 2 can be written in the form $\\mathrm{diag}(\\exp\\nu) \\exp(-\\beta C) \\mathrm{diag}(\\exp [\\mu, \\mu'])$, with dual variables $\\mu, \\mu' \\in \\mathbb{R}$ and $\\nu \\in \\mathbb{R}^d$. Moreover, we can compute a sequence of iterates converging to an optimal collection of dual variables using Sinkhorn iteration.\n\nUse the degree of freedom in the dual variables to fix $\\mu' = 0$.\n\nThe Sinkhorn updates are therefore:\n\n$\\nu_{t+1} = \\log c - \\log (\\exp(-\\beta C_{\\cdot 1} + \\mu_t) + \\exp(-\\beta C_{\\cdot 2} + \\mu'_t)) = \\log c - \\log(1_d + \\exp(\\beta v / c + \\mu_t)),$\n\n$\\mu_{t+1} = \\log k - \\log \\sum_i \\exp(-\\beta C_{i1} + \\nu_{t,i}) = \\log k - \\log \\sum_i \\exp(\\beta v_i / c_i + \\nu_{t,i}).$", " I thank the authors for their reponse.\n\n> Contrary to our understanding of the reviewer’s comments, we note that the entropic regularization perspective from [11] and the iterative Bregman projection perspective from [4] yield equivalent Sinkhorn updates (see, e.g., Section 3.1 in [4], or Remark 4.8 in [CompOT]). Since we might be missing some context here, we invite the reviewer to elaborate on the statement that these two perspectives yield different Sinkhorn updates.\n\nIndeed, I did not cite the correct reference and thank the authors for pointing out. I intended to say that the KL divergence also can be used as regularizer (see Remark 4.2 in [CompOT]). In that case, it will result in a bit different Sinkhorn updates than those in [4],[11]. I also agree that [4] and [11] yield the same update.\n\n> This gives the updates $v = \\log c^* - \\log \\Big[ 1 + \\exp \\big( \\frac{\\beta s V}{c} + u_1 \\big) \\Big]$ and $u_1 = \\log k^* - \\log \\sum_i \\exp \\Big[ \\frac{\\beta s V_i}{c_i} + v_i \\Big]$. This matches the updates in Algorithm 4 up to a rescaling by the constant $s = 1^T c$, i.e. $c^* \\to c$, $k^* \\to k$, and $\\beta s \\to \\beta$. Given that it is not necessary to normalize the marginals to sum to 1, we recover the updates in lines 4 and 5 in Algorithm 4 exactly. Finally, in line 6, we divide by the cost vector to recover the solution to Problem 1 from the solution to Problem 2, as required by the substitution defined in L133.\n\nI agree with the authors response up to the previous part of this paragraph. I don't understand the rescaling step. 
Taking, for example, $u_1$: The update in line 5 in Algo 4 reads $u_1 = \log k - \log \sum_i \Big[ \exp \big( \frac{\beta V_i}{c_i} + v_i \big) \Big]$. On the other hand, from the authors' response, \nwe can rewrite the update as $u_1 = \log k - \log s - \log \sum_i \exp \Big[ \frac{\beta s V_i}{c_i} + v_i \Big] = \log k - \log \Big[ s \sum_i \exp \big( \frac{\beta s V_i}{c_i} + v_i \big) \Big]$. Maybe I'm missing something here, but I am still not clear how the variable $s$ can disappear after the rescaling. \n\nMoreover, regarding the calculation of the first column of the optimal plan, $Y_{i1} = \exp \Big( \frac{\beta s V_i}{c_i} + u_1 + v_i \Big) $, but line 6 in Algo 4 reads $Y_{i1} = \exp \Big( \frac{\beta V_i}{c_i} + u_1 + v_i \Big) $ (assuming that we always talk about Problem $2$, so no need to divide by the cost $c$). How can the variable $s$ be eliminated by rescaling?\n\nIt is possible that we are not talking about the same variables, i.e. my $v$ is not the same as $\nu$, my $u_1$ is not the same as $\mu$, and my $Y_{\cdot,1}$ is not the same as $m$ (up to a multiplicative constant). In that case, it would probably be better if the authors could write down their own derivation, so that it is easier to discuss and compare.\n", " Thank you for the detailed response, especially on the choice of $\beta$. I have read other reviews and responses, and will keep my current score (being slightly inclined towards acceptance).", " Dear Reviewers,\n\nThank you once again for taking the time to provide feedback on our submission. We hope that our responses adequately addressed the concerns raised in your reviews. If not, we would be happy to further engage with you during the remainder of the current discussion period.\n\nKind regards,\n\nThe Authors", " I thank the authors for their response. I believe they answer all the concerns that I had, especially regarding the notations and the “dense-to-sparse”/”sparse-to-sparse” nomenclature. I think both should be clarified in the final version if the manuscript is accepted. \n\nOverall I think this is a serious work with good contributions and solid background. The fact that the authors wrote the OT problem in \"standard form\" at the end makes the article a bit more self-contained with respect to [41], and that is a good thing. \n\nTherefore I increase my score to 8. ", " We appreciate the feedback on our submission. Our responses to the specific points raised in the review are as follows.\n\n> I am slightly worried about the practicality of the method [...] Spartan has additional hyperparameter $\beta$ which needs to be carefully selected and scheduled for the performance gain over the existing methods\n\nWhile $\beta$ does have to be tuned to optimize the performance of Spartan, we empirically observe that Spartan improves on our baselines over a wide range of $\beta$ values. This can be seen in Figure 4 (left), where Spartan (orange line) improves on Top-KAST (dashed line) over an order of magnitude of $\beta$ values: from $\beta \approx 1$ to $\beta \approx 40$. This suggests that $\beta$ does not have to be precisely fine-tuned in order to realize gains over existing methods. 
From our experiments with ResNet and ViT architectures, we recommend a default range of $\\beta_\\mathrm{max} \\in [10, 20]$ for unstructured sparse training, and the same range scaled by the block size for block structured sparse training (as explained in L206-209).\n\nWe additionally note that Spartan yields a substantial improvement in accuracy over Top-KAST in training block sparse ViT networks (Table 3). Given the increasing adoption of ViT-style architectures and the practical importance of block sparsity as a GPU-friendly form of parameter sparsity, we believe that Spartan is a useful tool for practitioners looking to mitigate the high inference cost of transformer-based architectures. \n\n> Spartan has all parameters active throughout the training, having very little benefit in terms of resource used for training (training FLOPs, memory).\n\nSpartan indeed incurs a higher computational cost during training compared to iterative magnitude pruning and Top-KAST due to its use of the soft top-$k$ operator (for our implementation, we measure a 5% overhead in per-iteration wall clock time over dense training, as described in Section 4.2).\n\nWhile training efficiency is not a focus of our work, we remark that Spartan is amenable to additional training-time optimizations that will improve its computational efficiency. As in Top-KAST, Spartan can take advantage of sparse kernels in the forward pass due to the top-k sparsification of the parameter vector (line 2 in Algorithm 3). In the backward pass, Spartan can similarly make use of sparse kernels when backpropagating gradients with respect to the activations of each layer (however, computing gradients with respect to the parameters will still incur a dense computational cost).\n\nAs for memory usage, Spartan incurs a similar cost to Top-KAST since both methods maintain a dense version of the parameter vector during training.\n\n> In line 60, the paper mentions that \"We show that Spartan interpolates between ...\" but I do not think this point has been rigorously shown. [...] In fact, a key idea of top-KAST is using backward sparsity [...]\n\nTo be more precise, Spartan interpolates between Top-KAST with zero backward sparsity and iterative magnitude pruning. This is due to the fact that the soft top-k operator varies from a constant scaling function at $\\beta = 0$ to the hard top-k function at $\\beta = \\infty$. We specifically consider Top-KAST with zero backward sparsity since it yields the highest test accuracy (Figure 2(b) in [23]).\n\n> Thus, I view the entropic regularization as an artificially introduced (but nevertheless neat) artifact to make the allocation softer, instead of a requirement.\n\nThis is correct — the implication in L137 that entropic regularization is necessary to efficiently solve Problem 2 is an inaccuracy on our part, and we have revised this sentence accordingly. \n\n> I wonder what backward sparsity the authors used for reproducing the top-KAST results. Are you using fully dense backwards?\n\nWe use Top-KAST with a dense backward pass, since this yields the highest accuracy for the method (see Figure 2(b) in [23]).\n\n> I am curious where the inference FLOP benefits of Spartan comes from. Could authors provide a layerwise sparsity plot? 
Also, how would the inference FLOPs of top-KAST be if one uses ERK layerwise sparsities?\n\nThe FLOP difference between Spartan and Top-KAST in Table 6 is due to Spartan allocating relatively higher sparsity to convolutional layers in earlier layers of the network whose outputs have larger spatial dimensions, and therefore larger per-parameter computational cost. The percentage inference FLOP cost of Top-KAST with ERK layerwise sparsities would be identical to those reported in [15, Figure 2] — i.e., 42% at 80% sparsity and 24% at 90% sparsity, which are higher than our measured FLOP costs for Top-KAST and ERK.\n", " We thank the reviewer for their consideration of our submission.\n\n> 1° Relations with Top-KAST\n\nTop-KAST maintains a dense parameter vector in memory during training, and uses a sparse, top-k masked version of these parameters to compute the forward and backward passes (see the last paragraph of Section 2.1 in [47] — “$\\alpha^t$ is best thought of as a ``temporary view’’ of the dense parameterisation, $\\theta^t$” — and Algorithm 1 in Appendix D). Thus, it incurs dense memory cost but sparse computational cost. We will revise our use of the “dense-to-sparse”/”sparse-to-sparse” nomenclature in the introduction, since this terminology is somewhat ambiguous (it would be more precise to separate the computational and memory costs of these algorithms).\n\n> 2° About notations\n\nCorrect, $\\nabla \\tilde{\\theta}_t \\nabla L(\\tilde{\\theta}_t)$ should be interpreted as “compute the gradient of the loss with respect to the sparse parameters $\\tilde{\\theta}_t$, and then apply the same k-sparse mask used to obtain $\\tilde{\\theta}_t$ to the gradient”. Here, $\\nabla \\tilde{\\theta}_t$ denotes the Jacobian of $\\Pi_k (\\theta_t)$ w.r.t. $\\theta_t$, which is simply a $d \\times d$ matrix with the entries of the mask on the diagonal. We will clarify the notation here to avoid any potential confusion.\n\n> 3° About Algorithm 3\n\n> I have trouble understanding the interest of line 2 in algorithm 3 [...] It seems to me that the two are quite redundant.\n\nWithout the additional k-sparse projection, the parameters are only approximately sparse after the application of the soft top-k operator. We find that the “leakage” of information through these non-zero parameters can result in a significant train-test mismatch, since at test time we always use an exactly k-sparse network. The addition of the k-sparse projection in the forward pass during training helps to mitigate this gap. This effect can be seen in Figure 4 (left) as the gap between the blue and orange lines at smaller values of $\\beta$.\n\n> I find these arguments quite empirical and it is difficult to feel the underlying intuition. Do the authors have some theoretical points to support them? For example, can we show for a simple function that the optimization \"goes wrong\" if we do not add this top-k step?\n\nUnfortunately, we do not currently have a theoretical justification as to why Spartan achieves better generalization error than iterative magnitude pruning and Top-KAST. More broadly, this is a difficult and fundamental problem in the field: there is very limited theoretical understanding supporting the use of particular optimizers for deep networks even in the standard supervised setting, let alone with the additional complication of sparsity constraints. 
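To make the masked-gradient notation and the $k$-sparse projection discussed above concrete, the following minimal sketch may help (illustrative only: the toy loss and variable names are ours, and this is not the full Spartan/Top-KAST update):

```python
import numpy as np

def hard_topk_projection(theta, k):
    """Pi_k: keep the k largest-magnitude entries and zero out the rest."""
    mask = np.zeros_like(theta)
    mask[np.argsort(np.abs(theta))[-k:]] = 1.0
    return mask * theta, mask

rng = np.random.default_rng(0)
theta = rng.standard_normal(10)  # dense parameters kept in memory
k, lr = 4, 0.1

# The forward/backward pass uses the sparse "view" of the dense parameters;
# the Jacobian of Pi_k is the diagonal mask, so the gradient is masked too.
theta_sparse, mask = hard_topk_projection(theta, k)
grad = 2.0 * theta_sparse           # gradient of the toy loss L(w) = ||w||^2
theta = theta - lr * (mask * grad)  # only active coordinates are updated
```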
\n\n> Also, it seems to me that if this step was not there, then when $\\beta = 0$ Spartan would no longer be equivalent to Top-KAST: am I correct ?\n\nThis is correct.\n\n> 5° Small remarks:\n> What are \"crude linear cost models\" (Section 5: discussion) ?\n\nThis just refers to our use of linear cost models of the form $\\mathrm{cost} = c^T m$ for a cost vector $c$ and sparsity mask $m$. This is “crude” in the sense that such a cost model doesn’t include interaction terms that encode, e.g., the notion that coherently pruning an entire row or column of a parameter matrix will yield more cost savings than pruning random entries of the matrix.\n", " We appreciate the reviewer’s feedback on our submission.\n\n> While I grasp the intuitive idea of the proposed method, e.g., the proposed method interpolates IMP and Top-KAST, I don't find the rationale behind the specific design of the proposed update rule. More detailed discussions and hopefully principled interpretation, e.g., the update of the proposed method can be interpreted as a descent step of some objective function, should be given.\n\nAlgorithms 1-3 aim to optimize the following constrained problem: “minimize $L(\\theta)$ subject to $\\lVert\\theta\\rVert_0 = k$,” where $L$ is the empirical loss. Spartan can be interpreted as a combination of (1) the dual averaging method, as previously applied in Top-KAST, and (2) a homotopy method via the use of the soft top-k function. \n\nThe use of dual averaging is principled insofar as its use is well-justified for constrained convex optimization (see, e.g., [40] and the references therein). On the other hand, homotopy methods in non-convex optimization are heuristics that aim to “guide” the optimization trajectory towards better local minima via progressive relaxations of the original problem (see, e.g., [SKC06] for an ML application of a homotopy method). In the case of Spartan, the heuristic use of the soft top-k operator (in tandem with the scheduling of the $\\beta$ hyperparameter) is similar in spirit to the longstanding technique in optimization of annealing a temperature term over time.\n\nWe will add some additional discussion of this perspective on the method to the manuscript.\n\n[SKC06] Sindhwani, Keerthi, Chapelle. Deterministic Annealing for Semi-supervised Kernel Machines. ICML 2006.\n\n> How was $-\\beta |\\theta_{ik}| / c_{ik}$ (i.e., the initial value) derived?\n\nThis is a heuristic initialization based on the observation that the dual variable $\\mu$ defines a threshold value analogous to the bias parameter in univariate logistic regression. Since we are aiming for an approximately k-sparse output, a natural guess for the value of this threshold is the top-$k$th entry of $z$, i.e., $\\beta |\\theta_{i_k}| / c_{i_k}$. We will revise the manuscript to clarify this reasoning.\n", " We thank the reviewer for their considered comments on our submission.\n\n> It appears to me that the algorithm 4 is nothing but the Sinkhorn algorithm. \n\nAlgorithm 4 is indeed an application of the Sinkhorn algorithm, as stated in L138: “we [...] solve the resulting regularized problem using the Sinkhorn-Knopp algorithm.”\n\n> It is not clear to me the motivation of the stopping criteria in the line 7. In practice (and also in the optimal transport community), it is enough to control the marginal violation (see Remark 4.6 in [CompOT]).\n\nAs is typical for optimization methods in general, several different stopping criteria are used in practice. 
One such choice is the marginal violation, as indicated in the reviewer’s comments. Another choice is to stop when the relative change in the objective value is sufficiently small — for instance, as used in the experiments of Cuturi (2013) [11]. We use this latter option in Algorithm 4. \n\n> The calculation of the Sinkhorn iterations does not look right to me. [...] Comparing $v$, $u$, $Y$ to $\\mu$, $\\nu$, $m$ from the lines 4,5,6 in the Algorithm 4, respectively, I do not see why they are equivalent.\n\nThe Sinkhorn updates in Algorithm 4 are correct as stated. To show this, we use the standard fact (see, e.g., Chapter 4.2 in [CompOT]) that we can always fix one dual variable to zero, due to there being one redundant constraint in the OT problem. Intuitively, this is equivalent to saying that we can always compute the $N$th row of the transportation plan given the remaining $N-1$ rows.\n\nLet us be explicit for the case of Problem 2 in the paper. Using the notation in the reviewer’s comments, we have that $Y_{i1} = \\exp(\\beta C^*_{i1} + v_i + u_1)$ and $Y_{i2} = \\exp(v_i + u_2)$. Note that the dual variables in an optimal solution are not unique: for any dual optimal $(v^*, u^*)$ and $\\delta \\in \\mathbb{R}$, $(v^* + \\delta 1_d, u^* - \\delta 1_2)$ is also dual optimal. Use this degree of freedom to fix $u_2 = 0$. This gives the updates $v = \\log c^* - \\log(1 + \\exp(\\beta sV/c + u_1))$ and $u_1 = \\log k^* - \\log \\sum_i \\exp(\\beta sV_i/c_i + v_i)$. This matches the updates in Algorithm 4 up to a rescaling by the constant $s = 1^T c$, i.e., $c^* \\rightarrow c$, $k^* \\rightarrow k$ and $\\beta s \\rightarrow \\beta$. Given that it is not necessary to normalize the marginals to sum to 1, we recover the updates in lines 4 and 5 in Algorithm 4 exactly. Finally, in line 6, we divide by the cost vector $c$ to recover the solution to Problem 1 from the solution to Problem 2, as required by the substitution defined in L133.\n\nWe note as well that by fixing $u_2 = 0$, we effectively halve the computational cost of each Sinkhorn iteration by eliminating the need to compute $u_2 = \\log(1^T c - k) - \\log \\sum_i \\exp(v_i)$. We will add a section to the Appendix that clarifies this detail of our implementation.\n\nAdditionally, we have added a section in the **rebuttal revision (Appendix G)** that details the derivation of the soft top-k backward pass (Algorithm 5).\n\n> Please specify which kind of entropic regularization you are using. More precisely, [4] use KL divergence but [11] use the negative entropy. The two types are closely related, but mathematically, they result in a bit different Sinkhorn iterations.\n\nWe have revised Sec. 3.1 in the paper to explicitly state that we apply the standard entropic regularizer defined by $H(X) = -\\sum_{ij} X_{ij} (\\log X_{ij} - 1)$.\n\nContrary to our understanding of the reviewer’s comments, we note that the entropic regularization perspective from [11] and the iterative Bregman projection perspective from [4] yield equivalent Sinkhorn updates (see, e.g., Section 3.1 in [4], or Remark 4.8 in [CompOT]). Since we might be missing some context here, we invite the reviewer to elaborate on the statement that these two perspectives yield different Sinkhorn updates.\n\n> What is the dimension $d$ of the input vectors in the applications?\n\nAs stated in L181, the ResNet-50 and ViT-B/16 architectures consist of 25.6M and 86.4M parameters respectively. 
Thus, the dimension $d$ is respectively 25.6M and 86.4M for each of these networks in the unstructured case. In the block structured case, the dimension is divided by the number of elements in each block.\n\n> How do the authors come up with the specific form of the initialization $-\\beta |\\theta_{i_k}| / c_{i_k}$?\n\nThis is a heuristic initialization based on the observation that the dual variable $\\mu$ defines a threshold value analogous to the bias parameter in univariate logistic regression. Since we are aiming for an approximately k-sparse output, a natural guess for the value of this threshold is the top-$k$th entry of $z$, i.e., $\\beta |\\theta_{i_k}| / c_{i_k}$. We will revise the manuscript to clarify this reasoning.\n\n> I see the definition of the projection $\\prod_k$ but wonder how is it implemented in practice?\n\n$\\prod_k$ is implemented by zeroing out the smallest $d-k$ elements by magnitude.\n", " This paper proposes a gradual neural network pruning algorithm which utilizes soft top-k mechanism for the weight update of masked/unmasked parameters. In particular, authors adopt the differentiable top-k mechanism of Xie et al. (2020), which smooths the top-k problem in a principled manner via viewing top-k as an optimal transport problem and adds the entropic regularizer to make the allocation smooth. The model applies both soft and hard top-k for the evaluation of the model, and only soft for the gradient computation. When combined with an appropriate scheduling of sparsity and entropic regularization intensity, the method achieves SOTA-like performance on ResNet-50 and ViT-B trained on ImageNet-1K. __Strength.__ First of all, I must say that the paper is very clearly written; I like how algorithms 1,2 are placed, which makes algorithm 3 very sensible and straightforward thing to do. Also, the \"dual averaging\"-based perspective toward the top-KAST update was quite new and fresh to me (although I would have appreciated slightly more detailed discussion on what dual averaging is), and I think this could be inspirational for many future works. Another big strength of the proposed method is its inference efficiency, having much lower (theoretical) inference FLOP than top-KAST.\n\n__Weakness.__ My main concern is on the _practicality_ of the proposed method. Although the central idea of the algorithm is interesting and it seems like it gives slight boost in terms of the final sparsity-to-performance tradeoff, I am slightly worried about the practicality of the method. Spartan has additional hyperparameter $\\beta$ which needs to be carefully selected and scheduled for the performance gain over the existing methods (which may require many training runs to be tuned). Also, unlike magnitude pruning (whose active parameters decrease as the training proceeds) and top-KAST (which has decreased number of peak #parameters via backward sparsity), Spartan has all parameters active throughout the training, having very little benefit in terms of resource used for training (training FLOPs, memory).\n\nHere are some nitpicks (or much minor-er concerns):\n- In line 60, the paper mentions that \"We show that Spartan interpolates between ...\" but I do not think this point has been rigorously shown. 
In fact, a key idea of top-KAST is using backward sparsity, which is slightly less than forward sparsity but not dense, and Spartan uses the fully dense(-but-rescaled) backward.\n- In line 137, authors seem to argue that the entropic regularization is necessary in order to efficiently solve Eq.(2) (or equivalently Eq. (1)). I do not think this is necessarily true. In fact, the solution of Eq.(1) is actually quite easy to get; for uniform c, one would simply need to perform the magnitude-based pruning (or cost-rescaled magnitude whenever c is non-uniform). Thus, I view the entropic regularization as an artificially introduced (but nevertheless neat) artifact to make the allocation softer, instead of a requirement. - Up to my knowledge, Top-KAST has two different types of sparsity; forward sparsity and backward sparsity. I wonder what backward sparsity the authors used for reproducing the top-KAST results. Are you using fully dense backwards?\n\n- I am curious where the inference FLOP benefits of Spartan comes from. Could authors provide a layerwise sparsity plot? Also, how would the inference FLOPs of top-KAST be if one uses ERK layerwise sparsities?\n\n- I believe that the inference FLOP gain is a very strong benefit of the proposed method. I suggest the authors to move them to the main text if possible. Authors adequately addressed these, in my humble opinion.", " This article introduces a new method, Spartan, which allows training neural networks with sparsity constraints. Spartan belongs to the \"dense-to-sparse\" family of methods: it maintains a dense parameter during training and makes it sparse little by little. Spartan controls the level of sparsity thanks to a parameter $\\beta$ which performs an interpolation between two popular methods to train sparse neural networks (Top-KAST ref [23] of the paper at $\\beta = 0$ and IMP [47] at $\\beta = +\\infty$). The central idea is to use, during training, a \"mask\" on the network parameters based on a soft-topk operator (implemented via regularized optimal transport).\n Overall I find this article very well written, especially the introduction, the background and the presentation of the contributions. The authors have made a notable effort of pedagogy which greatly facilitates the reading and the understanding of the article. In particular, figures 1 and 2 are very clear, didactic and I thank the authors for this effort. On the other hand, the proposed method seems to me quite relevant, and well supported by many experiments. The fact that Spartan is linked to and generalizes IMP and Top-KAST strengthens the contribution. The method seems to me quite new: it skillfully combines the soft-topk operator of Xie et al [41] with dual averaging ideas. \n\nHowever, there are a few points that are, in my opinion, unclear and would need to be clarified. These prevent me from putting a very clear accept on this article, but I would gladly change my mind depending on the authors' answers. 1° Relations with Top-KAST: \n\nIt is said in the introduction that the Top-KAST method [47] is a \"dense-to-sparse\" method, i.e. it maintains in memory a more or less dense parameter during the iterations. This point seems to me quite surprising because in [47] the authors specify that the ultimate goal of Top-KAST is to maintain sparsity during training (\"In this work we propose Top-KAST, a method that preserves constant sparsity throughout training (in both the forward and backward-passes)\"). 
Can the authors clarify this point?\n\n\n2° About notations:\n\nThere is an important notation which comes up regularly but which is not explained and difficult to catch. In Algorithm 1 and Algorithm 3 at the backward pass the gradient with respect to a vector is used and multiplied with the gradient of the loss. \n\nFor example in Algorithm 1 line 2 $\\theta_{t+1} = \\theta_t - \\eta_t \\nabla \\tilde{\\theta}_t \\nabla L(\\tilde{\\theta}_t)$. I don't think this notation is really standard. If I read between the lines, I understand this notation as $\\nabla \\tilde{\\theta}_t=$ \"support of $\\tilde{\\theta}_t$\" (i.e. $0$ or $1$) and the multiplication with $\\nabla L(\\tilde{\\theta}_t)$ as being a point-to-point multiplication by the support mask. More precisely $\\nabla \\tilde{\\theta}_t \\nabla L(\\tilde{\\theta}_t) = \\nabla L(\\tilde{\\theta}_t)$ if $\\tilde{\\theta}_t \\neq 0$ and $0$ otherwise. \n\nIs this correct? I think this point deserves more details because it is difficult to understand what is done in practice.\n\n3° About Algorithm 3:\n\nI have trouble understanding the interest of line 2 in algorithm 3: after finding $\\sigma^{\\beta}_k(\\theta_t)$ by a soft-topk operator on $\\theta_t$ (and as shown in figure 2) an additional step of projection onto $k$-sparse set is added $\\tilde{\\theta_t} = \\Pi_k(\\sigma^{\\beta}_k(\\theta_t))$ and the gradient of the loss is computed with respect to this $\\tilde{\\theta_t}$. It seems to me that the two are quite redundant. \n\nIt is argued in the article that this is to \"mitigate the issue of gradient sparsity\" and an ablation study is conducted on this point in section 4.2 to justify this step. I find these arguments quite empirical and it is difficult to feel the underlying intuition. Do the authors have some theoretical points to support them? For example, can we show for a simple function that the optimization \"goes wrong\" if we do not add this top-k step?\n\nAlso, it seems to me that if this step was not there, then when $\\beta = 0$ Spartan would no longer be equivalent to Top-KAST: am I correct ?\n\n4° Experimental section:\n\nThe experiments are in my opinion convincing. The idea of alternating an exploration and exploitation phase is sounded and very well illustrated by Figure 3. The experiments show well the interest of Spartan from the point of view of performance vs sparsity. Although Spartan does not outperform Top-KAST much on ResNet-50 (of the order of the variance in most cases) the ViT experiment shows the opposite and Spartan has very good performance especially in the structured case. \n\nThe computational overhead compared to dense training seems reasonable to me. On this point I think it is also important to put the Top-KAST one in perspective on the same figure: how do they compare? \n\n\n5° Small remarks: \n- The \"sparsity mask\" is never clearly defined. We understand that it is $m \\in [0,1]^{d}$ but I think it is important to say it clearly. \n- Figure 1 (c) doesn't look good in pdf because I think it's a .png or .jpg : it could be interesting to recompile the figure in pdf so that it's more readable. \n- I think that the link between equation (2) and the optimal transport could be more detailed (e.g. in appendix). Having the formulation \"min on a coupling\" could help understanding for readers who are not familiar with optimal transport. 
\n- What are \"crude linear cost models\" (Section 5: discussion) ?\n\nTo conclude I would like to note that it is quite rare to have a \"Limitations\" paragraph and I find this a good approach that I welcome (especially since the remarks made by the authors are good remarks that also help in understanding the method).\n\n\n------ AFTER REBUTTAL ------\n\nI thank the authors for their response which addressed most of my concerns, therefore I change my score to 8. NA", " This paper proposes a new method for pruning DNN parameters. The proposed method uses a differential top-K operator so that the proposed method interpolates between IMP and Top-KAST and balances exploitation and exploration well. # Strengths\n\n- The proposed method is simple and easy to use.\n- The proposed method outperforms existing methods.\n- The experiments include the modern ViT architecture.\n\n# Weaknesses\n\n- While I grasp the intuitive idea of the proposed method, e.g., the proposed method interpolates IMP and Top-KAST, I don't find the rationale behind the specific design of the proposed update rule. More detailed discussions and hopefully principled interpretation, e.g., the update of the proposed method can be interpreted as a descent step of some objective function, should be given. How was $-\\beta |\\theta_{i_k}| / c_{i_k}$ (i.e., the initial value) derived? The authors mentioned the limitations and social impacts well in Section 5.", " The paper presents Spartan - a \"dense-to-sparse\" algorithm to train sparse neural networks in which the sparsity of parameters can be enforced directly by a predetermined budget.\n\nThe paper has $3$ main contributions:\n- By incorporating the soft top-$k$ operator in the parameter update and by introducing the sharpness parameter, Spartan allows to interpolate between Iterative magnitude pruning and Top-$K$ Always sparse training.\n\n- Spartan shows very competitive and consistent performance over the existing methods and also the fully dense training.\n\n- The authors also study the effect of the sharpness parameter on the accuracy, as a trade-off between exploration and exploitation. Amongst core components of Spartan, the algorithm 4 leaves me much doubt.\n\n- It appears to me that the algorithm 4 is nothing but the Sinkhorn algorithm. In that case, it is not clear to me the motivation of the stopping criteria in the line 7. In practice (and also in the optimal transport community), it is enough to control the marginal violation (see Remark 4.6 in [CompOT]).\n \n - The calculation of the Sinkhorn iterations does not look right to me. Here, I present my own derivation of the Sinkhorn update. First, let us define $Y := [y, y'] \\in \\mathbb R^{d \\times 2}$ by stacking two vectors in $\\mathbb R^d$ and $C := [\\frac{V}{c}, 0_d] \\in \\mathbb R^{d \\times 2}$, where $0_d$ is the zero vector in $\\mathbb R^d$. Then clearly, $\\langle C, Y \\rangle = (\\frac{V}{c})^T y$, where $\\langle C, Y \\rangle = \\sum_{ij} C_{ij} Y_{ij}$ denotes the Frobenius product. We can rewrite the problem $(2)$ as \n\n$$\n\\min_{Y \\in \\mathbb R^{d \\times 2}_+} \\langle -C, Y \\rangle \\text{ subject to } Y 1_2 = c \\text{ and } Y^T 1_d = (k, 1_d^T c - k)^T.\n$$\n\nThe two marginals are not yet normalized. So, denote $s = 1_d^T c$, then define $c^* := \\frac{c}{s}, C^* := sC = [\\frac{V}{c^*}, 0_d]$ and $k^* := \\frac{1}{s} (k, 1_d^T c - k)^T = ( \\frac{k}{s}, 1 - \\frac{k}{s})^T$. 
The above problem is equivalent to\n\n$$\n\\min_{Y \\in \\mathbb R^{d \\times 2}_+} \\langle -C^*, Y \\rangle \\text{ subject to } Y 1_2 = c^* \\text{ and } Y^T 1_d = k^*.\n$$\n\nNow, this is a properly defined optimal transport problem. If we define the entropic regularization problem as: for $\\beta > 0$,\n\n$$\n\\min_{Y \\in \\mathbb R^{d \\times 2}_+} \\langle -C^*, Y \\rangle + \\frac{1}{\\beta} H(Y) \\text{ subject to } Y 1_2 = c^* \\text{ and } Y^T 1_d = k^*.\n$$\n\nwhere $H(Y) = \\sum_{ij} Y_{ij} (\\log Y_{ij} - 1)$. The Sinkhorn update now reads: for dual vectors $u \\in \\mathbb R^2$ and $v \\in \\mathbb R^d$, we have: $v = \\log c^* - \\log \\sum_j \\exp(\\beta C^*_{\\cdot, j} + u_j)$ and \n$u = \\log k^* - \\log \\sum_i \\exp(\\beta C^*_{i, \\cdot} + v_i)$, and the optimal plan $Y$ is given by: $Y_{ij} = \\exp(v_i + u_j + \\beta C^*_{ij})$. \n\nComparing $v, u, Y$ to $\\nu, \\mu, m$ from the lines 4,5,6 in the Algorithm 4, respectively, I do not see why they are equivalent.\n\nReference:\n\n[CompOT] Gabriel Peyré and Marco Cuturi (2019), \"Computational Optimal Transport: With Applications to Data Science\", Foundations and Trends in Machine Learning: Vol. 11: No. 5-6, pp 355-607. - The problem $(2)$ is indeed an optimal transport problem, but it is not formulated in the standard form, i.e. what is the cost matrix, the admissible coupling (transport plan), the two marginal constraints? This makes it difficult to calculate and justify the update of Sinkhorn iterations.\n\n- Please specify which kind of entropic regularization you are using. More precisely, $[4]$ use KL divergence but $[11]$ use the negative entropy. The two types are closely related, but mathematically, they result in a bit different Sinkhorn iterations. Based on the form of the mask $m$, I guess you are using the negative entropy, but not the KL divergence.\n\n- In the Appendix, please consider to provide the details of the:\n + Soft top-$k$ operator in $[41]$. It is the core component of Spartan, so I believe it deserves some brief description.\n + Calculation of the backward pass. Even though the authors cite theorem $3$ in $[41]$, it is complicated for outsider, thus difficult to justify if the calculation in the Algorithm 5 is correct or not. \n + Calculation of the forward pass. Please also specify the entropic-regularization problem to which the authors apply.\n\n- What is the dimension $d$ of the input vectors in the applications?\n\n- How do the authors come up with the specific form of the initialization $-\\beta |\\theta_{i_k}| / c_{i_k}$?\nDoes it come from the empirical observation, or some heuristic?\n\n- I see the definition of the projection $\\Pi_k$ but wonder how is it implemented in practice? Is it simply keeping the largest $k$ elements and setting all others to zero? The authors do discuss the limitations and potential negative societal impact of their work." ]
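For concreteness, the two-column Sinkhorn iteration debated above can be checked numerically. The following sketch is illustrative only (random placeholder values for $V$ and $c$); it runs the reviewer's normalized formulation and then applies the dual shift that the authors use to fix the second dual variable to zero:

```python
import numpy as np

def logsumexp(a, axis):
    # Numerically stable log-sum-exp, useful when beta * C is large.
    m = a.max(axis=axis, keepdims=True)
    return np.squeeze(m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True)), axis)

rng = np.random.default_rng(0)
d, k, beta = 8, 3.0, 2.0
V, c = rng.random(d), rng.random(d) + 0.5

s = c.sum()
c_star = c / s                                  # normalized row marginal
k_star = np.array([k / s, 1.0 - k / s])         # normalized column marginal
C_star = np.stack([s * V / c, np.zeros(d)], 1)  # cost matrix [V / c*, 0]

u, v = np.zeros(2), np.zeros(d)
for _ in range(200):
    v = np.log(c_star) - logsumexp(beta * C_star + u[None, :], axis=1)
    u = np.log(k_star) - logsumexp(beta * C_star + v[:, None], axis=0)

Y = np.exp(beta * C_star + v[:, None] + u[None, :])
print(np.abs(Y.sum(1) - c_star).max(), np.abs(Y.sum(0) - k_star).max())  # ~0, ~0

# The dual shift (v + delta, u - delta) leaves Y unchanged; delta = u[1]
# fixes the second dual variable to zero, matching the authors' derivation.
v2, u2 = v + u[1], u - u[1]
Y2 = np.exp(beta * C_star + v2[:, None] + u2[None, :])
print(np.abs(Y - Y2).max())  # ~0
```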
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 8, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 3, 3 ]
[ "0z-SzKhcPQF", "zutGpJVkTK", "SX9vDO5EGaD", "n34LPZ4HZo", "AKg-6wulFw", "5tw4Zgoe69w", "nips_2022_u4KagP_FjB", "MpniuY1LsAb", "cxZrhQIVPMB", "sfGhMusYpZT", "mU3xynnpOKx", "yhLedqzE9YA", "nips_2022_u4KagP_FjB", "nips_2022_u4KagP_FjB", "nips_2022_u4KagP_FjB", "nips_2022_u4KagP_FjB" ]
nips_2022_GzESlaXaN04
Hardness of Noise-Free Learning for Two-Hidden-Layer Neural Networks
We give superpolynomial statistical query (SQ) lower bounds for learning two-hidden-layer ReLU networks with respect to Gaussian inputs in the standard (noise-free) model. No general SQ lower bounds were known for learning ReLU networks of any depth in this setting: previous SQ lower bounds held only for adversarial noise models (agnostic learning) (Kothari and Klivans 2014, Goel et al. 2020a, Diakonikolas et al. 2020a) or restricted models such as correlational SQ (Goel et al. 2020b, Diakonikolas et al. 2020b). Prior work hinted at the impossibility of our result: Vempala and Wilmes (2019) showed that general SQ lower bounds cannot apply to any real-valued family of functions that satisfies a simple non-degeneracy condition. To circumvent their result, we refine a lifting procedure due to Daniely and Vardi (2021) that reduces Boolean PAC learning problems to Gaussian ones. We show how to extend their technique to other learning models and, in many well-studied cases, obtain a more efficient reduction. As such, we also prove new cryptographic hardness results for PAC learning two-hidden-layer ReLU networks, as well as new lower bounds for learning constant-depth ReLU networks from membership queries.
Accept
This work provides lower bounds in the noise-free setting for learning two-hidden-layer networks in the Gaussian space. Overall, it is a fundamental result well within the scope of NeurIPS, continuing a solid line of work, and I cannot see any reason for rejection. The authors have engaged with the reviewers and have committed to making minor revisions and clarifications, which I am sure they will do.
val
[ "DrxgDx3lnP1", "z2NOtlvNWlK", "1fvL7JxoP88", "Fw6vLGOtT2l", "_kDzubyhJm3", "lb-HvYr3wdC", "imJuS5SAwXs" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I am satisfied with the author's response.", " Thanks for the positive feedback! There is indeed a distinction between SQ and SGD, though note that by recent results of Abbe, Sandon, and coauthors, SGD with a suitable architecture + exotic initialization is essentially “P-complete,” so the only way to rule out SGD would be to show a computational lower bound by reducing from a complexity-theoretic conjecture. Fortunately, this is precisely what our lower bound in Theorem 1.3, based on LWR, achieves. We thus view our results as strong evidence that no efficient algorithm, not even SGD, can learn two hidden layer networks. We will tweak our language so this is clearer.", " Thanks for the very positive review! We will definitely try to add a conclusion and a table comparing the existing lower bounds in the final version of our paper if space allows.", " Thanks for the positive feedback and for closely engaging with our work! We will make sure to incorporate the presentation suggestions. The comment on the tradeoff between expressivity of CSQ/SQ vs. complexity of the concept class is indeed a valid one, though establishing SQ lower bounds is significantly more challenging: whereas there are non-CSQ algorithms that circumvent the CSQ lower bounds for 1-hidden-layer [Chen-Klivans-Meka ‘20], there are no known efficient algorithms that circumvent our SQ lower bounds.", " This work provides lower bounds in the noise free setting for learning two hidden layer networks in the Gaussian space. Basically, the whole concept is to embed hard problems over the uniform in hypercube to the Gaussian space. This is not something new, this has been done before in [1] for proving lower bound in the agnostic learning halfspaces over the Gaussian distribution. In contrast, this work provides noise-free lower bounds. They provide a super-polynomial SQ lower bound, a cryptographic lower bound under the LWR assumption. Moreover, they also provide lower bounds for the query model which is more powerful than the PAC learning model. \n\nThe whole concept is the following: To embed hard problems from the hypercube to the Gaussian space, we can use a similar idea like in [1], i.e., using the sign function. The DV lift basically does that by adding 2 more hidden layers (with ReLU components). So, a hard problem with L-Layers can provide lower bounds for L+2 layers in Gaussian space.\n\nThe authors decrease the number of layers from +2 Layers to +1. To do that, first they show a way to do it using a very large network, basically, they start from an exponential construction Eq.(11) and then they decrease it to $d^m$ by make the network more sparse, using the distributional properties of the Gaussian. They introduce some error in the construction but they show that this is indeed very small. After that the hardness proofs follow from a reduction.\n\n\n\n[1] Adam Klivans and Pravesh Kothari. Embedding hard learning problems into gaussian space. # Pros\n1. This is good result. The authors provide lower bounds under several assumptions/models. \n2. This work is very well-written. Checked almost all the proofs and the claims are sound.\n\n# Cons\nNot really a con just a comment. The lower bounds are for 2-hidden layer networks where there are results for 1-hidden layer networks for the CSQ model. [GGJ+20],[DKKZ20] Basically, the trade-off is stronger model and 2-hidden layer instead of 1. In general, I would expect stronger lower bounds for 2-hidden layer network. \n\n\nOverall, I recommend for acceptance. 
Some presentation suggestions:\nI would suggest that the authors add, after Section 1.2 (line 197), some more intuition about the construction and the idea that they use to decrease the size of the network. \nSome full stops and commas in equations are missing, e.g., Eq. 2. Everything is good.", " The authors prove statistical query lower bounds for learning polynomial-sized neural networks with two hidden layers. The bound is superpolynomial in the input dimension $d$ (or the query tolerance is negligible in $d$). No cryptographic assumptions are needed for these bounds to hold. The authors also show that, under the learning with rounding with polynomial modulus cryptographic assumption, no polynomial-time algorithm can learn neural networks with two hidden layers from Gaussian examples. The result is extended to neural networks with one hidden layer over the uniform distribution on the boolean hypercube. The paper is well-written and the review of the related work is thorough. A great deal of effort has been put into making sure that the paper is clear and accessible for a wide audience, particularly in the technical overview. This paper is a bit outside my expertise, but the results and techniques used are interesting and could be of independent interest. I believe this paper is relevant to the learning theory community. ### Suggestions\n\nA conclusion should be included in the main body were the paper to be accepted. If there is room, a table comparing different lower bound settings and contrasting the current paper with related work would be a great addition. Limitations: yes\n\nImpact: N/A", " The paper establishes hardness of learning neural networks with Gaussian inputs under various assumptions:\n- statistical query learning, two hidden layers;\n- cryptographic hardness of learning with rounding, two hidden layers;\n- label query learning, existence of a family of pseudo-random functions, any fixed number of hidden layers.\n\nThe main tool for obtaining these results is a modified Daniely-Vardi transform (2021) that maps a Boolean example (x, y) to a Gaussian example (z, y') while remaining in the realizability setup. **Strengths**\n- A solid theoretical work that establishes new hardness results for the now ubiquitous neural networks.\n\n**Weaknesses**\n- The paper does not explicitly indicate the limitations of the results. For example, lines 67-68 say that \"Theorem 1.1 rules out almost all known approaches for provably learning neural networks\", but the most well-known approach to learning neural networks---SGD---is not ruled out by Thm 1.1. No questions. See weaknesses above." ]
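As a small, self-contained illustration of the sign-function embedding that the reviews describe (a simplified sketch only: `boolean_f` is a placeholder parity, not the paper's actual lifted construction):

```python
import numpy as np

# sign(z_i) of a standard Gaussian coordinate is uniform on {-1, +1}, so any
# Boolean concept f on the hypercube induces a Gaussian-input problem
# z -> f(sign(z)) that is at least as hard to learn as the Boolean one.
rng = np.random.default_rng(0)
d, n = 16, 4

def boolean_f(x):
    return np.prod(x[:, :3], axis=1)  # placeholder concept: a 3-bit parity

z = rng.standard_normal((n, d))  # Gaussian examples
x = np.sign(z)                   # induced uniform hypercube examples
y = boolean_f(x)                 # labels for the lifted Gaussian problem
print(y)
```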
[ -1, -1, -1, -1, 7, 8, 7 ]
[ -1, -1, -1, -1, 5, 3, 3 ]
[ "z2NOtlvNWlK", "imJuS5SAwXs", "lb-HvYr3wdC", "_kDzubyhJm3", "nips_2022_GzESlaXaN04", "nips_2022_GzESlaXaN04", "nips_2022_GzESlaXaN04" ]
nips_2022_Q9lm8w6JpXi
BILCO: An Efficient Algorithm for Joint Alignment of Time Series
Multiple time series data occur in many real applications, and the alignment among them is usually a fundamental step of data analysis. Frequently, these multiple time series are inter-dependent, which provides extra information for the alignment task; this information cannot be well utilized by conventional pairwise alignment methods. Recently, the joint alignment was modeled as a max-flow problem, in which both the profile similarity between the aligned time series and the distance between adjacent warping functions are jointly optimized. However, despite the new model having an elegant mathematical formulation and superior alignment accuracy, the long computation time and large memory usage, due to the use of existing general-purpose max-flow algorithms, significantly limit its well-deserved wide use. In this report, we present BIdirectional pushing with Linear Component Operations (BILCO), a novel algorithm that solves joint alignment max-flow problems efficiently and exactly. We develop the strategy of linear component operations, which integrates the dynamic programming technique and the push-relabel approach. This strategy is motivated by the fact that the joint alignment max-flow problem is a generalization of dynamic time warping (DTW) and numerous individual DTW problems are embedded in it. Further, a bidirectional-pushing strategy is proposed to introduce prior knowledge and reduce unnecessary computation, by leveraging another fact: that a good initialization can be easily computed for the joint alignment max-flow problem. We demonstrate the efficiency of BILCO using both synthetic and real experiments. Tested on thousands of datasets under various simulated scenarios and in three distinct application categories, BILCO consistently achieves at least a 10-fold and on average a 20-fold increase in speed, and uses at most 1/8 and on average 1/10 of the memory compared with the best existing max-flow method. Our source code can be found at https://github.com/yu-lab-vt/BILCO.
Accept
In this paper, the authors propose BILCO, an algorithm for solving graphical time warping, an alignment method for multiple time series data. Overall, the proposed approach is interesting, and all reviewers are positive. Thus, I also vote for acceptance.
val
[ "6I1rQFfaWRP", "embrHrgeRF", "y3SXFnImFTN", "1X9c0rw8vGD", "VVLk4pQHDq8", "OiGhEFCpvnh", "l54AnfsoI_g" ]
[ "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the response and hope to see the illustrative example in the final version.", " Thank you for your valuable feedback. The major concern was **whether the high efficiency of our method is at the expense of alignment accuracy.** We would like to take this opportunity to relieve your concern.\n\nFirst, **the compared peer methods and our proposed method BILCO can achieve the same exact result for a given joint alignment problem.** They are all max-flow algorithms and can achieve the same global optimal solution, thus there is no need to compare the alignment performance between those methods. Sorry for the lack of clarity, we will stress this point in our paper. \n\nSecond, **there is no trade-off between the accuracy and efficiency of the proposed method.** The improvement of time and space efficiency is from utilizing the specific characteristics of joint alignment max-flow problem and avoiding redundant operations, rather than getting an approximate solution at the expense of alignment performance.\n\nBesides, here we may need to clarify why we compared the efficiency in Fig. 5 with respect to hyperparameter $\\kappa$. $\\kappa$, as mentioned in equation (1) and experiments, determines the assumed amount of dependency in the structural data and different $\\kappa$ may lead to different alignment accuracies in different applications. It may also impact the characteristics of the joint alignment max-flow problem and thus influence the efficiency of our proposed method (slightly). However, for the same given problem, that is, with the same hyperparameter $\\kappa$, all max-flow methods compared in the paper will lead to the same exact result.\n", " Thank you for your valuable feedback. The major concern was **whether our proposed method can only be applied to graphical time warping (GTW) and thus might be limited in the application scope.** We would like to relieve your concern from three aspects.\n\nFirst, **our proposed method was indeed designed to solve GTW problems, but GTW itself is flexible, broadly applicable, and can be used as a building block in solving many problems while utilizing structural information.** Its assumed neighbor structure and similarity strength can be application-specific or user-designed, including the special case without any neighbor relationship at all. With the flexible capability of utilizing the structural information, GTW can be integrated into more complex pipelines and applications than the original formulation may suggest. We will also stress this point in our paper.\n\nSecond, **there is already some work that extended GTW to more general multiple alignment problems.** A typical application, called neighbor-wise compound-specific graphical time warping (ncGTW) [R1], can achieve better performance for aligning liquid chromatograph-mass spectrometry (LC-MS) profiles **without predefined reference.** Its formulation shares similar characteristics with GTW and our method should also be applicable.\n\nThirdly, **the idea of our proposed method could be used to accelerate other max-flow techniques.** Although BILCO focuses on the GTW problem, the idea of using component push-relabel operation can be applied to other problems with structural subgraphs. And the bidirectional-pushing strategy can help other push-relabel-based methods with given initialization, which is already shown in Fig. 5(e) (BI-HIPR, combining the strategy with HIPR method). \n\n[R1] Wu, Chiung-Ting, et al. 
\"Targeted realignment of LC-MS profiles by neighbor-wise compound-specific graphical time warping with misalignment detection.\" Bioinformatics 36.9 (2020): 2862-2871.\n", " Thank you for your valuable and insightful feedback. The major concerns were: **1) the lack of introduction to graph techniques and an illustrative example makes our graph operations hard to understand and 2) the penalty for warping function dissimilarity in graphical time warping (GTW) lacks the practical meaning and may weaken the interests of our method.** We would like to take this opportunity to relieve these concerns.\n\nFirst, **we will provide a more detailed introduction to the relevant graph techniques and add an illustrative example to demonstrate our graph operations.** Your suggestion of an illustrative example is really appreciated. We will incorporate it into our revision. Moreover, we will also provide a vivid animation to demonstrate our approach as supplementary material.\n\nSecond, **joint alignment formulations with different external metrics have different focuses and advantages in different fields. In many applications, the warping functions have practical meanings and the penalty for warping function dissimilarity is meaningful.** For example, in our first example application (Fig. 1(e)), calculating signal propagation, the warping function represents the delay of the real signal at each time point relative to the reference time series. Thus, if two signals propagate similarly, the two corresponding warping functions should be close, while the same integral of the warping function could not assure a similar propagation. Or in our second example application (Fig. 1(f)), extracting depth in binocular stereo, the warping function represents the disparity between two views and can be used to derive the depth. It is natural to assume the depths of neighbor locations to be similar. In those applications, the structural information can be utilized to improve alignment performance by adding constraints on warping function dissimilarity. We will stress this point in our paper. Besides, we agree that in many applications the joint alignment should add constraints on some other external metrics (e.g., the integral of warping function, as you suggest). Thank you for pointing it out. We will consider other external metric-based similarities and extend our method in future work.\n", " This paper improves the computation speed and memory cost of GTW [Wang, et al, 2016] method, for the multiple time-series joint alignment problem. Two major enhancements are introduced. The first is excess pushing with linear component operations, which treats components (the number is much less than nodes) as the basic unit. The second is a bi-directional pushing (push excess and push deficit), where pairwise DTW initialization is used.\nComprehensive experiments on both synthetic and real world datasets are conducted to evaluate the speed and memory improvement of the proposed method.\n Strengths:\n1. The joint alignment problem is of great importance and interest to the community. The GTW method converts the alignment problem into a max-flow problem that sheds the light on solving it optimally via techniques in graph theory. This paper further pushes this direction to speed up the process and reduce the memory cost significantly. This makes the GTW approach feasible in real world applications.\n2. The paper has illustrative figures to demonstrate the approach.\n3. Time and space complexity analysis is provided.\n4. 
Several max-flow algorithms are included in the experiments part. The proposed method performs consistently much better than those baselines.\n\nWeakness:\n1. The paper lacks some introduction / background on graph techniques, making it slightly difficult to follow especially when defining ‘drain’, ‘discharge’ and ‘component’. With an illustrative example that demonstrates how these operations are done, would significantly improve the readability of the paper.\n2. The most critical problem lies in the definition of GTW. In GTW, the joint alignment problem is converted to a slightly different problem: constraint the similarity of pairwise warping functions / paths. The warping function could vary a lot, but the effect could be very similar. Let me use edit distance to illustrate (slightly different than DTW, but shares similar DP spirits, so I use this as an easy example). Given a reference string, there is one string that needs prepending a letter to match the reference (this is one warping). There is another string that needs appending a letter to match the reference (another warping). These two warping / editing functions are very different, but their integral is the same (both accumulate to only one insertion edit), so their effects are also similar as well. Therefore, some external metric, like the integral of the warping function, their difference should be considered as the penalty, instead of the function itself. GTW makes this definition for ease of computation via max-flow algorithms, but lacks the practical meaning as it is often hard to interpret the warping functions’ difference, and hard to apply to practical tasks like finding the centroids of time series. I think this formulation reduces the interests of GTW based approaches to the community.\n See above and I am willing to hear more opinions.\n NA", " This paper proposes an algorithm BILCO for solving graphical time warping, an alignment method for multiple time series data. The authors realized a fast and memory-saving algorithm by focusing on the special characteristics of DTW graphs. Experimental results, including results for real data, show that their two proposed methods, ELCO and bidirectional pushing, dramatically reduce computation and memory consumption. - The authors propose an algorithm for joint alignment that is much faster (10x~) and memory-saving than other methods.\n- Experimentally, the proposed method is applied to three real datasets that require multiple alignment, and shows impressive speedup and memory-saving performance.\n\n\n- The proposed method is not for general multiple time series alignment problem, but is an algorithm for solving a narrower problem, graphical time warping, in which warping paths in given neighbor relationships should be close, as written in Eq.(1). The application of the proposed method to general multiple alignment problem and other problems is not discussed.\n - Are there any other applications of the proposed method other than graphical time warping?\n\ntypo\n- l.170: source side $V_T$ This paper proposes an algorithm for solving time series data alignment, which does not involve negative social impact.", " This paper investigates the joint alignment of multiple time series. 
Compared to the existing max-flow method on GTW, the proposed method makes use of two properties of the joint alignment max-flow problem, i.e., joint alignment is a generalization of pairwise alignment, and a coarse approximate solution to joint alignment can be readily estimated, for significantly reducing computational time and space. Strengths:\n1. The observation of two important properties of the joint alignment max-flow problem.\n2. The corresponding designs to the two properties, i.e., Excess pushing with Linear Component Operations (ELCO) that integrates DP and the push-relabel approach, and a bidirectional-pushing strategy to utilize prior knowledge as initialization, are reasonable for improving efficiency.\n3. The complexity analysis provides good insights in comparison.\n4. The experiments on both synthetic and real datasets indicate the efficiency of the proposed method.\n\nWeakness:\n1. The experiments focus on comparing time and space efficiency of different methods. Alignment accuracy could be compared to indicate whether there is a trade-off between effectiveness and efficiency of the proposed method. 1. Is it reasonable to include the comparison of the alignment performance of the compared methods?\n2. Are there any trade-off between the alignment performance and the efficiency of the proposed method?\n The authors addressed the potential negative societal impact of the work well." ]
[ -1, -1, -1, -1, 6, 7, 6 ]
[ -1, -1, -1, -1, 4, 1, 4 ]
[ "1X9c0rw8vGD", "l54AnfsoI_g", "OiGhEFCpvnh", "VVLk4pQHDq8", "nips_2022_Q9lm8w6JpXi", "nips_2022_Q9lm8w6JpXi", "nips_2022_Q9lm8w6JpXi" ]
nips_2022_YZ-N-sejjwO
Models Out of Line: A Fourier Lens on Distribution Shift Robustness
Improving the accuracy of deep neural networks on out-of-distribution (OOD) data is critical to the acceptance of deep learning in real-world applications. It has been observed that accuracies on in-distribution (ID) versus OOD data follow a linear trend, and models that outperform this baseline are exceptionally rare (and referred to as “effectively robust”). Recently, some promising approaches have been developed to improve OOD robustness: model pruning, data augmentation, and ensembling or zero-shot evaluating large pretrained models. However, there still is no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness. We approach this issue by conducting a comprehensive empirical study of diverse approaches that are known to impact OOD robustness on a broad range of natural and synthetic distribution shifts of CIFAR-10 and ImageNet. In particular, we view the "effective robustness puzzle" through a Fourier lens and ask how spectral properties of both models and OOD data correlate with OOD robustness. We find this Fourier lens offers some insight into why certain robust models, particularly those from the CLIP family, achieve OOD robustness. However, our analysis also makes clear that no known metric is consistently the best explanation of OOD robustness. Thus, to aid future research into the OOD puzzle, we address the gap in publicly available models with effective robustness by introducing a set of pretrained CIFAR-10 models---$RobustNets$---with varying levels of OOD robustness.
Accept
All reviewers noted the relevance of the proposed study for the NeurIPS community. They all agreed that the paper is well-motivated, sound, and that the proposed Fourier interpolation is novel. While some reviewers had initial concerns regarding the experimental evaluation, the authors did a great job of improving their experiments and replying to the reviewers' concerns. Therefore, we recommend acceptance.
test
[ "Tjsd7L30OlF", "P1MZNr8HDcD", "o5ZrVJDFTU3", "y9rCBB_cn1o", "_NE05ZaKOjv", "CjEOjxgio42", "5KhtknN2gRD", "iA-gNq85eFc", "Oqi-yfhfSBm", "LqwgGnxHGGZQ", "Fa21d5ve9sj", "mW-8Q0AiPk", "2EsEX4qUJtp" ]
[ "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks a lot for the authors' responses. After seeing the rebuttal and other reviewers' comments and checking the revised submission, I think my main concerns are well-addressed and I will raise the recommendation to 'weak accept'.", " Thank you all for your time and thoughtful reviews! As the discussion period comes to a close, we ask that reviewers consider the final, following points of discussion.\n\n**Reviewer RxwV**\nThank you for your positive assessment! We think the clarifications and additional analyses/discussions you suggested have greatly improved our submission. Also, we are grateful for your raising your score from 6 to 7!\n\n**Reviewer N1xd**\nThank you for your positive assessment and manuscript-improving ideas! We hope that we have adequately addressed your main concern about \"lack of distribution shifts, in particular natural distribution shifts\" with our inclusion of four additional OOD datasets (ImageNet-R, ImageNet-A, ImageNet-Sketch, and ObjectNet). With the corroborating effect of these new experiments on our original conclusions in mind, could you please consider raising your score or letting us know if there is anything else we can address before the author discussion period closes?\n\n**Reviewer YyCi** \nAs we have not yet received your feedback on our revised paper or on our responses to your original review, could you please consider confirming that we have addressed your concerns and raising your score accordingly? In addition to the clarifying improvements to our manuscript made based on your review, we wish to highlight our revised submission’s inclusion of results for four new OOD datasets for ImageNet (ImageNet-R, ImageNet-A, ImageNet-Sketch, and ObjectNet). We believe these clarifications and corroborating results have strengthened our paper. Additionally, we would be happy to address any new questions you might have in the remaining time. Thank you again for your time and consideration. \n\nFinally, we wish to clarify that, upon acceptance, we would use the extra page available to highlight the new visualizations and analyses suggested by reviewers, which are currently visible in our revised submission’s appendix (e.g., non-CLIP ImageNet results are in Appendix A.10). These additional results further support our take-home message that no one metric rules them all, a surprising conclusion given the previously-demonstrated effectiveness of in-distribution accuracy for predicting robustness in most models [1,2] and the impressive ability of our novel model metrics to predict CLIP and non-CLIP ImageNet model robustness with high accuracy even when in-distribution accuracy fails.\n\n[1] John P Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Yair Carmon, and Ludwig Schmidt. Accuracy on the line: on the strong correlation between out-of-distribution and in-distribution generalization. In International Conference on Machine Learning, pages 7721–7735. PMLR, 2021.\n\n[2] Anders Andreassen, Yasaman Bahri, Behnam Neyshabur, and Rebecca Roelofs. The evolution of out-of-distribution robustness throughout fine-tuning, 2021. URL https://arxiv.org/abs/2106.15831.", " Thank you to the authors for their response. My concerns are addressed and I continue to recommend acceptance.", " Thank you for your time and effort spent reviewing our work. We believe that your concerns primarily relate to clarity/presentation, and we have addressed each of your concerns inline below. 
Notably, we wanted to highlight that our focus is on making progress towards identifying and understanding **effective** robustness of **out-of-line models**, which are exceedingly rare. These models behave very differently compared to standard (or on-the-line) models and are believed to be the key to solving the OOD robustness puzzle (see [a] for a more detailed discussion). In this context, our main contributions (i.e., establishing new model metrics that demonstrate strong correlation with OOD robustness for CLIP models, illustrating nuance in correlation of model metrics with OOD robustness, and an open source collection of pretrained CIFAR-10 models with effective robustness) remain unaffected by your concerns. Furthermore, we carried out new experiments (on 22 new models and 4 new OOD datasets) that show that our findings are more broadly applicable than the cases we originally considered, in turn making our contribution much stronger than initially perceived (please see our joint response). Lastly, we want to clarify that the focus of this work is very different than the focus of [1*], a point which we elaborate on in our inline responses below.\n\nWe hope that you find our response to your concerns satisfactory and that you will update your score and champion this work.\n\n[a] Andreassen, A., Bahri, Y., Neyshabur, B., & Roelofs, R. (2021). The evolution of out-of-distribution robustness throughout fine-tuning. arXiv preprint arXiv:2106.15831.\n\n>To the best of the reviewer's knowledge, this is one existing work [1*] considering both the amplitude and phase on the robustness in the frequency, while [1*] is not in this paper's reference list. The design difference is needed, at least.\n\nThank you for making us aware of this paper! Crucially, our focus is very different compared to [1] in that our goal is predicting the OOD robustness of preexisting **out-of-line models**. In our revision, we cite [1] to highlight that prior work has found that exchanging amplitude information between images can improve robustness. Please note that the design difference is as follows: our work uses a post-training analysis to gradually interpolate between two images’ amplitude (or phase) information to construct probing-images that quantify a trained model’s robustness; separately, [1] boosts robustness via a training-time data augmentation that completely exchanges images’ amplitude information rather than interpolating. \n\n[1] Chen, G., Peng, P., Ma, L., Li, J., Du, L., & Tian, Y. (2021). Amplitude-phase recombination: Rethinking robustness of convolutional neural networks in frequency domain. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 458-467).\n\n> The Power spectral densities in Figure 2 are nearly the same as those in [36]. A citation of [36] is needed in Sec. 4.1.\n\nWe have added a citation to [36] in Section 4.1, which has been merged with appendix Section A.5 in the revision (due to space constraints). Please note that these PSDs themselves are not claimed to be a contribution of our paper; however, the PSD analysis of CIFAR-10.1 is novel (never before visualized) and required the approach described in Section A.5. More importantly, these PSDs are connected to our claim that no one metric rules them all. 
Indeed, Section A.5 clarifies that the reason we include these PSDs is to visualize the spectral properties of the distribution shifts we study on CIFAR-10, which can foster intuition for why “each distribution shift is unique” and “has its own problem structure” (RxwV).", " > The experiment setting is a little strange. The experiments in Secs. 4.2 and 4.3 are conducted on CIFAR-10, while ImageNet is used in Sec. 4.4. From the reviewer's view, it is better to conduct the experiments on one consistent dataset or on both datasets.\n\nWe respond in the joint response, reproduced here for convenience:\n\nThank you for raising this point! Our revision clarifies that all of our experiments are driven by the need to study effectively robust or “out of line” models, which are quite rare (see [1]) but can be found among CIFAR-10 and ImageNet models. For CIFAR-10, these out-of-line models have been produced via model pruning [2] and data augmentation [3], so these are the settings we consider on that dataset in Figures 2, 3, 7–10 and Tables 1, 2, 4, 5. For ImageNet, out-of-line models have been produced via CLIP fine-tuning [4] and data augmentation [3], so we consider these settings in Figures 4, 11, 12 and Tables 3, 8, 9. To enhance our presentation's consistency across these settings, our revision now includes visualizations of the most predictive metrics for each setting, whereas before this information was only visible for CLIP models—e.g., Figure 12 plots the metric most predictive of robustness for models trained on ImageNet with and without data augmentation and Figure 9 plots the metric most predictive of pruned CIFAR-10 model robustness. We emphasize that out-of-line models are difficult to find, and that our experiments are carefully selected to increase our understanding of robustness in these rare settings where it has been shown to occur.\n\n[1] Andreassen, A., Bahri, Y., Neyshabur, B., & Roelofs, R. (2021). The evolution of out-of-distribution robustness throughout fine-tuning. arXiv preprint arXiv:2106.15831.\n[2] Diffenderfer, J., Bartoldson, B., Chaganti, S., Zhang, J., & Kailkhura, B. (2021). A winning hand: Compressing deep networks can improve out-of-distribution robustness. Advances in Neural Information Processing Systems, 34, 664-676.\n[3] Croce, F., Andriushchenko, M., Sehwag, V., Debenedetti, E., Flammarion, N., Chiang, M., Mittal, P., & Hein, M. (2020). Robustbench: a standardized adversarial robustness benchmark. arXiv preprint arXiv:2010.09670.\n[4] Wortsman, M., Ilharco, G., Kim, J. W., Li, M., Kornblith, S., Roelofs, R., Gontijo-Lopes, R., Hajishirzi, H., Farhadi, A., Namkoong, H., & Schmidt, L. (2022). Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7959-7971).\n\n> Since all the experiments are conducted on corruption benchmarks (CIFAR10-C and ImageNet-C), the separate experimental results for each type of corruption should be added.\n\nPlease note that these results are already in the original submission, with representative corruptions (spread among low, medium, and high frequencies) in the main text figures and the remaining corruptions in the appendix (each visualized individually). As stated in the text, the trends reflected in the main text results hold in the appendix results. \n\nIt is also important to note that not “all” of the experiments use “corruption benchmarks”. 
Indeed, many of the results use natural distribution shifts like CIFAR-10.1 and ImageNet V2 in the original submission, and the new submission adds natural distribution shifts via ImageNet-A and ObjectNet (as well as less natural shifts via ImageNet-R and ImageNet-Sketch). Each of these benchmarks is also visualized individually.", " Thank you for your positive and constructive feedback. We respond to individual questions/concerns inline below. Please note our summative comments in the joint response.\n\n> My thoughts are that the main weakness of this paper was the lack of distribution shifts, in particular natural distribution shifts which are more likely to occur in the world. I would have appreciated results for additional distribution shifts including ImageNetR or ObjectNet.\n\nIn the original manuscript we include results on CIFAR-10.1 and ImageNetV2, both of which are natural distribution shifts. In the revised paper, we have added results on ImageNet-R, ImageNet-A, ImageNet-Sketch, and ObjectNet. We find that our Fourier interpolation metrics continue to produce high correlations (>=0.93 R2) on all of these additional datasets. We thank the reviewer for the suggestion as we believe adding these additional datasets serves to further strengthen and validate our results!\nPlease note that we provide more detailed results for these datasets in our response to the related question below. Additionally, we note that not all of these four new datasets contain strictly natural distribution shifts, but they include the datasets requested by the reviewer (i.e., ImageNet-R and ObjectNet) and more.\n\n\n> In addition, it's not clear if there is a takeaway for designing better models. \n\nWe have added a discussion to our revised conclusion, recommending that future work consider adapting our Fourier interpolation metrics as regularizers or data augmentation strategies to use during training, as this may improve robustness. \n\n> Finally, while a conclusion of this paper is that there is no one metric to rule them all, there is only one column of plots without ID accuracy on the x-axis. Consider showing other metrics on the x-axis in the appendix.\n\nWe agree; thanks for the suggestion! We have added Figures 9 and 10 to the revised appendix to visualize the most predictive metrics on each of the CIFAR-10 distribution shifts, and Figure 12 to visualize the most predictive metrics on each of the ImageNet distribution shifts for non-CLIP models.\n\n> Why do the authors think that proposed metrics correlate well for ensembled CLIP models?\n\nOur experiments show that CLIP models achieve robustness precisely when they exhibit insensitivity to perturbations of Fourier amplitude (and sometimes phase). We designed these Fourier interpolation metrics to align with our human notion of visual semantic meaning, since human viewers are insensitive to perturbations of Fourier amplitude. It is tempting to believe that CLIP models correlate well with these metrics because they share a similar notion of semantic meaning, perhaps due to the way they were trained alongside natural language, but this hypothesis remains to be tested by future research. Our goal in this paper is to show that CLIP robustness does indeed correlate well with our proposed metrics.\n\n> Can any of the proposed metrics be used as objectives to train better models?\n\nPossibly, and we suggest this in our revised conclusions section. 
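Purely as an illustration of what such an objective might look like (this is our speculative sketch, not a method proposed or evaluated in the paper), a regularizer could penalize prediction drift under a semantics-preserving, amplitude-interpolated view of each batch. In PyTorch, with `x_amp` assumed to come from an amplitude-interpolation transform:

```python
import torch.nn.functional as F

def fourier_consistency_loss(model, x, x_amp, y, lam=1.0):
    # Supervised loss on the clean batch...
    logits = model(x)
    ce = F.cross_entropy(logits, y)
    # ...plus a penalty when predictions drift on the amplitude-interpolated view.
    kl = F.kl_div(F.log_softmax(model(x_amp), dim=-1),
                  F.softmax(logits, dim=-1).detach(), reduction="batchmean")
    return ce + lam * kl
```

Whether such a term would actually induce effective robustness is exactly the open question our exploration is meant to inform.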
The aim of our paper is to explore and compare candidate metrics that might help explain model robustness, a necessary first step that we hope will be followed up by future work that adapts our metrics into regularizers or augmentation strategies for training more robust models.\n\n> What happens on additional natural distribution shifts such as ImageNetR or ObjectNet or a shift from WILDS?\n\nWe thank the reviewer for the question! Our revised manuscript shows that our metrics are particularly illuminating with respect to CLIP performance on these datasets: On ImageNet-R, one of our newly introduced metrics attains 0.93 R2; in-distribution accuracy attains 0.31 R2. On ObjectNet, two of our newly introduced metrics attain 0.96 R2; in-distribution accuracy attains 0.48 R2. On ImageNet-Sketch, two of our proposed metrics attain 0.95 R2; in-distribution accuracy attains 0.52 R2. On ImageNet-A, one of our proposed metrics attains 0.97 R2; in-distribution accuracy attains 0.42 R2. These results along with R2s for non-CLIP models are visible in Figure 4 and Table 3 of the revised manuscript (also, the most predictive metrics for non-CLIP models are visualized in Figure 12). Again, we note that not all of these datasets contain strictly natural distribution shifts, but they include the datasets requested by the reviewer (i.e., ImageNet-R and ObjectNet) and more.\n\nWe did not evaluate on the WILDS distribution shifts because, to the best of our knowledge, the literature has not yet shown models with effective robustness on most of these more challenging shifts (see https://wilds.stanford.edu/leaderboard/ for current results on WILDS).\n", " Thank you for your positive and constructive feedback. We respond to individual questions/concerns inline below. Please note our summative comments in the joint response.\n\n> Lack of causal explanations: Even though the authors use words that imply the results provide a causal understanding of why effective generalization happens, all of the results are correlational. It's misleading to claim that the experiments can answer \"why\" questions. I believe it's very important that this is fixed before the camera-ready version, should the paper be accepted. (I'm happy to hear arguments against this criticism) In my opinion, this is an important limitation that can be fixed simply by removing any causal claims from the paper - the existing results are still worthwhile.\n\nWe agree! Please note that we have revised our discussion (particularly in the “Results” section) to reflect that our experiments provide correlational evidence for and against various hypotheses regarding why effective robustness might emerge. For example, the first paragraph of Section 4 now mentions that establishing causality requires rigorous analysis (as exemplified by the movement from empirical findings to causality found in generalization analyses going from Jiang et al. [18] to Dziugaite and Drouin et al., NeurIPS 2020 https://arxiv.org/pdf/2010.11924.pdf). Importantly, this discussion makes clear that our contribution of providing models and candidate metrics may inform this future work on causality in the OOD/ER space.\n\n> No emphasis on architecture: It'd be interesting to see a discussion of whether model architecture has any impact on effective generalization. The authors likely have enough empirical data to address this - a discussion of this would have made the paper stronger.\n\nThank you for the suggestion! 
Architecture-specific trendlines are shown in the CIFAR-10 figures, and we have added a related discussion in the text (please see the revision's Section 4.1). In most cases, the different architectures exhibit similar trends, but sometimes they differ or are simply offset from each other: e.g., Conv8 is often more “out-of-line” than the other architectures. On ImageNet, we include a range of model architectures in the standard models and find that they are mostly non-robust, and it is the training data that influences out-of-line robustness more than the architecture does (e.g., please see the discussion of ResNet-50s trained with and without data augmentation in our revision's Appendix A.10). Importantly, we mention this as a worthwhile direction for future work, which may become especially relevant as more robust architectures are introduced.\n\n> Lack of mechanistic understanding: Despite a plethora of empirical data, I don't think I've acquired a deeper (i.e. mechanistic) understanding of why/how effective robustness occurs. I am also not sure if a practitioner leaves the paper with a solid understanding of which techniques to use to achieve effective robustness, beyond using methods from already existing literature… This is not to say that the presented results are not meaningful - perhaps follow-up works will fill these gaps.\n\nThank you for this comment! In our revised paper, we suggest that future researchers might use our metrics, results, and released models to develop a causal understanding of ER as well as practical methodologies to induce it, for example by applying our Fourier interpolation metrics as regularizers during training. Regarding a mechanistic understanding, effective robustness has only been recently identified in the literature, and there has been slow progress in terms of explaining why ER occurs and designing procedures to induce ER. However, our results and Figure 1 suggest that alignment with human perceptions of semantic meaning in images may be important to effective robustness. In particular, we show that models that are less sensitive to Fourier amplitude interpolation (and sometimes also Fourier phase interpolation) also tend to be more robust across a wide variety of natural and synthetic distribution shifts. ", " > Could you explain which models you evaluated on ImageNet? The description (between lines 109 and 112) is a bit vague. Which architectures are included, for example?\n\nThank you for raising this point! We have added the full list of models to the revised paper (in Appendix A.1). \n- We use the following models from Torchvision (details at https://pytorch.org/vision/0.8/models.html): alexnet, vgg11, vgg11_bn, vgg13, vgg13_bn, vgg16, vgg16_bn, vgg19, vgg19_bn, resnet18, resnet34, resnet50, resnet101, resnet152, squeezenet1_0, squeezenet1_1, densenet121, densenet169, densenet161, densenet201, googlenet, shufflenet_v2_x1_0, mobilenet_v2, resnext50_32x4d, resnext101_32x8d, wide_resnet50_2, wide_resnet101_2, and mnasnet1_0. \n- We use the following models from the RobustBench model zoo on ImageNet corruptions (details at https://github.com/RobustBench/robustbench#model-zoo): Geirhos2018_SIN, Geirhos2018_SIN_IN, Geirhos2018_SIN_IN_IN, Hendrycks2020Many, and Hendrycks2020AugMix. 
\n- From the CLIP family, we use 33 models formed by weight-space interpolation between zero-shot and fine-tuned ViT-B/16, ViT-B/32, and ViT-L/14 (11 models from each architecture).\n\n> Line 153: Could you clarify what you mean by \"we interpolate the lowest 40% of image frequencies\"? Are these the frequencies that occupy 40% of the energy? Or perhaps something else? Also, do you leave the rest of the frequencies untouched?\n\nWe clarify in our revision's Appendix A.3 that the 40% is with respect to the maximum spatial frequency of the image (a function of the pixel resolution), not based on the actual power spectral density of the image. The interpolation is done using a square mask on the DFT of the image, which interpolates the lower image frequencies but leaves the higher image frequencies unperturbed. We chose these frequency cutoffs based on visualizing the resulting amplitude and phase interpolation paths and visually verifying that the phase paths destroy semantic content (including frequencies that are too high in the phase interpolation can introduce semantically meaningful content from the other/non-original image) and the amplitude paths preserve semantic content (including frequencies that are too high in the amplitude interpolation can produce images that are a bit too corrupted for a human to confidently classify).\n\n> Line 280: Are the 11 CLIP models obtained by interpolating the same pretrained and finetuned versions of CLIP?\n\nYes, all 11 models are CLIP ViT-B/16, interpolating between zero-shot and fine-tuned in increments of 10%. However, we have updated our paper to include additional results (using the same weight interpolation procedure) on two more CLIP architectures, ViT-B/32 and ViT-L/14, bringing the total number of CLIP models we evaluate to 33. Thank you for the question—we believe our results are strengthened by the addition of these models. \n\n> Nitpick: It perhaps could be interesting to add a plot in the Appendix that supplements Figure 1 in that instead of tracing amplitude or phase interpolations along a single axis, a grid of images can be provided where on the left top and right bottom, we have the unperturbed images, and in the middle we have various (mixed) interpolations.\n\nThank you for the suggestion! Although this might be a visually interesting figure, the various mixed interpolations you suggest are not utilized by our methodology and thus we are concerned they may create confusion. We would be happy to add this figure, however, if you could please provide a context or motivation for its inclusion.\n\n> Limitations: Only image-based distribution shifts: The proposed metrics (and the analysis) are only concerned with image data. Experiments have no causal power: Despite the authors' claims, the experiments are fully correlational, and this should be acknowledged as such. (see weaknesses for above)\n\nOur revision's discussion clarifies that our focus on image data is a limitation of both our work and the broader OOD robustness literature. In particular, we focus on images because there are very few non-image domains in which effective robustness has been identified in the literature. 
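To make the Line-153 procedure above concrete, here is a minimal numpy sketch of the amplitude interpolation. This is our own illustrative code, not the paper's implementation; `frac=0.4` matches the square-mask cutoff described above, and a phase variant would analogously blend `np.angle` while keeping the amplitude.

```python
import numpy as np

def amplitude_interpolate(img_a, img_b, alpha, frac=0.4):
    # Centered 2D DFTs of both images (channels, if any, along the last axis).
    Fa = np.fft.fftshift(np.fft.fft2(img_a, axes=(0, 1)), axes=(0, 1))
    Fb = np.fft.fftshift(np.fft.fft2(img_b, axes=(0, 1)), axes=(0, 1))
    # Square mask covering the lowest `frac` of the max spatial frequency.
    h, w = img_a.shape[:2]
    ch, cw, rh, rw = h // 2, w // 2, int(frac * h / 2), int(frac * w / 2)
    mask = np.zeros((h, w), dtype=bool)
    mask[ch - rh:ch + rh + 1, cw - rw:cw + rw + 1] = True
    if img_a.ndim == 3:
        mask = mask[..., None]
    # Blend amplitudes inside the mask; keep img_a's phase and its
    # high-frequency content untouched.
    amp = np.where(mask, (1 - alpha) * np.abs(Fa) + alpha * np.abs(Fb), np.abs(Fa))
    F_mix = amp * np.exp(1j * np.angle(Fa))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F_mix, axes=(0, 1)), axes=(0, 1)))
```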
However, prior work has shown NLP embeddings have intuitive spectral properties [1], suggesting that future work may find that our Fourier approach to creating semantics-preserving corruptions can be modified to provide insight when using other data modalities.\n\nWe address the “why”/causal limitation above, and in the revised paper.\n\n[1] Tamkin, A., Jurafsky, D., & Goodman, N. (2020). Language through a prism: A spectral approach for multiscale language representations. Advances in Neural Information Processing Systems, 33, 5492-5504.", " We deeply appreciate the reviewers’ time and thoughtful feedback on our work. We are happy to hear that reviewers found our study of the effective robustness puzzle to be “extensive” (RxwV), “comprehensive” (YyCi), “very interesting” (N1xd), and “likely useful” (N1xd, RxwV). Importantly, reviewers recognized the “compelling case” (RxwV) for our proposed metrics that “capture the Fourier sensitivity” (YyCi) of models and “may be potentially useful in further understanding CLIP’s robustness” (N1xd).\n\nRelatedly, questions raised by reviewers led to our findings being more broadly applicable than the cases we originally considered. In the revised paper, we analyze 33 CLIP models (11 for each of ViT-B/16, ViT-B/32, and ViT-L/14) rather than the 11 (ViT-B/16) in the original submission. We show that our CLIP findings extend to 4 new out-of-distribution datasets for ImageNet (based on N1xd’s suggestion). Specifically, using our proposed Fourier interpolation metrics we are able to predict the robustness of 33 CLIP models on **ImageNet-A**, **ImageNet-Sketch**, **ImageNet-R**, and **ObjectNet** with **0.97**, **0.95**, **0.93**, and **0.96** R2, respectively. \n\nWithout our metrics and relying on in-distribution validation accuracy as a predictor, the R2s on these datasets drop to 0.42, 0.52, 0.31, and 0.48, respectively. This gap in correlation quality underscores the difficulty of predicting OOD robustness and the value added by our new metrics. Critically, these results are in agreement with the results we presented in our original submission, which showed our metrics “correlate well with effective [robustness] of models from the CLIP family” (RxwV); without these metrics, practitioners are left without reliable means to predict the robustness of “out of line” models like CLIP. \n\nNotably, reviewers also highlighted the potential usefulness (N1xd) of RobustNets, the set of out-of-line CIFAR-10 models we release to \"facilitate further research\" (RxwV) into effective robustness. Our study finds these models may be particularly useful in the setting of pruned CIFAR-10 models, which can exhibit effective robustness but remain poorly understood.\n\nHowever, reviewers appropriately noted areas where our communication of results could be improved, and our revised manuscript carefully addresses these areas:\n\n- Noting that our “experimental setting is a little strange”, Reviewer YyCi suggested that we use “one consistent dataset or both datasets” for each experiment. Our revision clarifies that all of our experiments are driven by the need to study effectively robust or “out of line” models, which are quite rare (see [1]) but can be found among CIFAR-10 and ImageNet models. For CIFAR-10, these out-of-line models have been produced via model pruning [2] and data augmentation [3], so these are the settings we consider on that dataset in Figures 2, 3, 7–10 and Tables 1, 2, 4, 5. 
For ImageNet, out-of-line models have been produced via CLIP fine-tuning [4] and data augmentation [3], so we consider these settings in Figures 4, 11, 12 and Tables 3, 8, 9. To enhance our presentation’s consistency across these settings, our revision now includes visualizations of the most predictive metrics for each setting, whereas before this information was only visible for CLIP models—e.g., Figure 12 plots the metric most predictive of robustness for models trained on ImageNet with and without data augmentation and Figure 9 plots the metric most predictive of pruned CIFAR-10 model robustness. We emphasize that out-of-line models are difficult to find, and that our experiments are carefully selected to increase our understanding of robustness in these rare settings where it has been shown to occur. \n\n- Relatedly, reviewer N1xd correctly points out that “there is only one column of plots without ID accuracy on the x-axis.” In line with their suggestion, the revised paper shows “other metrics on the x-axis in the appendix” for the CIFAR-10 models and non-CLIP ImageNet models (again, previously we only showed the most predictive metric for CLIP ImageNet models). We agree with the reviewer that these new figures (9, 10, and 12 in our revision’s appendix) help illustrate our findings that there is “no one metric to rule them all”; notably, these new figures also help illustrate that our proposed Fourier interpolation metrics often (particularly on ImageNet) correlate significantly better with robustness than previously-proposed metrics.", " - Finally, as noted by Reviewer RxwV, our experiments are correlational rather than causal. Our revision clarifies that our goal is to compare candidate robustness metrics and provide experimental evidence as to which are most promising for future work (e.g. to determine causality and design training improvements). While we fully agree that our present results are correlational only, we do believe that our aforementioned strong results on new datasets suggested by Reviewer N1xd (i.e., ImageNet-A, ImageNet-R, ImageNet-Sketch, and ObjectNet) are a good indicator that our newly-introduced Fourier interpolation metrics merit further research, and that our contribution of providing models (RobustNets) and an “extensive/comprehensive empirical study” (RxwV, YyCi) is critical to these next steps.\n\nWe respond to each reviewer’s specific comments and questions individually below. Through these answers and clarifications, we believe we address the concerns of each of the reviewers, highlight the significance of our findings, and convey the crucial role RobustNets and our Fourier interpolation metrics can play for future research addressing the OOD robustness puzzle. We are hopeful that reviewers will consider our answers, increase their ratings, and recommend acceptance. Please feel free to follow up with any additional questions.\n\nThank you, \n\nPaper3764 Authors\n\n[1] Andreassen, A., Bahri, Y., Neyshabur, B., & Roelofs, R. (2021). The evolution of out-of-distribution robustness throughout fine-tuning. arXiv preprint arXiv:2106.15831.\n\n[2] Diffenderfer, J., Bartoldson, B., Chaganti, S., Zhang, J., & Kailkhura, B. (2021). A winning hand: Compressing deep networks can improve out-of-distribution robustness. Advances in Neural Information Processing Systems, 34, 664-676.\n\n[3] Croce, F., Andriushchenko, M., Sehwag, V., Debenedetti, E., Flammarion, N., Chiang, M., Mittal, P., & Hein, M. (2020). Robustbench: a standardized adversarial robustness benchmark. 
arXiv preprint arXiv:2010.09670.\n\n[4] Wortsman, M., Ilharco, G., Kim, J. W., Li, M., Kornblith, S., Roelofs, R., Gontijo-Lopes, R., Hajishirzi, H., Farhadi, A., Namkoong, H., & Schmidt, L. (2022). Robust fine-tuning of zero-shot models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 7959-7971).", " **High level goal:** The authors are interested in an empirical study of models that display *effective robustness* (those that are \"above the line\" that's traced by in and out-of-distribution accuracies of models that don't display effective robustness) in the image domain. \n\n**Gap in literature that the paper contributes in addressing:** The conditions (on OOD data, model and training techniques) under which effective generalization is achieved are not well understood. \n\n**Main contributions:**\n * Using previously proposed metrics that are known to correlate with OOD generalization (including some new metrics based on the Fourier spectra of the images and models), the authors study how different types of pruning, data augmentation and ensembling affect effective generalization. \n * The authors release the RobustNets suite of models that display effective robustness to facilitate further research in this space. \n * The Fourier-based metrics proposed by the authors correlate well with effective generalization of models from the CLIP family. It's also interesting that these metrics don't correlate particularly strongly when other (mostly convnet based) models are used, which is useful in gaining indirect knowledge about the types of functions represented by these networks. \n\n\nSTRENGTHS:\n* **Relevance:** The focus of the paper is well within the scope of NeurIPS, especially given the recent interest in gaining a deeper understanding of OOD generalization. \n* **Comprehensive evaluation:** The experiments are quite extensive, providing a plethora of useful data for understanding the conditions under which effective generalization occurs. \n* **Usefulness of the spectral characteristics of in and OOD data:** It turns out that the spectral characteristics of in and OOD data give useful information regarding which distribution shifts are more likely to induce an \"accuracy-on-the-line\"-style phenomenon. \n* **Evidence that each distribution shift is \"unique\"** The authors make a compelling case that each distribution shift has its own problem structure, and there's no single metric that correlates well with generalization across all shifts (at least among the ones explored in the paper)\n\n\nWEAKNESS:\n* **Lack of causal explanations:** Even though the authors use words that imply the results provide a causal understanding of why effective generalization happens, all of the results are correlational. It's misleading to claim that the experiments can answer \"why\" questions. I believe it's very important that this is fixed before the camera-ready version, should the paper be accepted. (I'm happy to hear arguments against this criticism) In my opinion, this is an important limitation that can be fixed simply by removing any causal claims from the paper - the existing results are still worthwhile. \n* **No emphasis on architecture:** It'd be interesting to see a discussion of whether model architecture has any impact on effective generalization. The authors likely have enough empirical data to address this - a discussion of this would have made the paper stronger. 
\n**Lack of mechanistic understanding:** Despite a plethora of empirical data, I don't think I've acquired a deeper (i.e. mechanistic) understanding of why/how effective robustness occurs. I am also not sure if a practitioner leaves the paper with a solid understanding of which techniques to use to achieve effective robustness, beyond using methods from already existing literature. \n * This is not to say that the presented results are not meaningful - perhaps follow-up works will fill these gaps. \n\n================\n**Post rebuttal update**\nI thank the authors for their response. Some of my important concerns are addressed, which I've reflected in my review by increasing the score. * Could you explain which models you evaluated on ImageNet? The description (between lines 109 and 112) is a bit vague. Which architectures are included, for example? \n* Line 153: Could you clarify what you mean by \"we interpolate the lowest 40% of image frequencies\"? Are these the frequencies that occupy 40% of the energy? Or perhaps something else? Also, do you leave the rest of the frequencies untouched?\n* Line 280: Are the 11 CLIP models obtained by interpolating the same pretrained and finetuned versions of CLIP? \n\n\nNitpick: It perhaps could be interesting to add a plot in the Appendix that supplements Figure 1 in that instead of tracing amplitude or phase interpolations along a single axis, a grid of images can be provided where on the left top and right bottom, we have the unperturbed images, and in the middle we have various (mixed) interpolations. **Only image-based distribution shifts:** The proposed metrics (and the analysis) are only concerned with image data. \n**Experiments have no causal power:** Despite the authors' claims, the experiments are fully correlational, and this should be acknowledged as such. (see weaknesses for above)\n", " This paper carries out a comprehensive study of distribution shift robustness through a Fourier lens and designs new 'Fourier sensitivity metrics' that capture the Fourier sensitivity of models as test data moves farther away from the training data manifold. It provides a theoretical analysis of OOD robustness under pruning, data augmentation, and weight ensembling. Strength: \n(1) This paper considers OOD robustness under pruning, data augmentation, and weight ensembling.\n(2) This paper designs new metrics that capture the Fourier sensitivity of models.\n\nWeaknesses:\n(1) To the best of the reviewer's knowledge, there is one existing work [1*] considering both the amplitude and phase for robustness in the frequency domain, while [1*] is not in this paper's reference list. A discussion of the design differences is needed, at least.\n\n(2) The Power spectral densities in Figure 2 are nearly the same as those in [36]. A citation of [36] is needed in Sec. 4.1.\n\n(3) The experiment setting is a little strange. The experiments in Secs. 4.2 and 4.3 are conducted on CIFAR-10, while ImageNet is used in Sec. 4.4. From the reviewer's view, it is better to conduct the experiments on one consistent dataset or on both datasets.\n\n(4) Since all the experiments are conducted on corruption benchmarks (CIFAR10-C and ImageNet-C), the separate experimental results for each type of corruption should be added.\n\n[1*] Amplitude-Phase Recombination: Rethinking Robustness of Convolutional Neural Networks in Frequency Domain. 
ICCV 2021\n\n\n-------------------------------------------------------\nPost-rebuttal reviews:\nMy main concerns are on the experimental settings, which have been well-addressed in the rebuttal. I will raise the recommendation to 'weak accept'.\n The questions are listed in the pros and cons. N.A.", " This paper takes metrics which predict OOD robustness (such as ID accuracy, model Jacobian norm, etc.) and then looks at the correlation between these metrics and OOD accuracy for methods which can produce robust models. This paper observes that while ID accuracy is a good predictor of robustness for natural distribution shifts (ImageNet -> ImageNetV2), for some synthetic distribution shifts there are better metrics. In general they find \"no single metric rules them all\". The paper also introduces new metrics which correlate well with the robustness of ensembled CLIP models. Strengths:\nOverall the results are very interesting and the empirical findings are likely useful to the community. Moreover, the proposed metrics seem like they may be potentially useful in further understanding CLIP's robustness. Finally, RobustNets could be a useful resource.\n\nWeaknesses:\nMy thoughts are that the main weakness of this paper was the lack of distribution shifts, in particular natural distribution shifts which are more likely to occur in the world. I would have appreciated results for additional distribution shifts including ImageNetR or ObjectNet. In addition, it's not clear if there is a takeaway for designing better models. Finally, while a conclusion of this paper is that there is no one metric to rule them all, there is only one column of plots without ID accuracy on the x-axis. Consider showing other metrics on the x-axis in the appendix. - Why do the authors think that proposed metrics correlate well for ensembled CLIP models?\n- Can any of the proposed metrics be used as objectives to train better models?\n- What happens on additional natural distribution shifts such as ImageNetR or ObjectNet or a shift from WILDS?\n Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 3 ]
[ "_NE05ZaKOjv", "nips_2022_YZ-N-sejjwO", "CjEOjxgio42", "mW-8Q0AiPk", "mW-8Q0AiPk", "2EsEX4qUJtp", "Fa21d5ve9sj", "Fa21d5ve9sj", "nips_2022_YZ-N-sejjwO", "nips_2022_YZ-N-sejjwO", "nips_2022_YZ-N-sejjwO", "nips_2022_YZ-N-sejjwO", "nips_2022_YZ-N-sejjwO" ]
nips_2022_A0ejsEHQu9w
Gradient-Free Methods for Deterministic and Stochastic Nonsmooth Nonconvex Optimization
Nonsmooth nonconvex optimization problems broadly emerge in machine learning and business decision making, whereas two core challenges impede the development of efficient solution methods with finite-time convergence guarantees: the lack of a computationally tractable optimality criterion and the lack of computationally powerful oracles. The contributions of this paper are two-fold. First, we establish the relationship between the celebrated Goldstein subdifferential~\citep{Goldstein-1977-Optimization} and uniform smoothing, thereby providing the basis and intuition for the design of gradient-free methods that guarantee the finite-time convergence to a set of Goldstein stationary points. Second, we propose the gradient-free method (GFM) and stochastic GFM for solving a class of nonsmooth nonconvex optimization problems and prove that both of them can return a $(\delta,\epsilon)$-Goldstein stationary point of a Lipschitz function $f$ at an expected convergence rate of $O(d^{3/2}\delta^{-1}\epsilon^{-4})$ where $d$ is the problem dimension. Two-phase versions of GFM and SGFM are also proposed and proven to achieve improved large-deviation results. Finally, we demonstrate the effectiveness of 2-SGFM on training ReLU neural networks with the \textsc{Mnist} dataset.
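For quick reference, the stationarity notion named in this abstract is the standard one (our restatement, not text from the paper): with $\partial f$ the Clarke subdifferential and $\mathbb{B}_\delta(x)$ the $\delta$-ball around $x$,

```latex
\partial_\delta f(x) = \mathrm{conv}\Bigl(\textstyle\bigcup_{y \in \mathbb{B}_\delta(x)} \partial f(y)\Bigr),
\qquad
x \text{ is } (\delta,\epsilon)\text{-Goldstein stationary} \iff \min_{g \in \partial_\delta f(x)} \|g\| \le \epsilon .
```

The flavor of update the abstract's GFM performs can then be sketched as follows. This is our illustrative reconstruction under standard randomized-smoothing assumptions; the estimator constants and step-size schedule in the paper's actual Algorithm 1 may differ.

```python
import numpy as np

def gfm_step(f, x, delta, eta, rng=None):
    # Two-point estimate of the gradient of the delta-ball uniform smoothing
    # of f; by the Goldstein-subdifferential connection above, this smoothed
    # gradient lies in the delta-Goldstein subdifferential of f.
    rng = rng if rng is not None else np.random.default_rng()
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)  # uniform direction on the unit sphere
    g = (x.size / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u
    return x - eta * g  # plain descent step on the smoothed surrogate
```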
Accept
The authors introduce two derivative-free algorithms for computing Goldstein stationary points in the context of nonconvex nonsmooth optimization, and show that they enjoy polynomial complexity (in expectation), while the dimension dependence (which is unavoidable in the derivative-free setting) is worse by only a sqrt(d) factor compared to the convex/smooth case. A high-probability bound with a two-phase scheme was established as well. The reviewers described the strengths of the paper in the following way: - The paper is well-written and easy to follow. - The theoretical result in this paper is interesting. - The authors established a novel optimality criterion for non-smooth non-convex Lipschitz functions, called the (δ,ε)-Goldstein stationary point, and proposed a gradient-free algorithm with its stochastic version by deriving the Goldstein subdifferential and uniform smoothing technique. - Overall, it provides some solid theoretical results on gradient-free methods for nonsmooth nonconvex optimization. Some criticism was raised, but the authors managed to address it in their rebuttal. I agree with the collective judgment of the reviewers that this paper clearly passes the bar of acceptance. Please make sure that all criticism is properly addressed in the camera-ready version of the paper as well. Congratulations on a nice paper! AC
train
[ "RsYMIbxhlx1", "MyrGIhpzvBK", "ZfVVhAiuwKL", "bUILwlcefgi", "pIe_eYRUC_B", "2aZjFJc-7v2", "ksJioR-u1p", "dguYxaTiyYu" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the authors' responses.\n\nI have read all responses and reviews. The authors have solved my concerns. So I keep my score.", " Thank you for your encouraging comments and positive evaluation! We reply to your questions point-by-point below, and will color all relevant revisions in our paper in ${\\color{blue} \\textrm{blue}}$. \n\n1. **The Lipschitz continuous condition used in the paper may be not mild. Could we further relax the Lipschitz continuous condition in the gradient-free methods for nonsmooth nonconvex optimization?**\nWe thank the reviewer for pointing this important note out to us. The Lipschitz continuous condition can be relaxed to local Lipschitz continuous condition and all the results still remain the same under minor modification. Indeed, the local Lipschitz condition is enough to guarantee that the randomized smoothing works with sufficiently small \\delta. However, it seems that the complete relaxation of the Lipschitz continuous condition is impossible. For gradient-based methods, the Rademacher's theorem (i.e., any Lipschitz function is almost everywhere differentiable) will never hold and we only have the abstract definition of Clarke stationarity (Definition 2.2) without Proposition 2.1. This rules out all the existing theoretical analysis under the randomization scheme. For gradient-free methods, the randomized smoothing will not give a differentiable function with Lipschitz gradients. This also rules out the theoretical analysis in the current manuscript. Nonetheless, there exist finite-time convergent algorithms for some certain classes of non-Lipschitz and nonconvex optimization problems; for example, see the reference: Bian et.al., Complexity analysis of interior point algorithms for non-Lipschitz and nonconvex minimization, Mathematical Programming, 2015. \n\n2. **In the convergence analysis (Theorem 3.2,3.4-3.6), do we need to choose some small parameter $\\delta$ that relies on $d$ and $\\epsilon$, as in the existing zeroth-order methods for smooth optimization?**\nThank you for your helpful comments! We have discussed the difference between the role of $\\delta$ in our setting and that in smooth optimization. Indeed, we highlight that $\\delta > 0$ is the desired tolerance in our setting. Indeed, $(\\delta, \\epsilon)$-Goldstein stationarity (see Definition 2.3) relaxes $\\epsilon$-Clarke stationarity and our methods pursue an $(\\delta, \\epsilon)$-stationary point since finding an $\\epsilon$-Clarke point is intractable. This is different from smooth optimization where $\\epsilon$-Clarke stationarity reduces to $\\nabla f(\\textbf{x}) \\leq \\epsilon$ and becomes tractable. In this context, the existing zeroth-order methods are designed to pursue an $\\epsilon$-stationary point. Notably, a $(\\delta, \\epsilon)$-Goldstein stationary point is an $\\epsilon$-stationary point in smooth optimization if we choose $\\delta$ that relies on $d$ and $\\epsilon$. That is why the existing zeroth-order methods set $\\delta$ based on $d$ and $\\epsilon$ in smooth optimization. Based on the above arguments, we can set the desired tolerance $\\delta$ independent of $d$ and $\\epsilon$. \n", " 4. **It seems that the authors trained a quite simple convolutional neural network model on image classification task rather than some more modern and efficient models like [R2] and [R3] according to the experiment results, which is insufficient for validating the effectiveness of the proposed methods.** \n\n **[R2] He, K., Zhang, X., Ren, S., and Sun, J. 
Deep residual learning for image recognition. CVPR, 2016**\n\n **[R3] G. Huang, Z. Liu, L. Van Der Maaten and K. Q. Weinberger, \"Densely connected convolutional networks,\" CVPR, 2017**\n\n We would like to thank the reviewer for the helpful comments on potential improvements to the experimental section of the paper. We agree that the numerical results in the previous draft are inadequate to make the point of the paper, and we have conducted additional experiments to better demonstrate the effectiveness of the proposed scheme. For example, we include INGD and present the numerical results on the larger CIFAR10 dataset. We agree that it would be better to validate the effectiveness of the proposed methods using some more modern and efficient models in [R2] and [R3]. However, the practical implementation of our methods for these models would require much more research and may be beyond the scope of this paper. Thus, we cite these references in the revision and reserve further empirical study as future work. \n\n5. **Grammar mistakes. Line 342, “We have proposed and analyzed a class of…” “have proposed and analyzed” should be replaced by “propose and analyze”.**\nThank you for pointing it out to us. We have fixed it in the revision. \n\nThanks again for your remarks! We hope and trust that our replies have alleviated your concerns regarding the merits of our submission, and we look forward to an open-minded discussion if any such concerns remain.", " Thank you for your time and your input. We reply to your main questions point-by-point below, and we have colored all relevant revisions in our paper in ${\color{blue} \textrm{blue}}$. \n\n1. **The authors proposed a class of subdifferential-based gradient-free algorithms. What is the advantage of the proposed algorithms compared to the gradient-based methods and zeroth-order methods?**\nWe sincerely apologize for the confusion about terminology. **Our algorithms belong to the class of zeroth-order methods**, in the sense that only (noisy) function values are available at each point but gradient information (i.e., subdifferential information) is not available. Note that the optimality notion (i.e., Goldstein stationarity) is indeed defined based on the subdifferential of the function. **However, this does not mean that our algorithms will use the subdifferential information in their scheme**. To our knowledge, our algorithms are **among the first zeroth-order methods** to handle nonsmooth nonconvex optimization with a Lipschitz objective function. \n\n “Gradient-based” methods are usually applied where gradient information is available at each point. Compared to “gradient-based” methods, our methods achieve finite-time convergence to an approximate Goldstein stationary point even when we only have a function-valued oracle (or a zeroth-order oracle). \n\n A major reason that we consider the construction of gradient-free methods instead of gradient-based methods is that **gradient information sometimes is not readily available for application problems where we only have access to a noisy function value at each point**. 
This lack of gradient information is a common issue in the context of simulation optimization (Nelson, 2010) and (Hong et al., 2015), where the objective function value is often obtained as the output of a black-box or complex simulator, and the simulator does not have the infrastructure needed to effectively evaluate gradients; we also refer to (Ghadimi and Lan, 2013) and (Nesterov and Spokoiny, 2017) for comments on the lack of gradient evaluation in practice. \n\n2. **The authors compared the performance of the proposed algorithms with different choices on MNIST, which is literally a small-scale and simple dataset. Why not use larger datasets to verify the excellent performance of the proposed algorithm?**\nThank you for your helpful comments! We agree and have added the experiment with a larger CIFAR10 dataset in the revision; please see Appendix G in the revision. Due to time constraints, we only compare our 2-SGFM with SGD, and the numerical results demonstrate performance similar to that on the MNIST dataset. We will add other methods in the final revision. \n\n3. **The authors' comparison algorithms are too few; the proposed methods could be compared with some more advanced zeroth-order and gradient-free optimization algorithms like INGD from J. Zhang, H. Lin, S. Jegelka, S. Sra, and A. Jadbabaie. Complexity of finding stationary points of nonconvex nonsmooth functions. In ICML, pages 11173–11182. PMLR, 2020.**\nThank you for your helpful comments! We have added INGD in the revision; please see Figure 1 on page 9 of the main text. The numerical result is consistent with Figure 1 in Zhang et al. INGD outperforms SGD and is competitive with ADAM and AdaGrad. However, we hope to emphasize that INGD is a gradient-based optimization algorithm and requires gradient information at each point. This is in contrast to our method, which is a gradient-free (or zeroth-order) method. ", " Thank you for your encouraging comments and positive evaluation! We reply to your questions point-by-point below, and will color all relevant revisions in our paper in ${\color{blue} \textrm{blue}}$. \n\n1. **Proposition 2.3 basically repeats [78, Lemma 8]. In the proof, it might miss a norm in L629 and L634.**\nThank you for pointing this out to us. We have emphasized that Proposition 2.3 is a restatement of [78, Lemma 8] and fixed the missing norm in the revision. \n\n2. **The step size of Algorithms 1, 2, and 3 seems dependent on $\Delta$, which is usually unknown in practice and is not necessary in the first-order setting for Lipschitz functions, e.g., in [79].**\nWe thank the reviewer for this helpful comment. Indeed as the reviewer pointed out, this $\Delta$ is generally not necessary in the first-order setting for Lipschitz functions. In contrast, for zeroth-order settings, it seems that the information about $\Delta$ is necessary for setting a step size and proving tight theoretical guarantees. Not knowing $\Delta$ may lead to worse complexity for the algorithm, compared to knowing $\Delta$. \n\n Specifically, the choice of the step size of Algorithms 1, 2, and 3 is crucial to derive the desired main results on complexity; see the key inequalities in Lines 717 and 777 of the appendix. If we use a naive step size rule without the prior knowledge of $\Delta$, the proved results may not hold. 
For example, if we set $\\eta = \\frac{\\delta}{10L}\\sqrt{\\frac{1}{cd^{3/2}T}}$, the complexity bound will become worse, i.e., $O\\left(d^{\\frac{3}{2}}\\left(\\frac{L^2(\\Delta + \\delta L)^2}{\\delta^2\\epsilon^4} + \\frac{L^4}{\\epsilon^4}\\right)\\right)$. Such a phenomenon is in contrast to the first-order setting where [79] developed an algorithm based on Goldstein subgradient method and set the step size as $\\delta$. \n\n A bit on the **positive** side is that, technically the zeroth-order setting does not need exact knowledge of $\\Delta$, which as the reviewer pointed out is not available in practice. Knowing an estimate of $\\Theta(\\Delta)$ would suffice for the algorithm design and theoretical results to go through. Of course, if the provided estimation of $\\Delta$ is too loose, the resulting complexity can be adversely affected but up to a constant. In practice, we would recommend users to identify an estimation of \\Delta given the specific problem setting, but indeed the complexity results can comprise for the zero-order setting if no such information about $\\Delta$ is available. \n\n Intuitively, the first-order information gives more information than zeroth order information such that the step size can be independent of more problem parameters while not sacrificing the complexity results. In the revision, we have highlighted this point and remarked that the design of a more practical step size rule is a promising future direction. \n\n3. **The following reference computing Goldstein stationary points concurrent partly to [30] might be relevant: L. Tian, K. Zhou, and A. M-C. So. On the finite-time complexity and practical computation of approximate stationarity concepts of Lipschitz functions. ICML, 2022.**\nThank you for pointing this relevant reference to us. We have added adequate discussion about the reference in the revision. \n\n4. **\"Clark\" -> \"Clarke\".**\nThank you for pointing this typo out to us. We have fixed it in the revision. \n", " This paper introduced two zero-order algorithms to compute the Goldstein stationary points for nonconvex nonsmooth problem. In contrast to the first-order case, the dependence on dimension is unavoidable for algorithms that only use function values. The authors show the gradient of a randomized smoothed function with a delta-ball is belong to the Goldstein delta-subdifferential, which forms the basis for the new gradient-free algorithms. They show the new algorithms compute a Goldstein stationary point in expectation within polynomial oracle complexity and the dimension dependence is only sqrt(d) worse than the convex/smooth case. They also proved a high-probability bound with a two-phase scheme.\n As nonconvex nonsmooth problems are everywhere especially in the DL setting, new practical algorithm with finite time oracle complexity is important and desirable nowadays. This paper studies the computation of Goldstein approximate stationary point, which has exhibited attractive algorithmic consequence in recent years. The main contributions are two zero-order finite-time methods which are further built upon an interesting observation that the randomized smoothed function with a delta-ball is belong to the Goldstein delta-subdifferential. The paper is well-written and easy to follow. My comments are as follows:\n\n* Proposition 2.3 basically repeats [78, Lemma 8]. 
In the proof, it might miss a norm in L629 and L634.\n* The step size of Algorithms 1, 2, and 3 seems dependent on $\Delta$, which is usually unknown in practice and is not necessary in the first-order setting for Lipschitz functions, e.g., in [79].\n* The following reference, computing Goldstein stationary points partly concurrently to [30], might be relevant:\n\n[R] Lai Tian, Kaiwen Zhou, and Anthony Man-Cho So. On the finite-time complexity and practical computation of approximate stationarity concepts of Lipschitz functions. ICML, 2022.\n\n* L45: \"Clark\" -> \"Clarke\" See main comments above. Yes.", " This paper established a novel optimality criterion for non-smooth non-convex Lipschitz functions, called the (δ,ε)-Goldstein stationary point, and proposed a gradient-free algorithm with its stochastic version by deriving the Goldstein subdifferential and uniform smoothing technique. The last-iterate convergence analysis of the proposed methods was given, which guarantees convergence to a (δ,ε)-Goldstein stationary point with high probability for both the deterministic and stochastic versions. State-of-the-art lower bounds of total oracle calls are given in terms of δ, ϵ, and Λ for the proposed methods. Numerical experiments have shown the effectiveness of the proposed methods. The theoretical result in this paper is interesting. The authors established a novel optimality criterion for non-smooth non-convex Lipschitz functions, called the (δ,ε)-Goldstein stationary point, and proposed a gradient-free algorithm with its stochastic version by deriving the Goldstein subdifferential and uniform smoothing technique. Though the paper is theoretically sound, there are still some questions that need to be discussed in this paper:\n\n1.\tThe authors proposed a class of subdifferential-based gradient-free algorithms. What is the advantage of the proposed algorithms compared to the gradient-based methods and zeroth-order methods?\n\n2.\tThe authors compared the performance of the proposed algorithms with different choices on MNIST, which is literally a small-scale and simple dataset. Why not use larger datasets to verify the excellent performance of the proposed algorithm? In addition, the authors' comparison algorithms are too few; the proposed methods could be compared with some more advanced zeroth-order and gradient-free optimization algorithms like INGD [R1].\n[R1] J. Zhang, H. Lin, S. Jegelka, S. Sra, and A. Jadbabaie. Complexity of finding stationary points of nonconvex nonsmooth functions. In ICML, pages 11173–11182. PMLR, 2020.\n\n3.\tIt seems that the authors trained a quite simple convolutional neural network model on the image classification task rather than some more modern and efficient models like [R2] and [R3] according to the experiment results, which is insufficient for validating the effectiveness of the proposed methods. \n[R2] He, K., Zhang, X., Ren, S., and Sun, J. 
What is the advantage of the proposed algorithms compared to the gradient-based methods and zeroth-order methods? The authors compared the performance of the proposed algorithms with different choices on MNIST, which is literally a small-scale and simple dataset. Why not use larger datasets to verify the excellent performance of the proposed algorithm?\n\n\n\n----------after feedback---------------\n\nThe authors address my comments well, so I increased my score.", " This paper studied gradient-free (zeroth-order) methods for nonsmooth nonconvex optimization problems, and provided a solid theoretical analysis for the proposed gradient-free methods. Notably, it established a useful relationship between the Goldstein subdifferential and uniform smoothing via appeal to the hyperplane separation theorem. Some experimental results demonstrate the effectiveness of the proposed methods. Novelty of this paper: Overall, it provides some solid theoretical results on gradient-free methods for nonsmooth nonconvex optimization.\n\nWeakness of this paper: The Lipschitz continuous condition used in the paper may not be mild. \n Some comments:\n\n1)\tCould we further relax the Lipschitz continuous condition in the gradient-free methods for nonsmooth nonconvex optimization?\n\n2)\tIn the convergence analysis (Theorems 3.2, 3.4-3.6), do we need to choose some small parameter $\delta$ that relies on $d$ and $\epsilon$, as in the existing zeroth-order methods for smooth optimization? \n Yes" ]
[ -1, -1, -1, -1, -1, 7, 6, 7 ]
[ -1, -1, -1, -1, -1, 4, 2, 4 ]
[ "MyrGIhpzvBK", "dguYxaTiyYu", "ksJioR-u1p", "ksJioR-u1p", "2aZjFJc-7v2", "nips_2022_A0ejsEHQu9w", "nips_2022_A0ejsEHQu9w", "nips_2022_A0ejsEHQu9w" ]
nips_2022_ZqgFbZEb8bW
Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning
People say, "A picture is worth a thousand words". Then how can we get the rich information out of the image? We argue that by using visual clues to bridge large pretrained vision foundation models and language models, we can do so without any extra cross-modal training. Thanks to the strong zero-shot capability of foundation models, we start by constructing a rich semantic representation of the image (e.g., image tags, object attributes / locations, captions) as a structured textual prompt, called visual clues, using a vision foundation model. Based on visual clues, we use a large language model to produce a series of comprehensive descriptions for the visual content, which is then verified by the vision model again to select the candidate that aligns best with the image. We evaluate the quality of generated descriptions by quantitative and qualitative measurements. The results demonstrate the effectiveness of such a structured semantic representation.
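The pipeline the abstract describes can be summarized in a few lines of pseudocode. All helper names below are hypothetical placeholders of ours (the paper's components are a vision foundation model for the clues and a GPT-3-style language model), and the prompt ending follows the wording used in the author responses below:

```python
def describe_image(image, vision_model, llm, n_candidates=5):
    # 1) Vision model extracts visual clues: tags, attributes, boxes, captions.
    clues = vision_model.extract_visual_clues(image)
    # 2) Clues are serialized into a structured textual prompt for the LLM.
    prompt = serialize_clues(clues) + " Describe the image in detail:"
    candidates = [llm.generate(prompt) for _ in range(n_candidates)]
    # 3) Vision model verifies: keep the candidate best aligned with the image.
    return max(candidates,
               key=lambda text: vision_model.image_text_score(image, text))
```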
Accept
All three reviewers have voted weak accept for this paper; the authors have engaged well with the reviewers and have improved their paper. I also recommend acceptance.
train
[ "l6e4KnEuh7Z", "NzbZRsFAJG", "mcmOrXFlWWp", "9Ew05muOW1j", "tyipraZCevA", "IRF9JybSAzQ", "emicOQp4QEC", "zR8wsN-4j1oS", "Zyj1579MCBF", "igmv8-tqZ1", "3b83J1IspH", "842cSKG2xv", "86L8AjVgMyC", "W8dGNQqiAnK", "raVOIUU3f6i", "AnC9BY2BOO_", "9PDdsUkyp9A", "5TYyT_N3awm" ]
[ "official_reviewer", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I would like to thank the authors for the rebuttal. About the baseline captioners, you can maybe take baselines in original BLIP paper such as LEMON or even some older models such as AoANet, but given the time frame, it is not needed. Most of my concerns are resolved and I still recommend acceptance for this paper.", " Thanks again for the great review!\n\nAs the end of the rebuttal phase is approaching, we would like to double-check whether you have any remaining concerns or questions we can address. We are also happy to provide additional information or clarification if needed.\n\nPlease let us know if you have any further questions or concerns :)\n", " Thanks again for the great review!\n\nAs the end of the rebuttal phase is approaching, we would like to double-check whether you have any remaining concerns or questions we can address. We are also happy to provide additional information or clarification if needed.\n\nPlease let us know if you have any further questions or concerns :)\n", " Thanks again for the great review! \n\nAs the end of the rebuttal phase is approaching, we would like to double-check whether you have any remaining concerns or questions we can address. We are also happy to provide additional information or clarification if needed.\n\nPlease let us know if you have any further questions or concerns :)", " Thanks a lot for your comments and suggestion! We appreciate your contribution in improving our work!", " Thank you for your time to think about my comments and suggestions. I stick with my pre-rebuttal rating as I have already recommended acceptance.", " Thanks for the constructive feedback!\n\n### Q1: Other metrics\nPlease refer to response to all reviewers: Other metrics besides SPIPE. We reported other metrics in Appendix B. \n\n### Q2: Is SPIPE correlated with human perception? \nFrom our observation, yes. For example, SPIPE suggests that in terms of completeness, human annotations>BEST>BLIP>Socratic model. For human evaluation, we have human annotations>BEST, BEST>BLIP, BEST>Socratic model. The conclusion is in accordance. Unfortunately, we are not able to perform large-scale comparisons on correlations to human evaluation, as it requires at least dozens of IPC methods to construct a statistically meaningful comparison.\n\n### Q3: Have authors considered exploring tasks beyond IPC? For instance, VQA. \nYes. The visual clues are a faithful and detailed description of the image, which can be used to answer visual questions leveraging the question answering ability of language models. Specifically, we replaced the ending of the prompt to be the question, e.g., we replace the “ *Describe the image in detail:* ” by “ *What is the man holding?* ”. We benchmark its performance in two Visual Question Answering (VQA) datasets -- we use the GQA dataset for probing the capability of scene understanding, and the OK-VQA dataset for the awareness of the commonsense knowledge. \n\nThe table below shows the evaluation results. BEST outperforms Socratic models significantly, suggesting our visual clues are better image representations. 
We also benchmark the accuracy on BLIP (finetuned on the VQA v2 and Visual Genome datasets) for reference, which is not directly comparable since its pretrain and finetune datasets have a significant overlap with the evaluation datasets.\n\n| Dataset | Method | Evaluation | Accuracy |\n|----------------|-----------------------------|-----------------------|-----------------|\n| GQA | Socratic model | Generative | 24.95 |\n| | | Discriminative | 26.89 |\n| | BEST | Generative | 37.00 |\n| | | Discriminative | 39.93 |\n| | BLIP (not zero-shot) | Exact match | 47.58 |\n| OKVQA | Socratic model | Generative | 16.50 |\n| | BEST | Generative | 28.89 |\n| | BLIP | Exact match | 43.62 |\n\nThe experimental details can be found in Appendix F. We observe that some of the failure cases are not actually wrong answers. For example, for an image with an elephant drinking water by a river, and the question “ *What place is pictured?* ”, the ground truth answer is “ *Shore* ”, while our generated answer is “ *Africa* ”, which is also not wrong.\n", " ### Q3: Improvement over BLIP is not significant\nPlease refer to the response to all reviewers for our discussion on the computation cost. Again, although GPT-3 seems to be a large model, with the development of cloud services, all practitioners need to do is call the APIs. \n\nWe disagree that our improvement over BLIP is not significant. Quantitatively, BLIP is trained on VG data, so it is well aligned with the domain. In contrast, BEST-general domain is not leaning towards any domain and is capable of handling images in the wild. Still, the SPIPE F-score of BEST-general domain is 15.8% higher than BLIP's. \n\nMore importantly, human evaluation shows BEST is **statistically significantly** better than BLIP in every aspect: accuracy, completeness, coherence, and humanlikeness.\n\nQualitatively, as shown in the examples in Figures 6, 8, 9, and 10, the level of detailedness and language quality of BLIP is not comparable to that of BEST. Take one example from Figure 6: \n> BEST: This image captures a surfer performing a cutback maneuver at the Superbank surf competition. The surfer is positioned in the middle of the frame and is relatively small in comparison to the surrounding waves. The waves are large and crashing, providing an impressive backdrop for the surfing action.\n\n> BLIP: A man riding a wave on top of a surfboard.\n\nBEST describes the scene vividly, while BLIP only provides a high-level summarization.\n", " ### Q2: Difference from PICa\nWe list the differences between PICa and BEST as follows:\n\n1. To answer the visual questions, we only need to extract the key visual concepts from the image, while IPC requires a comprehensive understanding of the image. It is relatively easy to represent the key concepts using a text prompt, while it is not trivial to exhaust the details in an image -- after all, an image is worth a thousand words.\n2. As a VQA method, PICa outputs deterministic single words or simple phrases, while our framework outputs creative long texts. This introduces many difficulties – how to control the level of detailedness? How to remove hallucination while maintaining creativeness? How to define a “good” output?\n3. PICa exploits the whole training set of OK-VQA to construct the prompt examples, while we do not involve any training data.\n\nThe only common ground between BEST and PICa is to use GPT-3 to obtain the final outputs. 
Our work is NOT a direct extension of PICa.\n\n### Q3: On the metric SPIPE\n“ *There is no experimental proof in the article that the new evaluation metric SPIPE could evaluate IPC better than the previous metrics, such as SPICE and METEOR.* ” \n\nPlease refer to the response to all reviewers: Other metrics besides SPIPE. We have added discussions on why other metrics cannot work well here, and also reported other metrics. \n\nAs shown in the SPICE paper [r2], evaluation metrics based on semantic propositional contents can achieve better correlation with human evaluation. SPICE takes reference text for comparison, because for common captioning benchmarks, there are multiple high-quality references, which together form a good description of the images. In the IPC task, however, the reference is far from enough. In contrast, the scene graphs form a better description of the image. That is the reason we extend the SPICE metric to our SPIPE metric.\n\nWe understand that we do not perform large-scale comparisons on correlations to human evaluation, like what SPICE did. This is not only because the conclusion is already verified in the SPICE paper, but also because it requires at least dozens of IPC methods to construct a statistically meaningful comparison, which are not available online. Although not statistically meaningful, we can clearly observe some of the correlations between SPIPE and human evaluation: SPIPE suggests that in terms of completeness, human annotations>BEST>BLIP>Socratic model. For human evaluation, we have human annotations>BEST, BEST>BLIP, BEST>Socratic model. The conclusions are in accordance.\n\n“ *There is also no analysis of whether the SPIPE evaluation metrics can evaluate different aspects.* ”\n\nPrecision is defined as the fraction of visual concepts related to the image among the concepts in the generated paragraph, and recall is the fraction of visual concepts in the paragraph among all the image concepts. It is intuitive that precision and recall correspond to accuracy and completeness. \n\n“ *Besides, SPIPE needs human-annotated graphs for each paragraph in the dataset, which limits the generalizability of this metric.* ” \n\nOther existing evaluation methods require human-annotated texts. To the best of our knowledge, all commonly adopted automatic evaluation metrics require human annotations. \n\n“ *As a result, only using the SPIPE metric in all the experiments is not convincing.* ” \n\nThe gold standard of text evaluation is not any of the evaluation metrics, but human evaluations, and the BEST output is much better than other baselines according to the human evaluations. It is not true that we are " *only using the SPIPE metric* ". In addition, we reported other metrics in Appendix B.\n", " Thanks for the valuable comments!\n\nWe would like to first highlight that our framework **does not involve any training or training data**, which is essentially different from other IPC methods and PICa.\n\n### Q1: Comparison with traditional IPC methods\nIt is unfair to compare the settings of BEST and traditional IPC methods, since traditional IPC methods are fully trained on labeled task data while our approach is almost zero-shot. As a result, the current metrics on these benchmarks can hardly provide a fair score across two language styles, even when their meanings are the same under human judgment. 
For example, human annotators without domain-specific knowledge tend to say, “ *The man is riding a white surfboard* ”, while our framework outputs “ *This image captures a surfer performing a cutback maneuver* ”. For the current metrics, the difference in language styles would lower the scores of our framework compared with IPC methods, since their well-trained language style is closer to the human annotation. As a matter of fact, our BEST scores in Appendix B Table 5 are not as good as those of IPC methods. For example, the scores for paper [r1] are BLEU-4 10.58, METEOR 17.86, CIDEr 30.63.\n\nAs a matter of fact, our framework and traditional IPC methods are essentially different pipelines. Our framework has unique advantages that traditional IPC methods do not have:\n\n1. **Easier implementation**, as discussed in the response to all reviewers on computation cost.\n2. **Versatile to different application scenarios**, as suggested by Figure 5.\n3. **Robust to domain shift.** As suggested by Figure 1 of [r1], for a black and white image which the model is not trained on, the inference of the well-trained model does not make sense. The output is like:\n> *A man is skateboarding on a skateboard. He is wearing a black shirt and black pants. He is wearing a black cap and a black hat. A man is wearing a black cap and a black shirt. A man is wearing a black shirt and a black pants. A man is wearing a black shirt and a black pants. A man is wearing a black shirt and a black pants…… A man is wearing a black shirt and a black pants.*\n\n&ensp;&ensp;&ensp; Yet the man actually wears a white hat. This is a common issue of models trained on small language datasets. \nIn comparison, our Figures 9 and 10 include some black and white examples. \n\n4. **Better language quality.** The large language models are trained on a tremendous amount of data, leading to much better capability in text generation. Figures 6, 8, 9, and 10 show a few examples of the smooth, coherent paragraphs. In contrast, here is a successful example from Figure 1 of [r1]:\n> *Two people are sitting on a bench. The elephant is sitting on the dirt. The man is sitting on top of the elephant. The woman is wearing a white shirt. The man is wearing a black shirt. There is a tree behind the elephant. There are trees on the ground. There are trees in the background.*\n\n&ensp;&ensp;&ensp; We cannot even tell how many people are there according to the text (as a matter of fact, there are two), due to the lack of coherence among the sentences.\n\n5. **Awareness of commonsense knowledge.** As suggested by Figure 1, our framework outputs background knowledge like “ *This image captures the everyday life of Cubans, with their traditional horse-drawn carts still in use.* ”, which smoothly completes the paragraph. Traditional IPC methods cannot have enough training data to enable such capability.\n\nWe have added this discussion in Appendix D to prevent confusion for the readers.\n\n [r1] Luke Melas-Kyriazi, Alexander Rush, George Han. 2018. Training for Diversity in Image Paragraph Captioning. (Although written in 2018, this work ranks #2 on Papers With Code: https://paperswithcode.com/sota/image-paragraph-captioning-on-image-paragraph)\n", " Thanks for the valuable feedback!\n\n### Q1: Why BEST is better than Socratic model\n\nThe major issue of the Socratic model is that its prompt contains inaccurate/useless information, and is not informative enough. Its prompt is \n\n*I am an intelligent image captioning bot. This image is a {img_type}. There {num_people}. 
I think this photo was taken at a {place1}, {place2}, or {place3}. I think there might be a {object1}, {object2}, {object3},... in this {img_type}. A creative caption I can generate to describe this image is:* \n\nThere are three issues with this prompt:\n\n1. Useless information. We find that a much shorter prompt, e.g.,\n*\"This is a {img_type} taken at {place1} containing {object1}, {object2}, {object3}… Generate a caption:\"*\nhas comparable (or even better) performance to the one above.\n2. Inaccurate information. As we discussed in Section 7, VL models trained with contrastive loss cannot differentiate some details in the sentence. We observe that the *num_people* obtained by the Socratic model is basically a random guess.\n3. Lacking information. The *img_type* only contains four categories: *photo*, *cartoon*, *sketch*, *painting*. The object list is a list of common objects, not as comprehensive as ours. The place information is usually redundant given the object information. \n\nIn contrast, our prompt is much more informative and well-constructed. We have included this discussion in Appendix D. \n\n### Q2: Training cost\nOur framework does **not** require training. We exploit the existing pretrained models directly for inference. Please refer to the response to all reviewers for more discussions on the computation cost. \n\n### Q3: Color for $f_t(\cdot)$\nAs introduced in Section 3.1, the open vocab tagger, i.e., the orange block in Figure 2, is composed of an image encoder $f_v(\cdot)$ and a text encoder $f_t(\cdot)$. We have improved the descriptions in Section 3.1 to make it clearer. \n\n### Q4: Missing Florence reference\nThanks for pointing it out. We have added the reference in Line 35.\n\n### Q5: How are tags generated\nAs mentioned in Section 5.1, we collect the most frequently searched 400K queries in Bing Search as the input tag list.\n\n### Q6: Add bold headings in Section 3.1\nThanks for the great suggestion. We have added bold headings in Section 3.1.\n\n### Q7: Intuition behind sentence removal when selecting candidates (Line 147/148)\nLarge language models sometimes have hallucination issues, i.e., they might generate unrelated sentences in the paragraphs. For example, a paragraph beginning with \" *A couple is hugging on the beach.*\" is likely to be followed by \" *It's a beautiful day and they're enjoying the sun and each other's company.*\" even if there is no visual clue suggesting the weather. Therefore, we divide the generated paragraph into sentences, and process the sentences with the open vocab tagger again – if there is a sentence that is not aligned with the image, i.e., the alignment score between the sentence and the image is below $\gamma$, we remove it. We have added more explanation in Section 3.3 to make it clearer (see also the sketch after Q8 below).\n\n### Q8: Selection on $\tau$\n$\tau$ is the sampling temperature for the GPT-3 model. GPT-3 allows a temperature from 0 to 1. Intuitively, a larger $\tau$ value encourages the model to have more creative outputs, while $\tau=0$ corresponds to argmax sampling, and is more suitable for well-defined answers (e.g., answers for \" *What is one plus one?* \"). Our application is creative in nature. Moreover, a larger $\tau$ value promotes more diversity among the generated candidates, so we have a higher chance to select a good one. Therefore, we adopt $\tau=0.8$, which is a relatively high temperature. 
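To make these two steps concrete, here is a minimal sketch of the candidate synthesis and sentence filtering; the model name, token budget, naive sentence splitting, the helper `image_text_score`, and the placeholder value of $\gamma$ are illustrative assumptions rather than our exact configuration:\n\n```python\nimport openai  # assumes the pre-1.0 OpenAI completion API\n\ndef synthesize_and_filter(prompt, image_text_score, k=5, tau=0.8, gamma=0.5):\n    # Candidate synthesis: a higher temperature yields more diverse candidates.\n    response = openai.Completion.create(\n        model=\"text-davinci-002\",  # illustrative model name\n        prompt=prompt,\n        temperature=tau,\n        n=k,                        # sample k candidate paragraphs\n        max_tokens=256,             # illustrative length budget\n    )\n    candidates = [c[\"text\"].strip() for c in response[\"choices\"]]\n    filtered = []\n    for paragraph in candidates:\n        # Keep only sentences whose image-alignment score reaches gamma.\n        sentences = paragraph.split(\". \")\n        kept = [s for s in sentences if image_text_score(s) >= gamma]\n        filtered.append(\". \".join(kept))\n    return filtered\n```\n\nHere `image_text_score` stands for the open vocab tagger's sentence-image alignment from Section 3.1; in practice the sentence splitting and the choice of $\gamma$ deserve more care than this sketch suggests. 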
We have added more discussion in Section 5.1.\n\n### Q9: Tag discussion on Line 206\nWe believe it is better to introduce how the tags are collected in the experiment section. We want our framework to be general and able to handle tag lists from different domains. For example, if the images are from the medical domain, we can replace the tags with medical terminologies. Therefore, in Section 3.1, we do not introduce how the tags are collected. \n\n### Q10: Exact inputs to BEST\nThanks for the great suggestion! We include a few prompt examples in Figure 7. It should be straightforward to construct the prompts by looking at the examples. We believe it would be easier to understand than plain tables. Again, there is no module re-trained. All we did was the inference.\n", " Thanks for the constructive feedback! We appreciate your insight that bridging across modalities using unimodal models is a good way to exploit the data in the world.\n\n### Q1: Errors are compounding\nIndeed, we observe that the errors in the final output are mainly due to the errors in the local tagging / captioning modules, since the local regions may be small, and therefore have a domain shift from their training data. To alleviate the issue, we filter out the too-small regions. \n\nYet this actually suggests our framework has better interpretability. Consider an end-to-end encoder-decoder model: an improper configuration in the encoder may also propagate to the final output. But if there are errors in the output, it is very difficult to diagnose whether the problem is in the encoder or the decoder. In contrast, visual clues explicitly show which module needs improvement. And thanks to our framework’s composability, if better local tagging and captioning modules appear in the future, we can easily replace the modules without any extra training cost.\n\n### Q2: Other metrics\nPlease see the response to all reviewers. \n\n### Q3: Include scene text\nThanks for the suggestion. We added one example in Section 6, where an OCR module is plugged in. For text-heavy scenarios, the variant with OCR indeed works better.\n\n### Q4: VisualGPT\nWe have added discussions in Section 2.\n\n### Q5: Baseline for long captioner\nAs you may notice, this is not a fair comparison, as our framework does not require image / long text pairs for training at all. Yet we agree that having a model trained on WIT or Localized Narratives for reference is useful. Do you mind sharing with us the model you have in mind that should be used here for comparison? We will include the comparison in the next version.\n\n### Q6: Proprietary components\nThe two major proprietary components are the Florence model and the 400K Bing queries. Both will be available to the public soon via API. Once they are publicly accessible, we will open-source our code as well.\n\n### Q7: Non-proprietary pipelines\n\nGPT3 is publicly available via API. We replaced the Florence model with CLIP (Vi), and here are the scores:\n\n| Tagger | F-score | Precision | Recall |\n|----------------|------------------|-----------------------|-----------------|\n| Florence | 10.0 | 17.5 | 7.6 |\n| CLIP | 7.8 | 16.4 | 5.6 |\n\nQualitatively, the difference between Florence and CLIP lies in the accuracy of tagging. We observe that there are more irrelevant tags in the CLIP results than in the Florence results. \n\nAlso, we have tested our framework on fully open-sourced language models, e.g., GPT2 (gpt2-xl). But the output paragraphs do not make sense at all. 
As discussed in [r1], some capabilities of language models only emerge when the models are large enough. So GPT2 does not have the zero-shot visual clue understanding and summarization ability we use here. However, GPT2 may still work with some finetuning.\n\n### Q8: Examples for visual clues\nWe have added a few examples of visual clues in Appendix A Figure 7. \n\n### Q9: Prompt for tag match\nYes, we followed the settings of CLIP.\n\n### Q10: Intuition behind the paragraph on line 38\n“ *The visual clues are interpretable, not only for humans, but also for machines…… while not cluttered with irrelevant information from the visual clues.* ” \n\nThis is based on our observation that large language models can process almost any textual information. For example, they generate reasonable outputs with inputs like “ *tell me more about Labrador* ” or “ *what does Mount Rainier look like* ”. So we believe that the model can digest visual clues and provide related information about the visual concepts. \n\n“ *Whereas this open-loop process could potentially suffer from object hallucination issues …… back to the original image.* ” \n\nThe hallucination issues of language models are documented extensively in the literature, e.g., [r2][r3]. We have added the references to our paper.\n\n### Q11: Data leakage\nWe have removed the claims that this is not a data leakage concern. Indeed, this is the reason that we do not claim our framework as “ *zero-shot* ”, even though it can handle images in the wild in a zero-shot way. Yet it is too clumsy to retrain the BLIP-large model without VG data. \n\n### Q12: Synthetic data\nWe generate 15K synthetic data samples since this is just a proof-of-concept experiment. A larger amount of data may lead to better performance. We have added more experimental details in Section 6. \n\n[r1] Jason Wei, Yi Tay, Rishi Bommasani, et al. 2022. Emergent abilities of large language models.\n\n[r2] Joshua Maynez, Shashi Narayan, Bernd Bohnet, and Ryan Thomas Mcdonald. 2020. On faithfulness and factuality in abstractive summarization.\n\n[r3] Chunting Zhou, Graham Neubig, Jiatao Gu, et al. 2020. Detecting Hallucinated Content in Conditional Neural Sequence Generation.\n", " Here is a summary of changes in our draft to reflect reviewers’ opinions in detail.\n\n - (ErrM) Section 1 line 35: Add Florence citation.\n - (b7uK) Section 1 paragraph 38: Adjust wording and add citation for the hallucination issue of language models.\n - (b7uK) Section 2: Add discussions on VisualGPT.\n - (ErrM) Section 3.1: Add bold headings and revise the introduction on the tagger to make it clearer.\n - (ErrM) Section 3.3: Add more intuition on what sentences are removed.\n - (b7uK) Section 5.1 Footnote 2: Remove the claim that the data leakage is not a concern.\n - (ErrM) Section 5.1: Add more details on why we select $\tau=0.8$.\n - (b7uK) Section 5.3, Table 2: Add an ablation study with the CLIP model.\n - (b7uK) Section 6: Add more experimental details for the synthetic data experiment.\n - (b7uK) Section 6: Add an example with scene text.\n - (b7uK, ErrM) Appendix A Figure 7: Add a few examples of the prompts for the language model.\n - (ErrM) Appendix D: Add discussions on why the Socratic model cannot perform well. \n - (8CQ9) Appendix D: Add why it is unreasonable to compare our framework to traditional IPC methods.\n - (ErrM, 8CQ9) Appendix D: Add discussions on the computation cost. 
\n - (WzAU) Appendix F: Add experiment results on VQA tasks.", " We thank all the reviewers for their time and effort in reading and reviewing our paper. \nPlease kindly refer to the individual responses below for our response to each question. We have also updated our draft to reflect reviewers’ opinions in detail.\n\n&nbsp;\n### Other metrics besides SPIPE (To b7uK, 8CQ9 and wZAU)\n\n**The other metrics were already reported in Appendix B (Table 5)**. The proposed framework still performs the best. From the table we can also see why the n-gram based methods do not work well – although BLIP and Socratic models have meaningful outputs, the resulting scores can be nearly zero. This is because, on the one hand, the output paragraphs can be very versatile. It is not meaningful to ask the models to output specific sentences/n-grams. On the other hand, the reference text is not of high quality. For each image, only one reference paragraph is provided (unlike other caption benchmarks, e.g., COCO). And there are usually many details missing in the reference text, which is not as exhaustive as the scene graphs. Take the first image of Figure 6 as an example. Although the human annotated paragraph is already a long one, it does not mention the color of the table and chair, nor the water bottle on the side of the backpack. \n\nWe also conducted human evaluation, which we believe should be the gold standard, rather than any automatic evaluation method. \n\n&nbsp;\n### Computation cost (To ErrM and 8CQ9) \n\nOur inference-only framework involves **no training** and **no training data**. Therefore, the cost of building such a pipeline is much lower than for traditional frameworks.\n\nTraditionally, to build a machine learning application, researchers need to (1) collect a dataset; (2) select a training framework; and (3) train the model with repeated hyper-parameter tuning. Then we have a model that is specifically designed for such an application. If the business need is adjusted, the process needs to be repeated to accommodate the shifted application domain.\n\nIn contrast, what we propose is a lightweight solution: the pretrained models are either available via API service (e.g., GPT3, captioner), or will be available soon (e.g., the Florence tagger). To build our framework, researchers only need to plug in the APIs. As suggested by Figure 5, it can handle various scenarios with only minor modification. \n\nOne may argue that some applications require fast inference, which our large pretrained models cannot handle. Yet, using the synthetic data generated by BEST (~20K), we have successfully replaced GPT-3 with a DeBERTa-large model with comparable performance. Used as a data generation pipeline, BEST is faster and more stable than recruiting human labelers. \n\n\n", " The visual clues paper tries to leverage the recent advances in both the vision-and-language and natural language understanding fields to generate comprehensive and rich descriptions of images. The model works in three steps: (i) generate a set of visual clues (i.e., information) about the image to embed or encode rich visual information, (ii) pass these visual clues as a prompt to a large language model to generate candidate paragraphs, and (iii) select the best paragraph from the candidates. The paper also proposes a new metric to better evaluate long descriptions using scene graph comparisons. The human and automatic evaluations both suggest that the descriptions generated by this method are almost on par with the human annotations. 
Legend: S (Strength), W (Weakness), C (General comment)\n- C: The paper is very well written and easy to understand. Though, there are some things which are not clear and not well explained, probably due to the fact that some of the models used in this paper are proprietary. \n- S: The paper connects the recent advances in the vision-and-language and NLP fields really well by leveraging almost only pretrained models to generate very coherent and comprehensive descriptions of the images. Since there is almost no training involved, it further suggests how unimodally strong models can be bridged across modalities leveraging abundantly available unimodal data in the world.\n- S: The approach to generate visual clues is quite exhaustive and covers various sources of visual information: tags using a contrastively trained VL model, an object detection model whose bounding box classes are again calculated by the contrastively-trained VL model, and a full-image caption generator. \n- S: One important contribution of this paper is a new automated metric for evaluating longer paragraph descriptions of the images, as existing n-gram based metrics are not suitable for the variety seen in longer descriptions. The metric uses scene graph based similarity and is an extension of the SPICE metric.\n- S: Human evaluations on the outputs of the model suggest that the model's outputs are almost on par in terms of completeness and humanlikeness but fall behind in accuracy and coherence.\n- W: The previous point brings me to one of the weaknesses of this approach: since we are relying on pretrained models in the complete pipeline, any errors eventually compound as we go across the pipeline, which means that if any wrong content is passed via the visual clues to the text generator, the errors compound further, which may be why the accuracy and coherence are low.\n- W: It is hard to understand the significance of SPIPE without seeing results on other metrics for the baselines as well. As of now, only SPIPE results are reported and there is no discussion on why n-gram metrics probably don't work for the paragraph comparison. \n- W: The model seems to ignore scene text in the image, which might be an issue in real world scenarios as scene text is omnipresent. Even when you are reading this review, you are reading scene text.\n- W: There have been works in the past such as VisualGPT [1] which try to adapt the language model directly to image captioning. A comparison or discussion with them would be helpful.\n- W: The baselines except BLIP are somewhat weak. Even BLIP, when finetuned on COCO captions, would be more inclined to generate shorter captions. This is also evident from the examples in the supplementary. Since COCO captions are short, it would make more sense to use and compare against captioners trained on datasets which contain longer image descriptions such as WIT and Localized Narratives, which the paper doesn't do as of now. Most of the models used as baselines are tuned for generating single sentence captions.\n- W: Most of the models used in this paper are proprietary and not available to the public, so it is hard to make any comparison down the line. What would be interesting is to use CLIP instead of Florence along with a publicly available large language model to see how this approach performs.\n\n[1] Chen, Jun, et al. \"Visualgpt: Data-efficient adaptation of pretrained language models for image captioning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. 
- It would be great to see the exact visual clues as well for the images in the supplementary. Without that context, it is hard to understand what was passed to GPT3 to generate the description.\n- Will any information about the top 400K Bing queries be provided?\n- There is no basis behind claiming that n-gram based metrics don't work for longer paragraph generations for images.\n- When the tags are matched with the images, are the tags directly matched or is some sort of prompt used?\n- What is the source/intuition behind the paragraph on line 38?\n- I disagree with footnote 2 and believe that this overlap is somewhat impactful. The Visual Genome captions are not just semantic propositions but are full captions. Also, in a single image, there are multiple crops for which we have the captions. Now, when these crops along with their captions compound, they do provide a lot of information about the image, which is a non-trivial leak.\n- On line 292, how much BEST-generated data was the BLIP-large model trained on? Yes, the authors have discussed the limitations. In general, because this model is developed on proprietary models, it will not be available, so it can't have a direct impact. ", " The paper presents a general framework (BEST) for generating semantic visual clues by capturing information through tags, captions, and object detections by leveraging existing pre-trained visual LMs and textual LMs. The framework enters them in an ascending order of their quality (highest quality predictions at the end): local descriptors, captions, and tags as input to a large-scale LM to generate K candidate paragraphs. The last step leverages the vision language model to select the top K candidates that align best with the image. The paper presents an extensive set of experiments against the state-of-the-art to show the effectiveness of the proposed framework. The paper has both ablation and qualitative results to demonstrate how each component contributes to the final performance and quality of the generated paragraphs. The paper also proposes an evaluation metric, SPIPE, that captures Accuracy, Completeness and Coherence. \n Strengths: \n1. Overall, the paper is well written and is easy to read.\n2. The paper presents a general framework (BEST) for generating detailed image caption paragraphs. The quantitative/qualitative evaluation results show the effectiveness of BEST against BLIP-large and the Socratic model in Table 1. \n3. Ablation results in Table 2 show the effectiveness of each component of BEST, and Table 3 captures the Accuracy, Completeness and Coherence components of BEST against Ground-truth/BLIP/Socratic.\n4. Fine-tuning BLIP-large also improves results, as shown in Table 4.\n5. The paper presents many real-world settings where BEST could be useful. Many detailed results are presented in Figure 5. \n6. Section 7 clearly calls out limitations and further improvements to BEST. \nWeakness: \n1. Given that the Socratic model is solving a similar problem, it would be good to discuss how and why BEST is so much better than the Socratic model in the evaluation results. \n2. In related work, more details around how the BEST model is different from the Socratic model would be helpful. \n3. Given that there are many pre-trained models involved and fine-tuning is also presented: how much does it cost in terms of time/resources to train BEST? How much more expensive is the training of the BEST model when compared to the Socratic model? \n Detailed questions/comments\n\nIn Figure 2, could the authors highlight what color is being used to represent the text encoder $f_t(\cdot)$? 
\n\nLine 35: Florence reference is missing. Please reference it at the first introduction. There is a reference later, e.g., on Line 106. \n\nLine 112: How are the tags $\{t_i\}_{i=1}^{N}$ generated? \nSection 3.1: add/create sub-subsections or bold headings to improve organization. \n\nLine 147/148: Could more intuition be added about what sentences are kept or removed in the candidate selection step?\n\nLine 204: Some details on how the temperature $\tau$ should be chosen in practice (intuition is needed)? What happens if $\tau$ is 100 vs 0.8 as used in the paper? \n\nLine 206: Tags are discussed. Could you refer back to Line 113, where it was unclear how these N tags are selected?\n\nSome more clarification on what the exact inputs to BEST are would improve reproduction of the results. What models need to be re-trained/re-run, etc.? As the paper is leveraging mostly pre-trained models, it would be good to summarize in a Table/Figure how each component participates in the final BEST model. \n Already captured in Section 7 of the paper. The authors could add some \"additional societal biases that textual language models suffer from\". Add some details on the environmental pros/cons of training these large models. ", " The authors propose a new framework for image paragraph captioning in this paper. The framework extracts text descriptions from images using different pretrained modules, subsequently feeds these extracted descriptions to a language model for paragraph generation, and finally selects the optimal paragraph from several candidate results using a cross-modal selection module. Also, to better measure this task's results, the authors propose a new evaluation method, SPIPE, based on the SPICE evaluation method. Strengths\n1. The framework proposed in this paper can generate a paragraph for a given image, which is much longer than the previously widely used image caption.\n2. The authors propose a new evaluation metric SPIPE to measure the results of IPC.\n\nWeakness\n1. Lack of comparison with other traditional IPC methods. There are many previous IPC methods mentioned in the Related Works section, but their results are not compared with the proposed method's results in the experimental section.\n2. I think this work seems like a direct extension of PICa to the IPC task. The novelty may be limited.\n3. There is no experimental proof in the article that the new evaluation metric SPIPE could evaluate IPC better than the previous metrics, such as SPICE and METEOR. There is also no analysis of whether the SPIPE evaluation metrics can evaluate different aspects. Besides, SPIPE needs human-annotated graphs for each paragraph in the dataset, which limits the generalizability of this metric. As a result, only using the SPIPE metric in all the experiments is not convincing.\n4. In Table 1, compared with BLIP-large, BEST–general domain improves by 1.2 in F-score. Due to more processes (Candidate Synthesis & Selection) and higher demands on computing resources (GPT-3), I don't think the improvement is significant.\n See the “weakness”.\n\nHope the authors improve this paper according to the “weakness”. The authors have addressed some of the limitations and potential negative societal impact of their work.
From the image, open-vocabulary models like CLIP generate tags, object detectors generate a list of objects along with their locations, and image-captioning models generate (shorter) captions of the entire image or image regions. All of these outputs are combined into a natural language prompt (called \"visual clues\" by the authors), which is then fed to a language model like GPT-3. The language model then generates multiple candidate captions from the visual clues, which are then ranked by a contrastive model like CLIP. The authors also propose SPIPE, a new evaluation metric for measuring performance on image paragraph captioning. Using this metric, the authors show that their method outperforms other baselines. The authors also conduct human evaluations, showing that their method typically outperforms previous ones, and can be competitive with human annotations. **Strengths**\n\n1) The ideas presented in this work are (to the best of my knowledge) novel, and the research direction is timely.\n\n2) The experimental setup and results are solid.\n\n3) The paper is very clear and well written.\n\n**Weaknesses**\n\n1) One concern is that a great portion of the claims that their method is better than previous work relies on a metric proposed in this same work. The authors explain why other evaluation metrics might not be ideal, but I believe it would still be informative to report them in the paper, especially since there exist image paragraph captioning benchmarks that use other evaluation metrics. 1) Is SPIPE correlated with human perception?\n\n2) Have the authors considered exploring tasks beyond IPC? For instance, VQA. To the best of my knowledge, the authors adequately discuss the limitations of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 3 ]
[ "842cSKG2xv", "9PDdsUkyp9A", "AnC9BY2BOO_", "raVOIUU3f6i", "IRF9JybSAzQ", "5TYyT_N3awm", "5TYyT_N3awm", "Zyj1579MCBF", "igmv8-tqZ1", "9PDdsUkyp9A", "AnC9BY2BOO_", "raVOIUU3f6i", "nips_2022_ZqgFbZEb8bW", "nips_2022_ZqgFbZEb8bW", "nips_2022_ZqgFbZEb8bW", "nips_2022_ZqgFbZEb8bW", "nips_2022_ZqgFbZEb8bW", "nips_2022_ZqgFbZEb8bW" ]
nips_2022_vQzDYi4dPwM
Size and depth of monotone neural networks: interpolation and approximation
Monotone functions and data sets arise in a variety of applications. We study the interpolation problem for monotone data sets: The input is a monotone data set with $n$ points, and the goal is to find a size and depth efficient monotone neural network with \emph{non negative parameters} and threshold units that interpolates the data set. We show that there are monotone data sets that cannot be interpolated by a monotone network of depth $2$. On the other hand, we prove that for every monotone data set with $n$ points in $\mathbb{R}^d$, there exists an interpolating monotone network of depth $4$ and size $O(nd)$. Our interpolation result implies that every monotone function over $[0,1]^d$ can be approximated arbitrarily well by a depth-4 monotone network, improving the previous best-known construction of depth $d+1$. Finally, building on results from Boolean circuit complexity, we show that the inductive bias of having positive parameters can lead to a super-polynomial blow-up in the number of neurons when approximating monotone functions.
Accept
Surprisingly strong result about the expressive power of monotone networks
train
[ "QVrjiiO2XXH", "8kdNcm8VD7l", "D248y5_AZLV", "0QLCnqjAE91", "2kvDTAJPaA", "RmE8svGi4XX" ]
[ "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you very much for your feedback and for raising several important and interesting questions. \n\n- “These are interesting theoretical findings and help us understand the role of depth in deep learning.”\n\n**Comment**: Thank you! Indeed, this is a theoretical paper focused on mathematical questions.\n\n- “The threshold activation function is not continuous, which is difficult for implementations.”\n- “Is it possible to derive similar results when a continuous activation function is used?”\n\n**Answer**: We agree that the threshold activation is incompatible with gradient-based algorithms. Despite this, theoretical insights about threshold neural networks can help shed light on questions related to networks with other activations. One indication comes from the recent literature about networks with threshold gates: e.g., references [25,35] in our paper. Another reason is that results regarding thresholds can often generalize to differential units such as sigmoids; see our answer below. Finally, our study introduces a scenario where a function can be approximated at constant depth but not at depth 2 (regardless of the size). We are unaware of any similar phenomenon in the literature and believe that illustrating it for the first time is of interest, even when considering a non-differentiable activation. \n\nLet us comment a bit more about continuous activations. Since we consider monotone networks, it makes sense to focus on monotone activations. The monotone activations we looked at can roughly be classified into two classes: ReLU-like activations and sigmoidal activations of sigmoidal type. From the proof of Lemma 1, we see that the convexity of ReLU (and most of its variants) turns out to be a severe hindrance to approximation in the monotone setting. As for sigmoidal activations, observe that the threshold function can be approximated arbitrarily well, almost everywhere, by sigmoids. Thus, our approximation results apply, mutatis mutandis, to sigmoid activations as well. However, to state such results would require us to introduce an approximation parameter that will depend on the separation between data points. For this reason, we chose to focus on the threshold function, which allows us to state clean theorems while capturing the intricate subtleties introduced by having monotone constraints. Please let us know if you have any other concerns concerning the activation function. We are happy to include and expand the discussion in the final version if you think it is appropriate. Furthermore, if you have any other suggestions for other types of differentiable and monotone activation functions that could be interesting to investigate, please let us know.\n\n- “The monotonicity of data or multivariate functions is often ruined by noise in practice.”\n\n**Answer**: The effect noise has on monotone data is an interesting problem. Let us note that even mild noise models can drastically change the considered setting. Indeed, any reasonable (non-monotone) noise could turn monotone functions into non-monotone, for example, if the function is constant. Since our considered architectures always produce monotone predictors, there can be no hope in approximating such noisy functions beyond the noise level, at least not in the general case. To cope, we could add extra assumptions, like bounded noise and strict monotonicity or convexity. Another option would be to relax our constraints to allow some weak non-monotonicity. 
Alternatively, as in isotonic regression, one could ask for the best monotone network which fits the data, but this would require restricting the size/depth of the network beforehand. As you can see, while being an exciting and relevant question, adding noise would lead to results of very different flavors and seems out of scope for the current paper. We will mention that a limitation of this work is that it considers the noiseless setting, but we also see it as a direction for future research that can build upon and expand our work. Having said that, there is also room for optimism when considering monotone interpolation in practical settings. Monotonicity may indeed be a brittle property, but nevertheless, a theoretical analysis of the noiseless setting can be of value. We observe that several papers, such as [8, 11, 19, 22, 30], work with monotone data sets and train monotone networks. Those references can serve as an indication that dealing with monotone data sets is not entirely hopeless. \n\n**In conclusion:** \nAs you can see from the above and the other reviews, this is a highly theoretical paper aiming to understand the interplay and possible tradeoffs between depth, efficiency, and monotonicity for approximation and interpolation problems. We hope we have explained more clearly the significance and relevance of our work. We also hope you will kindly re-evaluate your score and judge the paper according to its theoretical contributions.\n\nWe would be happy to address any other concerns you might have.\n", " Thank you very much for your feedback. We appreciate your many suggestions, which will improve our paper's readability.\n\nBelow we address your questions and the points you've raised:\n\n- \"I particularly enjoyed Section 5, where the authors used ideas from monotone complexity theory of Boolean functions to construct a monotone function that is much harder… \nto approximate with monotone neural networks\"\n\n**Comment**: Thank you! This technique of reducing lower bounds for neural networks to lower bounds on circuits seems to be quite powerful. We think there are more connections to be made between complexity theory and neural networks and hope this paper will lead to more results in this direction. \n\n- \"There is the additional problem of training that is not discussed here.\" \n\n**Answer**: We agree. The scope of this paper is expressivity questions. There are many interesting questions regarding optimizing the loss and generalization bounds for monotone neural networks. We hope our paper will lead to further study of these questions, and we will mention some of them in a future research subsection. \n\n- \"It would be nice if the authors could reference the classical literature on monotone approximation\"\n\n**Answer**: Thank you very much for the references. We will be sure to incorporate these references, look for more sources, and add a proper discussion. \n\n- \"The manuscript would benefit from another round of polishing\"\n\n**Answer**: Since submitting the paper, we have made an independent pass and made many minor changes to the style. We also plan to make another pass before submitting the final version, and we will address all comments not captured in our previous pass. Thank you for bringing these errors to our attention.\n\n- \"In Theorem 2, the authors prove that if f is L-Lipschitz then there is a nearby 4-layer neural network with O(d(Lsqrt(d)/eps)^d) neurons. This is a super-exponential blow-up in d. 
Do the authors believe that O(d(Lsqrt(d)/eps)^d) is necessary? Usually, one expects that as d increases the approximation power of the network increases.\"\n\n**Answer**: This is a terrific question. We do not know whether our obtained bound is optimal, but it seems that at least some exponential blow-up is necessary. One factor which comes into play is the diameter of the unit cube, which scales with the dimension. Thus, as the dimension increases, there is 'more space to cover'. Since our monotone networks end up being piecewise constant, it seems reasonable to expect that the number of regions on which the network is constant scales by a power of d. This leads to exponential terms in d.\n\nAs you note, usually, higher dimensions allow for more parameters, which tend to improve approximation power when considering the number of neurons. This difference between the general and monotone setting is precisely what we have tried to uncover in our work and why we find those questions so exciting.\n\n- \"I do not understand the comment on line 125 that [8] also constructs a neural network with a comparable number of neurons to Theorem 2. It sounds like it might only take O( (L/eps)^d ).\"\n\n**Answer**: Indeed, the number of neurons will scale like a power of L/eps. The use of iterated Riemann sums allows one to bypass the so-called 'curse of dimensionality', and so it ignores the sqrt(d) factor, which is the diameter of the cube. However, note that the need to re-evaluate iterated integrals for each dimension successively could make the power larger than d. We intended to highlight possible similarities in the dependence on L and epsilon. We will make sure to emphasize that the size is comparable up to dimensional factors.\n", " Thank you very much for your positive feedback!\n\nRegarding your question:\n\n- “At surface level it appears that results might be dependent on the choice of activation function. Is it possible to throw some light on this aspect? If in practice the threshold function is approximated by sigmoid, will the results still hold?”\n\n**Answer**: Indeed, we copy here part of our response to Reviewer 3 (CbVM). \n\nObserve that threshold functions can be approximated arbitrarily well, almost everywhere, by sigmoids. Thus, our approximation results apply, mutatis mutandis, to sigmoid activations as well. However, to state such results would require us to introduce an approximation parameter that will depend on the separation between data points. For this reason, we chose to focus on threshold activation, which allows us to state clean theorems while capturing the intricate subtleties introduced by having monotone constraints.\n", " In this work, the authors study the problem of finding size- and depth-efficient monotone neural networks that interpolate monotone datasets. Among two popular choices of activation functions, i.e., the ReLU and the threshold activation function, the authors prove that there are monotone functions that cannot be approximated within arbitrarily small error by monotone neural networks with the ReLU activation function. Thus, in this study monotone neural networks with threshold activations are considered. \n\nFirstly, the authors show that 2-layer monotone neural networks with threshold activations cannot interpolate every monotone dataset. 
The direct implication of interpolation result is that 4 layered monotone neural networks can approximate arbitrary well any monotone function over [0,1]d. This is significant improvement over best-known previous result, which states that monotone neural network with threshold activation can approximate any monotone function however depth of the approximating network will scale linearly with the dimension of the input data. \n\nThe authors further investigated on the size required to approximate any monotone function arbitrarily close by the constant depth monotone neural network with threshold activation function. The authors showed that the inductive bias of having positive parameters can lead to a super-polynomial blow-up in the number of neurons when approximating monotone functions.\n The paper discusses on the proof that there exist 4 layered monotone neural networks with threshold activation that can interpolate the monotone dataset with n points in Rd. Its direct implication proved that there exists the constant depth monotone network with threshold activation that can approximate the monotone functions arbitrary well. This result comes with the tradeoff over the size of the network. \nThe paper is well written, and references are adequate.\n At surface level it appears that results might be dependent on the choice of activation function. Is it possible to throw some light on this aspect? If in practice the threshold function is approximated by sigmoid, will the results still hold? N/A", " A monotonic relationship occurs when an increase (decrease) in an input value gives an increase (decrease) in the output value. Monotone ReLU neural networks preserve this structure in the data by ensuring that the network's weights are positive. There are fundamental questions regarding monotone neural networks and the manuscript investigates: (1) [universality] Can a monotone network of depth L approximate every monotone dataset? [The manuscript proves that L = 2 is not good enough, but L = 4 is.] (2) [Approximation power] Do monotone networks need to be much larger than standard networks? [The manuscript proves that there are maps such that the difference can be larger than any polynomial in d (where d is the number of input neurons). The manuscript presents a set of compelling universal approximation theory results for monotone neural networks. I particularly enjoyed Section 5, where the authors used ideas from monotone complexity theory of Boolean functions to construct a monotone function that are much harder (in terms of number of parameters in the neural network) to approximate with monotone neural networks. \n\n- There is quite a big gap between O(d) neurons required in Theorem 3 and the O(d(Lsqrt(d)/eps)^d) neurons in Theorem 2. Therefore, one expects that this manuscript is not the final word. \n\n- This manuscript only describes universal approximation theory results, which reveal that monotone neural networks can be far less expressive. There is the additional problem of training that is not discussed here. Therefore, it seems that monotone neural networks are far from becoming a mainstream neural network model. \n\n- The problem of monotone approximation has been considered in approximation theory since the 1970s. In the multivariable setting, ideas such as: (1) Monotone tensor product regression splines (for dimension <=5), see [1], (2) Triangulation based monotone splines, e.g., see [2] and [4]. (3) Kernel regression, see [3]. 
While the results in this manuscript probably do not exist in this literature, it would be nice if the authors could reference the classical literature on monotone approximation. \n\n[1] G. Beliakov, Shape preserving approximation using least squares splines, Approximation Theory Appl., 16 (2000), pp. 80–98.\n[2] P. Costantini and C. Manni, A local shape-preserving interpolation scheme for scattered data, Comput. Aided Geom. Des., 16 (1999), pp. 385–405.\n[3] P. Hall and L. Huang, Nonparametric kernel regression subject to monotonicity constraints, Ann. Stat., 29 (2001), pp. 624–647.\n[4] K. Willemans and P. Dierckx, Smoothing scattered data with a monotone Powell-Sabin spline surface, Numer. Algorithms, 12 (1996), pp. 215–232.\n \nWRITING \nThe manuscript would benefit from another round of polishing. Though, I find the manuscript clearly written. Here is a small subset of grammar mistakes that I marked while reading. (I imagine that there are many more.)\n\np1, line 25: \"modeling monotonic relationship\" should probably read \"modeling the monotonic relationship\"\np4, line 134: \"requires much larger size\" should read \"requires a much larger size\"\np4, line 154: \"may result with negative\" should read \"may result in negative\"\np4, line 161: \"First, as using both min and max gates in the same architecture with positive parameters do not fall into the modern paradigm of an activation function.\" sounds better as \"First, using both min and max gates in the same architecture with positive parameters does not fall into the modern paradigm of an activation function.\"\np5, line 175: \"applies for arbitrary dimension larger than 1\" should read \"applies to arbitrary dimensions larger than 1\"\np5, line 180: \"One difference of our\" should read \"One difference between our\"\np5, line 185: \"require super polynomial\" should read \"require a super polynomial\" \np5, line 189: \"function m, that requires\" should read \"function m, which requires\"\np5, line 206: \"Each layer is composed from\" sounds better as \"Each layer is composed of\"\np6, line 229: \"positive is indeed without\" should be \"positive holds without\"\np9, line 342: \"can can\" should be \"can\" \np9, line 344: \"each gate by a\" sounds better as \"each gate with a\"\np9, line 361: \"theses copies\" should read \"these copies\" - In Theorem 2, the authors prove that if f is L-Lipschitz then there is a nearby 4-layer neural network with O(d(Lsqrt(d)/eps)^d) neurons. This is a super-exponential blow-up in d. Do the authors believe that O(d(Lsqrt(d)/eps)^d) is necessary? Usually, one expects that as d increases the approximation power of the network increases. \n\n- I do not understand the comment on line 125 that [8] also constructs a neural network with a comparable number of neurons to Theorem 2. It sounds like it might only take O( (L/eps)^d ). There is limited potential negative societal impact. ", " The authors consider interpolation of monotone data (Theorem 1) and approximation of monotone functions (Theorem 2) by neural networks of 4 layers induced by the threshold activation function. The required number of neurons for the approximation is analyzed in Theorem 3. The results in the paper verify the role of depth in interpolation and approximation of monotone functions. These are interesting theoretical findings and help us understand the role of depth in deep learning. \n\nHowever, the results cannot be used in practical applications. 
The threshold activation function is not continuous, which is difficult for implementations. The monotonicity of data or multivariate functions is often ruined by noise in practice. \n\n\n The authors should comment on how noise could be taken into consideration in their study. Is it possible to derive similar results when a continuous activation function is used? No. It would be interesting to consider noisy data and continuous activation functions. " ]
[ -1, -1, -1, 7, 7, 4 ]
[ -1, -1, -1, 1, 3, 4 ]
[ "RmE8svGi4XX", "2kvDTAJPaA", "0QLCnqjAE91", "nips_2022_vQzDYi4dPwM", "nips_2022_vQzDYi4dPwM", "nips_2022_vQzDYi4dPwM" ]
nips_2022_SHMi1b7sjXk
Test-Time Training with Masked Autoencoders
Test-time training adapts to a new test distribution on the fly by optimizing a model for each test input using self-supervision. In this paper, we use masked autoencoders for this one-sample learning problem. Empirically, our simple method improves generalization on many visual benchmarks for distribution shifts. Theoretically, we characterize this improvement in terms of the bias-variance trade-off.
Accept
This paper performs test-time (unsupervised) adaptation to improve generalization performance (e.g., under distribution shift). The reviewers' concerns were mostly about clarification, both in the experimentation and in the overall contribution (e.g., concerns about novelty). The discussion was concise and easy to follow, and it seems that the authors addressed most of the outstanding concerns of the reviewers to the point that there's a clear consensus. I therefore recommend acceptance of the paper. As for the reviews, the discussion was overall light, so there isn't a great deal of signal. ZRRX had the most substantial review in terms of initial content, but did not participate in further discussion.
train
[ "d8CKqe6n2Yq", "lr3orbN3a-mA", "Rl8f8-Grvms", "cYKBLYAX_ONG", "48DJtCVAW6yb", "tAEpn3KLb0F", "KM2jZyvp158", "66yvO1dwbE", "Hlnpo31Iwqw", "WWKikAS5Nzm", "ULNCiLay5Ft" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for providing all your comments and updating the paper!\n\nBased on the response to my and other reviewers, I am overall happy with the quality of the work, and increasing my scores to lean towards acceptance.", " Dear ACs and reviewers, \n\nThank you again for the detailed feedback on our work. We hope we have addressed your main concerns, and would greatly appreciate if you are able to review our changes, comments and updates.\n\nPlease do not hesitate to let us know if we can provide any further information or clarification during the author-reviewer discussion period. \n\nThank you!\n\nPaper3751 Authors", " Thank you very much for your review.\n\n**“The writing is not clear in a couple of places, and needs improvement.”** \nWe have added more formal descriptions of our method in Section 3; see Eq. 1 and Eq. 2, and the surrounding text. We have also rewritten the introduction. We really appreciate you taking the time to suggest specific places that need to be improved in our writing. We have changed all of them accordingly in the revision. \n**- Line 53**: Removed altogether, as you suggested. \n**- Line 177**: Your understanding is correct. We have rewrote that part in the revision. \n**- Foot note #3 below Line 196**: We actually meant better, but our wording can be confusing. \"For contrast, the ResNet baseline of TTT-Rot is somehow much better than our ViT baseline...\" meant for the contrast corruption type, but could be mistaken as for contrast of the two baselines. We have modified that footnote in the revision. \n**- Paragraph of Line 199**: Yes, you understood correctly, despite our lack of clarity there. We have modified that part of the main text, as well as the caption of Table 2, in the revision thanks to your suggestion. \n\n**“I do not see how the data matrix affine transform as a proxy for distribution shift applies in the real world setting.”** \nOur theory does not try to model real-world distribution shifts exactly, as they are usually too complex and high dimensional. We only provide an abstraction that tries to preserve the simplest essence of our method. In our linear abstraction, linear transformations actually represent the most general form of distribution shifts. Despite these being only an abstraction, there are real world distribution shifts that are linear (color transforms, contrast changes, etc). \n\n**“The bias-variance terminology is confusing and should be removed, as it is trying to convey a different meaning from what it traditionally means.”** \nWe have added more explanation for the bias-variance terminology at the beginning of Section 5. We hope this would make it clear that these terms convey exactly the same meanings as they do traditionally.\n\n**Regarding the theoretical results: “...how the specific strategy chosen in this work is better than the existing TTT algorithm that uses the rotation prediction task.”** \nThe theory intends to explain why test-time training with reconstruction is better than not. We can only answer empirically that it is better than past forms of self-supervision, and explain with ablations such as in Table 1.\n\n**The theory “...does not provide any insight into why TTT really helps…”** \nPlease see our general discussion on this matter, which we copy here.\nTwo of the reviewers, vzDf and ZRRX, suggest removing the theory section because they did not find it useful. Reviewer QNmj, on the other hand, “appreciates the theoretical explanation, which is interesting”. 
Since the machine learning community is diverse, we respectfully propose to keep the theory section as some people might benefit from its perspective (and others are free to ignore it). We hope that the connection between the theory and the proposed method is clearer in the revision, as we have added more explanations. If the area chair decides that its potential benefit does not justify the space usage, we will move it into the appendix.\n", " Thank you very much for your review.\n\n**“Does the proposed method work on single test images? Or does it have to be trained on many test images from the same domain…”** \nWe use only a single test image at a time, and treat each image independently (no online accumulation of weights). This is different from works such as TENT and TTT++ (Liu et al. 2021), which require a batch of test images from the same distribution. In response to your question, we have rewritten the introduction to make this clearer, and added formal descriptions of our method in Section 3 (Eq. 1, Eq. 2, and the surrounding text) to make sure no confusion is left about this.\n\n**“On ImageNet-C's level 1-4, baseline results are missing.”** \nThe results for the ViT probing baseline and TTT-MAE on ImageNet-C levels 1-4 are in the supplementary materials (Tables 8-11) of the submission. Thanks to your comment, we added a clarification in the main text (Section 4.2) of the revision. The results of Sun et al. on ImageNet-C levels 1-4 can be found in the appendix of their paper. If you would like us to copy these to our appendix for ease of comparison, we would be happy to do so.\n\n**“What is the standard mCE number on ImageNet-C following the standard metric?”** \nFor our default setting of ViT probing, the baseline mCE is 57.5, and our mCE using test-time training is 51.0.\n\n**“Would the approach's numbers be better than other non-test-time-training ViT methods? Such as the Discrete ViT…”** \nThe best result of Discrete ViT, in terms of mCE on ImageNet-C, is 46.2, better than ours of 51.0. However, this version of Discrete ViT uses RandAug while we do not. As argued in our submission, since RandAug contains many of the corruptions in ImageNet-C as augmentations, using it for training defeats the purpose of the benchmark and goes against the rules stated by the creators. Without augmentations, Discrete ViT has an mCE of 74.8, worse than ours. Note that this result uses ViT-B, while ours uses ViT-L, so these numbers are still not directly comparable. If you desire, we can rerun our main experiments with ViT-B to directly compare, and include a detailed discussion in the empirical results section. We have also added a citation for Discrete ViT in the revision.\n\n**“Should add more intuition for the theoretical justification part…”** \nThank you for the suggestion. We have added more intuitive explanations in the revision, at the beginning of Section 5.\n\n**“The theoretical part is not linked to the accuracy; why does the accuracy get better?”** \nThe theory uses, for the main task, regression instead of classification. This is common practice in the theory community since 0-1 classification accuracy is often mathematically intractable. \n\n**Regarding the theoretical results: “Why does the proposed MAE outperform past methods…”** \nThe theory intends to explain why test-time training with reconstruction is better than no test-time training. 
We can only answer empirically that it is better than past forms of self-supervision, and explain with ablations such as in Table 1.\n\n**“…would simple PCA reconstruction also improve accuracy based on the theory?”** \nNot quite, since our theory uses regression loss instead of classification accuracy. Besides that, the answer is yes.\n\n\nReference: \nYuejiang Liu, Parth Kothari, Bastien van Delft, Baptiste Bellot-Gurlet, Taylor Mordan, Alexandre Alahi: *TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?* (NeurIPS 2021)\nYu Sun, Xiaolong Wang, Zhuang Liu, John Miller, Alexei Efros, Moritz Hardt: *Test-Time Training with Self-Supervision for Generalization under Distribution Shifts* (ICML 2020)\nDequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell: *Tent: Fully Test-time Adaptation by Entropy Minimization* (ICLR 2021)\n", " Thank you very much for your review.\n\n**“I feel that the paper falls short in the contributions part (…) the novelty seems to be quite limited in the paper. As such, there are very limited technical contributions that the paper makes.”** \nWe believe there are many different ways to make a contribution in our field. For instance, it could be argued that the MAE paper (He et al. 2021) also lacks novelty – after all, it just combines the old ideas of denoising autoencoders and context encoders with a new transformer architecture. Nonetheless, MAE is clearly an important contribution, because it shows something that works now but didn’t quite work before. \nWe believe our paper falls into the same category. Generalization under distribution shifts is a fundamental problem, and test-time training (TTT) offers a promising solution. While the idea of TTT is indeed “pre-existing”, it has never shown more than 4% improvement. The lack of empirical success has been discouraging. Subsequent papers such as TENT and TTT++, in fact, have been working on alternative settings of adapting on a batch of test data, or the entire test set from the same distribution, because adapting on the single test input itself is hard. \nOur paper is the first to show that TTT offers substantial improvements (10% - 20%) for many distribution shifts on top of a highly competitive baseline. Yes, our idea is simple, but simplicity is often desirable in science. Making this simple idea work involves good design choices, as stated in the paper. Like for MAE, we believe this is an important contribution.\n\n**“suggestions to make the contributions stronger include …(showing) impact of masking ratio in MAE.”** \nThanks for the suggestion. We performed experiments on ImageNet-C (level 5) using masking ratios of 50%, 75% (default) and 90%. Interestingly, 50% masking performs better than 75% on a few of the corruptions (and worse on others). These results are in Table 13 of the updated supplementary materials.\n\n**“Instead of always using the same pre-trained MAE checkpoint, the authors can train or use different MAE models trained on varied datasets apart from ImageNet…”** \nOur main experiments follow the standard practice of ImageNet-C (also -A and -R) evaluation by training only on ImageNet. Following your suggestion, we have experimented with MAE pre-trained on COCO. We train ViT-probing on ImageNet using features from this MAE checkpoint, then perform test-time training on ImageNet-C (level 5), in the same fashion as the main experiments of our submission. Both the baseline and TTT-MAE using the COCO pre-trained checkpoint perform worse than that using ImageNet. 
This is not surprising since there is a distribution shift from COCO to ImageNet for the pre-trained weights. These results are in Table 15 of the updated supplementary materials.\n\n**“The code is not submitted with the paper, raising reproducibility concerns.”** \nWe planned to release the code upon acceptance, as stated in footnote 1. Following your advice, we have also attached it in the updated supplementary materials. Hopefully, this should address your concerns. \n\n**“…ablations with different types of MAE architectures…”** \nWe are not sure what you mean by different types of MAE architectures, since all the MAEs in He et al. 2021 use ViTs. We do have a number of ablations that compare: normalized to unnormalized pixels loss (Appendix A.2, Table 6), training only the encoder to both the encoder and decoder (Appendix A.2, Table 7), and SGD to AdamW (Section 3, Figure 2).\n\n\nReference: \nKaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick: *Masked Autoencoders Are Scalable Vision Learners* \nYuejiang Liu, Parth Kothari, Bastien van Delft, Baptiste Bellot-Gurlet, Taylor Mordan, Alexandre Alahi: *TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?* (NeurIPS 2021)\nDequan Wang, Evan Shelhamer, Shaoteng Liu, Bruno Olshausen, Trevor Darrell: *Tent: Fully Test-time Adaptation by Entropy Minimization* (ICLR 2021)\n", " Thank you very much for your review.\n\n**“An explicit mathematical formulation of the training and test-time training losses would make the paper easier to read.”** \nThank you for your suggestion. We have added explicit losses to the revision as Eq. 1 and Eq. 2 in Section 3, as well as paragraphs around them to give more formal explanations.\n\n**“The paper would greatly benefit from including more self-supervised losses as baselines.”** \nGreat idea. We ran an additional baseline using contrastive loss, as in TTT++ (Liu et al. 2021), using a single test input instead of a batch. The positives are augmented versions of that test input, and the negatives are sampled from the training set. The results on ImageNet-C (level 5) are in the updated supplementary materials, Table 14. The summary is that test-time training with contrastive loss hurts performance when done on one test input at a time.\n\n\nReference: \nYuejiang Liu, Parth Kothari, Bastien van Delft, Baptiste Bellot-Gurlet, Taylor Mordan, Alexandre Alahi: *TTT++: When Does Self-Supervised Test-Time Training Fail or Thrive?* (NeurIPS 2021)", " We appreciate all reviewers for their helpful feedback. We have uploaded a revision incorporating the feedback, with new content highlighted in blue. \n\nThe reviewers find our approach to be “novel” (evQW), “simple” (evQW, ZRRX), and “effective” (evQW, QNmJ). The reviewers also note that our empirical results on “multiple large scale datasets” (evQW) show “significant improvement” (ZRRX, QNmJ). We thank the reviewers for the positive feedback.\n\nTwo of the reviewers, vzDf and ZRRX, suggest removing the theory section because they did not find it useful. Reviewer QNmj, on the other hand, “appreciates the theoretical explanation, which is interesting”. Since the machine learning community is diverse, we respectfully propose to keep the theory section as some people might benefit from its perspective (and others are free to ignore). We hope that the connection between the theory and the proposed method is clearer in the revision, as we have added more explanations. 
If the area chair decides that its potential benefit does not justify the space usage, we will move it into the appendix.\n", " The authors propose to improve generalization and robustness under distribution shift by cloning the network and training the clone on multiple masked copies of each incoming example at evaluation time using an auxiliary masked reconstruction loss. Strengths:\n\n* The paper is easy to follow and well written.\n* The experimental protocol is built around multiple large scale datasets.\n* The method is novel, simple, and effective.\n\nWeaknesses:\n* An explicit mathematical formulation of the training and test-time training losses would make the paper easier to read.\n* Compares reconstructions solely to a rotation prediction self-supervised loss. The paper would greatly benefit from including more self-supervised losses as baselines. The authors have clearly outlined the additional computational cost incurred by the use of their approach.\n", " Summary:\nThe paper performs training using test set images using the idea of masked autoencoder to further improve the performance. Experiments show that this approach improves the performance on out-of-distribution images. They also present a theoretical analysis suggesting the reasons for the observed improvements under distribution shifts. Strengths and Weakness:\n\n- The paper leverages a relatively new idea of self-supervised pre-training to obtain good results on out of distribution image classification tasks.\n\n- I feel that the paper falls short in the contributions part. Additionally, the novelty seems to be quite limited in the paper. The idea of masked autoencoders is well known and the paper just applies it to the pre-existing task of test-time training. As such, there are very limited technical contributions that the paper makes. In my opinion, the paper looks more suitable for a workshop.\n\n- I felt the theoretical analysis part may not be adding much value and can be moved to the appendix. That would help in saving space, which could be used to write up more interesting analyses or new experiments. Some suggestions to make the contributions stronger include doing ablations with different types of MAE architectures showcasing the pros and cons of each, and studying the impact of the masking ratio in MAE on the test-time training.\n\n- Instead of always using the same pre-trained MAE checkpoint, the authors can train or use different MAE models trained on varied datasets apart from ImageNet and then perform experiments as to which kind of datasets are more robust to test-time training and offer more improvements.\n\n- The code is not submitted with the paper, raising reproducibility concerns.\n Please see the strengths and weaknesses section\n\n** Update after the authors' response **\n\nUpdating my scores to lean towards acceptance. Yes, the limitations have been included in the paper.", " The paper proposes to use test time training on the test images to improve generalization under distribution shifts. Using the established MAE method, the paper adds an additional classifier branch on the encoder for classification. To train the model, the paper studies three ways: 1) finetuning classifier and encoder, 2) ViT probing, 3) joint training of MAE and classifier, where ViT probing works the best. The test time training only trains the MAE part. Results are shown on ImageNet-C, R, and A.\n\n Strengths:\n1. 
Using MAE for test-time training to improve generalization on shifted distributions is effective, as shown by the presented results. Results from 4 datasets have been shown, with significant improvement.\n\n2. The paper is well presented.\n\n3. The reviewer appreciates the theoretical explanation, which is interesting.\n\nWeakness:\n1. On ImageNet-C's level 1-4, baseline results are missing. What is the standard mCE number on ImageNet-C following the standard metric? Would the approach's numbers be better than other non-test-time-training ViT methods? Such as the Discrete ViT [1], which uses VQVAE for encoding, similar to the MAE task for reconstruction, but does not require test time training.\n\n2. Should add more intuition for the theoretical justification part, such that this part is also clear without reading the equations.\n\n[1] Discrete Representations Strengthen Vision Transformer Robustness. ICLR 2022. https://arxiv.org/abs/2111.10493 1. Does the proposed method work on single test images? Or does it have to be trained on many test images from the same domain, with batch size > 1?\n\n2. The theoretical part is not linked to the accuracy; why does the accuracy get better? Why does the proposed MAE outperform past methods? Would simple PCA reconstruction also improve accuracy based on the theory? None.", " This paper presents a test-time training algorithm to address the degradation of classification performance typically seen in deep learning models. The paper combines two ideas: self-supervised learning via masked autoencoders, and test-time training (TTT) using a self-supervised objective. Specifically, the paper applies the recently proposed masked auto-encoder (MAE) framework for self-supervised training, towards test-time training. To do so, the authors propose to freeze the feature network of this model and learn a classifier head on top during training, and then fine-tune the feature network to minimize the reconstruction loss on the test sample to address distribution shift. The authors show a significant improvement in performance across different datasets using their approach over the existing state-of-the-art test-time training method. Strengths:\n\n- simple approach.\n- The experimental results show significant improvement across different datasets in the test-time training paradigm.\n- ablation analysis on test-time training optimizer AdamW vs SGD.\n\nWeaknesses:\n\n- theoretical justification is largely disconnected from the proposed method. I do not see how the data matrix affine transform as a proxy for distribution shift applies in the real world setting. It does not provide any insight into why TTT really helps, or how the specific strategy chosen in this work is better than the existing TTT algorithm that uses the rotation prediction task.\n- The bias-variance terminology is confusing and should be removed, as it is trying to convey a different meaning from what it traditionally means.\n- The writing is not clear in a couple of places, and needs improvement.\n Questions and suggestions:\n\n- Line 53: “One way to turn extrapolation into interpolation again is to make the training distribution wider, and the training set larger, to cover the test distribution.”. This statement is misleading because it assumes that it is always possible for a single hypothesis function to perform well on both training and test distributions. But this may not be necessarily true (e.g. label shift where p(y|x) changes). 
This either needs to be clarified as to the scope of the statement, or removed altogether.\n\n- Line 177: Not very clear. My understanding is that the baseline is the pre-trained MAE model which is kept fixed, and only a classifier head is trained using labels.\n\n- Footnote #3 below Line 196: You mean much worse? Fig. 3 shows TTT-Rot is worse than the ViT baseline.\n\n- Paragraph of Line 199: It is not clearly stated what the difference between Probing and TTT-MAE is in Table 2. If the goal is to talk about training-time training, and the 1st 3 rows are without TTT, then it should be explicitly mentioned, because mentioning TTT-MAE results alongside is confusing. Limitations:\n\n- As mentioned above, the theoretical justification is too simplistic and disconnected from the proposed method and the real world setting. I think it is better to remove it from the paper.\n\n- Some ablation studies that provide insight into why the proposed framework works better than the existing rotation-prediction-based TTT framework would be great. Some intuition is discussed by the authors along the lines that rotation prediction is not a well defined task for certain classes of images (e.g. top or bottom view images where rotation has no semantic meaning). But a clearer ablation in terms of the differences in the features learned by the two models would be more helpful." ]
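The test-time procedure described in the reviews and responses above (clone the pre-trained MAE, optimize its reconstruction loss on masked copies of the single test input, then classify with the fixed probing head) can be summarized in a short sketch. Below is a minimal, hypothetical PyTorch-style illustration of that loop; it is not the authors' released code, and the module interfaces (`forward_loss`, `encode`) and the hyperparameter values are assumptions made only for this example.

```python
import copy
import torch

def ttt_mae_predict(mae, head, image, steps=20, n_views=32, lr=5e-3):
    """One-sample test-time training sketch: adapt a clone of `mae` to a single
    test `image` via the masked-reconstruction loss, then classify its features.
    `mae.forward_loss` and `mae.encode` are assumed interfaces, not a real API."""
    adapted = copy.deepcopy(mae)  # fresh clone per image: no state is carried across test inputs
    opt = torch.optim.SGD(adapted.parameters(), lr=lr, momentum=0.9)  # the paper ablates SGD vs. AdamW

    batch = image.unsqueeze(0).repeat(n_views, 1, 1, 1)  # copies of the one test input; each is masked independently inside forward_loss
    for _ in range(steps):
        loss = adapted.forward_loss(batch, mask_ratio=0.75)  # reconstruction loss on masked patches only, as in MAE
        opt.zero_grad()
        loss.backward()
        opt.step()

    with torch.no_grad():
        feats = adapted.encode(image.unsqueeze(0))  # features from the adapted encoder
        return head(feats).argmax(dim=-1)           # classifier head stays fixed (ViT probing)
```

Because every test image gets its own optimization and the adapted weights are discarded afterwards, this is single-image adaptation rather than a batch or online method like TENT or TTT++.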
[ -1, -1, -1, -1, -1, -1, -1, 8, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 5, 3, 4, 4 ]
[ "48DJtCVAW6yb", "nips_2022_SHMi1b7sjXk", "ULNCiLay5Ft", "WWKikAS5Nzm", "Hlnpo31Iwqw", "66yvO1dwbE", "nips_2022_SHMi1b7sjXk", "nips_2022_SHMi1b7sjXk", "nips_2022_SHMi1b7sjXk", "nips_2022_SHMi1b7sjXk", "nips_2022_SHMi1b7sjXk" ]
nips_2022_o4uFFg9_TpV
Visual Prompting via Image Inpainting
How does one adapt a pre-trained visual model to novel downstream tasks without task-specific finetuning or any model modification? Inspired by prompting in NLP, this paper investigates visual prompting: given input-output image example(s) of a new task at test time and a new input image, the goal is to automatically produce the output image, consistent with the given examples. We show that posing this problem as simple image inpainting -- literally just filling in a hole in a concatenated visual prompt image -- turns out to be surprisingly effective, provided that the inpainting algorithm has been trained on the right data. We train masked auto-encoders on a new dataset that we curated -- 88k unlabeled figures from academic papers sourced on Arxiv. We apply visual prompting to these pretrained models and demonstrate results on various downstream image-to-image tasks, including foreground segmentation, single object detection, colorization, edge detection, etc. Project page: https://yossigandelsman.github.io/visual_prompt
Accept
The paper discusses a way to use pre-trained models for downstream tasks. Reviewers generally appreciated the paper but had questions regarding baselines, details, dataset, etc. The rebuttal addressed most of these concerns, prompting the reviewers to raise their recommendation. However, some questions remained (e.g., https://openreview.net/forum?id=o4uFFg9_TpV&noteId=wPCrVV96hE). The AC concurs with the unanimous reviewer recommendation.
test
[ "0Li1merIM5", "xosS56mQJ7o", "zX1tE3bIsMx", "wMnbkVwkKcS", "nOYiCMM0v6", "wPCrVV96hE", "n3t6ZElPx79", "4JrW35Vd14S", "zoOAXGL-roY", "C_HOQ1Tx4JW", "loUu1RsaBx", "reiRXFhTdXT", "drBS-fm3r-", "mK7IwkyFCfG", "fjf4cYrRQKH", "OKdVWUlnpkD", "PFLRw_rmwCZ", "p54Qo0FPIyf", "lS_JXTtMECsQ" ]
[ "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the questions and suggestions. As the discussion phase ends today, we will revise the paper to include the missing technical details and the training/inference figure suggested after the discussion period. Additionally, the code will be made publicly available.\n\nQ: **During testing, did you test tasks that were not seen during training, some new image tasks?** We evaluate our model on a variety of tasks that are unlikely to have been seen during training. For example, our Synthetic Data Study (Section 4.3) presents a family of 6 previously unseen tasks like Color, Shape, and Size change. In addition, in the Supplementary Material (Section 12), we include tasks like image contrast change, text font completion, and transforming an image to grayscale.\n\nQ: **“The training and inference process of MAE-VQGAN is still not clear.”, “I think adding algorithm figures and detailed explanations of training and inference are necessary”**. During training, MAE-VQGAN model is trained on masked auto-encoding similarly to standard MAE except for a single change. Whereas in MAE the model is trained to complete raw pixels, MAE-VQGAN decoder is trained to predict VQGAN codebook visual tokens (Section 3.1). Assuming we use a ViT model that patchifies a 224x224 image to 14x14 patches, MAE predicts 16x16x3 values per patch, and MAE-VQGAN predicts 1024 values per patch that represent a distribution over the pre-trained VQGAN codebook. \n\nIn inference, we feed Visual Prompts to MAE-VQGAN that predicts VQGAN visual tokens (Section 3.2). Decoding the predicted visual tokens to pixels is done using the VQGAN codebook decoder (See Section 3.1, line 115).\n\nQ: **Could you provide more explanation of how you encode the support images + query? Did you combine them first and then resize to 224x244?** Yes, to construct the Visual Prompt (inference only) we combine the support images + query and then resize them to 224x224. Before combining everything, we add a white border between the images, to create a separation between images in the grid.\n\nQ: **If you combine them first and resize to 224*244, then why the VQGAN pre-trained on ImageNet can deal with such images that are quite different from the images in Imagenet?** We empirically observe that the reconstruction quality of the VQGAN encoder-decoder is plausible and does not suffer from a large domain shift (e.g., it can reasonably reconstruct grid images with black and white segmentation masks). However, as we acknowledge in the Limitations (see lines 253-255), the ImageNet pretrained VQGAN codebook might suffer from domain shift. To alleviate this difficulty, it is possible to train a new codebook on a larger and more diverse dataset.\n\nQ: **Could you explain clearly the input and output of the VQGAN part and the MAE part?** Like in a standard MAE, the input to MAE-VQGAN is an image and an associated mask. The mask corresponds to the set of patches that comprise the image. Differently from MAE, the output of MAE-VQGAN are VQGAN codebook visual tokens. For example, based on a ViT model that patchifies a 224x224 image to 14x14 patches, MAE predicts 16x16x3 values per patch, and MAE-VQGAN predicts 1024 values per patch that represent a distribution over the pre-trained VQGAN codebook.\n\nDuring training, we use the VQGAN codebook encoder to map an image to a list of visual-tokens (each 224x224 image translates to 14x14 codebook indices) to provide ground-truth for MAE-VQGAN predicted visual tokens. 
\n\nDuring inference, to decode the visual tokens predicted by MAE-VQGAN to pixels, we take the argmax and then use the VQGAN codebook decoder that translates the set of 14x14 tokens to a 224x224x3 image.\n\nQ: **When you train MAE, did you feed in the combined image grid or every single image?** We only construct the image grid (Visual Prompt) during inference time. In training, our model is trained to predict the visual tokens of randomly masked regions.\n\nQ: **What do you mean 'However, the input to MAE-VQGAN is pixels and not visual tokens (unlike in the original VQGAN model).'?** MAE-VQGAN relies on a standard ViT backbone like MAE. This is different from the original VQGAN transformer model that first encodes the image using the VQGAN codebook encoder before processing it.\n\nQ: **How did you generate the mask of the query? If the grid is 3x2, the mask might be encoded into a half token, such as 1.5 tokens. How will you decode that? Does the basic block of a binary mask correspond to a pixel or a patch of the ViT?** The basic block of the binary mask corresponds to a patch of the ViT. If the size is not divisible by the number of tokens (e.g., 1.5 tokens, as you mentioned) we round up and mask a larger region (e.g., 2 tokens). We plan to clarify it in the Supplementary.\n\nQ: **During training, do you calculate the loss over the whole picture or the masked part?** As in MAE, the loss is calculated only on the masked part. \n", " Thank you for your response. I understand the authors' comment on requiring annotations for ImageNet training. I still wonder how the model would perform by reformulating existing datasets into a grid-like structure using additional annotations, as prompting works best by minimizing the gap between pre-trained and downstream tasks. This can come as future work. I will modify my final score to a weak accept. ", " Thank you for the comment and for raising your score.\n\n> Reformulating ImageNet images into a grid-like structure and pre-training the in-painting model does not need any annotations\n\nCan you elaborate on the exact way you propose to reformulate ImageNet images into a grid-like structure without annotations? \n\nFor example, choosing random ImageNet images and then placing them in corresponding grid cells is one way. However, since there is no rule to infer one grid cell from the other, the model will learn to complete each cell individually while ignoring the rest of the image. Therefore, the model will not learn completions that are relevant to the tasks we discuss in the paper (e.g., segmentation/colorization).\n", " I have read the rebuttal and the other reviews. The answers were generally helpful, but I do have one unaddressed point:\n\n> Can't you reformulate existing datasets (e.g. ImageNet) into a grid-like structure and pre-train the model? Have you tried this? How does this performance compare to having the models pre-trained on the Figures Dataset with random masking?\n\nReformulating ImageNet images into a grid-like structure and pre-training the in-painting model does not need any annotations, but the authors did not provide the comparison. In general, I am raising my rating to a borderline accept.", " Thanks for your reply. However, there are still some details missing. \n\n1. During testing, did you test tasks that were not seen during training? I do not mean new testing examples, but instead new tasks, such as converting RGB images to BGR.\n\n2. The training and inference process of MAE-VQGAN is still not clear. 
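To make the grid construction and decoding steps in the response above concrete, the following is a minimal sketch of the inference path as described there: paste the support pair(s) and the query into an (n + 1) x 2 grid with white separators, resize the whole grid to 224x224, mask the ViT patches covering the bottom-right cell (rounding up to whole patches), predict a 1024-way distribution over VQGAN codebook tokens for each masked patch, take the argmax, and decode with the VQGAN decoder. All function and module names here are placeholders assumed for illustration, not the actual implementation; training, per the answers above, never builds such a grid and instead predicts the codebook indices of randomly masked regions.

```python
import math
import torch
import torch.nn.functional as F

def white_border(img, px=2):
    # White separator around each cell, as described in the response above.
    return F.pad(img, (px, px, px, px), value=1.0)

def bottom_right_mask(n_rows, n_cols=2, patches=14):
    # Boolean mask over the ViT patch grid covering the bottom-right cell;
    # ceil() rounds up, masking a slightly larger region when sizes do not divide evenly.
    r, c = math.ceil(patches / n_rows), math.ceil(patches / n_cols)
    m = torch.zeros(patches, patches, dtype=torch.bool)
    m[-r:, -c:] = True
    return m

def visual_prompt_inference(mae_vqgan, vqgan_decode, supports, query):
    """supports: list of (input, output) image tensors of shape (C, H, W); query: (C, H, W).
    `mae_vqgan` and `vqgan_decode` are hypothetical callables standing in for the two models."""
    rows = [torch.cat([white_border(x), white_border(y)], dim=-1) for x, y in supports]
    hole = torch.ones_like(white_border(query))                   # unknown bottom-right cell
    rows.append(torch.cat([white_border(query), hole], dim=-1))
    grid = torch.cat(rows, dim=-2)                                # (n + 1) x 2 cells in pixel space
    grid = F.interpolate(grid.unsqueeze(0), size=(224, 224), mode="bilinear")  # resize the combined grid

    mask = bottom_right_mask(n_rows=len(supports) + 1)
    logits = mae_vqgan(grid, mask)     # per masked patch: distribution over the 1024 codebook tokens
    tokens = logits.argmax(dim=-1)     # 14x14 grid of VQGAN codebook indices
    return vqgan_decode(tokens)        # VQGAN decoder maps the tokens back to a 224x224x3 image
```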
Could you explain how you encode the support images + query? Did you combine them first and then resize them to 224 $\times$ 224, or resize each image to 224 $\times$ 224 and then combine them?\n\nIf you combine them first and then resize them to 224 $\times$ 224, then why can the VQGAN pre-trained on ImageNet deal with such images that are quite different from the images in ImageNet? And did you give a mask of the query image, which means the VQGAN takes (resized support images + query image + query mask) as input and generates the tokens? \n\nIf you resize each image to 224 $\times$ 224 and then combine them, how could you deal with more support examples? In MAE, they use ViT (max sequence length=1024). If you have > 2 examples (4 support images + 1 query image, each generating 16 $\times$ 16 tokens), the token sequence is too long to feed into the ViT.\n\n3. What do you mean 'However, the input to MAE-VQGAN is pixels and not visual tokens (unlike in the original VQGAN model).'?\nCould you explain clearly the input and output of the VQGAN part and the MAE part? When you train MAE, did you feed in the combined image grid or every single image? \n\n4. How did you generate the mask of the query? If the grid is 3 $\times$ 2, the mask might be encoded into a half token, such as 1.5 tokens. How will you decode that? Does the basic block of a binary mask correspond to a pixel or a patch of the ViT? During training, do you calculate the loss over the whole picture or the masked part? \n\nThere are still many details missing. I think adding algorithm figures and detailed explanations of training and inference, or providing the code, is necessary.", " I have the same question about how the visual prompt is constructed.\n\nFor example, you have two support examples (for example, each example has an RGB image and a segmentation map), and a query image. You said you create an image grid of (n + 1) * 2 cells (L125). \n\nDid you resize this large combined image grid to 224*224? And then feed it into the pre-trained VQGAN? \n\nIf so, it is questionable why the VQGAN trained on ImageNet (a single image) can encode the combined image grid well. The combined image grid should be very different from images in ImageNet.\n\nPlease let me know your answers.", " I have read the rebuttal and the other reviews. I am maintaining my original evaluation of accept.", " I have read the other reviews, the responses and the updated paper. The additional experiments on val vs. train bias and the few-shot baselines are valuable.\n\nThe provided explanations were quite helpful and I did not find any major unaddressed points in the other reviews.\nThus, I am raising my rating to recommend acceptance. ", " Reviewer WRn2 expressed a concern that this paper did not provide a sufficient level of detail on the nature of the arXiv figure dataset. In particular, this reviewer drew attention to the lack of clarity around licensing information for the individual arXiv papers and also suggested that the authors provide a datasheet documenting the collection process. The authors have since provided a datasheet as supplementary material, but the issue around individual arXiv paper licenses remains. The 'arXiv license' they link to in their datasheet is not the only license available to authors who publish on arXiv; did the authors restrict their data downloads to such papers, or are they conflating this license with the other license types? They should clarify this. 
\nThe datasheet claims that there are no people in the dataset, but Figures 2 and 6 of the paper (derivative figures from other papers) do contain images of people. \nThe authors have not discussed the societal impact of their proposal, for example, the potential misuse of image manipulation technology. One could imagine prompting the inpainting model with unsavory images to realistically falsify events or generate harmful images. Do the authors have a proposal for mitigating negative outcomes such as this? The authors have responded to reviewers by creating and uploading a datasheet. As stated above, the authors should clarify their compliance with the different licenses made available to those who upload content to arXiv. ", " Thank you for your positive comments and suggestions. We address your comments below.\n\nQ: **Comparison to baselines**. Thank you for suggesting this; we include comparisons to finetuning and state-of-the-art classic few-shot approaches. But note that classic few-shot models should be considered an upper bound of our approach as they are trained on large amounts of clean and annotated (baseclass) data and were designed for a specific task. We discuss this in the shared comments and in the Supplementary Material Section 6.1. \n\nQ: **How did you combine the MAE and VQGAN? During training, did you train VQGAN first? And then use MAE to reconstruct the visual tokens? During inference, do you encode the whole combined image using VQGAN first? And then use MAE to fill the missing part (visual tokens instead of pixels)? And then use the VQGAN decoder to reconstruct the image?** MAE-VQGAN utilizes an ImageNet pre-trained VQGAN codebook [1], which is fixed during training and inference; we clarify this in the revised manuscript (lines 110-111). As you mentioned, instead of directly regressing pixels, the MAE-VQGAN model predicts visual tokens which can then be decoded into pixels. However, the input to MAE-VQGAN is pixels and not visual tokens (unlike in the original VQGAN model).\n\n[1] https://github.com/CompVis/taming-transformers\n\n\nQ: **How do you assign probabilities to visual tokens in L111?** The MAE-VQGAN model outputs a distribution over the visual tokens via a softmax layer. This is unlike MAE, which is trained to regress pixels. We clarify this in the revised manuscript (line 112).\n\nQ: **Why do you use MAE to fill the missing parts? Why not just use the transformer from VQGAN to generate the missing parts directly? Is the MAE part necessary?** We found that using the full VQGAN model leads to worse performance (see Tables 1 and 2). We believe the main reason is sequential decoding, which is one of the assumptions of VQGAN (we discuss this in the original submission, lines 204-205 and 222-223).\n\nQ: **Is the prediction of $\hat{z}_k$ related to $\hat{z}_{k-1}$ in the eqn under L113? Is each $\hat{z}_k$ predicted based only on x and m?** Yes; here we do not assume an order, i.e., we do not condition $\hat{z}_k$ on the prediction of $\hat{z}_{k-1}$. However, exploring different decoding strategies is a promising research avenue and we leave this for future work.\n\nQ: **In the eqn right above L119, is m a mask of the part that needs to be filled?** Yes. We mention this in the original submission (Sec. 3.1, lines 118-119).", " Q: **With this prompting approach it could be possible that a model learns to predict imprecise results from the fact that papers most often show predictions. 
Does the model pick up a bias in artificially predicting imprecise results to emulate predictions?** We agree that potential bias towards imprecise results may arise, and following your feedback, we now discuss this in the revised Supplementary Material datasheet under Composition. \nGenerally speaking, the goal of our paper is not to present a practical solution for any specific task, but rather to demonstrate how our model can be prompted at test time to solve various tasks **without any training**, only by pre-training on noisy, unlabeled data. Surprisingly, as presented in the synthetic data study (Section 4.3) and in the Supplementary (Section 12), our model shows the ability to generalize to new tasks as well. \n\nQ: **Were licenses taken into account?** Yes, we have considered Arxiv licenses (https://arxiv.org/help/license). As you suggested, we include the full information in a dataset sheet in the revised Suppl. Section 7 “Distribution”. ", " Thank you for the constructive review and suggestions. We address your comments below.\n\nQ: **I strongly suggest creating a “datasheet for datasets” for this dataset.** Thank you for the suggestion. We've uploaded a new revision and included a datasheet in Supplementary Material Section 7.\n\nQ: **What is the definition of few-shot learning in this paper and how does it relate to the existing approaches of few-shot learning?** We use the term \"Few-Shot\" as used in the NLP community, and in particular as defined in the seminal paper “Language Models are Few-Shot Learners” [3]. In this relatively new setting, the model's input contains very few input-output examples (in our work, usually less than 2) and a new input. The goal of the model is then to deduce the rule from the examples and apply it to the new input. This setting is different from the classic computer vision Few-Shot setting as it does not assume a specific task and the model is not pre-trained on large amounts of base class labeled data. In the updated version, we do have a comparison to classic Few-Shot methods, but we note that this should be treated as an upper limit.\nWe discussed the two uses of the term “Few-Shot” in the original submission (see introduction lines 41-43 and related work lines 79-83).\n\nQ: **How can we make sure that the evaluation of the model uses unseen categories/tasks/images if the dataset is sourced from the results of other papers?** As discussed in the paper (see lines 41-43 and lines 79-83), we follow the definition of Few-Shot as presented in [1], instead of the classic computer vision definition (that is usually restricted to a single task and utilizes base class training). Therefore, we do not distinguish between categories and tasks.\nWe analyzed the potential overlap between the Figures dataset and the test data and found no significant overlap (see Supplementary Material, “Train-test overlap”, lines 453-470). Following your question, we double-checked this again. Specifically, to evaluate the overlap, for each of 100 random Pascal 5i test images, we computed the 5 nearest neighbors in the Figures dataset using CLIP embeddings. Out of the 100 images, we found only a single one contained in a Figure image, and this image did not have any associated ground-truth annotation. Therefore, we conclude that even if there are potential overlaps, they are likely very small and insignificant. We discuss this in the revised Suppl. Sec 7 “Composition”. \n\nQ: **How was the dataset collected? 
What does it contain?** Latex source files were originally collected by the Arxiv team between 2010 and 2022. From this data, we automatically extracted figures containing images (as discussed in Section 3.3). By manually annotating a random sample of 500 images, we estimate that the majority of the images (84%) contain grid-like images, and 60% of the images contain some annotation embedded in the image, such as text, arrows, heatmaps, bounding boxes, etc. \nWe described the exact data collection process in the original submission Section 3.3 and Suppl. Section 6. We also included the distribution of image types in Suppl. Table 5. Following your suggestion, we've added a datasheet for the dataset in the revised Supplementary Material.\n\nQ: **(your) dataset will consist of many validation/testing images from popular datasets, thus training a model on this data will likely skew its performance when evaluated on the val/test sets for downstream task performance.** Thank you, this is a good point. To evaluate this potential skew in performance, we analyzed our model (MAE-VQGAN) on the colorization task on ImageNet train vs. validation sets (better performance on the validation set would indicate a skew). Specifically, we randomly sampled two sets of 1000 images from train and validation and compared their performance. We observe an insignificant difference in the LPIPS metric (0.39 for train vs. 0.40 for val), which suggests there is no evidence for this particular bias. We plan to add this to the paper.\n", " Thank you for the constructive review and suggestions. We address your comments below.\n\n\nQ: **how the datasets (ImageNet, Figures) are processed during pre-training. Are they concatenated as a grid-like image and processed as input or is only a single image used as input?** There is no pre-processing of images. During pre-training, we use the original images as is, where each image is considered a single input to the model.\n\nQ: **This paper requires pre-training a whole new model specific to the proposed prompting method which contradicts the value of prompting.** Our main focus was to present a proof of concept that shows it is possible to visually prompt Image Inpainting models that were trained on noisy, unlabeled data. Specifically, we show how we can pre-train a network once, then prompt it to perform reasonably well on a very wide variety of tasks (both established as well as novel). The fact that this is possible is surprising and scientifically interesting, as was also mentioned by other reviewers (v9Mo, WRn2). \n\nQ: **Can't we just reformulate existing datasets (e.g. ImageNet) into a grid-like structure and use them for pre-training?** Indeed, it is possible to reformulate existing datasets into a grid-like structure and train a single model on all datasets and tasks. However, this requires using explicit annotations in each dataset. Moreover, this approach cannot scale to the huge number of tasks (hundreds) and datasets (thousands) that have appeared in computer vision papers over the years, as each task requires manual reformulation.\n\nMore broadly, our goal in this work is not to present a practical algorithm for any specific computer vision task, but rather to demonstrate how a model can be adapted at test time to various tasks **without any training**, only by learning from unlabeled and unstructured data. 
As presented in the synthetic data study (Section 4.3) and in the Supplementary (Section 12), our model shows the ability to generalize to new tasks outside of the training distribution. We also show that this approach is surprisingly scalable when additional unlabeled (natural) images are blended into the proposed Figures dataset (Section 4.4).\n\nQ: **Having to collect a whole new dataset (Figures) specific to this pre-trained task lacks scalability.** One of our main findings in this work is that blending the Figures dataset with unlabeled ImageNet leads to significantly better performance (Section 4.4 - “Dataset effect”). As unlabeled images can be obtained easily, we believe that this approach is scalable.\n\nTo further demonstrate that our approach scales when using additional large amounts of unlabeled data, we train MAE-VQGAN with a longer schedule (10k epochs instead of 1k) on a combined dataset of Figures and ImageNet. The resulting model achieves an mIOU of 34 compared to 31 with Figures only, which is more than a 10% improvement. We plan to update the paper to reflect this. \n\n\nQ: **Comparisons to baselines.** Thank you for suggesting this; we include comparisons to finetuning and classic few-shot approaches and discuss this in the shared comments and in the Supplementary Material Section 6.1. But note that state-of-the-art classic few-shot models should be considered an upper bound of our approach as they are trained on large amounts of clean and annotated (baseclass) data and were designed for a specific task. \n\nQ: **For colorization, you should measure LPIPS.** Thank you for the suggestion. We added the LPIPS metric in Table 1 of the revised version.\n\nQ: **Have you tried using the prompt on any off-the-shelf pre-trained in-painting model (i.e., zero shot)? Or does such a pre-trained model not exist for the proposed prompting method?** Yes, we have tried to use the prompt on off-the-shelf models. For example, we used a pre-trained VQGAN model, which is a state-of-the-art transformer-based inpainting model (see results in Tables 1 and 2). However, we found that it performs poorly. This is mainly because inpainting models are intended to complete natural images, which are different from our proposed Visual Prompts.\n", " Thank you for the positive review and suggestions. We address your comments below.\n\nQ: **how sensitive is the approach to the prompted image?** While there might be differences, as long as the supports are not degenerate (e.g., for segmentation, the support mask is all foreground or all background), the overall quality of the result remains similar. To demonstrate this, we include additional prediction examples where the support images change while holding the query image fixed (see the Suppl. index.html, Section 13, “Support examples effect”).\n\nQ: **Report the standard deviation on the quantitative results.** Thank you for the suggestion. We've included a table with the mean and standard deviations in the revised Supplementary Material (Section 6.1, Table 6).\n\nQ: **how to construct the visual prompt, how are different image sizes being handled? Are the images being resized?** Yes, the images are resized. The grid has a fixed size (the image size) of 224x224 pixels, and thus to construct the Visual Prompt the support pairs and the query image are first resized and then placed in a grid. We clarify this in the revised manuscript (lines 126-127). \n\nQ: **is the VQGAN pre-trained? 
Or only trained on the 88,635 images from the Computer Vision Figures dataset?** Yes, the VQGAN is pre-trained. Specifically, we use an ImageNet pre-trained VQGAN codebook [3]. We clarify this in the revision (lines 110-112).\n\nQ: **Comparison with baselines**. Thank you for the suggestion; we've included additional comparisons. Please see the shared comment or the updated Supplementary materials for the full information.\n\n\n[3] https://github.com/CompVis/taming-transformers", " We thank the reviewers for their insightful comments, which we incorporated into a new revision of the paper. The reviewers found our approach to Visual Prompting “interesting” (HDgS, v9Mo, WRn2), “novel” (v9Mo, xQdx), and “creative” (v9Mo) and identified it as one of the first works that show “prompting is effective for vision without using language” (WRn2). Additionally, the reviewers noted that the approach is “flexible enough to cover a wide variety of vision tasks” (HDgS, xQdx, WRn2). \n\nThe reviewers requested comparisons to finetuning or few-shot baselines. We follow their suggestion and add finetuning baselines that utilize K={1, 4, 16} training examples for each target class. We also include the results of FWB [1] and CyCTR [2], classic state-of-the-art 1-shot models, which we view as an upper bound of our approach. These approaches utilize a fully labeled base-class train set (2086 to 5883 on different Pascal 5i splits). Additionally, both architectures were designed for the foreground segmentation task (e.g., they operate at higher resolution).\n\nThe results indicate that the Visual Prompting results of MAE-VQGAN trained on Figures are significantly superior to standard finetuning baselines of MAEs pretrained on unlabeled ImageNet. As mentioned in our original submission (lines 79-93), classic few-shot approaches pretrain on a large, clean, tagged base-class dataset and utilize task-specific architectures. Thus, FWB [1] and CyCTR [2] unsurprisingly outperform Visual Prompting, but Visual Prompting is more general as it can be applied to many novel tasks without any fine-tuning. We also include these results in the revised Supplementary Material Section 6.1.\n\n\n\n| Pretraining | #Labeled Examples | #shots | Model | Split1 | Split2 | Split3 | Split4 |\n|---|---|---|---|---|---|---|---|\n| Unlabeled Figures Dataset | 1 | 1 | Visual Prompting MAE-VQGAN | 27.8 | 30.4 | 26.1 | 24.3 |\n| Unlabeled ImageNet | 1 | 1 | Finetune MAE | 11.1 | 13.4 | 13.0 | 12.3 |\n| Unlabeled ImageNet | 4 | 4 | Finetune MAE | 12.9 | 15.8 | 14.3 | 15.0 |\n| Unlabeled ImageNet | 16 | 16 | Finetune MAE | 13.7 | 16.1 | 16.8 | 17.1 |\n| **Labeled Pascal 5i (segmentation masks)** | 2086-5883 (different for each split) | 1 | FWB [1] | 51.3 | 64.5 | 56.7 | 52.2 |\n| **Labeled Pascal 5i (segmentation masks)** | 2086-5883 (different for each split) | 1 | CyCTR [2] | 67.2 | 71.1 | 57.6 | 59.0 |\n\n\n\n\n[1] Nguyen, Khoi, and Sinisa Todorovic. \"Feature weighting and boosting for few-shot segmentation.\" ICCV 2019.\n\n[2] Zhang et al. \"Few-shot segmentation via cycle-consistent transformer.\" NeurIPS 2021.", " This paper proposes a novel approach to adapt a pre-trained visual model to various downstream tasks via “prompting”. In other words, it can be viewed as a one-shot or few-shot learner but without the need for fine-tuning. Similar to NLP systems, they propose to use image inpainting as the pre-training task. 
To prompt for a downstream task, the method concatenates the exemplar pair images and the input image into a larger “visual prompt image”. Notably, they demonstrate that the dataset that the visual model is pre-trained on is important. Specifically, they collected images of paper figures from arXiv papers. Empirically, they demonstrate their approach on various downstream tasks, including foreground segmentation, single object detection, and colorization. # Strengths\nThe proposed approach is novel and interesting. The authors proposed a simple (in a good way) approach to enable prompting on visual models with many analogues to NLP. Personally, I find the insight of training on arXiv paper figures intuitive and creative. Overall, the writing is organized and clear. Finally, the authors also promise to release the data and code, which would benefit the community.\n# Weaknesses\n#### 1. I believe the paper would benefit from a comparison with a one-shot baseline. While the proposed approach does not require fine-tuning, it would be interesting to see how it competes.\n#### 2. The prompt engineering details are unclear. I was only able to get a vague idea of how to construct the visual prompt. Is it possible to document them more precisely? Additionally, how are different image sizes being handled? Are the images being resized? \n#### 3. Missing training details? Specifically, I am wondering if the VQGAN is pre-trained? Or only trained on the 88,635 images from the Computer Vision Figures dataset.\n#### 4. The authors have stated that they repeated the experiment with four random seeds (line 194). It would be great to report the standard deviation on the quantitative results. Also, I wonder how sensitive the approach is to the prompted image. For example, would the approach still work if the cat in Fig. 3 is prompted by an image of a white cat that's outdoors?\n\n# Misc.\n- Line 173: \"224x224\" --> \"224 \\times 224\"\n See #2, #3, #4 from above. The paper contains a limitation section and adequately discusses the shortcomings of the approach. Specifically, the inherent ambiguity when prompting from a single image for a task. ", " The paper aims to generalize the idea of natural language prompting to the vision domain. They propose visual prompting by image inpainting. Given task examples (image, target) and an image query, they construct a new “grid-like image” (i.e., visual prompt) and the inpainting model predicts the masked region such that it is consistent with the example(s). Given that it was trained on the right data, the inpainting model can perform prompting in various vision tasks. One key assumption is that the tasks can be defined as an image-to-image translation problem. To train the in-painting model, they collect a dataset of 88k figures, which is critical for performance. Strengths\n* It is very interesting to see that in-context learning in NLP works for images, too. While the set-up is extremely simple, the paper shows that the pre-trained task of image in-painting is flexible enough to cover a wide variety of vision tasks.\n* It is one of the first papers that shows that in-context learning works for vision. While other works incorporate vision into a language modeling task or focus on using language prompts, I like that this paper takes a “purely vision” approach. One value of this paper is showing that prompting can be effective for vision *without* using language pre-training tasks or language prompts. 
\n* Overall, the paper is well-written and easy to follow.\n\nWeaknesses\n* Having to collect a whole new dataset (Figures) specific to this pre-training task limits scalability, when having a sufficiently large training set is crucial for creating a high-quality pre-trained model. \n* In fact, how the authors perform pre-training seems inefficient. For prompting, closely matching the pre-training and downstream tasks is key. While their prompt for the downstream tasks is “grid-like images” with support examples and a query image, the pre-training task is in-painting with random region masking. This creates an unnecessary domain gap between pre-training and downstream tasks. Can’t we just reformulate existing datasets (e.g., ImageNet) into a grid-like structure and use them for pre-training? \n* The paper lacks baseline comparisons. For Table 1, how does the method compare to the state of the art, fine-tuning, and zero-shot? It is very difficult to assess the value of the proposed method without any comparisons. Essentially, prompting is a method that allows *no additional training*. This paper requires *pre-training a whole new model* specific to the proposed prompting method, which contradicts the value of prompting. Have you tried using the prompt on any off-the-shelf pre-trained in-painting model (i.e., zero-shot)? Or does such a pre-trained model not exist for the proposed prompting method?\n* For colorization, you should measure LPIPS [1] instead of MSE. MSE cannot accurately measure the perceptual similarity between the colored output and the target.\n\n[1] The Unreasonable Effectiveness of Deep Features as a Perceptual Metric, Zhang et al., CVPR 2018. * How does the method compare to baseline methods (state of the art, fine-tuning, zero-shot)? In fact, what is the practical utility of this prompting method? The benefit of prompting is the ability to generalize to a new task *without additional training*. However, the proposed method requires training a whole new model, which essentially contradicts the value of prompting. If the prompting method only works on the proposed pre-trained model, how useful/general is this pre-trained model? Does the proposed pre-training + prompting method outperform training from scratch using the same dataset/model with a task-specific state-of-the-art objective? If not, why do we have to use this prompting method?\n\n* I’m confused about how the datasets (ImageNet, Figures) are processed during pre-training. Are they concatenated as a grid-like image and processed as input, or is only a single image used as input? From what I understand, during pre-training, a single image is used as input, a random region is masked, and the model is trained to reconstruct the image. I believe this is inefficient. For prompting, closely matching the pre-training and downstream datasets is key. Therefore, rather than collecting a whole new dataset (Figures) and doing random region masking, can’t you reformulate existing datasets (e.g., ImageNet) into a grid-like structure and pre-train the model? Have you tried this? How does this performance compare to having the models pre-trained on the Figures Dataset with random masking? This would further reduce the domain gap between pre-training and downstream tasks.
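[Editor's note] To make the grid construction discussed in this thread concrete, here is a minimal sketch of how a 2x2 visual prompt could be assembled before in-painting. This is an illustration only: the function name, layout, and mask value are assumptions, not the authors' actual pipeline (which, per the discussion above, operates on 224x224 prompts and a VQGAN token space rather than raw pixels).

```python
import numpy as np

def make_visual_prompt(example_input, example_output, query, mask_value=127):
    """Assemble a 2x2 grid-style visual prompt.

    Layout (row-major): [example_input | example_output]
                        [query         | masked cell    ]
    All inputs are assumed to be HxWx3 uint8 arrays of the same size.
    The bottom-right cell is filled with a constant and is the region
    the inpainting model is asked to complete.
    """
    h, w, c = example_input.shape
    grid = np.full((2 * h, 2 * w, c), mask_value, dtype=np.uint8)
    grid[:h, :w] = example_input   # top-left: task example input
    grid[:h, w:] = example_output  # top-right: task example output
    grid[h:, :w] = query           # bottom-left: new query image
    # Bottom-right stays masked; the model's completion is the prediction.
    mask = np.zeros((2 * h, 2 * w), dtype=bool)
    mask[h:, w:] = True
    return grid, mask

# Toy usage with random arrays standing in for real images:
ex_in = np.random.randint(0, 256, (111, 111, 3), dtype=np.uint8)
ex_out = np.random.randint(0, 256, (111, 111, 3), dtype=np.uint8)
q = np.random.randint(0, 256, (111, 111, 3), dtype=np.uint8)
prompt, mask = make_visual_prompt(ex_in, ex_out, q)
assert prompt.shape == (222, 222, 3) and mask.sum() == 111 * 111
```

The same assembly could be applied to reformulated ImageNet pairs, as the reviewer suggests; only the source of `example_input`/`example_output` would change.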
I think the key limitations of this work are the domain gap between the pre-training and downstream tasks, scalability (the pre-trained model specifically requires figure-like training images), and possibly performance (does this method outperform training from scratch using the state of the art?). While I really like the pure vision approach for prompting, these limitations make me question the practical utility of this work. ", " The paper proposes a dataset constructed from figures from computer vision and machine learning arXiv papers, filtered for grid-style figures. This allows training an image in-painting model on a large variety of input-output examples (contained in one image). The model can then be used to solve several popular tasks by “prompting”, in the sense that an image is constructed as a grid of input-output examples with the last output being masked for in-painting. The model prediction can then be interpreted as a task prediction. \nThe paper contains a broad evaluation of different architectures, tasks, and prompts, and shows that in many arrangements the final model produces good results when trained on this dataset vs. ImageNet.\n Strengths\n\nThe paper shows a straight-forward way to train an inpainting model to be used for visual prompting at inference time. This idea is quite interesting and the trained model shows reasonable performance on several tasks.\n\nThe paper evaluates several architectures, training datasets, prompting schemes and tasks, and analyzes all major design choices to show that the final model performs best.\n\nThe proposed dataset of figures from arXiv papers could be of broader interest to the research community.\n\n\nWeaknesses\n\nFew-shot learning: The paper claims that models trained like that are few-shot learners. However, this statement is based on a very stretched definition of few-shot learning. The most common example of few-shot learning in computer vision is classification, where a model is trained on a dataset of training classes with images. During test time, the model is shown a few annotated examples of a previously unseen class and has to produce a classifier that can recognize this class. The important difference is that in this setup (and others such as few-shot detection, semantic segmentation, etc.), while the task remains the same, the model is presented with unseen categories at test time. In this paper the setup is quite different, as the evaluation does not test unseen categories. Without a proper analysis of the dataset (which is missing) it is unclear which categories, tasks and even images have been seen during training. As most of the tasks that are evaluated in the paper are highly popular computer vision tasks with hundreds of papers on arXiv (segmentation, colorization, detection, inpainting, edge detection), there is a high chance that the model simply recognizes the task from the prompt instead of learning a new category or a new task from the given examples.\n\nDataset: There is very little information about the collected dataset in the paper and appendix, which makes it not only difficult to assess the findings of the paper but also leaves many questions on the dataset itself. I strongly suggest creating a “datasheet for datasets” for this dataset. For example: it is not clear if the license of the arXiv papers has been taken into account before extracting the figures. It is unclear if copyright and attributions are retained in the dataset. 
By nature, the dataset will consist of many validation/testing images from popular datasets; thus, training a model on this data will likely skew its performance when evaluated on the val/test sets for downstream task performance.\n\nLearning Signal: This point is related to the content of the dataset. Figures in papers often compare the results of the method with the ground truth and other methods. With this prompting approach, a model could learn to predict imprecise results, since papers most often show (imperfect) predictions rather than ground truth. It could be interesting to investigate how much this affects the method. Potential ideas could be adding labels (e.g. “GT”, “pred”, “ours”) to the columns.\n The questions here directly reflect the weaknesses above.\n1. What is the definition of few-shot learning in this paper and how does it relate to the existing approaches to few-shot learning?\n2. How can we make sure that the evaluation of the model uses unseen categories/tasks/images if the dataset is sourced from results of other papers?\n3. How was the dataset collected? What does it contain? Were individual licenses taken into account?\n4. Does the model pick up a bias towards artificially predicting imprecise results to emulate predictions? The paper contains a section on technical limitations of the work but does not discuss the societal impact of the work or the dataset. A datasheet is strongly encouraged, as well as a clearer description of how the dataset was collected and filtered. ", " This paper proposes a uniform framework for various image tasks, such as colorization, edge detection, inpainting, segmentation, and style transfer. This paper treats different image tasks as an inpainting task and achieves good performance. The authors also collect a dataset for training models. (+) The authors propose a smart way to solve various image tasks by treating them as an inpainting task. This paper is very impressive. The idea is novel and the paper is well-written. The results are convincing. \n\n(-) Some method details and implementation details could be added to improve this paper.\n\n(-) This paper does not compare with the standard methods for image segmentation, colorization, object detection, etc. For example, Mask R-CNN [1], Mask2Former [2], [3], and so on. The authors should add comparisons with these standard methods and the state-of-the-art methods in each domain.\n\n[1] Mask R-CNN\n[2] Masked-attention Mask Transformer for Universal Image Segmentation\n[3] Real-Time User-Guided Image Colorization with Learned Deep Priors\n How did you combine the MAE and VQGAN? During training, did you train the VQGAN first, and then use the MAE to reconstruct the visual tokens? During inference, do you encode the whole combined image using the VQGAN first, then use the MAE to fill in the missing part (visual tokens instead of pixels), and then use the VQGAN decoder to reconstruct the image? \n\nWhy do you use the MAE to fill in the missing parts? Why not just use the transformer from the VQGAN to generate the missing parts directly? Is the MAE part necessary?\n\nHow do you assign probabilities to visual tokens in L111? \n\nIs the prediction of $\hat{z}_k$ related to $\hat{z}_{k-1}$ in the eqn under L113? Is each $\hat{z}_k$ predicted based only on $x$ and $m$?\n\nIn the eqn right above L119, is $m$ a mask of the part that needs to be filled?\n\nIf the authors can address all the above questions well, I think this is a good paper to be accepted. \n\n Some model details are missing. 
For example, the model details of the MAE and VQGAN components mentioned above. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 6, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 5 ]
[ "nOYiCMM0v6", "zX1tE3bIsMx", "wMnbkVwkKcS", "drBS-fm3r-", "C_HOQ1Tx4JW", "mK7IwkyFCfG", "mK7IwkyFCfG", "loUu1RsaBx", "nips_2022_o4uFFg9_TpV", "lS_JXTtMECsQ", "reiRXFhTdXT", "p54Qo0FPIyf", "PFLRw_rmwCZ", "OKdVWUlnpkD", "nips_2022_o4uFFg9_TpV", "nips_2022_o4uFFg9_TpV", "nips_2022_o4uFFg9_TpV", "nips_2022_o4uFFg9_TpV", "nips_2022_o4uFFg9_TpV" ]
nips_2022_sBrS3M5lT2w
Global Convergence and Stability of Stochastic Gradient Descent
In machine learning, stochastic gradient descent (SGD) is widely deployed to train models using highly non-convex objectives with equally complex noise models. Unfortunately, SGD theory often makes restrictive assumptions that fail to capture the non-convexity of real problems, and almost entirely ignore the complex noise models that exist in practice. In this work, we demonstrate the restrictiveness of these assumptions using three canonical models in machine learning, then we develop novel theoretical tools to address this shortcoming in two ways. First, we establish that SGD's iterates will either globally converge to a stationary point or diverge under nearly arbitrary nonconvexity and noise models. Under a slightly more restrictive assumption on the joint behavior of the non-convexity and noise model that generalizes current assumptions in the literature, we show that the objective function cannot diverge, even if the iterates diverge. As a consequence of our results, SGD can be applied to a greater range of stochastic optimization problems with confidence about its global convergence behavior and stability.
Accept
This paper analyzes the asymptotic convergence behavior of SGD on the class of locally Hölder continuous functions, by generalizing the technique and results of Patel [2021]. The paper extends and generalizes prior SGD analyses that were conducted under the assertion that certain conditions (e.g. smoothness or continuity) hold globally. The reviewers found that the results are well presented, correct, and of interest to the community. The results could stimulate further research on SGD analyses on function classes more relevant to neural network training or deep learning, and also on non-asymptotic analyses of SGD on the function class studied in this paper. The internal discussion brought up a few concerns that should be carefully addressed when preparing the final version: - some reviewers found the examples a bit overclaimed and not clearly showing the necessity of considering locally Hölder continuous functions. For instance, analyses that do not require a bounded variance assumption have become standard in recent years (see for instance the textbook by Bottou, Curtis, & Nocedal that is cited in the paper), and thus Example 1 (Linear Regression) could be a bit misleading, as it brings up an already solved issue. Please relate the examples carefully to the (novel) contributions in this paper, - and please mention and explain the relation to the arXiv preprint https://arxiv.org/abs/2104.00423.
train
[ "O2OicW0ZLFE", "ukv3Y5rqi6", "0FReLw9wwmR", "HTMCPe2LI-jV", "M67HB-LgoJT", "RmUXd1teTLm", "11DG6LkUGBY", "XwP89UB_z0", "XNZE8oqObG1", "XgwwNMa23GC", "I_B_k9JNgaZ", "GevNcbgzdkv", "b3LV9KWZxMs", "byr-1ZcQa9" ]
[ "official_reviewer", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nthank you for the detailed response and the clarification. I may have underestimated the difficulty of some technical details. I updated my score. This may change again during the discussion phase.\n\nBest regards,\n\nReviewer JXLB", " thank you for your answer which mostly clears up the question I had. My opinion on the paper remains that it's a nice one which clears the bar for publication at NeurIPS this year.", " Thank you for your consideration. \n\nJust as a final point that is more important than the score: we hope that we have convinced the reviewer that the local Lipschitz assumption cannot be relaxed further in realistic ML contexts as shown by our examples (excepting when differentiability no longer exists, which would require **weaker** assumptions than local Lipschitz continuity of the gradient) and by our examples for the reviewer. As a result, owing to the \"No Free Lunch Theorem\" (Wolpert, D.H., Macready, W.G. (1997), IEEE Transactions on Evolutionary Computation), we cannot even have a rate of convergence even if the noise were to be zero (i.e., the deterministic context). Thus, any *realistic* rate of convergence would have to be very example dependent and would not apply to, say, a three-layer or deeper feed forward network that makes use of (smoothed) ReLU functions.", " I would like to thank the authors for their detailed response. I do believe that these assumptions are still not satisfactory. My point is that the authors may strengthen the assumptions a bit which may yield a guarantee of convergence speed. Nevertheless, I have pointed out that this paper has its own value. So I will keep the rating unchanged.", " The reviewer is *partly* correct about the intuition. We discuss the statements made by the reviewer below.\n\n# 1. Iterates are either bounded or diverge\n\nThere are three cases, not just boundedness or divergence. We go through these cases first based on the reviewers comments before discussing the third.\n\nOne cannot predetermine which $R$ we need to choose. Let us consider a one-dimensional example, $F(\\theta) = 0$ and so $\\dot F(\\theta) = 0$, and we let $\\dot f(\\theta,X) = X$ where $X$ is a normal random variable. The iterates will converge to a normal random variable with mean zero and variance $\\sum M_k^2$. Of course, there is no clear choice of $R$. This becomes a bit more interesting if we let $F(\\theta) = \\sin(\\theta)$ and $\\dot f(\\theta,X) = \\cos(\\theta) + X$. Now, we can approach any of the elements in $\\lbrace \\frac{3\\pi}{2} + 2 k \\pi : k \\in \\mathbb{Z} \\rbrace$. Here, there is not apriori choice for $R$ that makes sense as we cannot know it. Thus, *even if the iterates are bounded, there does not exist an finite $R$ that guarantees all iterates remain within a ball of radius $R$ around the origin*. There is still another nuance that needs to be carefully handled discussed below in **5**.\n\nThe second case is that the iterates can diverge. This actually offers several interesting possibilities, of which we provide sufficient conditions for \"good\" behavior through Assumption 5 and Theorem 3.\n\nThe third case is that our iterates can visit infinity and return, and then go back. That is, the limit infimum and supremum are at the extremes. Think of a reflected random walk. With diminishing step sizes, this seems like it could not happen. But what if our noise is allowed to grow arbitrarily large as the iterates get farther and farther away? Then, the noise could outpace the diminishing step size. 
In effect, we could end up with something that behaves similarly to a reflected random walk, in which the iterates get pushed back towards the origin (but maybe not too close) and then pushed towards infinity. *Because of the generality of the noise, especially asymptotically, this case is non-trivial to preclude, but it is precluded through Theorem 1.*\n\n# 2. Diminishing Step Sizes and Bounded Variance Imply Convergence\n\nThe fact that we would converge to a stationary point should not be taken for granted. Consider the extreme of no noise and infinitely small step sizes (i.e., continuous gradient descent). In this case, the iterates will converge to a limit cycle (see Palis-de Melo's book, Chapter 1, Example 3). The fact that stochastic gradient descent behaves strictly differently **was not a priori known**. For instance, the work of Mertikopoulos et al. (2020) in NeurIPS does not claim convergence to a stationary point, as they could not discount the possibility of a limit cycle. We are able to preclude limit cycles and cycles in general. **Again, such a statement is not obvious, but we are able to prove it.**\n\n# 3. Choosing the worst constants over all $R$\n\nTake the linear regression case. Here, the variance goes off to infinity as we must let $R$ go to infinity (see the previous point). So the worst case of the constant is $\infty$, and the argument would fall apart as you have stated it. One could artificially restrict the iterates to a region, and then say \"with high probability\", but this would be a weaker qualitative statement than what we have presented.\n\n# 4. Finding a Rate\n\nThis would also be a qualitative statement, as $R$ cannot be chosen a priori, so the rate can become arbitrarily bad. Moreover, we could try to make a \"with high probability\" statement, but this would be a weak qualitative statement, as we would not know where to set the cut-off or how awful $R$ would be. Thus, the rate can get arbitrarily bad as the constants can get arbitrarily bad (see **3** above).\n\n# 5. Arguments with bounded $R$\n\nSuppose we fix the set $R$ and we look at all iterate sequences that remain within $R$. How? This would require knowledge about the future of the iterates, so if we went to take (conditional) expectations, we would not even know how to calculate them. \n\nAs we see, the intuition is partly correct, but the natural arguments that flow from it fall apart quickly. Our work handles these subtleties carefully and thoughtfully, which we hope that we have conveyed. We hope the reviewer will reconsider their position. ", " We thank the reviewer for providing references to support their points. We find that this is rarely done, and often claims are just made. So thank you for doing so. We address the comments below.\n\nFor [1,2]: The ($L_0,L_1$)-Smooth condition is not satisfied by Example 2 (remove the penalty term for simplicity), which is a really simplified feed-forward network. To see this, we can use any norms, owing to the equivalence of norms. The Frobenius norm of the Hessian can be lower bounded by the absolute value of its [1,1] entry, which is\n$$\frac{0.5 W_4^2 W_3^2 W_2^2}{( 1+\exp(W_4 W_3 W_2 W_1) )^2}$$\n\nThe $\ell^1$ norm of the gradient is\n$$\frac{0.5}{1+ \exp(W_4 W_3 W_2 W_1)} \left[ |W_4 W_3 W_2| + |W_4 W_3 W_1| + |W_4 W_2 W_1| + |W_3 W_2 W_1| \right]$$\n\nNow consider when $W_1 = -1$ and $W_4 = W_3 = W_2$. 
Then, the lower bound on the norm of the Hessian is\n$$\frac{0.5 W_2^6}{ ( 1 + \exp(-W_2^3) )^2}$$\n\nAnd the gradient is\n$$\frac{0.5}{1 + \exp(-W_2^3)} \left[ |W_2^3| + 3|W_2^2|\right]$$\n\nThus, as $W_2 \to \infty$, we see that the lower bound on the Hessian cannot be bounded by the gradient. Hence, this **simple** feed-forward network does not satisfy the assumptions of [1,2], specifically the $(L_0, L_1)$-Smooth assumption. To suppose now that it would apply to more complicated, deeper feed-forward networks would be hard to believe.\n\nFor [3,4]: This assumption is generalized by expected smoothness from Peter Richtarik's group. Here is a simple example on a bounded domain where such a condition is not satisfied but our assumption allows us to optimize over the entire interval. Consider $f(\theta,X) = \theta^X$ where $X$ is an exponential random variable with parameter $1$, and let $\theta \in [1, e^{2/3}]$. Then, \n$$F(\theta) = \frac{1}{1 - \log(\theta)},$$\nwhich is minimized when $\theta = 1$. Moreover, its gradient is also computable and finite on the interval. Now, the variance of $\dot f$ does not exist on $[e^{1/2}, e^{2/3}]$, hence the conditions of [3,4] are not satisfied. However, our noise assumption **is** satisfied when we reduce the Hölder constant to be less than 1/2. (Note: restricting SGD to the interval actually simplifies our analysis.) \n\n\nWe hope these examples shed light on why the analysis is needed, and why the referenced examples can fall short in some situations that are likely to occur (especially [1,2]). ", " Thank you for your comments!\n\n*Suggestions for Improvement*\n- On counter-examples: line 36 says that the details are in Appendix A. We will remind the reader of this again in the revision.\n- We will try to reduce the size of the introduction/Sections 2 and 3 in the revision.\n\n*Questions*\n- No, the ridge penalty is not required. We included it as it more closely approximates what is done in practice. \n- L92: There are likely two things that can occur. First, Assumption 5 may not be satisfied (which we are working on, as we state in the limitations), which means the iterates may yet diverge. Second, there is a difference between the numerical behavior of SGD and its theoretical behavior. To be specific, Patel (2017), 'The impact of local geometry and batch size on the convergence and divergence of stochastic gradient descent', shows that for large initial step sizes, SGD can diverge very far away from a minimizing region. If this happens, the numerical calculations can cause further problems before the theoretical convergence regime of diminishing step sizes kicks in. Hence, even though we should expect SGD to converge *in theory*, the preliminary step sizes can cause SGD to first blow up, and the resulting numerical problems can then prevent the convergence from occurring.", " *Comment on perceived weaknesses:*\nAs SGD is widely deployed, especially since it is one of two algorithms that achieve high-quality generalization in deep, over-parametrized networks (see Suvrit Sra's MIT group's recent works), we critically need to understand what it is doing. We address your suggested weakness in two points.\n- There are examples that satisfy Assumptions 1 to 4 that will clearly let the iterates diverge and **for which this is the appropriate behavior**, for instance $\exp(-\theta^2)$. Hence, we cannot exclude it a priori. 
Perhaps more interestingly, there are examples for which the iterates will diverge and **for which it is not an appropriate behavior.** Thus, we have introduced Assumption 5 and its consequence (Theorem 3), which states that we will only converge to regions of zero gradient and finite objective function. Given that the three examples satisfy Assumptions 1 through 5 and their objective function diverges as the argument goes to infinity, we can state as a straightforward consequence of Theorem 3 that the iterates **cannot diverge**. Hence, we need only verify that the objective function satisfies Assumption 5 (say, because of a regularization term in the objective, as noted in Lines 281 to 282) to ensure that the iterates do not diverge. This would certainly be a clear practical application, and we can expand on Lines 281 to 282 to make this clearer and to **add a deeper discussion about how to use our results in practice.**\n- There are a handful of possible behaviors: (1) divergence (ADAM can do this on a 1-dimensional convex problem), (2) convergence to a stationary point, (3) convergence to a nonstationary point (easy to construct examples), (4) entering a cycle (gradient descent can do this), and (5) convergence to a limit cycle (continuous gradient descent can do this). If (3) or (4) could happen, then there would be no point in any saddle point or local maximum escape work, or even stopping conditions. Thus, we only want (1), (2) or (5) to be possible (and (5) only if it is a minimizing limit cycle). A priori, *we do not know which outcome can happen* for SGD. To be able to say that only (1) and (2) can happen is somewhat surprising, and useful for escaping non-minimizing points and developing stopping conditions. The fact that (5) is precluded is also interesting, as it goes against the intuition that continuous gradient descent can be used as a proxy for SGD (see works starting with Chizat & Bach 2018 that have not been as careful as this work). A small simulation sketch of these behaviors follows this response. \n\nAs this paper is primarily concerned with the theory of a widely deployed algorithm, we did not think it necessary to include experimental results.\n\n*Question:* We think the reviewer's question points out the utility of the understanding of SGD's behavior that we have provided. In the example given, SGD is driven by IID mean-zero noise with finite variance. Our theorem states that SGD would then converge to a unique point in this region, which coincides with the conclusion of Kolmogorov's three-series theorem. \n\nWe hope that we have satisfied the reviewer's concerns and questions.", " 
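[Editor's note] To make the dichotomy of behaviors in the response above tangible, here is a minimal, self-contained simulation of the two one-dimensional examples mentioned in this thread. The step-size schedule, seed, and iteration count are assumptions made for the sketch; they are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd(stoch_grad, theta0=0.0, n_steps=200_000):
    """Plain SGD with diminishing steps M_k = (k+1)^(-0.7), so that
    sum M_k = infinity while sum M_k^2 < infinity (the Robbins-Monro
    regime assumed in the discussion above)."""
    theta = theta0
    for k in range(n_steps):
        theta -= (k + 1) ** -0.7 * stoch_grad(theta)
    return theta

# F(theta) = sin(theta) with stochastic gradient cos(theta) + X, X ~ N(0, 1),
# as in point 1 of the thread. The minimizers are {3*pi/2 + 2*k*pi}.
theta_hat = sgd(lambda t: np.cos(t) + rng.standard_normal())
print(theta_hat, np.sin(theta_hat))  # expect sin(theta_hat) close to -1

# For F(theta) = exp(-theta^2), divergence of the iterates is the appropriate
# behavior, but the gradient -2*theta*exp(-theta^2) decays so quickly that
# the divergence is far too slow to observe numerically: the iterate drifts
# away from the maximizer at 0 and then barely moves.
theta_hat = sgd(lambda t: -2.0 * t * np.exp(-t * t) + rng.standard_normal())
print(abs(theta_hat))  # > 0, but still finite after n_steps
```

The first run illustrates behavior (2) from the list above; the second illustrates why behavior (1), though guaranteed asymptotically for this objective, need not be visible in any finite simulation.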
", " This paper analyzes the global convergence of SGD under more practical assumptions, for the ease of studying the realistic performance for training with non-convex function and noisy feedback. Specifically: 1) it relaxes the global Lipschitz continuity of the gradient function to local $\\alpha$-Holder continuity; 2) it relaxes the bounded variance assumption to that the $(1+\\alpha)$-moment of the stochastic gradient is bounded by an arbitrary upper semi-continuous function. Hence, the authors provide a new analysis that SGD will either converge to a stationary point or diverge. To ensure the convergence, an additional assumption that generalizes the notion of expected smoothness is introduced. Strengths:\n\n1. The authors provide three typical losses: linear regression, feed-forward neural networks and recurrent neural networks, to demonstrate the invalidation of current assumptions for analyzing the global convergence of SGD. This shows that current analyses can be applied to practical machine learning problems.\n2. The authors give a solid analysis of the global convergence and high-level descriptions for the proof. \n3. Theorem 1 is a strong result than I thought when I just read the introduction. Provided that the model parameters are usually finite, we can infer that the model converges globally from Theorem 1. \n\nWeakness:\n\n1. Proposition 1 is built by setting $\\|\\theta\\|_2 \\rightarrow \\infty$, which is not the typical situation for training deep learning models, since the model parameters usually are finite during the training process. \n 1. How does assumption 5 hold for the cases in Example 1,2,3?\n2. It is better to give full definitions of $\\alpha$-holder continuous and semi-continuous function.\n The main limitation is the interpretation of Assumption 5, which is left for future works.", " The paper considers the stability of SGD under more general conditions noise models than have been considered before. The paper first demonstrates three common ML optimization were the convergence of SGD has not been established. Then the paper explores the convergence of SGD under quite general condition, e.g., relaxing both the assumption of Lipschitz continuous gradients and bounded variance, which are usually critical in establishing convergence results for SGD. The results establish that under such general conditions, SGD either convergence to a stationary point or they diverge to infinity. Finally, the paper suggests a technical condition that, if met, ensures convergence to a stationary point. Strengths: The paper is mostly well written. The paper is very upfront about the limitations of the work. The mathematical results seem solid. The problem that is considered here is very general and it is indeed that it is possible to establish some convergence results in this case.\n\nWeaknesses: It is difficult to see how the results could be useful in practice, because it is impossible to say before running the algorithm whether it converges to stationary point or diverges. In fact, the only thing the results in the paper exclude is that the iterate move around without diverging and without settling on a stationary point. \n\nThere are no-indepth discussion about how the insights here could be used in practice. There are also no experimental results. I am a bit surprise that the SGD will always converge to a unique stationary point (given that it does not diverge). 
If there is a connected set of stationary points, is it not possible that the iterates do not converge to any one of them but rather move around within that set? I mean, e.g., if $F: \mathbb{R} \to \mathbb{R}$ and $[0,1]$ is the set of stationary points, with $F'(\theta)=0$ for all $\theta$ in $[0,1]$, I could imagine that the iterates of SGD would after some time start to move around randomly in $[0,1]$. According to your results, this is not possible? Or am I misunderstanding something? No major limitations that I could see.", " The paper presents an analysis of stochastic gradient descent for a wide range of applications by removing several usual but problematic assumptions from the analysis, replacing them with more acceptable alternatives. The issues with the current assumptions are illustrated for 3 common problems. Somewhat surprisingly, the authors are able to prove convergence and stability under the new, more permissive assumptions, though not a convergence rate (which wouldn't be possible under these assumptions anyway). The paper is well written and easy to follow (at least superficially). The improved assumptions mean that the analysis is a lot more widely applicable than previous attempts, which relied on violated assumptions and were thus of less value. The results are original and significant, as are the new methods used to prove the theoretical results.\n\nI do have a few suggestions for improvements though:\n- when presenting the 'counter-examples' in the main text, all the authors say is that they have a result that shows there is a counter-example, without even referring to the Appendix for the actual example. I wonder if a simplified version of the counter-examples themselves wouldn't improve the main text, which currently reads as 'trust me on this'.\n- there is a lot of introductory space in the paper, and we only get to the meat of it on page ~6, with many results devolved to the Appendix. If the authors want a self-contained paper, then a journal format would be a better fit. For a conference paper, summarizing the first few pages more succinctly might read better. 1. For 2 counter-examples, a Ridge penalty is mentioned. Is it strictly necessary? If that's the case, then this weakens the argument that previous assumptions were not good enough.\n2. l92, where the authors claim that SGD basically always converges, seems somewhat surprising for deep learning practitioners. Can the authors shed some light as to why there seems to be a difference with practice (where SGD certainly can blow up)? The authors mention the limitations of their work several times, notably making clear that the technical assumption that they require for their analysis cannot easily be interpreted. In that sense, I think limitations are honestly and properly addressed.", " This paper studies the convergence of stochastic gradient descent for smooth non-convex stochastic optimization. The main contribution of this paper is that it relaxes the commonly-used assumptions in proving convergence: (1) global smoothness of the objective function, (2) bounded variance of the stochastic gradient. They are replaced by much weaker assumptions, i.e., local Hölder continuity of the gradient and a bounded $(1+\alpha)$-moment of the stochastic gradient. The authors prove that SGD with a diminishing step size must either converge to a stationary point or diverge. They further add a technical assumption and prove a stability result, showing that the objective function cannot diverge even if the iterates diverge. **Strengths**:\n1. 
This paper is well-written. The motivation is clear and the presentation is concise.\n2. It seems that the results and technical contribution are non-trivial.\n\n**Weaknesses**:\nThe results seem to be not very interesting. While the paper gives convergence results, it does not say anything about the convergence speed (though the authors have pointed out that this is impossible under such general assumptions). In such a situation, my concern is: are the general assumptions really necessary? For example, modern optimization problems for neural networks are clearly not globally smooth, but relaxing this to locally Hölder continuous gradients seems too general. Several works have considered more restrictive assumptions, e.g. generalized smoothness [1,2], under which optimal algorithms have been proved. Analogously, several works relax the assumption of bounded gradient variance to an unbounded one that is locally proportional to the gradient magnitude [3,4] (which is satisfied by your first example), in which case optimal algorithms are also developed. \n\nBut this paper clearly has its own value. Currently I haven't delved into the proof details and I may not correctly understand the implications of these results, so I toned down the level of confidence.\n\n[1] Why gradient clipping accelerates training: A theoretical justification for adaptivity. ICLR 2020.\n\n[2] Improved Analysis of Clipping Algorithms for Non-convex Optimization. NeurIPS 2020.\n\n[3] The Power of Adaptivity in SGD: Self-Tuning Step Sizes with Unbounded Gradients and Affine Variance. COLT 2022.\n\n[4] Non-convex Distributionally Robust Optimization: Non-asymptotic Analysis. NeurIPS 2021. N/A N/A", " The authors analyze the convergence and stability of stochastic gradient descent under certain assumptions. In particular, they consider the function to be *locally* Hölder-continuous (in opposition to global assumptions), and they relax the boundedness of the variance. In particular:\n\nUnder reasonable assumptions on the stochastic gradient descent (SGD) algorithm (with diminishing step size), the authors show the iterates of SGD either converge (to a stationary point) or diverge to infinity. Under another technical assumption, they also show convergence with probability one. The paper claims it uses less restrictive assumptions than other existing results. I agree with the authors; they generalize the analysis of SGD to a broader class of functions. However, my biggest concern is the (kind of hidden) assumption that most results hold under $|\theta_k| < \infty$.\n\nMoreover, the result of Theorem 1 seems intuitive, given that we are working with a diminishing step size. I may be wrong on this point, however. Here is my intuition. \nEither:\n- the iterates are unbounded, or\n- the iterates are bounded and, in that case, contained in a compact set of radius at most R. Since we have a diminishing step size and a bounded variance, it seems reasonable to believe that, eventually, the iterates stop moving. Moreover, I think that in such a case, it is possible to write a rate of convergence for SGD to a stationary point that depends on R (which justifies Theorem 2).\n\nIn particular, the assumptions made in this paper are that f is locally Hölder-smooth and the variance is locally bounded - the authors justify introducing those hypotheses to contrast with similar results that assume global bounds. However, combining local assumptions and $|\theta_k| < R$ is (almost) the same as assuming global constants, as we can take the worst case of those constants over R. 
Do the authors believe it is possible to remove the assumption that $\max_k |\theta_k| < \infty$? See the \"strengths & weaknesses\" section." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 5, 7, 5, 5 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 2, 2, 3 ]
[ "M67HB-LgoJT", "11DG6LkUGBY", "HTMCPe2LI-jV", "RmUXd1teTLm", "byr-1ZcQa9", "b3LV9KWZxMs", "GevNcbgzdkv", "I_B_k9JNgaZ", "XgwwNMa23GC", "nips_2022_sBrS3M5lT2w", "nips_2022_sBrS3M5lT2w", "nips_2022_sBrS3M5lT2w", "nips_2022_sBrS3M5lT2w", "nips_2022_sBrS3M5lT2w" ]
nips_2022_qj-_HnxQxB
Functional Indirection Neural Estimator for Better Out-of-distribution Generalization
The capacity to achieve out-of-distribution (OOD) generalization is a hallmark of human intelligence and yet remains out of reach for machines. This remarkable capability has been attributed to our abilities to make conceptual abstraction and analogy, and to a mechanism known as indirection, which binds two representations and uses one representation to refer to the other. Inspired by these mechanisms, we hypothesize that OOD generalization may be achieved by performing analogy-making and indirection in the functional space instead of the data space as in current methods. To realize this, we design FINE (Functional Indirection Neural Estimator), a neural framework that learns to compose functions that map data input to output on-the-fly. FINE consists of a backbone network and a trainable semantic memory of basis weight matrices. Upon seeing a new input-output data pair, FINE dynamically constructs the backbone weights by mixing the basis weights. The mixing coefficients are indirectly computed through querying a separate corresponding semantic memory using the data pair. We demonstrate empirically that FINE can strongly improve out-of-distribution generalization on IQ tasks that involve geometric transformations. In particular, we train FINE and competing models on IQ tasks using images from the MNIST, Omniglot and CIFAR100 datasets and test on tasks with unseen image classes from one or different datasets and unseen transformation rules. FINE not only achieves the best performance on all tasks but also is able to adapt to small-scale data scenarios.
Accept
This paper tackles OOD generalisation through a mechanism for analogy-making in functional spaces rather than the data space. It involves the construction of a functional framework that maps inputs to outputs, abstracting the transformation between inputs and outputs through a separate (hyper)network which provides the weights of the mappings. It further contributes a benchmark for evaluating OOD generalization on IQ tasks. The reviewers agree that the paper tackles an interesting and relevant problem with the perspective of functional indirection, and the IQ task does appear challenging. The primary outstanding issue with the work appears to be the class of models they compare against: there exists work in meta-learning (e.g. CAVIA, MAML) that ought to be discussed. If these are not appropriate comparisons, it is crucial that this is explained, because from a functional perspective, they are very similar. Indeed, as Reviewer kR5Y points out, there are clearly relevant pieces of work that ought to figure as comparisons here. Simply speculating that 'it is not clear...can deal with unseen transformations' will not do; this ought to be established in order for the paper to stand strongly on its own. And while I'm somewhat inclined to buy the proposition that PGM etc. can have 'shortcuts', I would perhaps have still expected evaluations on these in addition to the IQ task setup to mitigate the claimed deficiencies in benchmarks like RAVEN or PGM. The authors also provided additional experiments over more extreme OOD settings and tempered some imprecise statements to address reviewer concerns, which was good. On balance, though, it appears as if the paper has more merits than issues, and most of the issues raised could be addressed with some work. I would strongly urge the authors to actually make the edits for comparison and incorporate the additional experiments over existing benchmarks from the rebuttal into the manuscript, as requested by the reviewers.
train
[ "mVN4bC131bK", "H-pFzss_ahx", "Yt2ZOKcurdf", "4ED3Na8nC6", "1nwv-2EIflv", "RhsZuSkEZv3", "JqP5oIrk1Q2", "LQTFhq0IHCt", "s6DLck9nQ_2", "KEcjMZU5PJ" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for your detailed response. The authors have addressed most of my concerns, so I would like to raise my score from 5 to 6.\n\n", " We thank the reviewer for your detailed and insightful comments. We have updated our manuscript and would like to address the reviewer's concerns as follows: \n\n• “Evaluations on unseen transformations would be a valuable addition to the paper”: We thank the reviewer for the constructive suggestion. We agree with the reviewer and have added more extreme OOD tasks in the revision. In particular, the testing samples consist of unseen rules during training (as suggested by the reviewer), and we further construct testing problems with objects from unseen datasets (i.e. train and test on different datasets). Overall, our model continues to outperform competing models with an average margin of 7.9% to the runner-ups, which shows a clear superiority of our model on generalizing OOD. Please refer to Section 4.3 for more details. \n\n• “It would be a fair comparison to provide results on all models without NICE layers”: We have added experimental results of FINE with 2-layer MLP backbone in the revision. Overall, FINE with MLP backbone still performs better than competing models: on Omniglot dataset, FINE-MLP achieves the best performances on 2 single-transformation tasks, while maintaining second place (below FINE with NICE backbone) in the rest 7 single-transformation tasks; on CIFAR100, FINE-MLP achives the first and second place in 2 and 6 single-transformation tasks, respectively. This shows clear advantages of learning on functional spaces. Please refer to Section 4.1 and 4.2 for more details. \n\n• “The training setup for baseline models should be described in the supplementary material”: We have added the training setup in the Supplementary.\n\n• “The choice of using 2 NICE layers instead of one layer is not fully explained”: Since each NICE layer only changes half of the input, we need at least 2 NICE layers to completely transform the input. In the experiments, we use 4 NICE layers to balance the complexity of the backbone and computational cost. We have added this detail in Section 4.\n\n• “The vectors used in the analogy step are output from $\\gamma_t(y)$. The motivation and significance of this choice are not explained.”: For an input-output pair $(x, y)$, we would like to estimate the weights of the neural network transforming $x$ to $y$, which means $x$ should the input for the first layer and $y$ should be the output of the last layer of the neural network. Hence, the outputs of intermediate layers are unknown. A common technique (e.g. see [1]) to estimate the intermediate outputs is to treat them as a function of $y$, i.e. $\\gamma(y)$ where $\\gamma$ is a trainable neural network.\n\n• “Can this framework be adapted and evaluated on OOD generalization in image classification or other tasks in ML ?”: The IQ problems introduced in our paper can be considered as an instance of one-shot learning in which the model is given a single hint. Therefore, we believe that FINE has the potential to successfully deal with few-shot learning tasks. However, for the clarity we did not include tasks other than ones introduced in the paper so that we can emphasize our 2 main contributions: the FINE dataset and FINE model. We believe applications of FINE on few-shot or meta-learning tasks are interesting and worth to investigate in future work. \n\n• “The first 2 paragraphs of section 3.1 could reformulated for clarification”: Thank you for the suggestion. 
We have revised the two paragraphs based on your thoughtful comment. In short, we clarify that learning a single function may be infeasible in OOD scenarios when there are new relations in the testing set, and that learning to compose functions adaptively may have great potential on OOD tasks. Please refer to Section 3.1 for more details. \n\nTo summarize, in the revision, we have added more extreme OOD tasks with unseen rules and unseen datasets, which we believe is a valuable addition to our manuscript. We also rewrote Section 3.1 to improve clarity. We would like to emphasize that our paper makes 2 main contributions: first, we propose a new dataset for testing OOD generalization. Despite its simple appearance, our dataset remains challenging for most of the models, and moreover possesses great flexibility to increase problem difficulty to fit different OOD levels (e.g. unseen objects in the same dataset, unseen rules, unseen objects from unseen datasets). Second, we propose the FINE model that operates on functional spaces to learn to adaptively compose functions, and we empirically show that FINE outperforms competing methods in various settings on our dataset.\n\nWe hope that our responses have addressed the main concerns of the reviewer, and that the reviewer would increase their score accordingly.\n\n[1] Munkhdalai, Tsendsuren, et al. \"Metalearned neural memory.\" Advances in Neural Information Processing Systems 32 (2019).", " We thank the reviewer for your comments. We have updated our manuscript and would like to address the reviewer's concerns as follows: \n\n• “It's not clear whether the proposed method would work for other transformations”: We included syntactic transformations in our experiments, including black-white and swap. These transformations are non-continuous and determined by pre-defined rules. Overall, our model performs better than competing models on the syntactic transformations.\n\nIn the revision, we have added more extreme OOD tasks, including ones with unseen rules and unseen objects from unseen datasets (i.e. train and test on different datasets). Our model FINE continues to outperform others on most of the tasks with an average margin of 7.9% over the runner-ups, which shows a strong capability of FINE to generalize to OOD samples. We believe this is a valuable addition to our paper.\n\nIn conclusion, we would like to emphasize that our paper makes 2 main contributions: first, we propose a new dataset for testing OOD generalization. Despite its simple appearance, our dataset remains challenging for most of the models, and moreover possesses great flexibility to increase problem difficulty to fit different OOD levels (e.g. unseen objects in the same dataset, unseen rules, unseen objects from unseen datasets). Second, we propose the FINE model that operates on functional spaces to learn to adaptively compose functions, and we empirically show that FINE outperforms competing methods in various settings on our dataset.\n\nWe hope that the reviewer will be satisfied with our responses and increase their score accordingly.", " [1] Parascandolo, Giambattista, et al. \"Learning independent causal mechanisms.\" International Conference on Machine Learning. PMLR, 2018.\n\n[2] Rahaman, Nasim, et al. \"Dynamic inference with neural interpreters.\" Advances in Neural Information Processing Systems 34 (2021): 10985-10998.\n\n[3] Fedus, William, Barret Zoph, and Noam Shazeer. 
\"Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.\" (2021).\n\n[4] Zhang, Chi, et al. \"Raven: A dataset for relational and analogical visual reasoning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.\n\n[5] Barrett, David, et al. \"Measuring abstract reasoning in neural networks.\" International conference on machine learning. PMLR, 2018.\n\n[6] Spratley, Steven, Krista Ehinger, and Tim Miller. \"A closer look at generalisation in raven.\" European Conference on Computer Vision. Springer, Cham, 2020.\n\n[7] Webb, Taylor W., Ishan Sinha, and Jonathan D. Cohen. \"Emergent symbols through binding in external memory.\" ICLR (2021).", " We appreciate the reviewer for your careful review and great pointers to related works. We have edited the manuscript accordingly and would like to address the reviewer's concerns as follows:\n\n• Questions regarding tasks with unseen rules or unseen datasets: In the revision, we have included more extreme OOD tasks, consisting of IQ problems with unseen rules or unseen objects from unseen datasets (i.e. train and test on different datasets). Please refer to Section 4.3 for detail. In short, FINE continues to outperform other competing methods with an average margin of 7.9% to the runner-ups in this extremely challenging OOD settings, thus further demonstrates the superiority of learning on functional spaces. \n\n• “Why create new set of IQ tasks when a similar benchmark of PGMs studying different forms of rule based OOD generalization exists already?”: There are 3 main reasons why we propose a new dataset for OOD problems. 1/ As far as we are aware of, the number of datasets for OOD tasks is still limited and there is no previous dataset that is as flexible as ours. Here OOD tasks are constructed at different levels: unseen objects from the same dataset, unseen rules, or even unseen objects from unseen datasets. Despite the simple appearances, our dataset remains challenging for previous models as demonstrated in empirical experiments. 2/ Previous PGM datasets, such as RAVEN [4] or PGM [5], may contain some critical issues that allow models to “cheat” on their problems, as reported in [6]. 3/ Recent inspiring works, such as [7], tested models on very simple IQ problems compared to ones in our FINE dataset. Along with the flexibility of FINE dataset as discussed above, the problem difficulties can be further increased, thus be able to adaptively fit with different testing goals. \n\n• “Why are tasks restricted to identifying image transformations only?”: For the scope of the paper, we only concentrate on geometric transformations to illustrate the idea of learning relations between objects. As far as we are aware of, real IQ tests involving images often require identifying image transformations. Moreover, using simple-looking image transformations allows us to easily generate data and control the task complexity, while still being able to increase problem difficulties whenever necessary. As shown in the empirical results in our paper, different types of image transformations (linear, non-linear, syntactic) along with images from various datasets and different training-testing strategies can help to construct from very easy tasks where all models can solve almost perfectly to very difficult tasks where models only randomly guess. 
\n\n• “How are the basis of network weights restricted such that it only spans to a limited set of functions that can be used by the backbone?”: The basis is constrained in the total number of weight matrices, and the matrices are shared across all tasks, seen or unseen. The size of the support region of the matrices is therefore estimated by training across a variety of tasks.\n\n• Question regarding meta-learning: we agree with the reviewer that our proposed model is closely related to the meta-learning framework. We did mention some related work regarding meta-learning and few-shot learning in Section 5. However, to keep the readers focused on our 2 main contributions, which are the FINE dataset and the FINE model, we did not include applications of FINE to meta-learning or few-shot learning tasks. We believe that meta-learning and few-shot learning tasks, or even OOD classification and regression tasks, are all interesting and potential applications of FINE, which we will investigate in future work. \n\n• “Why is similarity metric chosen to identify correct output instead of using normal cross-entropy?”: The predicted $y^*$ is a feature vector and not a probability distribution; thus we use the weighted MSE to compare the similarity between $y^*$ and $y^\prime_i$. Given the similarity, we compute the probability of selecting each image, and thus still use the cross-entropy loss for learning.\n\n• Regarding the references [1], [2] and [3]: We thank the reviewer for these insightful references. We have added them to the related work section, together with the necessary analysis comparing them with our model.\n\nTo summarize, in the revision, we have added results on more extreme OOD tasks in Section 4.3 and relevant references in Section 5. We hope that our responses can address the main concerns of the reviewer, and that the reviewer would increase their score accordingly. ", " We thank the reviewer for your interest in our idea, implementation and findings, as well as for your insightful comments. We have revised our manuscript accordingly and would like to address the reviewer's concerns as follows: \n\n• Regarding quantitative evidence for humans' capability to generalize OOD: To the best of our knowledge, there are some works providing quantitative evidence of humans' ability to generalize OOD. The work in [1] measures the performance of humans and well-known deep neural networks (DNNs) on object recognition, and concludes that DNNs perform poorly when trained and tested on images with different distortion types, while human performance is much more robust. If we switch our view to the field of neuroscience, we can find further quantitative evidence, e.g. humans can recognize visual scenes from line drawings and photographs with the same speed and accuracy [2], which shows a strong generalization ability on novel domains. We added relevant references to the revision to cover this quantitative evidence for humans' ability to generalize OOD. \n\n• “On line 85, the authors assert that the models are required to recognize the objects and figure out the relation between them. What is the evidence for any of this?”: We thank the reviewer for pointing this out. We agree with the reviewer that our model takes inspiration from humans' abilities and that not every model is required to do so. We have rewritten the related parts in the revision so that our approach is presented as plausible, and not as a strict requirement for dealing with OOD problems. 
\n\n• “The issue is how similar the training set is to the test set. Can the authors train with Omniglot classes and test with CIFAR classes or vice versa? That would be impressive OOD!”: We thank the reviewer for the nice suggestion. In the revision, we have conducted experiments that are “more” OOD: models are tested on problems with unseen rules and unseen images from unseen datasets. Overall, our model still outperforms competing methods with an average margin of 7.9% over the runner-ups. Please refer to Section 4.3 for details. \n\nIn conclusion, we would like to emphasize that our paper makes 2 main contributions. First, we introduce a new dataset to measure models' performance on OOD tasks. Despite its simple appearance, our dataset remains challenging for previous methods, as demonstrated in our experiments. Moreover, our dataset is flexible enough to offer OOD tasks at different levels: unseen objects from the same dataset, unseen rules, or even unseen objects from unseen datasets. Second, we propose FINE, which has been shown to work more effectively on the different OOD scenarios induced from our dataset. \n\nWe hope our responses can address the main concerns of the reviewer, and that the reviewer would increase their score accordingly. \n\n[1] Geirhos, Robert, et al. \"Generalisation in humans and deep neural networks.\" Advances in Neural Information Processing Systems 31 (2018).\n\n[2] Biederman, Irving, and Ginny Ju. \"Surface versus edge-based determinants of visual recognition.\" Cognitive Psychology 20.1 (1988): 38-64.", " The problem of generalization to out-of-distribution (OOD) samples is key to most problems in AI. The authors address this challenge in the context of IQ-like tasks by introducing a method called FINE, for Functional Indirection Neural Estimator. The authors are inspired by the idea of indirection, which connects two different representations and uses one to learn or interpret the other. The authors show that the proposed architecture does well on those IQ tasks when the test images differ from those in the training set, while the same rules as in training are applied. Strengths\n\nThe question of generalization to OOD is fundamental to intelligence problems. \n\nThe idea of indirection is very interesting and the authors propose a very nice implementation of this idea. \n\nFigure 4 is interesting in showing clusters in weight space. It would be interesting to show whether other models reveal the same property or whether this is specific to the proposed architecture.\n\nWeaknesses\n\nThe term OOD is often used in a very loose manner. This study is a good example. To really define OOD, one should define D (i.e., the distribution). What I understand the authors are doing is selecting some “classes” in Omniglot and testing on other classes. Or selecting some “classes” in CIFAR and testing on other classes. This seems pretty standard. But the question is how much OOD is really being tested here. Imagine that your training class is the letter “i” and your test class is the letter “l”. Those letters are really similar. Sure, they are different “classes”. The issue is how similar the training set is to the test set. Can the authors train with Omniglot classes and test with CIFAR classes or vice versa? That would be impressive OOD! \n\nThe word generalization here only applies to testing with somewhat different images. The hard challenge in IQ tests is to generalize to novel rules. 
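[Editor's note] To ground the distinction this reviewer is drawing, between class-disjoint splits and the harder dataset- or rule-disjoint ones, here is a minimal sketch of how such IQ instances and OOD splits can be assembled. The rule names, image sizes, and helper function are illustrative assumptions, not the authors' actual task generator.

```python
import numpy as np

# A few simple deterministic transformation "rules" of the kind the IQ tasks
# are built from; the exact rule set used in the paper is assumed here.
RULES = {
    "rot90": lambda img: np.rot90(img),
    "flip_lr": lambda img: np.fliplr(img),
    "invert": lambda img: 255 - img,  # a "syntactic" black-white style rule
}

def make_iq_instance(hint_img, query_img, rule_name, distractor_rules, rng):
    """Build one IQ problem: a hint pair (x, t(x)), a query image, and
    candidate answers where exactly one candidate equals t(query)."""
    t = RULES[rule_name]
    candidates = [t(query_img)] + [RULES[r](query_img) for r in distractor_rules]
    order = rng.permutation(len(candidates))
    return {
        "hint": (hint_img, t(hint_img)),
        "query": query_img,
        "candidates": [candidates[i] for i in order],
        "answer": int(np.argmax(order == 0)),  # index of the true candidate
    }

rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (32, 32), dtype=np.uint8)  # stand-in for one dataset
img_b = rng.integers(0, 256, (32, 32), dtype=np.uint8)  # stand-in for another dataset
# Dataset-disjoint OOD: hint image from one dataset, query from another.
# Rule-disjoint OOD would instead draw rule_name from rules held out of training.
problem = make_iq_instance(img_a, img_b, "rot90", ["flip_lr", "invert"], rng)
assert problem["candidates"][problem["answer"]].shape == (32, 32)
```

In this framing, the reviewer's "train on Omniglot, test on CIFAR" request amounts to sampling `hint_img`/`query_img` from disjoint datasets at test time, while "generalize to novel rules" corresponds to holding out entries of `RULES`.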
\n The paper starts with the assertion that humans can generalize to OOD. I am curious about the evidence for this. Sure, everybody says this sort of thing. But what kind of real quantitative evidence do the authors have for this statement?\n\nOn line 85, the authors assert that the models are required to recognize the objects and figure out the relation between them. What is the evidence for any of this? Sure, from introspection, humans may reason about the task in this way, but this does not mean that this is what the models are *required* to do. This is an example of anthropomorphizing.\n(In contrast, lines 92/93 are trivially true. Yes, models must use the training data, there is no magic!)\n\nThe conclusions are similarly based on introspection only. The authors state that FINE reliably figures out the hidden relational pattern in each IQ task and is able to solve new tasks, but the authors do not show any of this. The authors show that the model works better than a few other models when considering somewhat different images between the training set and the test set, which is pretty cool in and of itself. But there is nothing less and nothing more than that here. Other than somewhat different images, where does the paper show \"new tasks\" or \"discovering relational pattern\"? \n\n\n\n The authors spell out some limitations mostly to highlight other interesting problems that could be addressed. The key challenges mentioned above are not discussed.", " This paper introduces a mechanism for functional indirection (FINE) in neural networks to achieve OOD generalization in abstract reasoning tasks. FINE proposes to dynamically select weights for a neural network backbone for a particular data input-output pair and use those weights to make predictions for an input that shares a similar hidden rule. The weights are selected from a pre-defined key-value memory comprising weights that span the space of possible functions. FINE is like a constrained form of hypernetworks, where the weights of the main (backbone) network are determined by another network. In this paper, this role is played by the function composer $\phi$, which finds optimal weights and their arrangement for the main network using a limited basis of weights in the memory.\n\nThe paper further introduces a new abstract reasoning dataset based on Omniglot and CIFAR100 for evaluation. **Strength:**\n- The proposed methodology of achieving functional indirection is novel and interesting.\n- The paper is well written and easy to follow.\n- The method is shown to achieve good performance on OOD generalization across unseen categories of the datasets used for training.\n\n\n**Weaknesses:**\nMy main concerns revolve around the evaluation of the method.\n- The paper's goal is to solve OOD abstract reasoning and analogy making. However, the method is not evaluated on the known abstract reasoning benchmark of PGMs. Instead, the authors propose similar but less complex tasks for evaluation.\n- The newly introduced datasets test generalization only for the unseen characters of the training dataset but not for unseen rules that could be extrapolated or interpolated. See my related comment in the questions section.\n- The proposed method, FINE, implicitly selects the weights based on the transformation (hidden rule) of the input-output pair. This is highly similar to [1], where a mixture of experts (networks) competes to explain image transformations on MNIST and Omniglot. 
[1] also showed huge benefits of mechanism-specific function selection for OOD generalization. I believe a comparison, or even an explanation of the similarities among the methods, would be good to have in the paper.\n- Since this framework focuses on the generalization of image transformation mechanisms across unseen classes (of Omniglot and CIFAR100), I would encourage the authors to test a FINE model trained with Omniglot transformations on MNIST data with transformations, as was shown in [1]. \n\n[1] https://arxiv.org/abs/1712.00961\n[2] https://arxiv.org/abs/1807.04225 - Why create a new set of IQ tasks when a similar benchmark of PGMs studying different forms of rule-based OOD generalization exists already? It fulfils the criterion of providing hints of the hidden rule with few images and then using that to infer the predictions.\n- Why are tasks restricted to identifying image transformations only? Would the method not be able to infer complex relational reasoning tasks as demonstrated in PGMs?\n- How are the basis of network weights restricted such that it only spans to a limited set of functions that can be used by the backbone?\n- Wouldn’t the limited basis of network weights restrict generalization to only observed rules and combinations thereof? How would you scale to unseen rules?\n- Some references are missing from the related work section, e.g., Neural Interpreters [3], which uses an attention mechanism to recompose functional modules for each input-output pair and tests the method on abstract reasoning tasks. Similarly, Switch Transformers [4] switch modules based on relevance to inputs.\n- The model and the evaluation share similarities with the meta-learning framework, where a few input-output pairs are provided to adapt the network weights (e.g., in MAML) and the resulting model is used to make inferences on unknown inputs coming from the same classes. Analogous to the FINE datasets, the classes would be the hidden rule that input-output pairs share. Curious to know what the authors think about this, and would it be possible to make a small experiment with Omniglot?\n- Why is similarity metric chosen to identify correct output $y’$ instead of using normal cross-entropy?\n- Do the newly introduced datasets test generalization only for unseen characters or also unseen rules? For example, the model could be trained to detect character rotations from 0 to 90 degrees and tested on characters rotated from 90 to 270 degrees.\n\n[3] https://arxiv.org/abs/2110.06399\n[4] https://arxiv.org/abs/2101.03961 The main limitations seem to be around experimental evaluation. I would be happy to increase my score if the authors address those concerns.\n\n\nThere doesn't seem to be any obvious negative societal impact.", " This paper addresses the out-of-distribution generalization of deep learning models in IQ visual tasks involving extracting a geometric transformation between a pair of images and applying the extracted transformation to a new image. 
It presents a memory-augmented neural architecture and on-the-fly model parameter retrieval from the memory to achieve OOD generalization in functional spaces.\n Strengths:\n+ Unlike previous work that performs indirection and analogy-making in the data spaces, this paper proposes a mechanism for OOD in functional spaces.\n\n+ The weights of the backbone can be determined on-the-fly using an (input, output) query to retrieve from a memory.\n\nWeaknesses:\n- The proposed method may need a large memory to store trainable weights for very deep backbones.\n\n- The authors tested the proposed method only in functional spaces related to geometric transformations. It's not clear whether the proposed method would work for other transformations. Yes, the authors have addressed the limitations of their work.", " This paper introduces FINE, a method for achieving out of distribution generalization through analogy making and indirection in the space of functions. FINE makes an analogy from a given input and output example to infer the function that ties them, then uses indirection to approximate the function by composing a set of functions saved in memory. The paper introduces a visual reasoning dataset for evaluating out of distribution generalization. The dataset is based on standard vision benchmarks, CIFAR100 and Omniglot. Each sample consists of an input-output pair used as a cue for the transformation, an input image and 4 choices. FINE performs better than several comparable models over all functions and function combinations in terms of accuracy and sample efficiency. Ablation experiments highlight the importance of the number of layers and the memory size. Strengths:\n- The paper takes inspiration from features of human intelligence and combines them in a novel way.\n- It proposes a formal framework that uses this technique to mitigate the lack of OOD generalization in neural networks. It shows that the proposed method can abstract image-level transformations in the latent space and that the abstraction allows the model to generalize transformations to novel images.\n- The paper proposes a novel use case in visual reasoning that highlights the new framework’s advantage. The method is compared to several relevant baselines.\n- The paper is clearly written overall. The code is provided for reproducing the results.\n- The authors discuss certain limitations of their approach.\n\nWeaknesses:\n- To learn the proposed task, the model has to identify and build the transformation. Thus, the OOD samples should be different functions, not only different input images (for example, can the model learn rotations of angles between 0 and 90 degrees and extrapolate to angles of 90-180 degrees?)\n- To provide a fair comparison to baselines, all models need to be adapted to this framework. The training setup and hyperparameters used for training these models are not explained in the main paper or the supplementary material. This information is crucial for explaining their performance.\n- The paper doesn’t discuss other applications for this framework. OOD generalization is an active research topic in ML and image classification is among the main applications (adversarial examples, domain shifts and noise corruptions are OOD examples). How can this framework benefit OOD generalization in this task?\n- Although the use of NICE layers is intuitively motivated, it is not a necessary building block for FINE. 
Its use is motivated by the reversibility of certain transformations, which is a design choice in the dataset. Since most baselines (excluding the hypernetwork) are not equipped with NICE layers, it would be a fair comparison to provide results on all models without NICE layers.\n- The choice of using 2 NICE layers instead of one layer is not fully explained.\n- The $y_t$ vectors used in the analogy step are output from $\gamma_t(y)$. The motivation and significance of this choice are not explained.\n- In the first 2 paragraphs of subsection 3.1, H1 considers a unique output for each input while H2 considers the possibility of several outputs based on the transformation. The second hypothesis is not different from H1 if the target transformation is supplied with the input. This is the choice made indirectly in the dataset by providing an input-output pair as a hint for the transformation. It would be more concise to write this section without using hypotheses. It’s simpler to explain FINE’s advantage when the task requires learning many functions with a single backbone.\n\n These questions are directly related to the weaknesses mentioned above.\n- The paper should clarify that the OOD generalization claim concerns the input images that undergo the transformations, not the transformations that are learned by the model. Evaluations on unseen transformations would be a valuable addition to the paper. \n- The training setup for baseline models should be described in the supplementary material. Ideally, all models should be capable of inferring the transformation.\n- As mentioned above, it would be a fair comparison to provide results on all models without NICE layers.\n- Can this framework be adapted and evaluated on OOD generalization in image classification or other tasks in ML?\n- The first 2 paragraphs of section 3.1 could be reformulated for clarification. As mentioned above, it would be more concise to write this section without using hypotheses. It’s simpler to explain FINE’s advantage when the task requires learning many functions with a single backbone.\n \n The authors address certain limitations of the paper but do not address potential negative societal impact." ]
[ -1, -1, -1, -1, -1, -1, 3, 6, 6, 5 ]
[ -1, -1, -1, -1, -1, -1, 5, 4, 4, 4 ]
[ "Yt2ZOKcurdf", "KEcjMZU5PJ", "s6DLck9nQ_2", "1nwv-2EIflv", "LQTFhq0IHCt", "JqP5oIrk1Q2", "nips_2022_qj-_HnxQxB", "nips_2022_qj-_HnxQxB", "nips_2022_qj-_HnxQxB", "nips_2022_qj-_HnxQxB" ]
nips_2022_n0dD3d54Wgf
SparCL: Sparse Continual Learning on the Edge
Existing work in continual learning (CL) focuses on mitigating catastrophic forgetting, i.e., model performance deterioration on past tasks when learning a new task. However, the training efficiency of a CL system is under-investigated, which limits the real-world application of CL systems under resource-limited scenarios. In this work, we propose a novel framework called Sparse Continual Learning (SparCL), which is the first study that leverages sparsity to enable cost-effective continual learning on edge devices. SparCL achieves both training acceleration and accuracy preservation through the synergy of three aspects: weight sparsity, data efficiency, and gradient sparsity. Specifically, we propose task-aware dynamic masking (TDM) to learn a sparse network throughout the entire CL process, dynamic data removal (DDR) to remove less informative training data, and dynamic gradient masking (DGM) to sparsify the gradient updates. Each of them not only improves efficiency, but also further mitigates catastrophic forgetting. SparCL consistently improves the training efficiency of existing state-of-the-art (SOTA) CL methods by at most 23X less training FLOPs, and, surprisingly, further improves the SOTA accuracy by at most 1.7%. SparCL also outperforms competitive baselines obtained from adapting SOTA sparse training methods to the CL setting in both efficiency and accuracy. We also evaluate the effectiveness of SparCL on a real mobile phone, further indicating the practical potential of our method.
Accept
This paper introduces a new continual learning scheme whose efficiency and effectiveness are achieved through three key components that encourage sparse network weight connection, replay buffer selection, and sparse gradient truncation. After the author-review discussion phase, a majority of reviewer suggest acceptance. Only one negative reviewer did not respond to the authors' rebuttal, but AC thinks that it is convincing enough to resolve her/his concerns. AC thinks that investigating sparse networks for continual learning is novel, and demonstrating it under edge-device level is a big plus. Overall, AC is happy to recommend acceptance. AC strongly recommend the authors to incorporate all additional results and discussion-with-reviewers into the final draft.
train
[ "K-rt-3DDzox", "HLJf7OdNN8Y", "xoM9xsONIWK", "fRHsz3wj5y", "-YNdTehNVN", "6962aArzV6C", "LyZZPlw_vIT", "Aa2o4N8oDjF", "ZF8o0E6jlKG", "QN-C6-H98K1", "Sf0jIoZ0XoX", "vMHik1qGe-A", "7q-fILC-eIa", "fUSnrOeeo7L", "pLcSYRuaDsG", "0pQOI7O9EpI2", "2BM3ldFnnb-", "8T1I-3103Lz", "hI5fvrY_8cm", "gkGZ8Lfgd9-" ]
[ "author", "author", "author", "author", "author", "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear Reviewer g1tN,\n\nThank you very much for spending time reviewing our paper. Since the discussion will end very soon, we sincerely hope that you have found time to check our detailed response to your previous questions/comments. If you have any further questions, please feel free to let us know. We will try our best to reply to you before the discussion deadline.\n\nThank you very much,\n\nAuthors", " Dear Reviewer BMhS,\n\nThank you very much for spending time reviewing our paper. Since the discussion will end very soon, we sincerely hope that you have found time to check our detailed response to your previous questions/comments. If you have any further questions, please feel free to let us know. We will try our best to reply to you before the discussion deadline.\n\nThank you very much,\n\nAuthors\n", " Dear Reviewer 8efq,\n\nThank you very much for reviewing our paper and recognizing the contributions of our work. In our posted response earlier, we have clarified the technical novelty of our method. Furthermore, we added additional experiments in the updated paper to demonstrate the generality of our method. We hope that you can find our response convincing. If you have any additional comments, feel free to let us know. We look forward to discussing with you and will try our best to address any further concerns before the discussion deadline.\n\nThank you very much,\n\nAuthors\n", " Dear Reviewer g1tN,\n\nThank you very much for reviewing our paper and leaving valuable comments. In our posted response (Part 1 - 4) earlier, we have conducted new experiments and added additional clarifications to address your concerns/questions about our original submission. We hope that you can find our response convincing. If you have any additional comments, feel free to let us know. We look forward to discussing with you and will try our best to address any further concerns before the discussion deadline.\n\nThank you very much,\n\nAuthors", " Dear Reviewer BMhS,\n\nThank you very much for reviewing our paper and leaving valuable comments. In our posted response (Part 1 and 2) earlier, we have conducted new experiments and added additional clarifications to address your concerns/questions about our original submission. We hope that you can find our response convincing. If you have any additional comments, feel free to let us know. We look forward to discussing with you and will try our best to address any further concerns before the discussion deadline.\n\nThank you very much,\n\nAuthors", " We sincerely thank the reviewer for recognizing the further improvement of the quality of our work and confirming acceptance! We also appreciate the reviewer’s support of our work by being open to discussing with other reviewers to provide additional opinions.", " Thank you for addressing my suggestions, I believe that the new experiments provided further improve the quality of your submission (with particular reference to the ones on CO2L and the localization of pruning). I confirm that I recommend acceptance, I am open to discussing with other reviewers if they want an additional opinion on any point they raised.\n\nAlso, thanks for pointing out the FAQs about the placement of the limitation section, you may disregard my initial request for moving it.", " We thank the reviewer for recognizing the strengths of our paper and their valuable feedback. 
Before we go into details about per-question response, we would like to clarify several crucial points\n- Our method, SparCL, is a general framework for enhancing the efficiency for all kinds of CL methods. Thus, **the effectiveness of SparCL does not depend on the existence or size of the rehearsal buffer**. Put differently, the overhead that the buffer introduces is not because of SparCL (see also R2, R3 below). Nevertheless, to address the reviewer’s concerns, we include additional experiments to show the effectiveness of SparCL over CL methods without a buffer (R4). \n- The goal of SparCL is **enhancing efficiency while maintaining accuracy, instead of merely pushing the absolute performance in accuracy higher**. As noted in our response to Reviewer BMhS, accomplishing this is neither trivial nor straightforward with existing (non-CL) sparsification methods.\n\nWe respond to each of the reviewer’s questions point-by-point below:", " > Q1: Sec 4.1 Task-aware Dynamic Masking not clear.\n\nR1:\nWe have carefully proofread and revised the corresponding section by adding more explanations and fixing typos in Algorithm 1. Please let us know if there are any more detailed concerns or suggestions.\n\n> Q2: Claim about the memory foot print and training FLOPS. Memory buffer overhead and extra steps overhead. Clarify this if I am wrong in my understanding on how these additional overheads won't effect the training FLOPS and memory footprint.\n\nR2:\nWe would like to emphasize that we have already taken into account all overheads when computing training FLOPs in Table 1-3; similarly, we have taken into account all overheads when computing the. memory footprint in Table 3 (please refer to Appendices D.1, D.2 for more details about the calculation of FLOPs and memory footprint). Please note that SparseCL still outperforms all other methods in terms of both FLOPs and memory footprint. In Table 1, our method (SparCL-DER$_{95}$) achieves training FLOPs that is at least 7$\\times$ less than PackNet [1] and LPS [2], which are also CL on pruning algorithms without a buffer or any of the extra steps introduced by our method.\n\n\n\nMore specifically:\n\n1) For the first concern on the memory buffer, we would like to restate that SparCL itself does not require any form of buffer. Instead, SparCL serves as a general framework that is compatible with existing CL methods, with or without a buffer, to *improve* their efficiency while maintaining accuracy. Therefore, **the memory buffer is not a specific overhead of our method; it is simply a feature of the rehearsal-based methods that we combined with SparCL**. In an additional experiment (see also R4), we combine SparCL with a CL method without any buffer, and SparCL consistently yields improvements in both accuracy and efficiency.\n\n2) For the second concern on “extra steps” of TDM, DDR and DGM, all the three techniques are updated every $\\delta k$ epochs as mentioned in lines 166 and 188. The total FLOPs for TDM, DDR, and DGM on split CIFAR-10 during the training process is approximately $4.5\\times 10^{9}$, which is **negligible** compared to the entire training FLOPs (total training FLOPs $ >10^{15}$, therefore less than $0.0001$%) as illustrated in Table 1. For the memory overhead, DDR only requires the indices (stored in int8 format and approximately 3KB) for the easier examples while TDM and DGM only rely on the existing parameters and do not incur additional memory cost. 
We also include these results in Appendix D.1 and D.2 of our updated paper.\n\n \n> Q3: I am also not clear with authors point of view of choosing the \"Rehearsal based methods also\" for comparing for \"memory foot print and training\" as it is obviously clear that rehearsal based methods does need additional amount of training and memory.\n\nR3: \nAgain, we would like to emphasize the focus of our paper is **not** directly comparing our method against rehearsal-based, or any other specific CL methods. Instead, we focus on designing a sparse training framework to **improve** existing CL methods in terms of efficiency without compromising accuracy. We mainly use rehearsal-based methods as our backbone because of their effectiveness in the challenging and practical Class-IL setting (see R5 for detailed explanation). No matter how much memory footprint or training FLOPs the base methods (such as DER [3] or ER [4]) have, we do take them into account for both the base method, and the corresponding version combined with SparCL (e.g. DER v.s. SparCL-DER). Therefore, we are not cherry-picking rehearsal methods because they have additional training and memory cost and thus are easy to outperform. From the experiments, our method reduced both training FLOPs (Table 1) and memory footprint (Table 3) of the base methods, even with an accuracy improvement. Notably, all variants of SparCL-DER in Table 1 outperform other non-rehearsal based methods (LPS [2], EWC [5], etc.) that already have less training FLOPs than DER, by a large margin in terms of training FLOPs.\nTo further demonstrate the generality of our method beyond improving rehearsal-based methods, we also conduct additional experiments by combining our framework with CO2L [6] (without buffer); see R4 for more detailed explanation.", " > Q4: I am also very curious to know what happens when buffer size of the proposed method is 0. What happens to accuracies as shown in the Table 1 when buffer size increased from 200--->500 the acc increased from 65-->72\n\nR4:\nPlease note that for rehearsal-based methods, it is well-known that increasing buffer size leads to accuracy improvement (see, e.g., [3, 4, 7]). Also, we would like to clarify that in Table 1, our method is combined with ER [3] and DER [4]. They are rehearsal-based methods that are not compatible with buffer size = 0. Nevertheless, our method, SparCL, is compatible with buffer size = 0 simply by removing buffer-related terms in equation (1) and (3) in the main paper. To demonstrate the generality of our method and see how it performs with buffer size = 0, we combine SparCL with another recent SOTA method, CO2L [6], that is indeed compatible with buffer size = 0. The results are shown in the following table:\n\n| Method | Buffer | Class-IL | FLOPs Train | Mem-Footprint |\n|-|-|-|-|-|\n| CO2L | $0$ | $58.89$ | $3.3\\times 10^{16}$| 213MB |\n| SparCL-CO2L$_{75}$ | $0$ | $59.43$ | *$0.6\\times 10^{16}$*|110MB|\n| CO2L | $200$ | $65.57$ | $4.4\\times 10^{16}$|293MB|\n| SparCL-CO2L$_{75}$ | $200$ | $66.03$ | *$0.8\\times 10^{16}$*|186MB|\n| CO2L | $500$ | $74.26$ | $4.4\\times 10^{16}$|293MB|\n| SparCL-CO2L$_{75}$ | $500$ | $75.87$ | *$0.8\\times 10^{16}$*|186MB|\n\nWe also include these additional results in Appendix F of our updated paper. The results demonstrate that our SparCL can constantly improve the accuracy and efficiency regardless of the buffer size. 
We also want to point out that introducing the buffer (with buffer size > 0) indeed adds overhead in both FLOPs and memory footprint, because of the additional loss terms that are related to buffered examples (see [4, 6] for more details). However, the calculation of training FLOPs and memory footprint is independent of the buffer size, when buffer size > 0. This is because increasing the buffer size does not affect total iterations of the training process and batch size per iteration. Please refer to Appendix D.1, D.2 for more details about the calculation of FLOPs and memory footprint.\n\n\n> Q5: The improvement in the acc w.r.t to SOTA minimal as PackNet shown in table 1 does show packnet, LPF achieves 93.73%,94.50% acc when compared to best SparCL-DER75(95.19±0.34)(500 buffer) and SparCL-DER75(94.06±0.45)(200 buffer).\n\nR5:\nWe would also like to kindly clarify that PackNet [1] and LPS [2] are specialized and limited to the Task-IL setting (where task identity is given at test time), which is much easier than the Class-IL setting (where task identity is *not* given at test time); please see [8] for a detailed comparison of these two settings. These two methods are not compatible with the Class-IL setting. On the contrary, rehearsal-based methods maintain a buffer to address the harder Class-IL setting. In our paper, we only show the Task-IL accuracy for completeness and use Class-IL accuracy as our main accuracy metric, following more recent prior work [4, 6, 9]. In Table 1, SparCL-DER$_{75}$ outperforms all methods in terms of Class-IL accuracy, including PackNet and LPS since they are not compatible with this setting.\n\nMoreover, we would like to clarify the focus of our paper and Table 1: SparCL is not trying to outperform all SOTA methods in terms of accuracy (although SparCL achieves this goal by combining with DER at $0.75$ sparsity ratio). Instead, SparCL serves as a general framework to improve the efficiency of existing CL methods while maintaining accuracy. Thus, accuracy improvement should not be treated as a hard requirement of our method. Please see the efficiency gap reflected in training FLOPs of our methods and other methods: SparCL-DER is $2-8\\times$ more efficient than PackNet and LPS with different sparsity ratios, and also how SparCL maintains and even improves accuracy in the more challenging Class-IL setting.", " > Q6: One more limitation I see regarding the proposed approach is towards the scalability as you can see current papers such as ANML(https://ecai2020.eu/papers/939_paper.pdf), of CL does show CL >100's of tasks. But how would given proposed scale comparing the overhead of buffer, memory masks while trying to learn huge no of tasks.\n\nR6:\nThanks for pointing us to this work. It is very important to note the following aspect of the experimental setting used in ANML: although they claim to use a sequence of 600 tasks by splitting the Omniglot dataset, there is **actually only one class per task** (see Figure 2 and 3 in [10]). As a result, the total number of classes used is 600 classes. In our setting, we use Split Tiny-ImageNet, a standard, challenging CL benchmark widely used in the community [4, 6, 11], which has 200 classes split into 10 tasks. In this sense, the benchmark we use and the dataset in ANML are actually quite comparable in terms of total number of classes to learn.\n\nMoreover, we would like to emphasize that the problem we study is of fundamental importance, and would of course be crucial to all and any attempts at scalability! 
The ultimate goal of SparCL is to greatly enhance efficiency of existing CL methods, while keeping accuracy. Thus, the effectiveness of SparCL is actually independent of the setting (long or short task sequences) and the exact CL method (with or without buffer). The fact that we propose a more efficient training framework would help, not hinder, any attempts at scaling the number of tasks or classes. \n\nAs for the reviewer’s specific concern on the overhead when there are huge number of tasks, we would also like to further clarify with regard to the buffer and the masks:\n- As clarified in R2, the buffer is *not* a specific overhead of our method, but the rehearsal-based methods that SparCL combined with. Since we already showed in R4 that our SparCL actually works well with methods with buffer size = 0, the overhead of the buffer really depends on the specific methods and how they manage the buffer with long sequences, instead of how SparCL handles it. .\n- As for the cost of masks, please note that we only have 2 masks (TDM and DDR), and the number of masks does *not* scale with the number of tasks, since we are dynamically adjusting the same masks when new tasks come, instead of adding new masks like prior work did [1, 2]. Moreover, the masks are typically saved in a low-bit (consider it only contains 0 and 1) and sparse manner (see Appendix E.1.1), which lead to minimal memory overhead. \nHowever, when the number of tasks scales up, SparCL could apply a mask with a lower sparsity ratio to provide higher network capacity. We treat this scalability problem in CL and how to better adapt SparCL for greater scalability as interesting future work.\n\n\nPlease let us know if all of your concerns have been addressed and we are happy to further discuss and clarify. We look forward to your reply.\n\n**References**\n\n[1] Mallya, Arun, and Svetlana Lazebnik. \"Packnet: Adding multiple tasks to a single network by iterative pruning.\" Proceedings of the IEEE conference on Computer Vision and Pattern Recognition. 2018.\n\n[2] Wang, Zifeng, et al. \"Learn-prune-share for lifelong learning.\" 2020 IEEE International Conference on Data Mining (ICDM). IEEE, 2020.\n\n[3] Chaudhry, Arslan, et al. \"On tiny episodic memories in continual learning.\" arXiv preprint arXiv:1902.10486 (2019).\n\n[4] Buzzega, Pietro, et al. \"Dark experience for general continual learning: a strong, simple baseline.\" Advances in neural information processing systems 33 (2020): 15920-15930.\n\n[5] Kirkpatrick, James, et al. \"Overcoming catastrophic forgetting in neural networks.\" Proceedings of the national academy of sciences 114.13 (2017): 3521-3526.\n\n[6] Cha, Hyuntak, Jaeho Lee, and Jinwoo Shin. \"Co2l: Contrastive continual learning.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n\n[7] Mai, Zheda, et al. \"Online continual learning in image classification: An empirical survey.\" Neurocomputing 469 (2022): 28-51.\n\n[8] Van de Ven, Gido M., and Andreas S. Tolias. \"Three scenarios for continual learning.\" arXiv preprint arXiv:1904.07734 (2019).\n\n[9] Wu, Yue, et al. \"Large scale incremental learning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.\n\n[10] Beaulieu, Shawn, et al. \"Learning to Continually Learn.\" ECAI. 2020.\n\n[11] De Lange, Matthias, et al. 
\"A continual learning survey: Defying forgetting in classification tasks.\" IEEE transactions on pattern analysis and machine intelligence 44.7 (2021): 3366-3385.", " We thank the reviewer for recognizing the strengths of our paper and the feedback. We are happy to address weaknesses and questions raised below:\n\n> Q1: I see sparsity as a universal and generic way of improving deep model efficiency. Therefore it is not surprising to see introducing sparsity to CL can improve efficiency.\n\nR1: \nWe agree that sparsity is a generic way of improving deep model efficiency. However, we would like to emphasize that **there is no guarantee that introducing sparsity to training can maintain model accuracy**. Naive ways of introducing sparsity can lead to a significant decrease in accuracy, and there is a large body of research on how to do this in the standard learning (non-CL) setting (see, eg., [1,2], but also [9]). Most importantly, our method is not a trivial combination of such sparsity techniques and CL. Applying such methods directly comes with a severe accuracy degradation. For example, applying [1,2] directly to the CL setting leads to at least a $5$% lower accuracy than our SparCL, with $1.5\\times$ more training FLOPs (see Table 2). Notably, SparCL maintains accuracy even at $0.95$ sparsity ratio, i.e. $1/20$ of the original model size while none of the “simple” combinations of sparse training and CL methods achieve similar performance. Moreover, all components of our methods are well-motivated and specially designed for the CL setting; please also refer to our response R2 to see how we design all three components to both enhance efficiency and overcome forgetting in CL. \n\n\n> Q2: Among the three techniques proposed by this paper, only the weight sparsity part seems to have a direct impact on overcoming catastrophic forgetting, which is the key to improving continual learning.\n\nR2: \nWe would like to kindly argue that all three components of SparCL not only enhance training efficiency, but are all well-motivated for improving continual learning by overcoming catastrophic forgetting:\n- Dynamic data removal (DDR) maintains more informative examples and addresses the data imbalance issue between the past and current data; please see lines 194-200 in the paper for a detailed discussion. According to [3, 4], selecting informative examples is crucial to alleviate forgetting. According to [5], addressing the data imbalance issue between tasks is also important for overcoming forgetting.\n- Dynamic gradient masking (DGM) promotes the preservation of past knowledge by preventing a fraction of weights from updating (lines 215-216). Our strategy actually shares motivation with regularization-based methods like [6, 7] to address forgetting.\n- Most importantly, the ablation study in Table 3 (and appendix D.4) demonstrates all components contribute to *both* efficiency and accuracy, which shows their direct impact on improving CL.\n\n\n> Q3: In Table 1, the latest method included in the comparison is from 2020. The quantitative comparisons are clearly insufficient to support the claim that the proposed framework 'further improves the SOTA accuracy by at most 1.7%.'\n\nR3:\nWe thank the reviewer for suggesting that there are more recent CL methods. To strengthen our claim, we include the comparison with one of the more recent SOTA CL methods, CO2L [8], suggested by Reviewer igAw. We set the sparsity ratio of SparCL as 0.75. 
The results on Split-CIFAR10 are shown in the table below:\n| Method | Buffer | Class-IL | FLOPs Train |\n|-|-|-|-|\n| CO2L | $200$ | $65.57$ | $4.4\\times 10^{16}$|\n| SparCL-CO2L$_{75}$ | $200$ | $66.03$ | *$0.8\\times 10^{16}$*|\n| CO2L | $500$ | $74.26$ | $4.4\\times 10^{16}$|\n| SparCL-CO2L$_{75}$ | $500$ | $75.87$ | *$0.8\\times 10^{16}$*|\n\nWe have included these additional results in Appendix F of our updated paper. From these results, we can see that SparCL consistently improves upon CO2L, in terms of both accuracy and efficiency measured in training FLOPs. This further strengthens our claim made in the paper, indicating the generality of our SparCL framework for efficient CL without compromising accuracy.\n\nMoreover, we would like to emphasize that our method serves as a *general framework* which enables existing CL methods to become more efficient, without sacrificing accuracy. In that sense, our experiments aim to demonstrate this improvement in efficiency over a broad array of methods (at no accuracy loss). In fact, the observed improvement in accuracy is a surprising property; it further strengthens our argument that efficiency via sparsification should indeed be pursued.", " > Q4: Overall, this paper delivers a useful message that sparsity can improve the efficiency of CL, and even slightly improve the accuracy. However, the novelty and soundness of the paper do not meet the standard of NeurIPS.\n\nR4:\nOur work is novel in the following aspects:\n- We investigate learning efficiency in CL. We are the very first to do so: this is an under-investigated, yet important aspect in CL. It is also crucial for the real-world application and deployment of CL methods in resource-limited platforms.\n- Our proposed method serves as a general framework that introduces multiple levels of sparsity into CL, and greatly improves existing CL methods in efficiency while maintaining accuracy. On the other hand, existing representative sparse training methods cannot be readily used in the CL setting, resulting in a significant accuracy drop. This is a finding not previously known, that we both first report here, and directly address.\nWe provide the following supporting evidence for the soundness of our approach. We believe these to be quite strong, well within experimental evidence provided by usual NeurIPS papers; please let us know if there are any other specific soundness concerns: :\n- We have conducted comprehensive comparison experiments as well as ablation studies to demonstrate the effectiveness of our method. Additional experiment results with the more recent SOTA method CO2L [8] (presented in R3), and methods without buffers [6, 8] (presented in R5) further strengthen our argument in favor of SparCL.\n- As recognized by the reviewer, we evaluate SparCL on a real mobile edge device, demonstrating the practical potential of our method. The techniques and implementation details can also be found in section 5.4 and appendix E. \n\n\n> Q5: The proposed method relies on the usage of a reply buffer. As also discussed by the authors in Appendix Section A, the proposed method can potentially be applied to more advanced methods that do not rely on any previous data. Showing the proposed method to be more 'model agnostic might help improve the paper.\n\nR5:\nOur proposed method, SparCL, as a general framework, can be easily adapted to scenarios without a buffer by removing buffer-related terms in Equation (1) and (3). 
For completeness, we show the effectiveness of our method upon EWC [6], one of the most well-known rehearsal-free CL methods, and a more advanced CL method, CO2L [8], without a buffer.\nThe results on Split-CIFAR10 with 0.75 sparsity ratio are shown in the table below:\n| Method | Buffer | Class-IL | FLOPs Train |\n|-|-|-|-|\n| EWC | $0$ | $19.49$ | $8.3\\times 10^{15}$|\n| SparCL-EWC$_{75}$ | $0$ | $20.78$ | *$1.6\\times 10^{15}$*|\n| CO2L | $0$ | $58.89$ | $3.3\\times 10^{16}$|\n| SparCL-CO2L$_{75}$ | $0$ | $59.43$ | *$0.6\\times 10^{16}$*|\n\nWe also include these additional results in Appendix G of our updated paper. SparCL consistently improves both EWC and CO2L, in terms of both accuracy and efficiency. The results also show the model-agnostic potential of SparCL to improve upon all kinds of methods, including both rehearsal and non-rehearsal methods. \n\n\nPlease let us know if all of your concerns have been addressed and we are happy to further discuss and clarify. We look forward to your reply.\n\n**References**\n\n[1] Evci, Utku, et al. \"Rigging the lottery: Making all tickets winners.\" International Conference on Machine Learning. PMLR, 2020.\n\n[2] Lee, Namhoon, Thalaiyasingam Ajanthan, and Philip HS Torr. \"Snip: Single-shot network pruning based on connection sensitivity.\" arXiv preprint arXiv:1810.02340 (2018).\n\n[3] Aljundi, Rahaf, et al. \"Gradient based sample selection for online continual learning.\" Advances in neural information processing systems 32 (2019).\n\n[4] Buzzega, Pietro, et al. \"Rethinking experience replay: a bag of tricks for continual learning.\" 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021.\n\n[5] Mai, Zheda, et al. \"Online continual learning in image classification: An empirical survey.\" Neurocomputing 469 (2022): 28-51.\n\n[6] Kirkpatrick, James, et al. \"Overcoming catastrophic forgetting in neural networks.\" Proceedings of the national academy of sciences 114.13 (2017): 3521-3526.\n\n[7] Zenke, Friedemann, Ben Poole, and Surya Ganguli. \"Continual learning through synaptic intelligence.\" International Conference on Machine Learning. PMLR, 2017.\n\n[8] Cha, Hyuntak, Jaeho Lee, and Jinwoo Shin. \"Co2l: Contrastive continual learning.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n\n[9] Mostafa, Hesham, and Xin Wang. \"Parameter efficient training of deep convolutional neural networks by dynamic sparse reparameterization.\" International Conference on Machine Learning. PMLR, 2019.", " We sincerely appreciate your careful review and a great summary of our contributions. And thank you very much for the very constructive comments, which are addressed below.\n\n> Q1: Comparison with more recent SOTA, e.g., representation learning method, CO2L.\n\nR1: We thank the reviewer for these great references. To further demonstrate the generality of SparCL, we include the comparison between CO2L and SparC-CO2L at a sparsity ratio of 0.75.\nThe results on Split-CIFAR10 are shown in the table below:\n| Method | Buffer | Class-IL | FLOPs Train |\n|-|-|-|-|\n| CO2L | $0$ | $58.89$ | $3.3\\times 10^{16}$|\n| SparCL-CO2L$_{75}$ | $0$ | $59.43$ | *$0.6\\times 10^{16}$*|\n| CO2L | $200$ | $65.57$ | $4.4\\times 10^{16}$|\n| SparCL-CO2L$_{75}$ | $200$ | $66.03$ | *$0.8\\times 10^{16}$*|\n| CO2L | $500$ | $74.26$ | $4.4\\times 10^{16}$|\n| SparCL-CO2L$_{75}$ | $500$ | $75.87$ | *$0.8\\times 10^{16}$*|\n\nWe also include these additional results in Appendix F of our updated paper. 
From these results, we can see that SparCL consistently improves CO2L with different buffer sizes, in terms of both accuracy and training FLOPs. The result further indicates the generality of SparCL that it even improved representation learning approaches. We are happy to explore how SparCL would improve more different kinds of CL methods to gain more empirical and theoretical insights as our future work. \n\n> Q2: Why is DER++ not among competitors?\n\nR2: Our apologies for the confusion; the “DER” in our paper actually refers to the stronger “DER++”. We have revised the naming in the updated version.\n\n\n> Q3: Where are weights pruned the most in the backbone, is there a pattern?\n\nR3:\nIn this work, we conduct uniform pruning (i.e., each layer has the same pruning ratio) across different CONV layers as mentioned in line 250 in experimental details. The usage of uniform pruning ratio is to match the single-instruction multiple-data (SIMD) [1] architecture of embedded CPU/GPU processors for efficient hardware accelerations. \n\nTo observe the pruning pattern, we also experimented with setting an overall pruning ratio as $95$% for the entire network, allowing each layer to have different pruning ratios by ranking CWI for the whole model. We include these additional results in Appendix I of our updated paper. According to the results, earlier CONV layers tend to have a smaller pruning ratio, which is only around $25-30$%, while the pruning ratios for the latter CONV layers can reach $99$%. The results are reasonable, as latter layers are more redundant with a larger amount of parameters. In addition, the weights in earlier layers might be more important for keeping high accuracy, but take a large portion of the computation. Therefore, though slightly improving the accuracy performance to $72.45$% compared to the uniform pruning ratio setting, allowing different pruning ratios across different layers yields worse acceleration (drop to $2.2\\times$ compared with $3.1\\times$ when adopting the uniform pruning ratio) on the hardware. As our purpose is to facilitate the efficiency of the CL-system, we adopt the uniform pruning ratio setting. ", " > Q4: an experimental comparison could be made between DDR and the approach for buffer reduction delineated in [d] (gradient based) or the one described in [h] (loss based)\n\nR4:\nWe agree that it is interesting to compare DDR and GSS [d] or Loss-Aware Reservoir Sampling (LARS) [h], however, we would like to kindly point out that the objective of DDR and these methods are different: \nAs discussed in our paper (lines 182-185), DDR aims at removing *training examples* for efficiency, while GSS and LARS put the focus on selecting more informative examples that are saved in the buffer.\nTechnically, DDR removes less informative *training examples* at certain epochs (and thus indirectly affects samples saved in the buffer), while GSS and LARS directly replaces less informative *buffered examples* in the buffer. \nThus, the original GSS and LARS are not directly comparable to DDR. However, we can actually use the example importance criteria used in GSS and LARS to remove less informative training examples as well. We replace the misclassification rate in DDR by the gradient-based (GSS) and loss-based criteria (LARS) objectives and get two variants of our approach, DDR-GSS and DDR-LARS, respectively. For fair comparison, we fix all other parameters used in DDR the same for all methods (sparisity 0.75, remove $30$% training data, with TDM only). 
Since all variants of DDR already remove training examples for efficiency, we mainly focus on their accuracy performance here. The final results on Split-CIFAR10 is shown in the table below:\n| Method | Importance | Accuracy | \n|-|-|-|\n| DDR | Misclassification | $73.80$ | \n| DDR-GSS | Gradient | $73.45$ | \n| DDR-LARS | Loss | $73.67$ | \n\nWe also include these additional results in Appendix H of our updated paper. We can see our basic strategy outperforms other variants, indicating that our DDR strategy is simple and effective. Moreover, the actual difference between the variants are very small and all variants lead to both accuracy and efficiency improvement. This observation further demonstrates DDR is even robust to different example importance criteria. Moreover, DDR actually serves as a general strategy for training data removal in CL and can be further improved by future work on better learning and representing example importance.\n\n> Q5: Fix algorithm.\n\nR5:\nThanks for the catch. We have fixed these typos by adding ends to both if’s and fixed the else-if to if in our updated version.\n\n> Q6: Fix figure format.\n\nR6:\nThanks for the kind reminder. During the submission, we tried to upload our paper with all figures in vectorial format. However, Figure 1 was not displaying correctly and left a blank in the vectorial format. So we instead used the .png format for Figure 1. We will troubleshoot this issue and make sure it is in vectorial format in the camera ready version.\n\n> Q7: Limitations and societal impact position.\n\nR7:\nThanks for the suggestion. When submitting the paper, we referred to the submission FAQs (https://neurips.cc/Conferences/2022/PaperInformation/NeurIPS-FAQ) and found out the following instruction:\n\n*You may include a discussion of these potential negative societal impacts anywhere in the paper (in the intro, in the conclusion, as a stand-alone section, *in the supplemental material if appropriate*, etc.)*\n\nThus we decided to put these sections in the Appendix. Nevertheless, we are happy to move them to the main paper after we finalize all other changes in the final version.\n\n\n> Q8: Typos.\n\nR8:\nWe have fixed these typos in the updated version.\n\n\nPlease let us know if all of your concerns have been addressed and we are happy to further discuss and clarify. We look forward to your reply.\n\n**References**\n\n[1] Wei Niu, Xiaolong Ma, et al. Patdnn: Achieving real-time dnn execution on mobile devices with pattern-based weight pruning. ASPLOS’ 2020", " > Q1: The proposed techniques are straightforward and do not offer technical novelty. However, suggesting a straightforward solution in a paper that raises a new problem/challenge (to the best of knowledge) could still be OK\n\nR1:\nWe sincerely thank the reviewer for recognizing the new problem and challenge we are trying to solve. However, we would still like to point out the novelty of our work beyond solving the crucial yet under-investigated efficiency problem in CL.\n\nOur proposed method serves as a general framework that introduces multiple levels of sparsity into CL, and greatly improves existing CL methods in efficiency while maintaining accuracy. 
On the other hand, existing representative sparse training methods do not consider the CL setting; as a result, they suffer from significant accuracy drop when transfered to the CL setting.\nMoreover, our method is model-agnostic and could improve CL methods of all kinds: Besides rehearsal-based methods explored in the paper, we added additional experiments to demonstrate the effectiveness of our method on non-rehearsal based [1] and representation learning based methods [2] (please see the Appendix F and G of the updated paper). These results demonstrate that SparCL consistently improves different kinds of CL methods under different settings, further indicating the generality of our method.\n\nBesides comprehensive empirical study on benchmark datasets, we evaluate SparCL on a real mobile edge device, demonstrating the practical potential of our method. We believe our work is an important pilot work that encourages future research on CL on-the-edge.\n\n> Q2: There is no section 6 in the Appendix.\n\nR2: Sorry for the confusion. Section 6 is actually the conclusion section, not in the appendix.\n\nPlease let us know if all of your concerns have been addressed and we are happy to further discuss and clarify. We look forward to your reply.\n\n**References**\n\n\n[1] Kirkpatrick, James, et al. \"Overcoming catastrophic forgetting in neural networks.\" Proceedings of the national academy of sciences 114.13 (2017): 3521-3526.\n\n[2] Cha, Hyuntak, Jaeho Lee, and Jinwoo Shin. \"Co2l: Contrastive continual learning.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.\n", " This paper introduces sparse continual learning, a continual learning framework that overcomes catastrophic forgetting of deep networks in the learning of a task stream and accelerates the training and inference. \nThe efficiency and effectiveness are achieved through three key components that encourage sparse network weight connection, replay buffer selection, and sparse gradient truncation. \nThe effectiveness is validated on both public benchmarks on real-world edge deivce. Strengths:\n\n- The presentation of the paper is clear and easy to follow. \n\n- The results on mobile devices are a plus\n\n\nWeakness:\n\n- I see sparsity as a universal and generic way of improving deep model efficiency. Therefore it is not surprising to see introducing sparsity to CL can improve efficiency. \n\n- Among the three techniques proposed by this paper, only the weight sparsity part seems to have a direct impact on overcoming catastrophic forgetting, which is the key to improving continual learning. \n\n- In Table 1, the latest method included in the comparison is from 2020. The quantitative comparison are clearly insufficient to support the claim that the proposed framework 'further improves the SOTA accuracy by at most 1.7%.'\n\n- Overall, this paper delivers a useful message that sparsity can improve the efficiency of CL, and even slightly improve the accuracy. However, the novelty and soundness of the paper do not meet the standard of NeurIPS. The proposed method relies on the usage of a reply buffer. As also discussed by the authors in Appendix Section A, the proposed method can potentially be applied to more advanced methods that do not rely on any previous data. Showing the proposed method to be more 'model agnostic might help improve the paper. 
\n\n The authors discussed the limitations in Appendix Section A", " The paper proposes Sparse Continual Learning (SparCL in which they wanted to do cost-effective continual learning on edge devices by leveraging the sparsity. Author's also claim that they achieve both training acceleration and accuracy preservation through the synergy of three aspects: weight sparsity, data efficiency, and gradient sparsity. Including TDM process for dynamic data removal (DDR) to remove less informative training data, and dynamic gradient masking (DGM) to sparsify the gradient updates. Authors also claim that the proposed method is uses 23Xless training FLOPs, and, achieving the SOTA accuracy by at most 1.7%. The authors also tested the method on two datasets split cifar-10, spilt tiny Image-net in Class Incremental and Task Incremental setting. \n\nStrengths:\n\n+ Inclusion of Dynamic data removal along with sparsity is good benefit to overall system.\n+ Sparsity in gradient and propose dynamic gradient masking \n+ Method also considers the importance of weights w.r.t. data saved in rehearsal buffer.\n\nCon's:\n- While reading the paper, I really felt that some sections of the paper are not clearly written specifically Sec 4.1 Task-aware Dynamic Masking.\n\n- I am not really clear/convincing about the authors claim about the memory foot print and training FLOPS. Consider one of the longback papers basic paper on CL using Neural Pruning (https://arxiv.org/abs/1903.04476)(You can choose any other regular CL on pruning paper) and some other CL using sparsity papers which does CL on split CIFAR-10/CIFAR-100 using pruning and they achieve the CL on them without using memory buffer of 500 images. Similarly in the method proposed by authors in which it is also showed that that they need to do extra steps of dynamic Gradient Masking,Dynamic Data Removal which add the overhead too. Clarify this if I am wrong in my understanding on how these additional overheads won't effect the training FLOPS and memory footprint.\n\n- I am also not clear with authors point of view of choosing the \"Rehearsal based methods also\" for comparing for \"memory foot print and training\" as it is obviously clear that rehearsal based methods does need additional amount of training and memory.\n\n- I am also very curious to know what happens when buffer size of the proposed method is 0. What happens to accuracies as shown in the Table 1 when buffer size increased from 200--->500 the acc increased from 65-->72\n\n- The improvement in the acc w.r.t to SOTA minimal as PackNet shown in table 1 does show packnet, LPF achieves 93.73%,94.50% acc when compared to best SparCL-DER75(95.19±0.34)(500 buffer) and SparCL-DER75(94.06±0.45)(200 buffer).\n\n- One more limitation I see regarding the proposed approach is towards the scalability as you can see current papers such as ANML(https://ecai2020.eu/papers/939_paper.pdf), of CL does show CL >100's of tasks. But how would given proposed scale comparing the overhead of buffer, memory masks while trying to learn huge no of tasks.\n\n\n\nOverall the paper is generally well written and easy to follow apart from few sections., and the experiments are thorough and well-executed but limited to fewer datasets, lesser tasks. and minimal improvement on SOTA There are extensive ablation experiments demonstrating some of the key components the proposed approach. 
\n\n \nPlease refer \"Strengths And Weaknesses\" Please see \"Strengths And Weaknesses\"", " The authors discuss whether continual learning can be investigated under the perspective of training efficiency and propose combining network sparsification techniques in a CL setting. As a result, they notice that sparser network are not necessarily worse continual learners and that they can possibly surpass some recent non-sparse approaches. The effectiveness of the proposed SparCL method is investigated by means of ablative studies and a conclusive test on an edge device. Strenghts\n- At the core of this work is a very well-thought intuition: the learning process is noisy/overparameterised and pruning can be beneficial in CL, where we need to cram knowledge in waves into a model and possibly retain generalisation capabilities to facilitate upcoming tasks.\n- the presentation of this work is clear, the exposition is easy to follow.\n\nWeaknesses (in decreasing order of importance)\n- While the authors claim that they are using a SOTA roster of competitors, I believe that this is not exactly the case. Some recent methods that are missing and could provide an interesting comparison are: LUCIR [a]/BiC [b] (somewhat stronger baselines w.r.t. iCaRL that additionally manage the bias problem), MIR [c]/GSS [d] (which similarly to DDR operate a selection on data which is memorised), CO2L [e]/DualNet [f] (recent works focussing on representation learning, which could or could not benefit from sparsification, as their inner working is rather different from standard replay methods).\n- In line with the previous point, the authors take DER as one of their reference baselines. In doing so, however, they are omitting the improved DER++ method proposed in the same paper that appears to be better in all respects w.r.t. DER++. Is there a reason for this exclusion? If not, I take that DER++ should be included instead.\n- It would be very interesting to have an additional study on where the sparsification takes place in the network. The authors describe very practical routines to establish which weights to prune. Linked to the recent trend discussing the effect of catastrophic forgetting on different layers of the networks [g], it would be very interesting to have an analysis of at which level the network is pruned the most and whether this changes at all within or between tasks.\n- an experimental comparison could be made between DDR and the approach for buffer reduction delineated in [d] (gradient based) or the one described in [h] (loss based)\n- there seem to be some problems with Alg.1: the second if-then (if $t=\\delta$ k) is missing its end and I cannot understand whether the following $t mod \\delta k = 0$ also applies for tasks > 1. If this is the case -- as I understood from the text -- then this is not an else-if.\n- I recommend that the authors include figures in this paper in vectorial format so as to facilitate reading.\n- as per neurips policies, limitations and societal impact should be in the main paper, not in the appendix\n- Typos: line 95 (thus not applicable - verb missing), line 161 (missing escape of space after w.r.t. - write w.r.t.\\ in latex), line 282 (contributes -> contribute), line 288 (as much as informative -> as much informative)\n\n[a] Hou, Saihui, et al. \"Learning a unified classifier incrementally via rebalancing.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. \n[b] Wu, Yue, et al. 
\"Large scale incremental learning.\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019. \n[c] Aljundi, Rahaf and Belilovsky, Eugene and Tuytelaars, Tinne and Charlin, Laurent and Caccia, Massimo and Lin, Min and Page-Caccia, Lucas, \"Online Continual Learning with Maximal Interfered Retrieval\", NEURIPS 2019 \n[d] Aljundi, Rahaf, et al. \"Gradient based sample selection for online continual learning.\" Advances in neural information processing systems 32 (2019). \n[e] Cha, Hyuntak, Jaeho Lee, and Jinwoo Shin. \"Co2l: Contrastive continual learning.\" Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021. \n[f] Pham, Quang, Chenghao Liu, and Steven Hoi. \"Dualnet: Continual learning, fast and slow.\" Advances in Neural Information Processing Systems 34 (2021): 16131-16144. \n[g] Ramasesh, Vinay V., Ethan Dyer, and Maithra Raghu. \"Anatomy of catastrophic forgetting: Hidden representations and task semantics.\" arXiv preprint arXiv:2007.07400 (2020). \n[h] Buzzega, Pietro, et al. \"Rethinking experience replay: a bag of tricks for continual learning.\" 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021. Please refer to the weakness section above, I am mostly interested in seeing the following point addressed:\n\n- Why is DER++ not among competitors?\n- Can any hypothesis be made on whether representation learning approaches CO2L/Dualnet can also benefit from sparsification?\n- Where are weights pruned the most in the backbone, is there a pattern?\n The authors adequately addressed the limitations and potential negative societal impact of their work. It is recommended to do so in the main paper, not in the appendix.", " The paper addresses continual learning in edge devices. It claims that previous work on CL didn't report the actual FLOPs in training which is very important in real life applications on edge devices. The paper includes adaptations of sparse training to CL . It applies three techniques: 1)dynamic weight sparsification based on the magnitude of the weights and magnitude of the gradients on the novel task and on the buffer (it combines it with collapse and expand approach on class boundaries). 2)It reduces the data based on misclassification. 3)It applies less weight updates by not updating parameters with low gradients. Applying these techniques results in significant time and memory savings without loss of accuracy when integrated with rehearsal methods for CL. It even shows slight improvement in forgetting. \n\n Strength: reduces the resources in CL, while previous work focused on not increasing them. Useful in real-life applications. Novel in terms of focus. \nWeakness: The proposed techniques are straightforward and do not offer technical novelty.\nHowever, suggesting a straightforward solution in a paper that raises a new problem/challenge (to the best of knowledge) could still be OK. There is no section 6 in the Appendix. Yes" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 7, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 5, 4, 4 ]
[ "8T1I-3103Lz", "2BM3ldFnnb-", "gkGZ8Lfgd9-", "8T1I-3103Lz", "2BM3ldFnnb-", "LyZZPlw_vIT", "pLcSYRuaDsG", "8T1I-3103Lz", "8T1I-3103Lz", "8T1I-3103Lz", "8T1I-3103Lz", "2BM3ldFnnb-", "2BM3ldFnnb-", "hI5fvrY_8cm", "hI5fvrY_8cm", "gkGZ8Lfgd9-", "nips_2022_n0dD3d54Wgf", "nips_2022_n0dD3d54Wgf", "nips_2022_n0dD3d54Wgf", "nips_2022_n0dD3d54Wgf" ]
nips_2022_u3vEuRr08MT
Memorization Without Overfitting: Analyzing the Training Dynamics of Large Language Models
Despite their wide adoption, the underlying training and memorization dynamics of very large language models is not well understood. We empirically study exact memorization in causal and masked language modeling, across model sizes and throughout the training process. We measure the effects of dataset size, learning rate, and model size on memorization, finding that larger language models memorize training data faster across all settings. Surprisingly, we show that larger models can memorize a larger portion of the data before over-fitting and tend to forget less throughout the training process. We also analyze the memorization dynamics of different parts of speech and find that models memorize nouns and numbers first; we hypothesize and provide empirical evidence that nouns and numbers act as a unique identifier for memorizing individual training examples. Together, these findings present another piece of the broader puzzle of trying to understand what actually improves as models get bigger.
Accept
This paper studies the underlying training and memorization dynamics of very large language models. The main takeaways are that larger language models memorize training data faster, and that this memorization happens before the model overfits on the language modeling task. Tokens with certain part-of-speech tags (nouns, numerals) seem to be memorized faster during training. Overall, most reviewers feel positively about this paper, agreeing that it tackles an important problem and that it provides a solid contribution. The experimental results are detailed and use reasonable metrics for data memorization, including the forgetting identifier experiments. Some of the weaknesses that have been pointed out (e.g. regarding the significance of the part-of-speech tags experiment, clarifying the criteria for memorization, etc.) seem to have been well addressed during the author response. Therefore, I recommend acceptance.
train
[ "3EPQHFVUWy", "O3PHX3RTT0c", "OWqzCZKAdDM", "0N8mf0lZhfV", "otgTOcycDSz", "dpsFVNDmzZ", "HIj7jGKI3N", "jmta3fzebhN", "WYSU_XYzkSD", "RvyGE5LIJxo", "7IKWxwp0J4", "JEhui3saOMx", "eMdL3Kgp2F6" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for providing a detailed response to my questions. I'm glad that most of my questions are answered, and I'm happy to improve my original evaluation.", " \n*W7: I'm not convinced by the experiments that consider memorization relationship with overfitting as though these are two separate phenomena. What if the model is overfitting on a subset of the data, but generalization is improving overall, just not on examples that might be related to that subset? Seems like overfitting by this metric might be an emergent property of having enough memorization occur.*\n\nA: In practice, overfitting is defined with respect to a fixed train/test split (if we do not have out-of-sample data to evaluate model performance on, we have no way of knowing that the model is fitting noise in the train set). Our goal in this section was to see if larger models memorize faster, because they overfit faster with respect to the given train/test split. In other words, we are considering “memorization on the train/test split” and “overfitting on the train/test split” as separate phenomena, not necessarily “memorization” and “overfitting.”\n\nWe agree that in non i.i.d. settings, the model could be overfitting on subsets of data while generalization improves. However, Wikitext103 is relatively i.i.d. because all data is collected from high quality, factual, and neutral Wikipedia articles (see Section 4.3 in [5]). Therefore, we do not believe the situation you describe occurs very frequently, although confirming this experimentally is intractable (would require evaluating over all possible subsets of training data). \n\nNevertheless, we see your point and are hoping to re-run this experiment over different train/test splits and include results in the appendix in the final version (we are unable to do so in the short timeline of a week, due to the scale of models we consider).\n", " ### Citations:\n[1] Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646, 2022.\n\n[2] Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, et al. Extracting training data from large language models. In 30th USENIX Security Symposium (USENIX Security 21), pages 2633–2650, 2021.\n\n[3] Chiyuan Zhang, Daphne Ippolito, Katherine Lee, Matthew Jagielski, Florian Tramèr, and Nicholas Carlini. Counterfactual Memorization in Neural Language Models. arXiv:2112.12938 [cs], December 2021. arXiv: 2112.12938 version: 1.\n\n[4]: Lee, K., Ippolito, D., Nystrom, A., Zhang, C., Eck, D., Callison-Burch, C., and Carlini, N. (2021). Deduplicating training data makes language models better. CoRR, abs/2107.06499\n\n[5]: Merity, Stephen, et al. \"Pointer sentinel mixture models.\" arXiv preprint arXiv:1609.07843 (2016).\n\n[6]: McCoy, R. Thomas, et al. \"How much do language models copy from their training data? evaluating linguistic novelty in text generation using raven.\" arXiv preprint arXiv:2111.09509 (2021).\n\n[7]: Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning requires rethinking generalization. arXiv:1611.03530 [cs], February 2017. URL http://arxiv.org/abs/1611.03530. arXiv: 1611.03530\n\n[8]: Pondenkandath, Vinaychandran, et al. 
\"Leveraging random label memorization for unsupervised pre-training.\" arXiv preprint arXiv:1811.01640 (2018).\n\n[9]: Zhang, Susan, et al. \"Opt: Open pre-trained transformer language models.\" arXiv preprint arXiv:2205.01068 (2022).\n\n\n\n", " *W8: Sparse literature review*\n\n*(1) This work is not contextualized in the existing literature on training dynamics in language models*\n\nA: Thank you for pointing this out, we agree we missed a section of relevant work. We have added a paragraph of relevant work on training dynamics in language models in Section 2.\n\n*(2) Is label memorization extending an existing concept?*\n\nA: We have added relevant citations when we introduce label memorization in Section 3, as well as a footnote informally describing what “label memorization” is referring to.\n\n*(3) There is a mention of spaced repetition from the cognitive science literature, but the authors do not acknowledge that it has also been applied in natural language processing training: https://aclanthology.org/D17-1255/*\n\nA: This is a very relevant citation for our spaced repetition forgetting experiments, thank you for mentioning this! We have added the relevant citation in Section 5 (see L247).\n\n*(4): Many concepts are introduced without any kind of citation to the existing literature: catastrophic forgetting, machine unlearning*\n\nA: We define and cite both catastrophic forgetting and machine unlearning in Section 2 (in “Forgetting in Language Models” subsection). See L64-66 for catastrophic forgetting and L72-L73 for machine unlearning.\n\n*(5): Terms like \"basin\" and \"phase transition\" are introduced without background, so it's not clear what type of phenomenon each of these phrases is intended to refer to in this context.*\n\nA: We agree it would make the paper more clear to precisely define these terms, since they are general terms that many researchers use. We have added clarification for these terms in the updated draft. Note: We decided to remove the mention of “basin” since the described phenomena in Figure 5 does not consistently show long flat regions of T(N, 0.9) as we vary LR; we think “lowest point on the curve” better describes what we mean and is more self-explanatory than “basin.”\n\n### Minor Comments:\nThank you for pointing out these details, we have fixed them in the updated draft.\n\n### Questions:\n\n*Q1: Why did you pick the particular reference implementations that you picked? Are these considered standard for work like this?*\n\nA: For training models at the 1B+ scale, there are not a lot of open-source reference implementations available. We chose our particular implementations/frameworks because they have open sourced models [9] and are consistent — both in configuration (see Table 1 and Section 2.1 in [9]) and performance (see Section 3 and Figure 4 in [9]) — with standard large-scale language model implementations such as GPT-3. \n\n*Q2: lines 240-242: Does this imply that the special batch is not seen once immediately when introduced, but is seen later on? Or is this implying that the spectral batch is seen repeatedly in each epoch? That was not my understanding initially, en did makes this confusing*\n\nA: In L240-242, the special batch is only seen once immediately when introduced. We have updated the draft on L228 to state this more explicitly, thank you for pointing out that this is confusing.\n\n*Q3: line 202: Maybe clarify that this means memorization of which words correspond to a particular part of speech? 
Or are you claiming that the correct part of speech is consistently selected? Because that should definitely not be referred to as memorization.*\n\nA: With the metric R(p) on L202 (or L207 in the updated draft), we are claiming to track memorization of which words correspond to a particular part of speech. We have clarified it in the updated draft on L207; thanks for noting that this phrasing is confusing.\n\n*Q4: 187-189, 92-95: Could you rewrite these sentences? I find them very confusing.*\n\nA: We have updated these portions in the draft (hopefully they are clearer now, but please let us know if that is not the case).\n", " *W3: What does it mean for a part of speech to be memorized?... I don't think that it's appropriate to claim that a sequence is memorized just because the predictions made are the correct part of speech.*\n\nA: If the model predictions are the correct part of speech, we do not claim that the “sequence is memorized” but that the parts of speech for a sequence are memorized. We agree that predicting the part-of-speech correctly for individual words may not be extreme enough to constitute “memorization.” However, if *all* the parts of speech for a given sequence are consistently predicted correctly, we believe that constitutes “memorizing parts of speech” (especially considering that the average sequence length in WikiText103 is 430.12 tokens). As we mentioned in the answer to W1, we study the transition from [not predicting any parts of speech correctly] => [predicting all parts of speech correctly = “memorizing parts of speech”].\n\n*W4: 209: If the model is just learning to predict that the token is a numeral, and not learning to predict the specific numeral, it’s not clear to me that this is actually going to have privacy implications.*\n\nA: We see what you are saying and agree that only knowing part-of-speech has limited privacy implications; however, we disagree that it has no privacy implications. If the training set contains a prompt such as:\n\n“The password for root is _ _ _ _ _”\n\nand the language model has completely memorized parts-of-speech (it can correctly predict part-of-speech), it makes it easier for an attacker to extract the password for `root` since they know which positions should be numerals.\n\nAlso, the right plot in Figure 7 demonstrates that the model learns to verbatim output nouns/proper nouns/numerals faster than other parts of speech. This has privacy implications, since in this case the training data itself is exposed.\n\nIn any case, we have tempered the language on that line to reflect that there are “potential privacy implications” and not necessarily direct privacy implications.\n\n*W5: In terms of the uniqueness experiments, I would want to see what the effect of word frequency is in general before considering the role of completely unique words.*\n\nA: Previous work has extensively shown that memorization increases as word frequency (or sequence frequency) increases [1, 2, 4]. Most related to our experimental setup, [1] systematically increases sequence frequency in the training dataset, and measures the impact on memorization. \n\nFor masked language models, [1] defines memorization the same way we do (see Section 5.1 in the `Model and dataset` section). Figure 4c in [1] shows that increasing string frequency increases memorization.\n\nFor causal language models, [1] defines a sequence as memorized if (a) it is present in training data and (b) it is extractable from the language model using greedy decoding. 
This simply extends our definition to consider memorization of entire sequences, rather than individual (context, word) pairs. We see in Figure 1b in [1] that across model sizes, increasing string frequency increases memorization.\n\nThis motivates the use of completely unique words — if we use non-unique words, then any measured increase in memorization could be attributed to increased frequency of words, as opposed to the model uniquely identifying training examples.\n\n*W6: 174: \"classical ML concepts cannot even explain such a memorization trend.\" Not sure what this means, you should probably talk about the concepts that you feel fail to explain the trend.*\n\nA: The concepts we had in mind were along the lines of what we tested in the paper (hyperparameters such as learning rate, concepts from learning theory such as overfitting, complexity measures). However, we realized that we are not running experiments ruling out any other concepts, so we decided to remove this sentence to avoid misleading readers. Thank you for pointing out that this sentence was not clear.", " Thank you for such an insightful review! We would like to address your points individually below:\n\n### Weaknesses:\n\n*W1: A broad issue I have is I'm not convinced \"memorization\" is the appropriate term for the behavior discussed.*\n\nA: We understand the hesitancy to call what we study in section 4 “memorization” since it can also be seen as accuracy (we highlight this on L95-96). Memorization itself is not well defined in the case of language modeling, with multiple works all defining memorization differently (for example [1] in section 3.1, [2] in section 3.1.1, [3] in section 2.1). The reasons we chose our particular metric: \n\n(1) We are interested in memorization from a privacy/fairness perspective, which means we first need to study the question: if we deploy this model, and someone prompts with a context from training data, will the model output exactly what was in the training data? More importantly, how does the transition from [not outputting training data] => [outputting training data] depend on factors such as scale?\n\nOnce we’ve characterized this behavior, we can ask more complicated questions such as:\nHow robust is memorization of training data: what happens if we change particular words in the sentence, or if we make the input context length longer? Does the model still consistently output the same word?\nWhat about semantic memorization: what if the model always outputs a label y’ which conveys the same information as y (for example, a synonym)?\n\nWhile these sorts of experiments more clearly align with the natural concept of memorization, we still need to establish the baseline of how quickly the model exactly outputs training data.\n\n(2) “Memorization” is also sometimes defined as perfectly fitting the training data [7, 8]. However, this definition only works if you consider the final model after training. We are interested in how the model transitions from [not fitting training data at all] => [perfectly fitting training data = “memorization”]. In order to do that, we must use a metric that measures the degree of memorization, and analyze how it evolves over training. 
Therefore, even if intermediate values of our metric might not be extreme enough to be considered full “memorization,” we still believe it is valid as a measure of the degree of memorization.\n\n*W2: Also, how is this affected by conditions where the same context appears multiple times in the training set with different labels? If every time a certain sequence occurs in training, it always occurs with a particular word, it seems that it's not memorization to indicate that that's the highest probability word.*\n\nA: To address the first question: we believe that if we train the model on examples (c, y) and (c, y’), then the model will become less confident in its prediction when prompted with “c” (although it is unclear which label it will choose).\n\nTo address the second sentence: We believe the important factor here is context length. \n\nWe agree that there are short contexts with (probably) high sequence frequency, which if consistently outputted, does not necessarily indicate memorization — for example, the sequence “in the” is a very common phrase in natural English that is likely to appear frequently in a dataset. However, there are also short contexts that appear multiple times in WikiText103 such as “The Boat Race” (see [6], section 5.2, in the section title “What causes supercopying”). This is not an extremely common phrase in normal English, so consistently outputting this sequence can be considered more memorization of training data than general learning.\n\nOn the other hand, if we have long contexts that the model consistently outputs, even if it is due to high sequence frequency, we believe it is more indicative of memorization. This is because these long sequences are unlikely to be very frequent in general English. As a concrete example, [6] shows that there are high-frequency long contexts (100-grams) in WikiText103 that language models consistently output, which they refer to as “supercopying” (see Section 5.2). They attribute this behavior to high sequence frequency (see Figure 9), and if you inspect Figure 8, you see some examples of these 100-grams, which clearly show that they are not likely to be frequent in natural English.\n\nSince previous work [1] has shown that short context length sequences are difficult to extract (which implies they are hard to memorize by our metric: our memorization metric implies extractability as defined in [1], so (not extractable) implies (not memorized)), we did not focus extensively on distinguishing different cases within the family of short context-length sequences.\n", " Thank you for reviewing our work! We are happy that you found our experiments well-designed and agree with our main points.\n\n### Weaknesses\n\nW1: Some example-based analysis (e.g. the POS-wise breakdown of memorization and the forgetting experiment) would be helpful in further demonstrating the procedure.\n\nA: We are not entirely sure what you mean by “further demonstrating the procedure” but we are guessing that you would like to see the example-based analyses rerun with different configurations (i.e. changing hyperparameters).\n\nFor the forgetting experiment, we are able to consider varying model scales up to 13B, but sweeping hyperparameter/model families at this scale is very computationally expensive, so we leave such experiments for future work. 
Previous work has shown that large scale language models are relatively robust to variations in learning rate and related parameters [2] (see 3rd paragraph in Section 7 in [2]).\n\nFor the POS-wise breakdown of memorization, there is work [1] that conducts a similar study with smaller model sizes and a different language model family (masked language models). They also find that different parts of speech are learned at different speeds.\n\nFor both example-based analyses, our intention was to introduce ways of analyzing how language models memorize training data, rather than to provide a full sweep of configurations (which we leave as future work).\n\n### Limitations:\nL1: One potential related reference: https://aclanthology.org/2022.tacl-1.1.pdf\n\nA: We agree that this is a relevant citation, and have included it in the “Related Works” section of our paper (see L59 in the updated draft).\n\n### Citations:\n[1]: Cheng-Han Chiang, Sung-Feng Huang, and Hung-yi Lee. 2020. Pretrained Language Model Embryology: The Birth of ALBERT. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6813–6828, Online. Association for Computational Linguistics.\n\n[2]: Kaplan, Jared, et al. \"Scaling laws for neural language models.\" arXiv preprint arXiv:2001.08361 (2020).\n", " Thank you for thoroughly reviewing our paper! We are glad that you found our work well-written and interesting. We would like to address your comments individually:\n\n### Weaknesses\nW1: “It is a bit confusing to understand which model family has been used as the authors only state they have used ‘Transformer language model architectures’ in L114”\n\nA: We see how this portion is confusing, as we only specify the broad class of models, and leave the details of the architectures to the cited works in L114. We have updated the draft (L124-125 in the updated draft) to include that we consider both causal and masked language models. \n\nW2: Dataset size issue (we don’t consider large datasets for all our experiments)\n\nA: We completely agree that it would be best if we could run all our experiments on very large datasets (e.g., The Pile, C4). As you mentioned, this would be quite computationally expensive, which was the primary reason we were unable to run these experiments. However, we are able to analyze our main claim (larger models memorize faster) on the data used to train RoBERTa (see section 4.1), which allows us to consider the under-parameterized regime for all our model scales (because the dataset has close to 39B tokens).\n\nWe have added a portion of text *explicitly* addressing the dataset size limitation in section 3 (see L130-132 in the updated draft).\n\nW3: Lack of explanation of POS (part of speech) results:\n“The study on parts of speech is especially interesting; however, the section lacks a bit on the explanation of the behaviour … It would be interesting/beneficial for the paper to study this angle further, even using the WikiText-103 dataset, to see if there is any correlation of memorization with the frequency of individual POS classes.”\n\nA: We agree that the frequency of a particular POS class could aid in memorization of that POS class, especially since previous studies have shown that higher sequence frequency leads to higher sequence memorization [1]. 
Our intention with this section was just to introduce the idea of analyzing “what is memorized when” through the lens of POS, not necessarily determine why the POS trend occurs, although we hope to analyze this in future work.\n\n### Questions\nQ1: In L122, the authors seem to refer to the BookCorpus as the “RoBERTa” dataset. Any specific reason why? \n\nA: If you look at the original RoBERTa paper (see Section 3.2 in [2]), there are more datasets than just BookCorpus that are used to train the model, which is why we referred to this dataset as the RoBERTa corpus. This includes BookCorpus, CC-News, OpenWebtext, and Stories.\n\nQ2: Furthermore, BookCorpus, which is used to train the RoBERTa model, comes in two sizes: 16GB (which is used for RoBERTa-base) and 160GB (which is used for RoBERTa-large). I'm not sure which one is being used here.\n\nA: We used the RoBERTa-large training dataset. The total size is around 160GB.\n\nQ3: What is the model family used in these experiments? Are the results specific to causal language models or masked language models? Do the authors expect the results to differ based on the class of the model family being used?\n\nA: Our results in section 4 (“Larger models memorize faster”) are for both causal (Figures 1, 3, 4) and masked (Figures 2, 3, 4) language modeling. We also show memorization dynamics for both causal and masked language modeling (on both WikiText103 and the RoBERTa dataset) in Appendix A.1. Apart from those experiments, all other analyses were performed on causal language models. We do not expect the overall trends (with respect to model scale) to differ based on the model family, because the memorization dynamics between model families are similar (see Figures 11/12 in Section A.1).\n\nQ4: In the catastrophic forgetting experiment, the special batch is taken from the validation set of Wiki 103, which is in-distribution with the training set. If I were to choose a special batch that is significantly out-of-distribution with respect to the training data, would the same results hold?\n\nA: We expect that the in-distribution forgetting is an upper bound on the out-of-distribution forgetting, but we expect to see a similar dependence on model size, which is supported by previous work showing that increasing model scale mitigates out-of-distribution forgetting [3].\n\n### Citations:\n[1]: Nicholas Carlini, Daphne Ippolito, Matthew Jagielski, Katherine Lee, Florian Tramer, and Chiyuan Zhang. Quantifying memorization across neural language models. arXiv preprint arXiv:2202.07646, 2022.\n\n[2]: Liu, Yinhan, et al. \"Roberta: A robustly optimized bert pretraining approach.\" arXiv preprint arXiv:1907.11692 (2019).\n\n[3]: Ramasesh, Vinay Venkatesh, Aitor Lewkowycz, and Ethan Dyer. \"Effect of scale on catastrophic forgetting in neural networks.\" International Conference on Learning Representations. 2021.\n", " Thank you for taking the time to review our paper! We are glad you agree with our claims and found our paper clear and well-written. \n\nTo answer your question: “Any hypotheses as to why memorization is faster when adding tokens to the dictionary without even using them?”: \n\nOur hypothesis is that adding “fake” tokens to the dictionary increases the number of trainable parameters in the model (in the Embedding layer and the last Linear layer), which allows the model to memorize training data faster. This is supported by the evidence in Section 4 that larger models memorize faster, although in Section 4 we increased # parameters in more conventional ways. 
Taken together, this seems to point towards the hypothesis that any increase in # parameters (however unconventional) will generally lead to faster memorization of training data. We say “generally” mainly because very recent work by Google/Deepmind has demonstrated that different transformer architectures have different scaling laws [1], indicating that memorization scaling laws will also slightly differ if we change *how* we increase the # of parameters.\n\n### Citations:\n[1]: Tay, Yi, et al. \"Scaling Laws vs Model Architectures: How does Inductive Bias Influence Scaling?\" arXiv preprint arXiv:2207.10551 (2022).\n", " In this paper, the authors define memorization as any situation in which the model predicts the correct word as the maximum-probability word given its context. They find that larger models memorize faster under this definition. They then study the asymptotic behavior of forgetting the memorized samples, finding that larger models likely never completely forget their memorized samples. They then study memorization patterns according to part of speech, suggesting that rare words tend to lead to memorized sequences. Strengths:\n\nThe question of how memorization occurs over the course of training is an interesting one. I like that they bring in implications for privacy, though I’d like to see more explicit links to the generalization literature.\n\nlines 227-228: As the number of parameters increases, the model forgets less. Seems obvious, but actually really nice to see this demonstrated in such a simple way. In general, some of their results seem intuitive, so when their methods are convincing and well explained, it's satisfying. This sort of work often finds people complaining that these are things that we already know, but if they aren't tested in the current empirical work, this kind of result has a lot of value. The results on forgetting curves and the lower bounds that they exhibit at various scales were convincing and presented a good framing for the phenomenon of forgetting.\n\nWeaknesses:\n\nA broad issue I have is that I'm not convinced \"memorization\" is the appropriate term for the behavior discussed. \n\nIt's not clear to me that every time the correct word is predicted as the maximum-probability word, it should be counted as memorization rather than generalised learning. Also, how is this affected by conditions where the same context appears multiple times in the training set with different labels? If every time a certain sequence occurs in training, it always occurs with a particular word, it seems that it's not memorization to indicate that that's the highest probability word.\n\n What does it mean for a part of speech to be memorized? I don't agree that a part of speech is memorized just because it is predicted correctly in context. There aren't that many parts of speech, so unlike when trying to predict the missing word in a unique sentence, I don't think that it's appropriate to claim that a sequence is memorized just because the predictions made are the correct part of speech. 
207 highlights this problem, as it refers to the behavior first as learning and then as memorizing.\n\n209: If the model is just learning to predict that the token is a numeral, and not learning to predict the specific numeral, it’s not clear to me that this is actually going to have privacy implications.\n\nIn terms of the uniqueness experiments, I would want to see what the effect of word frequency is in general before considering the role of completely unique words.\n\n174: \"classical ML concepts cannot even explain such a memorization trend.\" Not sure what this means, you should probably talk about the concepts that you feel fail to explain the trend.\n\nI'm not convinced by the experiments that consider memorization's relationship with overfitting as though these are two separate phenomena. What if the model is overfitting on a subset of the data, but generalization is improving overall, just not on examples that might be related to that subset? Seems like overfitting by this metric might be an emergent property of having enough memorization occur.\n\nProblems with sparse literature review:\n\nThis work is not contextualized in the existing literature on training dynamics in language models, outside of recent literature on general scaling laws. I recommend looking through a variety of work on the area: https://www.semanticscholar.org/search?q=training%20dynamics%20language%20models&sort=relevance\n\nIn general, I take issue with the lack of citations to existing work. Is label memorization extending an existing concept? You wouldn't think so, from the paper. There is a mention of spaced repetition from the cognitive science literature, but the authors do not acknowledge that it has also been applied in natural language processing training: https://aclanthology.org/D17-1255/\n\nMany concepts are introduced without any kind of citation to the existing literature: catastrophic forgetting, machine unlearning. Terms like \"basin\" and \"phase transition\" are introduced without background, so it's not clear what type of phenomenon each of these phrases is intended to refer to in this context.\n\nMinor:\n- 132: \"generally monotonically decreasing\" just say that it's generally decreasing. It's not monotonically decreasing, and appending \"generally\" in this context just means \"not monotonic\".\n- 51: \\citep should be \\citet here\n\nWhy did you pick the particular reference implementations that you picked? Are these considered standard for work like this?\n\nlines 240-242: Does this imply that the special batch is not seen once immediately when introduced, but is seen later on? Or is this implying that the special batch is seen repeatedly in each epoch? That was not my understanding initially, and it made this confusing\n\nline 202: Maybe clarify that this means memorization of which words correspond to a particular part of speech? Or are you claiming that the correct part of speech is consistently selected? Because that should definitely not be referred to as memorization.\n\n187-189, 92-95: Could you rewrite these sentences? I find them very confusing. I didn't feel that the authors were precise enough in their definitions, leading them to use some words, like memorization, in ways that might not reflect what we usually mean when we talk about memorization. 
I think being more precise in their language would help this paper greatly, as the results that they describe might not influence other notions of memorization.", " This paper demonstrates a detailed picture of training data memorization in the process of language modeling. The authors show that larger language models memorize training data faster. Moreover, this memorization happens before the language model overfits, and tokens with specific part-of-speech tags, like nouns and numbers, are memorized faster during training. Lastly, they use the validation set as a special insertion batch to test the forgetting mechanism at different model scales. Strengths\n- The experimental results are detailed and use reasonable proposed metrics for data memorization.\n- The forgetting identifier experiments are well-designed.\n- In general, these fine-grained findings add value to drawing the picture of how and what data are memorized in transformer-based language modeling.\n\nWeakness:\n- Some example-based analysis (e.g. the POS-wise breakdown of memorization and the forgetting experiment) would be helpful in further demonstrating the procedure. The PoS classes are not of comparable size (e.g. numbers can be treated as a relatively closed set compared to nouns, even though they are not strictly closed, and the set of proper nouns is relatively small compared to the set of nouns). Therefore, the memorization of nouns demonstrated in the experiments may not be as informative as the memorization of more fine-grained categories like numbers. One potential related reference: https://aclanthology.org/2022.tacl-1.1.pdf", " This paper analyzes the phenomenon of memorization by Transformer language models in the light of model size, catastrophic forgetting, and memorization of unique tokens such as parts of speech. The paper finds that the larger the model, the more prone it is to memorizing the training distribution. Larger models also tend to forget less, and they seem to memorize unique part-of-speech tokens such as nouns and numbers.\n\n - The paper is well written and easy to follow.\n- The experimental protocol to measure memorization makes sense, and a good spread of model sizes has been used. It is a bit confusing to understand which model family has been used (as the authors only state they have used \"Transformer language model architectures\" in L114).\n- While I mostly agree with the empirical results, the issue I have with them is that the majority of experiments are conducted on Wiki Text 103, which is a 500MB dataset. This dataset is significantly small compared to those being used to train the large models (most of the model sizes considered in this paper are typically trained on significantly larger datasets). Thus, while undoubtedly the models somewhat favor memorization, it is unclear whether they still do the same when the input data is large enough. While I do agree such a study would be quite expensive and perhaps intractable, a disclaimer or limitation of this study should have been explicitly mentioned (on the data size front).\n- The study on catastrophic forgetting is quite interesting - it is nice to see a forgetting baseline set by the models of individual sizes.\n- The study on parts of speech is especially interesting; however, the section lacks a bit on the explanation of the behaviour. 
Specifically, nouns, proper nouns and numerals being memorized noticeably faster might be due to the frequency effect (Wei, J., Garrette, D., Linzen, T., & Pavlick, E., Frequency Effects on Syntactic Rule Learning in Transformers, arXiv:2109.07020 [cs], 2021). It would be interesting/beneficial for the paper to study this angle further, even using the WikiText-103 dataset, to see if there is any correlation of memorization with the frequency of individual POS classes.\n\n*Update*: I upgraded my initial evaluation after reading the author responses.\n\n- In L122, the authors seem to refer to the BookCorpus as the \"RoBERTa\" dataset. Any specific reason why?\n- Furthermore, BookCorpus, which is used to train the RoBERTa model, comes in two sizes: 16GB (which is used for RoBERTa-base) and 160GB (which is used for RoBERTa-large). I'm not sure which one is being used here.\n- What is the model family used in these experiments? Are the results specific to causal language models or masked language models? Do the authors expect the results to differ based on the class of the model family being used?\n- In the catastrophic forgetting experiment, the _special batch_ is taken from the validation set of Wiki 103, which is in-distribution with the training set. If I were to choose a special batch that is significantly out-of-distribution with respect to the training data, would the same results hold?\n\n The authors have not discussed the limitations of their work thoroughly. As I mentioned above, one big limitation is the use of a very small dataset to explain the behaviour of large language models, which are typically trained on much larger datasets (e.g., The Pile or C4). This is an area for improvement in the paper, and the authors should try to clearly set the expectations early on for the reader.\n", " This paper investigates the memorization and forgetting dynamics of large language models during training, as a function of model and dataset sizes, learning rate, and task (i.e. causal vs masked language modeling). The paper measures memorization as the proportion of correctly predicted labels (which corresponds to predicting the correct next/masked token) during training, and forgetting as the decline in memorization.\n\nThe main experimental results show that: (i) larger models memorize data faster and forget less; (ii) memorization occurs before overfitting; and (iii) memorization occurs for both small and large datasets, and for both causal and masked language modeling, regardless of the learning rate. The paper further investigates the memorization mechanism and shows that the presence of unique identifiers in the training data can contribute to memorization, e.g. models tend to memorize numerals, nouns and proper nouns faster than other parts-of-speech. Moreover, the paper shows that while memories from a dataset are forgotten when training with other data, the degree to which forgetting occurs has a lower bound that depends on the model size (larger models having a higher lower bound). \nThis paper addresses an important aspect of large language models: their ability to memorize the training data without necessarily overfitting. This unexpected behavior has been observed in previous work and calls into question the classical framework of bias-variance tradeoff. However, little is understood about the dynamics of memorization and forgetting in these models. 
Yet, this can have problematic implications for real-world deployments related to e.g., privacy (e.g., models can memorize personal identifiers or other sensitive information) and fairness (e.g., models can memorize negative stereotypes about groups of individuals). This paper gives insights into the memorization and forgetting dynamics in large language model training and into the scaling laws that seem to govern these models. \nOverall, the paper is well written and clear, and the main claims are well supported by experimental evidence.\n\n- Any hypotheses as to why memorization is faster when adding tokens to the dictionary without even using them?\n\nThe authors have discussed the limitations." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 6, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 4, 4, 4 ]
[ "jmta3fzebhN", "OWqzCZKAdDM", "0N8mf0lZhfV", "otgTOcycDSz", "dpsFVNDmzZ", "RvyGE5LIJxo", "7IKWxwp0J4", "JEhui3saOMx", "eMdL3Kgp2F6", "nips_2022_u3vEuRr08MT", "nips_2022_u3vEuRr08MT", "nips_2022_u3vEuRr08MT", "nips_2022_u3vEuRr08MT" ]
nips_2022_Q38D6xxrKHe
High-dimensional limit theorems for SGD: Effective dynamics and critical scaling
We study the scaling limits of stochastic gradient descent (SGD) with constant step-size in the high-dimensional regime. We prove limit theorems for the trajectories of summary statistics (i.e., finite-dimensional functions) of SGD as the dimension goes to infinity. Our approach allows one to choose the summary statistics that are tracked, the initialization, and the step-size. It yields both ballistic (ODE) and diffusive (SDE) limits, with the limit depending dramatically on the former choices. We find a critical scaling regime for the step-size below which this ``effective dynamics" matches gradient flow for the population loss, but at which, a new correction term appears which changes the phase diagram. About the fixed points of this effective dynamics, the corresponding diffusive limits can be quite complex and even degenerate. We demonstrate our approach on popular examples including estimation for spiked matrix and tensor models and classification via two-layer networks for binary and XOR-type Gaussian mixture models. These examples exhibit surprising phenomena including multimodal timescales to convergence as well as convergence to sub-optimal solutions with probability bounded away from zero from random (e.g., Gaussian) initializations.
Accept
The paper is quite interesting and rigorous, with intriguing conclusions. The rebuttal also addressed all the major concerns -- mostly technical clarity. I congratulate the authors for the nice work and recommend an acceptance for the paper.
val
[ "v-UEO1fdgdk", "KNAAMa6mEOG", "W6_OImRsT0", "EQPOx0cWBHGd", "KNphMyiGCrD", "PjxyOZAkOb4", "J5Jy9BRyBe4", "5z0nsfuxNdKD", "-X6ujiQW4Nh", "kMYMxgSPtl8", "c8Wb6XIMxsO", "vVVxM2OR6HZ", "gn5Ue-aCCKv" ]
[ "official_reviewer", "official_reviewer", "official_reviewer", "author", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Dear authors,\n\nI really appreciate the time you took to answer my questions and clarify certain points. I was also confused by your answer to my question about the gradient-like form of the drift and the bias term as reviewer hQ5R pointed out. But the authors clarified this issue quite well for me. I would also like to thank reviewer hQ5R for addressing this misunderstanding. Based on the technical strength and the significant contribution of this paper as well as the willingness of the authors to revise the presentation style and the organization, I am willing to increase my rating to '7:accept'. ", " I thank the authors for detailing more this aspect. That's clear.\n\nGiven the authors' openness to the reviewers suggestions - which I think will help making the work more accessible to the NeurIPS community - as well as their effort for clarifying some points during the discussion period, I am happy to raise my score towards a stronger accept. ", " That's clear. I thank the authors for the clarification.", " In the specific examples we have looked at, for the evolution of the error, as well as other summary statistics, we expect the stochastic contribution to only be non-negligible in neighborhoods of the fixed points of the ballistic dynamics. \n\nIn order for this not to be the case for some statistic $u$, it would have to be that the \\emph{variance} of the SGD increment is especially large in the specific direction of $\\nabla u$ (namely, of an order $\\delta_n^{-1}$ larger than it is in most other directions). \n", " Thanks for the question. While the population dynamics for $X_t$ is of gradient type by definition, it can be that the function $\\mathbf{f}$ is not the gradient of any scalar function $F:\\mathbb R^k \\to \\mathbb R$ because of the non-linear projection from parameter space to the summary statistics. \n\nFor example, in the case of matrix PCA with $u_1 = m$ and $u_2 = r_\\perp^2$, the population dynamics of the summary statistics is given by \n$$\\dot u_1 = 2 \\lambda k u_1 - 2 k u_1^3 - 2k u_1 u_2\\quad \\mbox{and}\\quad \\dot u_2 = - 4k u_1^2 u_2 - 4k u_2^2.$$\nIt can be checked by taking antiderivatives that the right-hand sides are not the $(u_1, u_2)$ derivatives of any scalar function on $\\mathbb R^2$. In the special case where the population loss $\\Phi$ is a function $\\phi$ of the summary statistics, and the summary statistics are all linear functions, the population dynamics will indeed be of gradient type $\\mathbf{f} = \\nabla \\phi$. \n\nIn the course of writing this, we realized that we might have misunderstood referee uVoh's original question. In particular, one might ask whether the effective drift is the projection of a corrected gradient dynamics, namely if there is a function $F$ such that the effective drift for the summary statistics $\\mathbf{u}$ is of the form $\\langle DF,D\\mathbf{u}\\rangle$. Even though this holds for the population drift where $\\mathbf{f} = \\langle D\\Phi,D\\mathbf{u}\\rangle$, it is not necessarily the case for the full drift because the correction term corresponds to a second-order (as opposed to first-order) differential operator on $\\mathbf{u}$. \n", " I thank the authors for the time spent answering my questions and concerns. I appreciate it. I might need some time to fully digest the replies to all the other reviewers. Meanwhile, to make the best of the discussion period let me follow up in a reply. \n\n> (3) In general, nothing precludes this possibility since Theorem 2.2 is stated in rather broad generality. 
In particular, one does not need to apply it to a region near a fixed point in all coordinates to see a stochastic term. One could even take a combination of variables being tracked that have different scalings, some of which are initialized essentially near their ``optimal\" value and some moving ballistically. In this case, one might wish to rescale the first kind of variables to understand fluctuations while the remaining variables move ballistically. We will provide examples of this last scenario in a full version.\n\nThank you for clarifying this. I see how one could construct such an example by combining summary statistics at different scalings (and I look forward to the explicit example in the full version).\n\nHowever, if instead of looking at the summary statistics we focus on the evolution of the error (MSE for tensor-PCA, population risk for GMM classification, etc.), can the authors envisage a situation where the stochastic contribution is non-zero outside a fixed point of the drift? I know this is sort of an open-ended question, but I would like to hear the authors' intuition on this point.", " I completely share reviewer *uVoh*'s view on the presentation style, and therefore I would also endorse her/his request for expanding the introduction and motivation in a future version (I am well aware of the space constraints at this stage).\n\n> *\"With regards to the second part of the question, neither the population drift, nor the population corrector are necessarily of gradient type. In particular, even the population dynamics are a possibly non-linear function of a higher dimensional gradient flow, and transformations of this type can lead to non-gradient systems. \"*\n\nCan the authors develop further on this interesting point? In the discussion in L145-147, the authors remark that the population drift $f$ corresponds to *\"evolution under gradient descent on the population loss $\Phi$\"*. Why is it not necessarily of gradient type?", " We sincerely thank the referee for their detailed comments and helpful suggestions. \n\nRegarding terseness: we have added an Appendix B which discusses in detail each of the items in Definition 2.1. We hope this clarifies the role each of these conditions plays in Theorem 2.2 and why they should hold for a wide class of high-dimensional problems. For intuition on the proof of Theorem 2.2, and in particular the suggested Taylor expansion heuristic, due to space considerations we will expand on this in a full version, and we thank the referee for the suggestion. \n\nWe have added a short proof of the 3/32 probability of ballistic convergence to an optimal classifier to Appendix F.4. We apologize for our oversight in not having included this earlier and thank the referee for pointing that out. \n\nThe question of what happens when the number of summary statistics also diverges is a very interesting one that unfortunately doesn't fit directly into our framework; we leave this to future investigation. \n", " We sincerely thank the referee for their detailed comments and helpful suggestions. \n\nWe have expanded on the relation of our work to the works of Saad and Solla, as well as Veiga et al., in the places the referee requested. We thank the referee for explaining the relations of our work to those two important works, and apologize for not having realized this connection earlier. 
\n\nRegarding the difficulty of parsing the localizability condition: we have added two extended remarks in what is now Appendix B that discuss in detail each of the items in Definition 2.1. We hope these help clarify the role each of these plays and why they should hold for a wide class of high-dimensional problems. \n\nTo respond to the questions the referee asked:\n\n(1) Yes, by \"fixed dimensions\" here we mean regimes in which both the dimension of the data and that of the parameter space are fixed constants, and the number of data points tends to infinity. \n\n(2) It is a very interesting question to probe over-parametrized regimes where $p_n \gg d_n$, and we leave this to future investigation. \n\n(3) In general, nothing precludes this possibility since Theorem 2.2 is stated in rather broad generality. \nIn particular, one does not need to apply it to a region near a fixed point in all coordinates to see a stochastic term. \nOne could even take a combination of variables being tracked that have different scalings, some of which are initialized essentially near their ``optimal\" value and some moving ballistically. In this case, one might wish to rescale the first kind of variables to understand fluctuations while the remaining variables move ballistically. We will provide examples of this last scenario in a full version.\n\n(4) We thank the referee for giving us the opportunity to discuss this very important point. \nAs we demonstrate in our GMM-based examples, the diffusive scaling limits near fixed points of the ballistic dynamics correspond to truly degenerate diffusions. In particular, the volatility matrix may go to zero as one approaches an unstable fixed point. Such a degenerate diffusion can then have a behavior similar to geometric Brownian motion (see, e.g., the behavior of $\tilde v_i$ in Proposition 4.3 as $a_i\to 0$), which depending on initialization can be trapped indefinitely near a repelling zero of its drift. \n", " We sincerely thank the referee for their detailed comments and helpful suggestions, especially given the time constraint of an emergency review. \n\nWe have added a notation section in Appendix A as suggested by the referee. \n\nWith respect to the organizational style: we apologize for not following the style perhaps more common in machine learning conferences. Given the space constraints and the audience of this conference, we have omitted a background discussion on the importance of stochastic gradient descent to the Stats/ML community. We have attempted to include an extensive historical discussion of prior work going back to the 1950s in the first four paragraphs of our paper, to position our work relative to the classical literature on asymptotic theory and the more recent works in high dimensions, and we discuss our main contributions in the six paragraphs that follow. We have added signposts to this discussion as requested by the referee. \n\nFurthermore, in the ``Our contributions\" section, we have better clarified the connection between our results and the most closely related line of work, namely the work of Saad and Solla and the very recent works of Veiga et al. (both in the case of teacher-student networks). \n\nWe have expanded the caption to Figure 1 to make it more descriptive. A host of simulations will be made available in the full version of this paper; as anticipated by the reviewer, space limitations preclude us from including these here. 
\n\nTo respond to the questions the referee raised: \n\n(1) We show here that even in simple (though not overparametrized) examples like the XOR one, SGD may not have implicit regularization and, in high-dimensional regimes, may converge to classifiers with sub-optimal generalization error. Studying this in overparametrized regimes would be of interest (see also the item below), and we thank the referee for this suggestion. \n\nWith regards to the second part of the question, neither the population drift nor the population corrector is necessarily of gradient type. In particular, even the population dynamics are a possibly non-linear function of a higher dimensional gradient flow, and transformations of this type can lead to non-gradient systems. \n\n(2) The scaling relation between $p_n$ and $d_n$ in Theorem 2.2 is not constrained, so the theorem applies in generality. It would be interesting to consider, for instance, the limiting dynamics obtained by overparametrized versions of our examples, and we leave this to future investigation. \n", " This paper investigates the high-dimensional limit of SGD with constant step-size. The authors obtain an ODE/SDE type of limiting dynamics for the trajectory of a given finite set of summary statistics when the dimension of the data and the parameters go to infinity. The limiting dynamics is shown to match the gradient flow for the population loss when the step-size is below the critical scaling regime. When the step-size is in the critical scaling regime, it is shown that the limiting dynamics may acquire a ballistic correction term and a diffusive term. In addition, the authors also develop a separate diffusive (SDE) limit for the rescaled summary statistics around a fixed-point of the unscaled ballistic dynamics in order to investigate the microscopic behavior of the dynamics around those fixed-points. The authors also demonstrate the applicability of their main theorem in several examples, including the tensor spike model and classification of binary and XOR-type Gaussian mixtures with two-layer neural networks. Strengths:\n1. The main result seems to apply to a general class of loss functions and data distributions compared to prior works.\n2. The theoretical claims are supported by experiments and explicit analysis of several benchmark models.\n\nWeaknesses:\n1. Presentation and the structure of the paper make it harder to follow.\n2. Contribution over the prior works is not very clear. \n\nOverall, I find this paper quite interesting and of good quality in terms of technical analysis. However, the material could be presented better for clarity. I would suggest that the authors include a notation section or a table in the main text or in the appendix. The introduction can be expanded more to motivate the subject better and can be divided into explicit subsections of prior works and contributions in order to clarify the limitations of the prior works and the advantages of your results better. It could also be helpful to mention the prior work and compare with your results in sections 3-5. Figures could be explained better either in the main text or in the captions. A summary illustration visualizing the different limiting dynamics under different scaling regimes could be included, but the page limit might not allow that. \n\nI would also like to ask the following questions to the authors:\n1. What can you say about implicit bias/regularization of SGD in the critical scaling regime or in the microscopic regime with your main theorem? 
Can the population correction term be written as the gradient of another function, which might imply that SGD in the critical regime minimizes a new loss function with an additive bias term?\n2. What can your results tell us about the over-parameterized regime where $p_n/d_n \\gg 1$? The authors can discuss the limitations of their results and future work in more detail. They mention the limiting dynamics of time-dependent rescaling as future work in footnote 2. ", " This work investigates the high-dimensional limit for the evolution of summary statistics under one-pass stochastic gradient descent (SGD).\n\nGiven some data, a loss function and a learning rate, the one-pass SGD dynamics is defined by evaluating the gradient *at a single data sample* and performing a gradient step in this direction. In the so-called *classical* limit where the dimensions of the problem are fixed and the learning rate is taken to zero, one-pass SGD is known to converge to the gradient flow in the population risk. This work characterises the corrections to the gradient flow dynamics arising when the dimensions of the problem (data dimension, number of parameters) scale relative to the learning rate.\n\nIts main theoretical contribution is to map the one-pass SGD dynamics into a set of tractable low-dimensional stochastic differential equations (SDE) for the evolution of the summary statistics (i.e. low-dimensional functions of interest of the parameters, such as their covariance and correlation with a ground truth).\n\nExplicit examples such as planted matrix and tensor PCA and Gaussian mixture classification are discussed in detail, where it is shown how the SDE simplifies in different regimes associated with different choices of scalings for the dimensions. The investigation of one-pass SGD through the lens of its summary statistics has a long history, which, to the best of my knowledge, dates back to the seminal work of Saad & Solla [41, 42], and was followed by intense research activity from the mid-90s on [BS, RB, CC]. These early works were inspired by ideas from the statistical physics of disordered systems, where reducing the study of a high-dimensional random system to the study of low-dimensional equations for quantities that concentrate (known in that field as \"order parameters\") is quite natural. These early works have focused on the particular setting of two-layer neural networks (NNs) in a teacher-student setting (i.e. when the training data is generated by a two-layer NN itself) and in the critical scaling (adopting the authors' terminology), but have since been enlarged to other data models, such as correlated Gaussians [20], Gaussian mixture classification [37], and to other scaling regimes [49]. I stress this because, although [41] was published in this very venue (NeurIPS'95), closely related ideas have recently resurfaced in the mathematical machine learning literature but missed this connection (e.g. [TV, 52]) - likely because they were independently rediscovered.\n\nIn this work, the authors take an important step forward by collecting and extending these specific settings into a general abstract framework for mapping the one-pass SGD dynamics of a probabilistic learning task into an SDE for the summary statistics. 
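For orientation, the two generic objects this discussion keeps returning to can be written schematically as follows. This is a reader aid with notation assumed from the paper under review, not a quoted result.

```latex
% One-pass SGD update and its high-dimensional limit for summary statistics u.
% f = population drift, g = population corrector (vanishing in the sub-critical
% scaling), \Sigma = diffusive term; all notation is assumed from the paper.
\begin{aligned}
X_{n+1} &= X_n - \delta_n\, \nabla L(X_n, Y_n), \qquad Y_n \ \text{i.i.d.},\\
du_t &= \big(f(u_t) + g(u_t)\big)\,dt + \Sigma(u_t)\, dW_t .
\end{aligned}
```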
In particular, the characterisation of the stochastic correction in terms of a simple diffusion is an important contribution that goes beyond the literature above, and that allows one to study how the dynamics escape from unstable fixed points.\n\nHowever, although most of the classic literature discussed above is cited in this manuscript, the connections are poorly acknowledged at the level of the discussion. For instance:\n\n- The trade-off between the \"population drift\" $f$ and the \"population corrector\" $g$ in the critical scaling already appeared in [41, 42] in the specific context of teacher-student two-layer networks. There, it was shown that the presence of $g$ gives rise to a fixed point that prevents the dynamics from reaching perfect learning at linear time scales.\n\n- The cross-over between the \"critical\" and \"sub-critical\" regimes (when the correction $g=0$) with the relative scaling between the data dimension $d_{n}$, the number of parameters $p_{n}$ and the learning rate $\\delta_{n}$ was recently discussed in [49]. This should be reflected in the discussion in L63-L67 and L141-L144.\n\n- The SNR scaling for the existence transition of good classifiers for one-pass SGD on the Gaussian Mixture XOR classification task with two-layer NNs was discussed in [37].\n\nIn my opinion, the manuscript is also well-written and the conceptual thread is easy to follow. However, I believe the technical part in Section 2, and in particular Definition 2.1, is hard to parse for the general NeurIPS audience. It would be nice if the authors could add some intuition on what each of the conditions 1-3 means, and maybe the simplest concrete examples the authors can think of for a localising and a non-localising sequence to help the reader develop an intuition.\n\nTo summarise, the strengths and weaknesses of the manuscript are:\n\n**Strengths**: The manuscript is well written and easy to follow. The theoretical contribution is a significant addition to the literature, setting a fairly general framework to understand the behaviour of one-pass SGD through tractable low-dimensional equations. The examples given are pertinent and useful to understand the theory.\n\n**Weaknesses**: The discussion is not well placed in the relevant literature. Some technical parts of the manuscript are hard to parse.\n\n\n**References** (numbered refs. are from the bibliography in the paper)\n\n[BS] M. Biehl and H. Schwarze, \"Learning by on-line gradient descent\", Journal of Physics A: Mathematical and General, vol. 28, no. 3, pp. 643–656, Feb. 1995.\n\n[RB] P. Riegler and M. Biehl. \"On-line backpropagation in two-layered neural networks\". Journal of Physics A: Mathematical and General, 28(20), 1995.\n\n[CC] M. Copelli and N. Caticha, “On-line learning in the committee machine,” Journal of Physics A: Mathematical and General, vol. 28, no. 6, pp. 1615–1625, Mar. 1995.\n\n[TV] Y.S. Tan, R. Vershynin, \"Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval\", arXiv: 1910.12837 [stat.ML] -**[Q1]**: In L137, does \"fixed dimension\" mean both fixed $d_{n}$ and $p_{n}$?\n\n-**[Q2]**: In [49] it was shown that for teacher-student two-layer NNs, by letting the hidden-layer width grow faster than the data dimension, one can go from a regime of imperfect to perfect learning (corresponding to the cross-over between a critical and a sub-critical regime in the authors' vocabulary). 
Can a similar conclusion be drawn from Theorem 2.2 (for instance by letting $p_{n}$ grow faster than $d_{n}$), or does it depend on the setting? \n\n-**[Q3]**: In the examples discussed, the stochastic term $\\Sigma$ is only present around a fixed point of the drift part of the dynamics. Is there anything in the theorem preventing us from having $\\Sigma\\neq 0$ outside a fixed point?\n\n-**[Q4]**: What do the authors mean by \"converging to an unstable fixed point\" (L268-L273, L316-L317)? I understand that the dynamics can get trapped in a neighbourhood of an unstable fixed point for some time if initialised close to it, but how can it converge to an unstable fixed point? An inevitable limitation of the level of generality of this work is that it does not provide a constructive way to choose the summary statistics for a given problem of interest. This could be briefly commented on in the text as well.", " This paper studies rigorous approximations of SGD dynamics. Assuming an SGD algorithm of the form\n$$ X_{n+1} = X_n - \\delta \\nabla L(X_n, Y_n), $$\nand a set of summary statistics $u: \\mathbb R^p \\to \\mathbb R^k$ for fixed $k$, the authors show that the dynamics of $u$ converge in probability\nto a stochastic differential equation of the form\n$$ du = (f(u) + g(u))dt + \\Sigma(u)dW_t.$$\n\nThe authors also provide a phenomenology of the three terms on the RHS, wherein $f(u)$ corresponds to the gradient flow term, $g(u)$ is a deterministic correction term, and $\\Sigma$ encapsulates the variance of each SGD step. Depending on the chosen stepsize $\\delta$, those terms can be either influential or negligible in the associated SDE.\n\nThe remainder of the article is devoted to several examples. First, the authors study the use of SGD for matrix/tensor PCA:\n$$ Y = \\lambda x^{\\otimes k} + W, $$\nwhere $W$ is an i.i.d. Gaussian $k$-tensor. With careful renormalizations of the dynamics, they map the SGD problem to an Ornstein-Uhlenbeck process, and show that the threshold for stability of this OU process matches the already known thresholds for hardness in terms of $\\lambda$.\n\nThe last two examples refer to Gaussian mixture classification; one is a simple binary task, while the other is the so-called XOR Gaussian mixture, which is not amenable to a simple linear separator. In both cases, the authors consider a two-layer neural network, and provide a full set of sufficient statistics to describe the dynamics. Those dynamics are characterized by a complex structure of fixed points (resp. 4 and 625 connected components), some of them being unstable; the authors then demonstrate how \"zooming in\" on the dynamics helps characterize the behaviour around those fixed points.\n\n This paper is the culmination of several previous works studying rigorous approximations of SGD dynamics (e.g. Tan and Vershynin, Ben Arous et al., Veiga et al. ...); it provides a very general result encompassing all previous works. The hypotheses are quite reasonable, and less restrictive than most usual restrictions (no uniform Lipschitz condition on $u$, only fourth moment conditions on $L$).\n\nThe provided examples illustrate quite well both the power and the limitations of this approach: on the one hand, the main theorem is applicable to a wide array of learning problems, and yields a very precise characterization as an SDE. On the other hand, those SDEs are highly non-trivial to study, even in the ballistic phase without any noise. 
Hence, a lot of work remains to extract interesting insights from this theorem for a new learning task.\n\nThe main weakness of this paper is its terseness. The main theorem is introduced without giving much intuition about it, while the gist of the proof (disregarding the important aspects of martingale bounding) is based on a Taylor expansion of $u(X_{n+1})$ in powers of $\\delta$. This would give a much-needed intuitive interpretation to the correction term and the diffusion matrix. - Can you provide additional explanations regarding the 3/32 probability of ending up at a stable fixed point in the XOR mixture?\n- What would happen if, like in many real-world networks, the number of hidden units (and therefore, the number of sufficient statistics) starts to diverge? The limitations are adequately addressed." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 7, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4 ]
[ "kMYMxgSPtl8", "EQPOx0cWBHGd", "KNphMyiGCrD", "PjxyOZAkOb4", "J5Jy9BRyBe4", "-X6ujiQW4Nh", "kMYMxgSPtl8", "gn5Ue-aCCKv", "vVVxM2OR6HZ", "c8Wb6XIMxsO", "nips_2022_Q38D6xxrKHe", "nips_2022_Q38D6xxrKHe", "nips_2022_Q38D6xxrKHe" ]
nips_2022_HQDvPsdXS-F
Neur2SP: Neural Two-Stage Stochastic Programming
Stochastic Programming is a powerful modeling framework for decision-making under uncertainty. In this work, we tackle two-stage stochastic programs (2SPs), the most widely used class of stochastic programming models. Solving 2SPs exactly requires optimizing over an expected value function that is computationally intractable. Having a mixed-integer linear program (MIP) or a nonlinear program (NLP) in the second stage further aggravates the intractability, even when specialized algorithms that exploit problem structure are employed. Finding high-quality (first-stage) solutions -- without leveraging problem structure -- can be crucial in such settings. We develop Neur2SP, a new method that approximates the expected value function via a neural network to obtain a surrogate model that can be solved more efficiently than the traditional extensive formulation approach. Neur2SP makes no assumptions about the problem structure, in particular about the second-stage problem, and can be implemented using an off-the-shelf MIP solver. Our extensive computational experiments on four benchmark 2SP problem classes with different structures (containing MIP and NLP second-stage problems) demonstrate the efficiency (time) and efficacy (solution quality) of Neur2SP. In under 1.66 seconds, Neur2SP finds high-quality solutions across all problems even as the number of scenarios increases, an ideal property that is difficult to have for traditional 2SP solution techniques. Namely, the most generic baseline method typically requires minutes to hours to find solutions of comparable quality.
Accept
In this paper, the authors propose to use learning with neural networks to amortize the cost of two-stage optimization problems. The authors tested the algorithm on several problems, demonstrating its advantages empirically. Most of the reviewers think this work is interesting, although there is already plenty of existing work considering similar methods; in particular, a similar work has been published that uses learning to amortize costs for multi-stage stochastic programming. Another concern raised by a reviewer is that the empirical comparison is not comprehensive: decomposition methods, e.g., Benders-based methods, which are major algorithms for two-stage stochastic optimization problems, are not included. Therefore, the advantages of the proposed method are not clear. Please take the reviewers' points into account to improve the paper.
train
[ "0nLus52PdJ8", "p47c0zXtrOG", "jAzd8xWPhW", "MmEjmhLXXA", "ReMDdGe2oUT", "46mGr8ukJIm", "Ue-Z7sePz9M", "u-1Fnmrgo7S", "Bgl7ngSI19b", "d2zcM9eCdKo", "ps6pGcxv5hD", "rYtXjbEct_", "bF9D59g7QKs", "h-AwFrtU9s" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I appreciate that the camera-ready version would be more upfront about the training and data collection time, given that this is not the typical case where you would generalize for a wide set of instances.\n\nI understand the reasons not to include stronger baselines. This is however in my opinion the main drawback of the paper: even though decomposition approaches may not have the same generality as this approach, it would have been very useful to know whether one may prefer this approach over traditional decomposition approaches, which are most commonly used. That said, I believe that the current contributions outweigh this issue and my hope is that this paper sparks future work that can answer this question properly.", " Thank you for the clarification. We use a sufficiently large big-M value of 100,000 based on preliminary experiments. Specifically, we note that the pre-activation value for the ReLU hidden units are on the order of +/-100s for both MC and SC. As such the big-M value of 100,000 will be valid for the instances we consider.\n\nWe acknowledge that choosing the large big-M is obviously not ideal as it increases the solving time of our approach; a smaller big-M can only yield tighter LP relaxations. We note some better methods for obtaining better valid bounds are the following:\n- Allowing Gurobi to handle the bounds with logical constraints (https://www.gurobi.com/documentation/9.5/refman/py_model_agc_indicator.html)\n- Once we have a trained ReLU network, the weights can be leveraged to obtain bounds by solving an optimization problem over the first stage decision variables to determine the lowest valid big-M for a given scenario or set of scenarios. As either the scenario or set of scenarios are fixed during evaluation (i.e. solving a surrogate MIP), the bounds computed by big-M with respect to either a scenario for multi-cut or the latent representation can be computed similary. We believe this specifically addresses the last point of your comment. \n- Lastly, the most suitable approach would be to leverage existing techniques for choosing big-M values in ReLU MIPs, e.g. [1,2].\n\n[1] Serra, Thiago, and Srikumar Ramalingam. \"Empirical bounds on linear regions of deep rectifier networks.\" Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 34. No. 04. 2020.\n\n[2] Grimstad, Bjarne, and Henrik Andersson. \"ReLU networks as surrogate models in mixed-integer linear programs.\" Computers & Chemical Engineering 131 (2019): 106580.", " >> How are valid(?) bounds computed for the input of the value function? This is a crucial ingredient necessary to determine the validity and computational performance of the end-to-end MIP formulation. In particular, my understanding from the paper is that the downstream MIP formulation only ever considers the latent space learned to represent the aggregated scenarios. How are bounds computed in this space?\n\n> Neur2SP is a data-driven heuristic solution technique. Hence, theoretically, we cannot obtain bounds using it. However, empirically we can observe that the optimal objective of the surrogate MIP is indeed close to the actual objective when evaluated on the true scenarios using the first-stage solution from the surrogate (See Appendix D for details). 
We note that a learning theoretic analysis would be a useful result for providing bounds, however, this is more so a direction for future work.\n\nMy original question was a bit underspecified, so let me clarify: I am referring to variable bounds in the MIP model on the inputs to the value function, not optimality bounds on the end SP problem. These input variable bounds can (should) be used to determine the magnitude of the \"big-M\" coefficients used to formulate the nonlinear ReLU activations in the MIP model. Can the authors please clarify how these big-M coefficients are computed in their implementation? I am particularly interested because, while in the original space it may be relatively straightforward to compute \"reasonable\" bounds on the inputs by inspection, after the translation to the latent space I imagine most/all interpretability is lost.", " Thank you for engaging with our rebuttal! We appreciate the opportunity to further address your key concern regarding novelty.\n\n__1. Positioning our work:__ We do not claim that approximating the inner optimization problems’ objective function is our key contribution. In fact, it is not, and we will build on your constructive review and the references you’ve kindly shared to emphasize that point in the next iteration of the paper. Our contribution is in designing an ML-based heuristic algorithm for two-stage stochastic programming problems. Achieving this ambitious goal has required not only function approximation, but also careful neural network architecture design to accommodate an extremely large number of scenarios (the single-cut architecture) and also an integration of the trained model in a final MILP that produces the heuristic solutions.\n\n\n__2. The importance of our “application”:__ Two-stage stochastic programming is a modeling framework rather than an application. As such, Neur2SP enables the solution of not one real application, but a very wide range of problems that can be modeled in the two-stage SP language. Stochastic Programming is a very large subfield of Mathematical Programming, with even a conference of its own (https://na.eventscloud.com/website/40825/), and a large community of researchers; for instance, the OptimizationOnline pre-print repository (the arXiv of Math. Optimization), counts 919 Stochastic Programming papers, one of the biggest categories: https://optimization-online.org/repository/. We have tackled a diverse set of 4 applications in our paper, something which is extremely rare in the SP literature due to the specialized algorithm design philosophy. That we achieve near-optimal solutions in a few seconds for all instances of these problems using the single-cut architecture is extremely promising with wide-reaching implications for this field.\n\n\n__3. Predicting the value vs. the solution:__ Yes, the latter seems more challenging at face value, but it is in fact not even possible in our setting. Given that we have hard constraints in both stages, it is extremely unlikely that any model will be able to output a feasible solution directly. This is where embedding the trained model into a final “heuristic” MILP becomes crucial. Papers [1–4] all treat unconstrained problems (min-max/robust) and thus may be amenable to the prediction of a solution directly. However, as we will show in the next point 4, they can only do so in restricted settings and/or at very small scale.\n\n\n__4. 
Scale and generality of the problems being tackled:__\nIn terms of problem scale as measured by the number of decision variables, some of our instances have __millions__ of variables; for example, the CFLP_50_50_1000 instances have 50 first-stage variables and 2.5 million second-stage variables, and the SSLP_10_50_2000 instances have 10 first-stage variables and 1 million second-stage variables.\n\nIn contrast, in the paper [1] that you've kindly referenced, the min-max problem at hand is unconstrained, and the number of decision variables in the experiments is tiny, 2 for Seesaw and Rotated Saddle, and 25 for Matrix Game, a limitation that is acknowledged in the Conclusion of that paper and its title (“a pilot study”).\n\nIn [2], again, an unconstrained optimization problem is addressed with only a single experiment that involves only 20 decision variables.\n\nIn [3] and [4], the important adversarial training problem is addressed. While of wide interest in the deep learning literature, it is only one (also unconstrained) problem.\n\nOur work, while addressing a different class of problems than papers [1–4], is both widely applicable to many two-stage stochastic programming problems and able to scale to extremely large numbers of variables. We believe that this makes our contributions here at least comparable to, if not more substantial than, those of [1–4].\n", " 1. In the stochastic programming (SP) community, it is a standard assumption to have relatively complete recourse; in fact, many stochastic programs arising from specific applications indeed satisfy a stronger version of this property, namely, the complete recourse condition [1]. 
In case it is not readily satisfied, it can be easily ensured by introducing slack variables to (a subset of) the second-stage constraints along with a penalty to the objective function, thus incurring a high cost for an infeasible second-stage solution. This assumption is actually very useful in practice [2]. Therefore, in the SP community, this is indeed deemed a good modeling practice, and thus not considered a limiting assumption. The reason is that infeasibility is technically almost always unavoidable in practice, since there is always a chance of observing realizations of uncertain parameters outside of the support of the empirical probability distribution. However, there is always a feasible recourse action in practice, despite highly undesirable outcomes. \n\n\tNevertheless, if one would like to work with an SP model that does not have relatively complete recourse, then one can add feasibility cuts, both during the data generation time and in an iterative fashion during test time, to handle infeasibility by refining the first-stage feasible set. \n\t\n2. The multi-cut model is simply a feed-forward network that takes in a single scenario and a first-stage decision vector and predicts the second-stage cost, as described in subsection 4.1. As the architecture is quite simple, we do not discuss it much; however, in the revision we can make an explicit note on the architecture. \n\n\n3. Neur2SP is a data-driven heuristic solution technique. Hence, theoretically, we cannot obtain bounds using it. However, empirically we can observe that the optimal objective of the surrogate MIP is indeed close to the actual objective when evaluated on the true scenarios using the first-stage solution from the surrogate (see Appendix D for details). We note that a learning-theoretic analysis would be a useful result for providing bounds; however, this is more of a direction for future work. \n\n4. We have not done any analysis regarding the impact of the random seed on the downstream task. However, we did a random search over different hyperparameter settings, which showed low variance in the validation error (see Appendix F). Since we have the optimal hyperparameter configuration, we can include such an analysis in the appendix. However, we do not believe that the downstream results would drastically vary, as the variance in the predictive models was quite small. \n\n5. We will add the number of timeouts to the tables. In addition, we will include distributional information for the objective and solving time. For Table 5, we have a single instance per row, as the scenarios are sampled in a structured manner. Hence, there is only one instance which timed out for the multi-cut approach, specifically PP_1000. Also, looking at the results for other scenario sizes, the multi-cut approach performs poorly for the Pooling problem. \n\n[1] Rockafellar, R. Tyrrell, and R. JB Wets. \"Stochastic convex programming: relatively complete recourse and induced feasibility.\" SIAM Journal on Control and Optimization 14.3 (1976): 574-589.\n\n[2] Birge, John R., and Francois Louveaux. Introduction to stochastic programming. Springer Science & Business Media, 2011.\n", " 1. Yes, other predictors can be embedded easily for the multi-cut approach. We include a comparison for an embedded linear model in Tables 9-12 in Appendix D, where we show significantly better performance. For the single-cut case, the neural network was a design choice as it allows one to obtain the compact scenario representation in an end-to-end learning procedure. 
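Since an embedded linear model is mentioned just above, here is a minimal sketch of what such a multi-cut embedding might look like. The gurobipy calls are real, but the trained coefficients, scenario features, dimensions, and first-stage cost are all made-up placeholders rather than the paper's setup.

```python
import gurobipy as gp
from gurobipy import GRB

# Embed a trained linear second-stage cost predictor, one copy per scenario.
n_x = 3
scenarios = [[0.2, 1.0], [0.7, 0.1]]                 # hypothetical scenario features
theta_x, theta_s, bias = [1.0, -0.5, 2.0], [0.3, 0.8], 0.1  # "trained" weights

m = gp.Model("linear_embed_sketch")
x = m.addVars(n_x, vtype=GRB.BINARY, name="x")       # first-stage decisions
# For a linear predictor, the predicted cost is itself a linear expression in x,
# so no auxiliary binaries or big-M constraints are needed.
pred = [
    gp.quicksum(theta_x[j] * x[j] for j in range(n_x))
    + sum(theta_s[i] * s[i] for i in range(len(s))) + bias
    for s in scenarios
]
first_stage_cost = gp.quicksum(x[j] for j in range(n_x))  # placeholder c^T x
m.setObjective(first_stage_cost
               + (1.0 / len(scenarios)) * gp.quicksum(pred), GRB.MINIMIZE)
m.optimize()
```

A design note: the reason this case is simple is the one given in the response above; a linear predictor composes linearly with the decision variables, whereas a ReLU network needs the big-M machinery sketched earlier in this thread.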
The extension of this to other predictors is less clear, as we rely on the permutation-invariant model to embed the scenarios end-to-end. \n2. It worked in all of the settings we tried so far without any difficulty. We are actively looking for more challenging instances to further test our method on.\n3. In principle, the general idea of embedding a predictor in a MIP is likely possible in the bilevel optimization case, but it may not be straightforward. Note that a standard bilevel program aims to find x and y optimizing F(x,y), where x is an upper-level feasible solution whereas y is a lower-level optimal solution given x. Therefore, in order to get rid of y and end up with a monolithic formulation, we would need a representation of the optimality conditions for the lower-level problem. Having an approximation of the lower-level objective function may not be enough for that aim.\n\n", " 1. We agree with the observation that having the computation time in the main text would be beneficial for the overall readability of the paper. We will try to rearrange and fit the training times in the main text. Also, we will add the point about the usability of the multi-cut approach in the absence of parallel computation. \n2. We agree that having more baselines, progressive hedging and decomposition methods, would be useful. As you mention, our approach is more general, which was the primary motivation for only comparing to another general approach (EF). We note that applying progressive hedging, i.e., iteratively updating the anticipative solutions and Lagrangian multipliers, is possible. However, there may not be convergence guarantees, and the solved problems would be computationally expensive in the non-linear case. For other decomposition techniques, we note that, as the variable domains vary between instances, they would require more problem-specific decomposition implementations. For instance, Benders decomposition can only be applied to linear problems with purely continuous second-stage variables, the integer L-shaped method requires only binary first-stage variables to be linked to the second-stage problem, and their most general version, called logic-based Benders decomposition, necessitates the design of problem-specific cuts. ", " 1. We acknowledge that there exists literature on predictions for inner optimization problems, and we will add these references to the related work. We realize that Neur2SP can be classified in a similar manner; however, there are a few important differences, since Neur2SP was designed for two-stage discrete problems. Specifically,\n\n\t- All the above-cited methods focus on predicting the solution directly using neural networks. This is easily achieved as the output is either continuous or binary. In contrast, Neur2SP integrates a trained prediction model within a classical optimization technique, a MIP in this case, to directly handle the variable domains in addition to any hard constraints on the outer optimization problem. \n\n\n\t- As a result of the above point, none of the cited references are designed to be used in the case of hard (and even non-linear) constraints for the inner or outer problems, outside the scope of limited variable domains (i.e., binary or continuous), whereas Neur2SP can be applied straightforwardly. \n\n\t- Reference [2] is closest to the problem class we consider, as they have addressed the case of binary decision variables. 
In [2], only the outer problem is binary, whereas Neur2SP can be applied when both the inner and outer problems are binary (and more generally integer). In addition, the computational results in [2] are limited to only one relatively small robust optimization problem, whereas Neur2SP is evaluated on significantly larger benchmark instances for four different two-stage stochastic programming problems. Lastly, the methodology in [2] is designed for the robust case, whereas we focus on the expectation over inner optimization problems. For this reason, the extension of [2] to the stochastic case is non-trivial. \n\n\t- References [1,3,4] all deal with continuous unconstrained bi-level problems. As the optimization problem we are dealing with is discrete, the use of any of the methods proposed in these papers is not obvious, and perhaps not possible. \n\n\tThat being said, we agree that these should all be included in the related work.\n\n2. In the training phase, Neur2SP learns to predict the second-stage cost for an input given by a first-stage decision and scenario pair. In this case, the first-stage decisions are sampled randomly to train the network. In the test phase, the first-stage decision space is explored through the MIP, while the second-stage cost is approximated with the embedded trained neural network. We do acknowledge that there may be out-of-distribution issues, as we only train on a subset of first-stage decision-scenario pairs; however, the strong empirical results demonstrate that out-of-distribution inputs are not an issue for the problems we consider. \n3. In Section 4.4 we include a discussion of two of the more important differences between the single- and multi-cut approaches, namely, the learning and downstream trade-offs between the models. Specifically, for data generation, the multi-cut approach requires less computation to produce a single sample. However, in terms of downstream MIP complexity, the multi-cut model requires on the order of K times the number of decision variables compared to the single-cut approach. \n4. We acknowledge that this section pertains to a previous study that was cited. This section was included in the background as it is an important component of how we formulate the surrogate optimization model for two-stage problems. However, we note that the use of MIP embeddings of neural networks in the context of stochastic programming is new, and enables a principled optimization over the trained value-function approximation model. We believe these to be strong contributions of our work.\n", " This paper studies two-stage stochastic programs (2SPs). To address the computational complexity associated with the second-stage problem, the key idea is to use a neural network to approximate the second-stage value, which facilitates the first-stage optimization. Both single-cut and multi-cut approximations are considered. Finally, Neur2SP is validated via experiments. Strengths:\n+ The paper is fairly well written and easy to follow.\n+ 2SP potentially has practical applications.\n\nWeaknesses:\n- The key idea of using a neural network to approximate the otherwise computationally-demanding second-stage problem has been well explored in a number of prior studies (not cited), so the novelty of this paper is rather limited. For example, the following papers use neural networks to approximate the solution of an inner maximization/minimization problem in worst-case robust optimization. 
While the specific contexts are a bit different from the one in this paper, the main technical novelty (i.e., using a neural network to speed up the inner optimization) in those papers still applies here.\n\n[1] Learning A Minimax Optimizer: A Pilot Study, ICLR'21.\n[2] Learning for Robust Combinatorial Optimization: Algorithm and Application, INFOCOM'22.\n[3] Learning to Defend by Learning to Attack, AISTATS'21.\n[4] Improved Adversarial Training via Learned Optimizer, ECCV'20.\n\n- The training distribution for the neural network to approximate the second-stage optimization is pre-generated/-determined, but it's entangled with the first-stage solution. As a result, the actual testing distribution (depending on the first-stage solution, which itself also depends on the second-stage approximation) can differ from the pre-generated distribution. This can raise convergence and/or out-of-distribution issues.\n\n- The single-cut vs. multi-cut tradeoff is only briefly touched on without an in-depth analysis.\n\n- Section 2.2 (Embedding Neural Networks into MIPs) is interesting, but this is taken directly from a prior study. NA", " The paper introduces a heuristic method for two-stage stochastic optimization problems that approximates the value function of the second stage with a neural network. Notably, the problem may be mixed-integer, possibly nonlinear, in the second stage, which can make the problem particularly challenging. The value function for a problem is learned in advance by sampling scenarios and training a neural network, which is embedded into the first-stage problem as a MIP. Computational results on a variety of stochastic optimization instances show positive results compared to the extensive form of the problem. This paper provides an effective heuristic to solve two-stage stochastic optimization problems, which have several important applications in Operations Research. 
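For readers following this thread, here is a small runnable toy of the pipeline just summarized. Everything below is an illustrative stand-in: the "second stage" is a made-up closed-form formula, and brute-force enumeration stands in for the surrogate MIP solve described in the paper.

```python
import itertools
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def second_stage_cost(x, xi):            # stand-in for an expensive subproblem solve
    return float(np.maximum(xi - x, 0).sum() + 0.1 * x.sum())

n, X_data, y_data = 3, [], []
for _ in range(2000):                    # offline data generation
    x = rng.integers(0, 2, size=n)       # random binary first-stage decision
    xi = rng.uniform(0, 1, size=n)       # random scenario
    X_data.append(np.concatenate([x, xi]))
    y_data.append(second_stage_cost(x, xi))
net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500).fit(X_data, y_data)

scenarios = rng.uniform(0, 1, size=(50, n))
best_obj, best_x = np.inf, None
for x in itertools.product([0, 1], repeat=n):   # brute force replaces the MIP here
    xs = np.hstack([np.tile(x, (len(scenarios), 1)), scenarios])
    obj = 0.5 * sum(x) + net.predict(xs).mean() # first-stage cost + predicted E[Q]
    if obj < best_obj:
        best_obj, best_x = obj, x
print("surrogate-optimal first-stage decision:", best_x)
```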
An appealing property of this method is its generality, as it can be applied without any modifications even when the second stage is mixed-integer or nonlinear, which often makes the problem more challenging. The computational experiments are reasonably extensive and positive: the method can produce better results than the extensive form for certain large-scale instances, particularly when the number of scenarios is large, and sometimes even when taking into account data generation and training time sequentially. There are also instances where the method does not perform as well when taking training time into account (INVP), which is useful to have in the paper. The paper is overall well-written and clear. \n\nMy concerns with the paper are the following:\n\n1. The text is not very explicit when taking into account the data generation and training time, often implicitly treating them as if they were not significant. Granted, they can be parallelized and the learned function can generalize to a larger number of scenarios, but that does not mean they can be dismissed, particularly because, as is, this approximation is not intended to generalize to other instances. In particular, I do not believe it is fair to present the solution time in the abstract without proper contextualization (i.e. that this is without data generation and training time), and Section 6.1 should also be framed within this context. Ideally, I would prefer to see training and data generation times in the main text and I would encourage the authors to find space to fit them in, but I understand that this is difficult due to space issues. Another side effect of this is that, as is, at first glance the multi-cut approach does not seem very useful, but if you take into account data generation and training time it becomes more appealing, particularly if one does not have access to broad parallelization capabilities, and perhaps this can be pointed out in Section 6.1.\n\n2. One limitation is that the paper does not compare with decomposition algorithms for two-stage stochastic optimization problems (e.g. Benders-based methods, progressive hedging). Decomposition methods tend to be substantially better than the extensive form in practice, and thus this paper does not answer the question of whether one should choose this approach over a decomposition method. It is also worth noting that this method is a heuristic and the extensive form is exact, which is not an ideal comparison (although I do appreciate the column \"EF time to\" in the tables). On the other hand, an argument is that the method presented in the paper is more general. In any case, although this is a limitation that should not be ignored, in my opinion this is outweighed by the rest of the paper.\n\nGiven that item 1 above is addressed, in my opinion this paper should be accepted. Not only might this approach be immediately applicable in practice for an important class of optimization problems (although this is not too clear due to the lack of comparison with decomposition methods), it can also be a useful stepping stone for further papers applying similar ideas. Therefore, I believe this paper can have a high impact in the field. No questions or suggestions besides the above. A minor detail is that Equation (3) should be in Section 4.2. 
The paper should be more explicit regarding data generation and training time as discussed above, but there are no limitations on societal impact that need to be addressed.", " The authors propose a method for quickly obtaining high-quality solutions to two-stage stochastic programs (2SPs) by training a neural network to estimate the second-stage solution quality given the first-stage solution, and then solving for the first-stage solution by encoding the second-stage estimator in the first-stage MIP formulation, replacing the second-stage value with its estimator. The authors evaluate on a variety of realistic 2SP instances with both linear and bilinear second-stage problems, demonstrating the flexibility of their approach on a variety of problems. Overall the authors demonstrate orders of magnitude of performance improvement over reasonable baselines that are equivalently general, making the class of 2SP problems much more accessible for realistic problems. The core strength of the paper is in its contribution to making a general class of 2SP problems computationally tractable in a sensible manner, combining the expressive power of neural networks with the computational power of MIP solvers to obtain high-quality solutions to 2SP problems quickly. The paper is very clear, original, and significant.\n\nAdditionally, the experiments and motivation are sound, making this a great contribution to the literature on improving optimization performance with machine learning.\n\nI think the main area that could be improved is to investigate the sample efficiency of the approach and the potential for other predictive models, such as linear models or decision trees, which could also be encoded as MIPs. Otherwise, the approach seems to be sound and to have great significance for the optimization community or anyone who is tackling two-stage stochastic programming problems. It would also be interesting to see whether this could be employed for other bilevel optimization tasks, such as those that arise in game theory where the second-level task is difficult to solve. Finally, it would be interesting to see if the same approach could work for more stages of decision making. \n Is it possible to compare against embedding other predictive models in the MIP formulation?\n\nAre there any settings in which the method didn't work, or instances in settings where the first or second stage is highly combinatorial?\n\nIs this approach applicable for bilevel optimization settings? What would be the potential pitfalls there? The authors adequately addressed limitations and potential negative social impact.", " This paper presents an algorithm for producing feasible solutions for two-stage stochastic optimization problems. The main approach is to train a neural network to approximate the second-stage value function, and then embed this neural network into the first-stage optimization problem. The authors present a deep learning architecture for learning the second-stage value function in both a \"single-cut\" and \"multi-cut\" setting, and discuss the data generation scheme for supervised learning. The computational section shows that the new \"single-cut\" method produces solutions that are typically roughly as good as, and sometimes superior to, the \"extended formulation\", and arrives there in far less time (seconds vs. hours).\n\n The paper is clearly written, and presents an interesting and seemingly novel approach to a problem of great interest to the operations research community. 
The computational results suggest that the approach can offer a compelling tradeoff between solution quality and solve time. * Have the authors considered how their method would generalize to problems with incomplete recourse? There is a glancing mention of artificially inducing relatively complete recourse in the appendix, but no other discussion. At the very least, please add a brief discussion about any assumptions on Y in the Preliminaries section, and a bit of discussion on the resulting limitations.\n* The \"Multi-cut\" subsection in 4.1 seems oddly abbreviated: is there nothing of interest to say about the architecture used? The equation block (3) also seems out of place.\n* How are valid(?) bounds computed for the input of the value function? This is a crucial ingredient necessary to determine the validity and computational performance of the end-to-end MIP formulation. In particular, my understanding from the paper is that the downstream MIP formulation only ever considers the latent space learned to represent the aggregated scenarios. How are bounds computed in this space?\n* I am curious if the authors have done any sensitivity analysis regarding the randomness inherent in the NN training procedure. More specifically, is there a wide variance in downstream performance for models trained with differing random seeds, for example? As it intuitively seems that the MIP optimizer will attempt to identify and take advantage of any artificial locally optimal \"artifact\" in the model, I am curious if randomness affects the presence or location of these artifacts, and if this ends up affecting the downstream performance. \n* In the computational section, can the authors provide more distributional information about the objective difference and solve time (and also, potentially, the number of \"time outs\")? I am especially curious about Table 5, and whether the relatively poor performance of EF-MC w.r.t. objective difference can be attributed primarily to instances where it \"timed out\", or if the performance is just uniformly worse across the board. The authors do not discuss the potential negative societal impact of their work." ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 7, 8, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 4 ]
[ "Bgl7ngSI19b", "jAzd8xWPhW", "Ue-Z7sePz9M", "ReMDdGe2oUT", "d2zcM9eCdKo", "nips_2022_HQDvPsdXS-F", "h-AwFrtU9s", "bF9D59g7QKs", "rYtXjbEct_", "ps6pGcxv5hD", "nips_2022_HQDvPsdXS-F", "nips_2022_HQDvPsdXS-F", "nips_2022_HQDvPsdXS-F", "nips_2022_HQDvPsdXS-F" ]
nips_2022_S2Awu3Zn04v
Approximate Value Equivalence
Model-based reinforcement learning agents must make compromises about which aspects of the environment their models should capture. The value equivalence (VE) principle posits that these compromises should be made considering the model's eventual use in value-based planning. Given sets of functions and policies, a model is said to be order-$k$ VE to the environment if $k$ applications of the Bellman operators induced by the policies produce the correct result when applied to the functions. Prior work investigated the classes of models induced by VE when we vary $k$ and the sets of policies and functions. This gives rise to a rich collection of topological relationships and conditions under which VE models are optimal for planning. Despite this effort, relatively little is known about the planning performance of models that fail to satisfy these conditions. This is due to the rigidity of the VE formalism, as classes of VE models are defined with respect to \textit{exact} constraints on their Bellman operators. This limitation gets amplified by the fact that such constraints themselves may depend on functions that can only be approximated in practice. To address these problems we propose approximate value equivalence (AVE), which extends the VE formalism by replacing equalities with error tolerances. This extension allows us to show that AVE models with respect to one set of functions are also AVE with respect to any other set of functions if we tolerate a high enough error. We can then derive bounds on the performance of VE models with respect to \textit{arbitrary sets of functions}. Moreover, AVE models more accurately reflect what can be learned by our agents in practice, allowing us to investigate previously unexplored tensions between model capacity and the choice of VE model class. In contrast to previous works, we show empirically that there are situations where agents with limited capacity should prefer to learn more accurate models with respect to smaller sets of functions over less accurate models with respect to larger sets of functions.
Accept
A key discussion point in the rebuttal phase was the practical use of the proposed bounds, which two of the three reviewers brought up. In response, the authors added an additional section (Section 6) and an experiment to address this concern. While some concerns regarding the practical use of these bounds remain, the authors have made a sufficiently convincing case in my view. Hence, I recommend acceptance.
train
[ "lAIEovDK53n", "oOydCCKHbT", "Pwd1KwRwlqUO", "5Lr23hsBak", "zGv2iJxJ8J", "I75FB7UgET5", "h10qktEe2SH", "Img1fRIDdC", "zl23edjuRe6", "R_4G76nCgv" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I thank the authors for the response to the issues I raised. Some additional notes:\n\n* Regarding providing some intuition on how the different propositions are established, I think that a 1-liner hint on how the result is established would be helpful in all, but - personally - I found that Propositions 2 and 3 could use that 1-liner intuition about the proof.\n\n* I appreciate the inclusion of Section 6, which I believe sheds some light regarding the potential practical significance the results in the paper. \n\n* Regarding the usefulness of the bounds provided in the paper, I acknowledge the author's reply. I am still not fully convinced regarding how easily meaningful bounds can be derived from the topological properties of the AVE classes, but I agree that the results in the paper provide an important link between these, given $\\beta$ and $\\mathcal{E}_\\epsilon$.", " I would like to thank the authors for providing detailed answers to the questions that I had. I have no further question and I believe that with the addition of Section 6, the paper is in a better shape. I have updated my score accordingly.", " We would like to thank the reviewers again for their feedback. We are committed to producing the best possible version of our submission, and have already included a new section with experiments to address a shared concern among the reviewers regarding the practical applicability of our theoretical results. We are willing to interactively address any remaining concerns, so please let us know if there is anything else we can do to improve our submission.", " We would like to thank the reviewer for their thoughtful comments on our submission. \n\n**It seems clear that for the kind of models studied here, we need bounds that control how close the predicted performance is to performance in the actual environment, and it seems desirable to be able to obtain this for bounded orders. This is the main strength of the paper.**\n\nAs the reviewer points out, it is important to be able to quantify how well a learned value equivalent (VE) model will perform when used for planning. To our knowledge, ours is the first work to address this problem. Previous work did not present any performance bounds, but rather (restrictive) conditions under which VE models would perform optimally.\n\nIn particular, [1] showed that a VE model with respect to all policies and all functions must also be the true model of the environment, and thus yields optimal planning. This result was generalized in [2], in which it was shown that any proper VE model with respect to all deterministic policies also yields optimal planning. \n\nBesides these two quite restrictive scenarios, there are no results on the performance of VE models available in the literature. To illustrate how restrictive this is, note that if a single function or a single policy is removed from either of the aforementioned model classes, we have no guarantees whatsoever on the performance of the models within the resulting class. \n\nThis is the gap in the literature that our work aims to bridge: to provide bounds on the planning performance of VE model classes with arbitrary orders and function sets. \n\nIn hindsight, we feel like we should have made this point more clear in the paper, and we intend to do so in the revised version.\n\n**I am not so moved by the results about relating arbitrary pairs of classes. 
I don't see the utility of this notion for such large values of epsilon as such relationships would require -- and indeed, intuitively, we would not expect totally arbitrary pairs to be \"equivalent\" in any meaningful sense.**\n\nThe reviewer is correct in that the bounds provided in our submission can be quite loose when general VE model classes are compared. Note, though, that the ability to compare general VE classes is not an end in itself, but rather a means to convert the topological properties presented in previous work into performance bounds, which were nonexistent up until now. \n\n**What's really missing from this work is evidence that the definition can be made to do useful work for us. I'd want to see it used to analyze an algorithm or show that some MDP can be represented more compactly, or something of similar flavor. Can you point to evidence that it is productive to consider this notion?**\n\nWe agree that our submission should have been clearer regarding the benefits of resorting to the notion of approximate VE (AVE). We have updated the paper to fix that. Here, we summarize why we think it is productive to think in terms of AVE:\n\n1) As pointed out above, the concept of AVE allows us to provide performance guarantees for VE classes of any order and with respect to any set of functions. This sort of guarantee was not presented in previous work, which focused on idealized scenarios that are useful to present a new concept but completely disconnected from practice. This point should have been emphasized more in the paper, and we will make sure it is clear in the revised version (as can be seen in the version currently submitted).\n\n2) Past works [1,2] have assumed, either implicitly or explicitly, that increasing the number of functions used to define a VE class is always beneficial. However, this was not always corroborated by empirical evidence, since in some experiments increasing the number of functions would in fact harm performance. Using the AVE formalism, we were able to present a more nuanced view that clearly characterizes the trade-offs involved in adding more functions and describes the empirical evidence more accurately. Concretely, AVE makes it clear why it is sometimes preferable to be VE with respect to a smaller set of functions: because this will in general yield a smaller VE error (epsilon). We provide instantiations of this phenomenon both in the form of examples and small experiments in the newly added Section 6 of the updated submission.\n\n[1] Grimm, Christopher, et al. \"The value equivalence principle for model-based reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 5541-5552.\n\n[2] Grimm, Christopher, et al. \"Proper value equivalence.\" Advances in Neural Information Processing Systems 34 (2021): 7773-7786.\n\n\n", " We would like to thank you for taking the time to review our paper (we are also grateful for your attention to detail, which helps us catch and correct typos!)\n\n**On the role of the deterministic policy set in PVE classes: Why does the PVE class wrt all deterministic policies contain only models which can plan optimally (see the last par. of Sec. 2)? I mean, what is the role of determinism here? Why doesn’t the PVE class wrt all policies (including deterministic and stochastic ones) contain only models which can plan optimally?**\n\nThis is a good question! 
\n\nBoth the PVE class with respect to all policies and the PVE class with respect to all deterministic policies contain only models which plan optimally, but the former class is a subset of the latter. Accordingly, [2] showed that the PVE class with respect to all deterministic policies contains only models which plan optimally (see Corollary 1 in [2]); note that the result also holds for the PVE class with respect to all policies (which is a subset of the previous one). Our reason for mentioning the PVE model class wrt all deterministic policies was to point out Grimm et al.’s most general result about model classes that only contain models which can plan optimally. \n\n**On the significance of the theoretical results in terms of practicality: In the previous studies of Grimm et al. (on VE), the authors have provided experimental results that either justify the usage of VE models or improve the currently existing algorithms. However, in this study there is no empirical demonstration of the usefulness of the theoretical results. Even though the theoretical results are interesting by themselves, I think that providing a section on how these results can at least help developing better VE MBRL algorithms can significantly benefit the paper. Is it possible to provide a discussion on how these theoretical results can be useful in giving rise to new algorithms?**\n\nWe agree that the original submission was missing a demonstration of the utility of the theoretical results. Accordingly, we have updated the submission to include a new section (Section 6) which shows how approximate value equivalence can facilitate the design of VE MBRL algorithms. \n\nIn particular, we consider a setting where an agent with a limited-capacity model can choose between learning a model with respect to many functions (and tolerate a high approximation error due to having low capacity) or learning a model with respect to fewer functions (and tolerate a lower approximation error). We showed in our original submission that higher approximation error results in worse performance guarantees (Proposition 2), so it is natural to wonder if, in such settings, it is ever preferable for the agent to choose to learn a VE model with respect to fewer functions. We empirically show that there are indeed situations like this, hinting at the utility of AVE in deciding what types of value equivalent models to learn.\n\n**In Def. 1 and Property 1, $\\mathcal{V}$ should be a subset of $\\mathbb{V}$?**\n\n**There seems to be a typo inside beta in Eq. 18 where one of the value functions should not have a hat?**\n\n**In Prop. 2 should it be $\\mathbb{V}_{\\mathbb{\\Pi}}$ in the upper bound?**\n\nGood finds! We have fixed all of the above in the updated submission.\n\n**I think the par. that starts with line 164 should explicitly mention that a certain amount of error will be tolerated for $\\mathcal{M}^{\\infty}$. I had to read it several times to get this part.**\n\nThis is a good point, we’ve made this explicit in the revised submission.\n\n**Why is there $2^{\\mathbb{V}}$’s in line 242? Shouldn’t these be $\\mathbb{V}$’s?**\n\nWe believe this is correct. Beta takes as input 2 arguments, each of which may be any subset of $\\mathbb{V}$. This means that each argument is a member of the power set of $\\mathbb{V}$, denoted $2^{\\mathbb{V}}$. We have added a note in the submission about this notation to improve clarity. \n\n[1] Grimm, Christopher, et al. \"The value equivalence principle for model-based reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 5541-5552.\n\n[2] Grimm, Christopher, et al. 
\"Proper value equivalence.\" Advances in Neural Information Processing Systems 34 (2021): 7773-7786.\n\n", " Thank you for your careful review, we appreciate the thoughtful comments!\n\n**The article is not self-contained, in the sense that most of the less intuitive results are presented with no hint on the proof. The paper still has available 1/2 page that could be used to provide even if a one-liner hint on how the result is established.**\n\nWe agree that more intuition could have been provided in some of the results, and will amend that in the revised version of the paper. If the reviewer has specific suggestions about where intuition would be mostly welcome please let us know and we will be happy to add it to the paper. \n\n**It is not clear what's the practical use of the proposed bounds. The authors make an effort in Section 5 to discuss this issue, but I believe that this is a core point in assessing the significance of the contributions of the paper.**\n\nWe agree that a discussion of the practical use of the bounds is missing from the original submission and have provided an additional section (Section 6) discussing this, including an experiment. \n\nIn particular, we show that when model capacity is limited, there are situations where an agent can choose between learning a VE model with respect to more functions (and tolerate a higher error) or learning a VE model with respect to fewer functions (and tolerate a lower error). In our submission we showed that tolerating a higher error results in poorer performance guarantees (Theorem 2) for the associated VE models, so it is natural to wonder if there are ever situations where it is better to learn a VE model with respect to fewer functions (note that this is a nuance that has gone unexplored in the previous works on VE [1,2], which encouraged learning VE / PVE models with respect to as many functions as was practical). \n\n**In particular, the provided bounds depend critically on the distance and the minimum tolerated error , but---as far as I can understand---these are very difficult to assess in practical scenarios (particular in scenarios involving high-capacity approximations).**\n\n**My main question is related with the practical use of the bounds provided in the paper, in light of the difficulty of determining the factor dependent on the distance/divergence , and the minimum tolerated error .**\n\nThe reviewer is correct to point out that the minimum tolerated error defined in Definition 2 is very difficult to assess. However, we don’t need to compute the minimum tolerated error, just an upper-bound on it. In Proposition 2, we show that we can bound the suboptimality of a model’s planning performance in terms of its minimum tolerated error (or any upper bound on it). \n\nThis is one of the core contributions of our paper: a framework for converting properties about the topology of AVE classes (minimum tolerated errors or upper-bounds thereof) into results about the performance of models contained inside them. \n \nIn the subsequent sections of the paper we provide several upper-bounds on this error (Propositions 3, 4 and 6) which apply when computing the minimum tolerated error between different types of VE classes. We would like to emphasize that our framework facilitates the future study of these bounds. \n\n[1] Grimm, Christopher, et al. \"The value equivalence principle for model-based reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 5541-5552.\n\n[2] Grimm, Christopher, et al. 
\"Proper value equivalence.\" Advances in Neural Information Processing Systems 34 (2021): 7773-7786.\n\n", " We sincerely thank all the reviewers for their careful reviews and constructive feedback! We will address each review individually, but here we make some general comments that apply to all the reviews, and present a summary of the changes made to the paper.\n\nFirst, we want to address a comment that has shown up in all of the reviews: the practical “use” of the notion of approximate value equivalence (AVE). In short, the AVE formalism allows us to provide performance guarantees for broad families of VE classes. Such guarantees were not available in the literature, which focused on idealized scenarios that do not reflect the practice. In particular, in previous work guarantees were only provided for two very restrictive cases: when all the policies and functions are used to enforce VE [1] or when all deterministic policies are used to enforce proper VE [2]. Our paper extends these guarantees to VE classes of any order and with respect to any set of functions.\n\nStill on the practical benefits of AVE, we notice that previous work [1,2] painted a somewhat simplistic picture of VE in which more functions or policies were always beneficial. In this paper we present a more nuanced view that highlights the fact that, when the model has limited capacity, sometimes fewer functions yields better performance. We added a new section to the paper that discusses this point in more depth (Section 6). The new section includes a set of experiments that nicely illustrate the trade-offs involved in using AVE in practice, including the fact that more functions does not always yield improved performance.\n\n[1] Grimm, Christopher, et al. \"The value equivalence principle for model-based reinforcement learning.\" Advances in Neural Information Processing Systems 33 (2020): 5541-5552.\n\n[2] Grimm, Christopher, et al. \"Proper value equivalence.\" Advances in Neural Information Processing Systems 34 (2021): 7773-7786.\n", " The paper discusses the notion of _approximate value equivalence_ in model-based reinforcement learning. Approximate value equivalence is concerned with the study of families of models that yield $k$-step Bellman updates that are approximately equivalent to the Bellman update yielded by the true model, in the sense that---given a policy $\\pi$ in some set of policies and a value function $v$ in some set of functions---$\\tilde{\\mathcal{T}}^k_\\pi v\\approx\\mathcal{T}^k_\\pi v$ (here $\\tilde{\\mathcal{T}}_\\pi$ and $\\mathcal{T}_\\pi$ represent, respectively, the approximate and exact Bellman operators). The paper provides a number of results that relate different families of models in terms of the error that they incur in terms of Bellman updates. *Strong points:*\n* The problem addressed (understanding how approximate models impact the error incurred in the computation of the value function in RL) is important and interesting;\n* The presentation is very clear, well-organized, and several of the results are quite intuitive.\n\n*Weak points:*\n* The article is not self-contained, in the sense that most of the less intuitive results are presented with no hint on the proof. The paper still has available 1/2 page that could be used to provide even if a one-liner hint on how the result is established.\n* It is not clear what's the practical use of the proposed bounds. 
The authors make an effort in Section 5 to discuss this issue, but I believe that this is a core point in assessing the significance of the contributions of the paper. In particular, the provided bounds depend critically on the distance $\\beta$ and the minimum tolerated error $\\mathcal{E}_\\epsilon$, but---as far as I can understand---these are very difficult to assess in practical scenarios (particularly in scenarios involving high-capacity approximations). Therefore, it would be important to discuss, in light of this, the practical use of the provided bounds.\n My main question is related to the practical use of the bounds provided in the paper, in light of the difficulty of determining the factor dependent on the distance/divergence $\\beta$, and the minimum tolerated error $\\mathcal{E}_\\epsilon$. N/A.", " This paper proposes approximate value equivalence (AVE) which extends the previously proposed VE formalism by replacing equalities with error tolerances. With this extension, it shows that AVE models with respect to one set of functions are also AVE with respect to any other set of functions if a high enough error is tolerated, which then allows for deriving bounds on the performance of VE models with respect to arbitrary sets of functions by relating them to a particular model class with known performance guarantees. Finally, the paper supports these results by providing intuitions and discussions of their implications. **Originality:** The paper seems to be original in the sense that it extends the previously proposed VE formalism by replacing equalities with error tolerances and that it develops a whole variety of new theoretical results out of this. I am not aware of any study that does this. The related work section also seems to cover the related studies in the literature.\n\n**Quality:** Overall, there seem to be no serious quality issues with the paper. Though, I would like to indicate that I haven’t checked the proofs in a very detailed manner and just went over them at a high level. The submission looks technically sound and the core ideas are well-explained. However, I do have a few major and (mostly) minor questions on the quality of the paper (see the Questions section below). \n\n**Clarity:** I found no problems with the clarity of the paper. The paper is very well-written. I have nothing to suggest.\n\n**Significance:** Even though the derived theoretical results seem to be interesting in their own sense, I would like to indicate that I found the paper to have problems in the significance of the results. More specifically, I am not sure if the paper provides enough discussion on how the derived results can be helpful in developing VE MBRL algorithms. More on this can be found in Question 2 below.\n **Major:**\n\n1. On the role of the deterministic policy set in PVE classes: Why does the PVE class wrt all deterministic policies contain only models which can plan optimally (see the last par. of Sec. 2)? I mean what is the role of determinism here? Why doesn’t the PVE class wrt all policies (including deterministic and stochastic ones) contain only models which can plan optimally?\n2. On the significance of the theoretical results in terms of practicality: In the previous studies of Grimm et al. (on VE), the authors have provided experimental results that either justify the usage of VE models or improve the currently existing algorithms. However, in this study there is no empirical demonstration of the usefulness of the theoretical results. 
Even though the theoretical results are interesting by themselves, I think that providing a section on how these results can at least help developing better VE MBRL algorithms can significantly benefit the paper. Is it possible to provide a discussion on how these theoretical results can be useful in giving rise to new algorithms?\n\n**Minor:**\n\n1. In Def. 1 and Property 1, $\\mathcal{V}$ should be a subset of $\\mathbb{V}$?\n2. I think the par. that starts with line 164 should explicitly mention that a certain amount of error will be tolerated for $\\mathcal{M}^{\\infty}$. I had to read it several times to get this part.\n3. In Prop. 2 should it be $\\mathbb{V}_{\\mathbb{\\Pi}}$ in the upper bound?\n4. Why is there $2^{\\mathbb{V}}$’s in line 242? Shouldn’t these be $\\mathbb{V}$’s?\n5. There is a typo at line 257 right after the second column.\n6. There seems to be a typo inside beta in Eq. 18 where one of the value functions should not have a hat?\n\nIf the authors are able to address the concerns provided in this review during the rebuttal period, I am willing to raise my score.\n Yes the authors mention the limitations of their work.", " This paper extends the notion of \"value equivalence\" among MDPs w.r.t. sets of policies and value functions (and number of steps) to \"approximate value equivalence,\" that permits the values after k steps of the policy to differ by up to some parameter epsilon. It is then possible, for any two classes of models, to find an epsilon for which the models satisfy the definition. Bounds that relate the performance of policies across such approximate value equivalent models are also obtained.\n It seems clear that for the kind of models studied here, we need bounds that control how close the predicted performance is to performance in the actual environment, and it seems desirable to be able to obtain this for bounded orders. This is the main strength of the paper. \n\nThe notion of approximate value equivalence is a very natural generalization of the notion of value equivalence -- indeed, it borders on being \"obvious\" given the existing notion of value equivalence. I am not so moved by the results about relating arbitrary pairs of classes. I don't see the utility of this notion for such large values of epsilon as such relationships would require -- and indeed, intuitively, we would not expect totally arbitrary pairs to be \"equivalent\" in any meaningful sense. This dampens the significance of these contributions.\n\nWhat's really missing from this work is evidence that the definition can be made to do useful work for us. I'd want to see it used to analyze an algorithm or show that some MDP can be represented more compactly, or something of similar flavor.\n Can you point to evidence that it is productive to consider this notion? (See the last part of \"Strengths and Weaknesses\" for a couple examples of what this might look like.) This is fine." ]
[ -1, -1, -1, -1, -1, -1, -1, 6, 7, 4 ]
[ -1, -1, -1, -1, -1, -1, -1, 2, 4, 3 ]
[ "I75FB7UgET5", "zGv2iJxJ8J", "nips_2022_S2Awu3Zn04v", "R_4G76nCgv", "zl23edjuRe6", "Img1fRIDdC", "nips_2022_S2Awu3Zn04v", "nips_2022_S2Awu3Zn04v", "nips_2022_S2Awu3Zn04v", "nips_2022_S2Awu3Zn04v" ]
nips_2022_gQBetxnU4Lk
Learning-based Motion Planning in Dynamic Environments Using GNNs and Temporal Encoding
Learning-based methods have shown promising performance for accelerating motion planning, but mostly in the setting of static environments. For the more challenging problem of planning in dynamic environments, such as multi-arm assembly tasks and human-robot interaction, motion planners need to consider the trajectories of the dynamic obstacles and reason about temporal-spatial interactions in very large state spaces. We propose a GNN-based approach that uses temporal encoding and imitation learning with data aggregation for learning both the embeddings and the edge prioritization policies. Experiments show that the proposed methods can significantly accelerate online planning over state-of-the-art complete dynamic planning algorithms. The learned models can often reduce costly collision checking operations by more than 1000x, thus accelerating planning by up to 95%, while achieving high success rates on hard instances as well.
Accept
Robot motion planning in dynamic environments remains a significant problem. All reviewers consistently agree that the suggested GNN approach in this paper has useful merits, is of general interest, and that the paper is above the publication threshold. The reviewers' detailed comments provide a good source for fine-tuning improvements to the paper.
val
[ "sB58zPeVdsg", "QEQIkC70Pxy5", "m6xrguRAP5S", "ZwJ2-642jz", "OMgDUy4krZ", "ec_Jfr12KLG", "im6mbLdU66A", "YUkI6rNpXjF", "Se2eLywhK4I", "eCKkci9wMmI" ]
[ "author", "official_reviewer", "author", "official_reviewer", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " We thank the reviewer for the crucial questions and carefully reading our response! As also discussed in Appendix D, we think the topics on planning in more challenging environments and problems will be a great direction for our future work.", " Thanks for the response! My questions are addressed.\n\nIt's interesting seeing the trade-off \"curve\" with different number of training examples. The added experiments also suggested the inadequacy of backtracing in harder environments.", " We thank the reviewer for the important questions on the definition and baseline, and for reading our response. We will clarify the definition and add videos in the final version.", " Thanks for your response. My concerns are well-addressed.\n\nPreviously I was confused about the definition of ‘fixed graph’ (line 91) because:\n1. in the Stage 1, the graph is irrelevant to timestep t. \n2. the connection between nodes does not change at all. \nI now understand the it means in the second state, the feature is time-dependent so ‘not fixed’. \nHowever, I suggest the author clarify this point in the text because when the reader reads ‘learning to explore edges given fixed graphs [41, 23]’, they may think your work operates on a dynamic graph where the graph connectivity changes.\n\nI think including comparison with OrcleNet makes the paper more complete and solid. \n\nRegarding the demonstration of failure case, I would highly encourage you to have some videos instead of image snapshots in the final version. Currently it is a bit hard to imagine the whole trajectory especially because there are dynamic obstacles.", " Thank you for your careful review and acknowledgment of novelty and effectiveness. We address the specific questions as follows.\n\n> The writing requires significant proof reading to correct spelling and grammar mistakes.\n\nThank you for pointing them out. We have updated the paper with more thorough proofreading and will continue to improve the writing. \n\n> The title mentions \"Manipulation Planning\" but actually there is no object manipulation performed...Title should be changed accordingly.\n\nThanks for pointing this out. We were trying to emphasize the context of high-dimensional planning problems with robot arms, which is different from the simpler problem of navigating in 2D environments. We have changed the title in the updated paper.\n\n> The supplementary material should provide the network details to allow for reproducibility of the work. Ideally, also code and datasets should be provided publicly.\n\nWe have provided the network details in Appendix B. The code and data will be fully made open-source upon publication of the work. \n\n> l.115 the explanation of where the attention mechanism will be used is confusing...\nl. 143, please introduce the subscript notation v_i,v_j, x_i, x_j ...\nThe notation O = O + TE is unclear. ...\n\nWe have organized and rephrased the mentioned parts in the updated version.\n\n> l. 226 it's unclear how the Dijksta-H baseline works from the description. How does it handle arrival time and does it use collision checking ? Please revise and provide more implementation details in the supplementary material.\n\nWe have provided the pseudo-code of Dijkstra-H in the updated Appendix A. A brief description is as follows. Similar to our GNN-TE, Dijkstra-H first samples multiple configurations of the ego-arm to form a graph along with the start and goal node. It uses the Dijkstra algorithm to calculate the shortest distance from every node to the goal. 
When planning, the ego-arm will follow this heuristic and prioritize the node with the shortest distance to the goal as the next step, while checking for collisions on each edge and keeping track of the time. If there is a collision when moving to the target node, Dijkstra-H will turn to the next-closest node, and so on. Dijkstra-H fails when it cannot find any next available nodes. \n\n> l. 246 is SIPP guaranteed to find optimal paths and for which planning problem?\n\nYes, SIPP is guaranteed to find optimal paths, for planning problems in dynamic environments with known obstacle trajectories. We refer readers to Section III.B of [32] for more details.\n\n> The ablation study in 5.2 only reveals little improvements by the individual components relative to the performance of the baselines. What is the overall performance gain by all the components over the basic GNN?\n\nWe provide all components' overall performance gain over basic GNN (GNN-basic) here and in the updated Appendix C.7. Specifically, in the first stage, the basic GNN has no attention mechanism or temporal encoding. In the second stage, it only takes the obstacle encoding at the current time step as input.\n\n| | | SIPP | Dijkstra-H | GNN-TE | GNN-basic |\n| :---: | :---: |---: |---: |---: |---: |\n| **Success Rate** | random | 100% | 89.70% | **94.10%** | 92.70% |\n| | hard |100%| 0% | **62.50%** | 32.00% |\n| **Avg Path Time Ratio** | random | 100%|123.73%|**107.78%**|112.42%|\n| | hard | 100%|/|**122.13%**|185.92%|\n| **Avg Collision Checking** | random | 60K|56.21|**17.44**|28.80|\n| | hard |1081K|/|**45.23**|109.70|\n\n> The paper does not address limitations and assumptions properly. For instance, it should be clearly stated that the model needs to be trained for the specific actor arm, the obstacle arms and the static obstacles which it is tested on, i.e. that specific kinematic structures and obstacle shapes are assumed. How can the approach be generalized to arbitrary obstacle shapes and arm kinematics?\n\nThanks for pointing this out. Since our method is learning-based, it should be trained on the same actor and obstacle arms as it’s tested on, as both the sampled graph and the expert trajectory are implicitly conditioned on the kinematic structure. This assumption is reasonable as the most immediate use of learning-based components is for reducing repeated planning computation in a relatively fixed setting of arm configurations. We believe learning planning models that can be generalized to arbitrary arms and obstacles may be too ambitious for now, and it requires an in-depth study of many other issues that have not been fully understood, such as the inherent generalization properties of graph neural networks. We have provided more discussions in Limitations and Future Work in the updated Appendix D.\n", " Thanks for your careful review and important suggestions for the comparison with learning-based approaches. We address the specific questions as follows.\n\n> How is the obstacle trajectory represented? … The authors may want to clarify this.\n\nThe dynamic obstacle trajectory is represented by a vector of all the joint positions in the workspace in a time window (as mentioned in l.106-107). We only maintain a graph of all the sampled configurations of the ego-arm, while the trajectory is not represented as a node on the graph. 
Instead, we introduce the trajectory of obstacles, which can be of arbitrary length, using the attention mechanism to encode the information into the nodes and edges of the configuration graph (as illustrated in Figure 2). \n\nThe approaches in [21, 39] directly use fixed graph structures to update the internal values. Since no temporal information is introduced in their architecture, the output of the network is invariant to different time steps, which might not apply to dynamic environments, since some edges can be feasible to traverse at some time steps but infeasible at others. \n\nOur architecture does not directly generate the outputs solely based on the fixed graph, and provides a temporally-sensitive solution by introducing the second-stage planner. It uses local-level information for the network to plan towards the goal vertex while also considering the dynamic obstacles to avoid collisions. We believe this is one of the key components for handling dynamic environments.\n\n> However, I found the comparison between the proposed method and other learning-based methods is missing... \n\nWe provided comparisons with RL baselines in Section 5.1 in the paper and C.4 in the updated Appendix. In general, we found that GNN-based approaches outperform learning-based approaches that do not exploit the graph structure. For instance, we have further compared with another baseline OracleNet in the dynamic environment (OracleNet-D), which is modified from [1], with the trajectories of obstacles as additional inputs in every rollout of the network. We use the original repository of the paper and provide the result and analysis in C.5 in the updated Appendix.\n\n> However, the performance drop seems to be huge in the hard case (Figure 4 and Table B2)... I think it would be better to provide some illustrations and convincing improvement direction in the paper. \n\nWe have visualized some failure cases in the 2Arms environment (see C.8 in the updated Appendix). We find there are mainly two failure modes: it fails to make a detour in Fig. 8, or gets too close to the moving obstacles in Fig. 9. In Fig. 8, we can observe that GNN-TE plans to directly get to the goal while the feasible path is to make a detour to avoid the obstacle. In Fig. 9, GNN-TE moves in the correct direction but fails by getting too close to the obstacle arm.\n\nWe have provided more discussions on improvement direction in the updated Appendix D. We put the same discussions here as follows. The trained policy of a learning-based method relies on the training distribution; in the paper, we train on randomly generated cases and test on both random and hard ones. Therefore, the performance on the hard examples may not be as prominent as on the random ones. We’ve tried to train the algorithm on extra hard examples and test its performance, and the success rate rises from 62.5% to 71.3% in the 2Arms environment. \n\nIn general, we believe the safety and reliability of learning-enabled systems are always a core issue that needs to be solved after learning-based approaches show clear benefits. For motion planning, a potential future direction is to integrate our learning-based component with monitoring. Such monitoring identifies hard graph structures that are out-of-distribution for the neural network components. 
It ensures that the learning-based components are only used when the planning can be safely accelerated, in which case they will provide great benefits in reducing collision checking and overall computation. When hard or out-of-distribution cases occur, the planner should fall back to more complete algorithms such as SIPP. There has been much ongoing development in frameworks for ensuring the safe use of learning-based components in planning and control, which we believe is orthogonal to our current work.\n\n> Another possibility to make the contribution more significant is to consider cases where the object trajectories are not known, thus prediction and rapid re-planning are needed.\n\nThis would be a great next step for our work, which would bring us out of the context that SIPP is typically used for. We can add pre-trained prediction modules using Gaussian Processes or deep neural models, and use their output as the pseudo obstacle input of GNN-TE. In practice, our GNN component only requires around 0.1~0.2 seconds of inference per test case on average, and has the potential to be used for rapid re-planning.", " Thank you for your careful reading. We are glad you found our GNN-based method to be a novel solution to an important problem. We address the specific questions as follows.\n\n> There are quite a few typos throughout the paper. Please do more proofreading for future revisions. \n\nThanks for pointing them out. We have done more proofreading and fixed the typos in the updated version.\n\n> If the pros and cons are thoroughly analyzed and presented, it can be of greater significance.\n\nWe have included further discussions of the pros/cons and limitations in the updated Appendix D. \n\n> The authors made an attempt using backtracking, but only showed the result in the simplest setting, two-DOF arms. \n\nWe refer the reviewer to Section C.3 (Table 7) in the updated Appendix. As the DoF and the complexity of the configuration space increase, the searching space grows and requires more backtracking steps. Thus the increase in success rate by backtracking may not be as significant as in the simple settings if we keep the backtracking steps the same. However, GNN-TE still shows a significant advantage over Dijkstra-H even with backtracking in all the settings.\n\n> How is the speed (saving number of collision checks) in trade-off with quality (success rate of finding paths), and how does this trade-off vary with different amounts of training? E.g., currently 2000 problems are used for training. Is the performance increasing with more training problems? Is it in a linear or log trend? Does it saturate?\n\nThank you for mentioning the trade-off between the number of collision checks and the success rate. We have included more experimental results in the updated Appendix C.6, showing that the success rate increases and the collision rate decreases as the training size increases, both approximately following a logarithmic trend. A brief description of this additional experiment is as follows. We train GNN-TE on varying training problems (100, 200, 300, 400, 500, 1000, 1500, 2000, 2500, 3000) and test on the same randomly sampled and hard problems in the 2Arms environment. We observe that GNN-TE benefits from increasing the training problems, with both a better success rate and less collision checking. We also provide the trends of the two criteria in the right column of the figure, and we believe both trends are roughly logarithmic. 
It shows that the performance saturates as the training set covers the problem distribution. \n\nDuring inference, GNN-TE acts in a greedy way to follow the edges with the highest priority value. By backtracking, we keep a stack of policy edges of the top-n priority values and allow the algorithm to take suboptimal choices if it fails. Therefore, backtracking will increase collision checking with the hope of finding a solution. Although adding this or other systematic searching algorithms boosts the quality at the sacrifice of speed, we think the actual bottleneck might still be the priority value as the heuristic produced by the model. We believe this trade-off may be a crucial learning-based dynamic motion planning topic and needs future investigation.\n\n> The separation between random and hard cases is determined by Dijkstra-H… It's also helpful to provide a direct measure on the complexity of the planning problems.\n\nIntuitively, an instance is hard if using greedy actions fails, i.e., when the shortest path to the goal needs to be avoided to find feasible paths. The typical hard cases considered in motion planning, such as U-shaped barriers, all share this characterization. Right now, we are using Dijkstra-H as an empirical metric to reflect this consideration. A fully accurate formal measure may not be easily definable because to do so, we need to basically separate the classes of simple planning problems (say polynomial-time solvable instances, i.e., in complexity class P) from the general dynamic motion planning problem (which is PSPACE-complete [35]), which is related to separating P from PSPACE. \n\n > Using the attention mechanism to encode obstacles is not analyzed separately, i.e., why is the attention mechanism the most proper to model the obstacles?\n\nSince the dynamic obstacles form a trajectory in the time dimension, attention mechanisms make it possible to learn the correlation between the position of obstacles at every time step and the ego-arm configuration on the graph. This is an empirical observation, and we have included further discussions in the updated Appendix D.1. \n\n> No specific analysis or discussion is provided on the limitations by the authors. …More comprehensive analysis on this would be greatly appreciated by the community. \n\nWe have provided the limitations and future work in the updated Appendix D.", " The paper aims to improve the efficiency of motion planning problems in dynamic environments, such as with moving obstacles or multiple manipulators. The approach taken is a graph neural network and temporal encoding framework. Trained with imitation learning and data aggregation procedures, the proposed methods significantly reduce collision checks and planning time, at the cost of reducing planning success rate.\n\nConcretely, the core building blocks are graph neural networks, an attention mechanism, temporal encoding of moving obstacles, and imitation learning with DAGGER-style data aggregation. Extensive experiments are provided to compare the proposed method with search-based baselines, ablation baselines, and end-to-end RL methods. Originality\n\nThis paper takes a graph neural network (GNN) approach and temporal encoding to tackle motion planning in dynamic environments. It is a novel way of applying existing approaches such as GNN and attention mechanisms to an existing problem. 
Many details were reasonably designed, e.g., the temporal encoding of moving obstacles.\n\nQuality and Clarity\n\nThe algorithm is reasonably designed, justified by the provided ablation studies of module and data flow choices. The approach is clearly presented, backed with diagrams and example drawings.\n\nThere are quite a few typos throughout the paper. Please do more proofreading for future revisions. Just to name a few, \"(\" in L112, why (k) is needed for the aggregation function in eq(1)? L118 \"has to\". 165 \"E_j\" -> \"E_i\".\n\nSignificance\n\nMotion planning in dynamic environments is an important problem, relevant for a wide group in the community. Leveraging GNNs (likely the first time) to solve this problem is interesting. If the pros and cons are thoroughly analyzed and presented, it can be of greater significance.\n\nThis is an important issue to address. It is not a surprise that using a GNN and temporal embedding, trained on the same set of static and dynamic objects, can achieve more efficient search heuristics. However, this efficiency benefit comes at a price, sacrificing the success rate, i.e., paths cannot be found in many cases. The authors made an attempt using backtracking, but only showed the result in the simplest setting, two-DOF arms. Yet it is not a strong result (98% for random and 89% for hard). From a reader's point of view, understanding this trade-off in a more systematic way would be a key contribution. See also Questions. 1. How is the speed (saving number of collision checks) in trade-off with quality (success rate of finding paths), and how does this trade-off vary with different amounts of training? E.g., currently 2000 problems are used for training. Is the performance increasing with more training problems? Is it in a linear or log trend? Does it saturate?\n\n2. The separation between random and hard cases is determined by Dijkstra-H. Is there a justification for this choice, or is it because there are no better ways? It's also helpful to provide a direct measure on the complexity of the planning problems.\n\n3. Using the attention mechanism to encode obstacles is not analyzed separately, i.e., why is the attention mechanism the most proper to model the obstacles? No specific analysis or discussion is provided on the limitations by the authors. Though, from the result section, it can be observed that the proposed GNN-based motion planning has a reduced success rate compared to the baseline SIPP. More comprehensive analysis on this would be greatly appreciated by the community. Refer to Questions.", " This paper proposes to use a learning-based approach for motion planning with dynamic obstacles. Each possible robot configuration (after discretization) is represented as a node. This method assumes it knows the full trajectory of the obstacles. It first generates expert trajectories using a sampling-based path planning method (SIPP), uses them as the ground truth, and tries to imitate that policy. The performance is further enhanced by using DAGGER. The advantage of using a learning-based approach to imitate the traditional approach is that the learning-based approach is much faster. [originality] This work is inspired by the transformer literature and successfully adapts it to the motion planning community. I believe this paper makes an original contribution with this adaptation.\n\n[quality] I believe overall this paper gives a detailed and comprehensive evaluation of the proposed method. 
However, I found the comparison between the proposed method and other learning-based methods is missing. For example, one simple baseline I could come up with is using [1]’s network, with the full obstacle trajectory as an additional input (since it is known), to output the next optimal configuration.\n\n[clarity] I think the technical part is clear except for one point: this paper claims previous methods consider fixed graphs ([39, 21]). Then I’m wondering how the obstacle trajectory is represented? My understanding is the obstacle’s full trajectory is represented as a node in the graph. However, in that case the graph is fixed and some previous methods need to be considered in the experiments (such as [21, 39]). The authors may want to clarify this.\n\n[significance] The proposed method is much faster than the traditional optimization-based method. This can be a great advantage for many applications. However, the performance drop seems to be huge in the hard case (Figure 4 and Table B2). Since motion planning would be safety-critical in many cases, this may be a major limitation of the proposed method. I think it would be better to provide some illustrations and convincing improvement direction in the paper. Another possibility to make the contribution more significant is to consider cases where the object trajectories are not known, thus prediction and rapid re-planning are needed. The following questions are essentially what I said in the above section:\n\n1. How is the obstacle trajectory represented?\n\n2. How to deal with cases where the obstacles' trajectories are unknown?\n\n3. Is there any possible comparison with other learning-based methods? N/A", " This paper proposes an imitation learning approach which mimics the behavior of a search-based configuration space motion planner with dynamic obstacles (SIPP [32]) using DAgger. Planning is performed on a sampled road map graph which is encoded with the motion of a dynamic obstacle and the state of static obstacles in a GNN. The method seems to assume a fixed shape and kinematic structure of dynamic (robot arms) and static (boxes) obstacles. The method is evaluated in simulation and compared in planning performance metrics with ablations, SIPP and a simpler Dijkstra-based baseline. The approach markedly reduces the required computation in terms of collision checks, while keeping a high level of success rate and path efficiency. Strengths:\n* The approach seems novel and effective in cloning the planner behavior for the trained environment settings.\n* The paper is well structured and can be easily followed. \n* Improvements over ablations, the search-based planning baseline SIPP, and a simpler Dijkstra based planner are demonstrated in several planning problems with various robot kinematics and difficulties.\n\nWeaknesses:\n* The writing requires significant proofreading to correct spelling and grammar mistakes.\n* The title mentions \"Manipulation Planning\" but actually there is no object manipulation performed. The approach performs motion planning. Title should be changed accordingly.\n* The supplementary material should provide the network details to allow for reproducibility of the work. Ideally, also code and datasets should be provided publicly.\n* l.115 the explanation of where the attention mechanism will be used is confusing, because the concepts are unclear at that point of reading. Just introduce the attention mechanism abstractly.\n* l. 
143, please introduce the subscript notation v_i,v_j, x_i, x_j and that the subscript links nodes and respective components of the features. The y needs to be separated with a comma.\n* The notation O = O + TE is unclear. First of all, this is not a proper equation but an assignment. Rather use a new symbol for the left O. Also, O and TE as well as the + operation are not defined.\n * l. 226 it's unclear how the Dijkstra-H baseline works from the description. How does it handle arrival time and does it use collision checking? Please revise and provide more implementation details in the supplementary material.\n* l. 246 is SIPP guaranteed to find optimal paths and for which planning problem?\n* The ablation study in 5.2 only reveals little improvements by the individual components relative to the performance of the baselines. What is the overall performance gain by all the components over the basic GNN?\n\n\n * The paper does not address limitations and assumptions properly. For instance, it should be clearly stated that the model needs to be trained for the specific actor arm, the obstacle arms and the static obstacles which it is tested on, i.e. that specific kinematic structures and obstacle shapes are assumed. How can the approach be generalized to arbitrary obstacle shapes and arm kinematics?" ]
[ -1, -1, -1, -1, -1, -1, -1, 6, 6, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, 3, 4, 4 ]
[ "QEQIkC70Pxy5", "im6mbLdU66A", "ZwJ2-642jz", "ec_Jfr12KLG", "eCKkci9wMmI", "Se2eLywhK4I", "YUkI6rNpXjF", "nips_2022_gQBetxnU4Lk", "nips_2022_gQBetxnU4Lk", "nips_2022_gQBetxnU4Lk" ]
nips_2022_IJNDyqdRF0m
Decomposing NeRF for Editing via Feature Field Distillation
Emerging neural radiance fields (NeRF) are a promising scene representation for computer graphics, enabling high-quality 3D reconstruction and novel view synthesis from image observations. However, editing a scene represented by a NeRF is challenging, as the underlying connectionist representations such as MLPs or voxel grids are not object-centric or compositional. In particular, it has been difficult to selectively edit specific regions or objects. In this work, we tackle the problem of semantic scene decomposition of NeRFs to enable query-based local editing of the represented 3D scenes. We propose to distill the knowledge of off-the-shelf, self-supervised 2D image feature extractors such as CLIP-LSeg or DINO into a 3D feature field optimized in parallel to the radiance field. Given a user-specified query of various modalities such as text, an image patch, or a point-and-click selection, 3D feature fields semantically decompose 3D space without the need for re-training, and enables us to semantically select and edit regions in the radiance field. Our experiments validate that the distilled feature fields can transfer recent progress in 2D vision and language foundation models to 3D scene representations, enabling convincing 3D segmentation and selective editing of emerging neural graphics representations.
Accept
The paper proposes an approach for manipulating 3d scenes represented with implicit neural representations (NeRF-like), via distilling 2D feature extractors into a 3D feature field. The method shows convincing qualitative results on scene editing and promising quantitative results on semantic segmentation. All reviewers are (to a different degree) positive about the paper, noting good presentation, interesting and fairly novel approach, fairly thorough and convincing results (it would be nice to have quantitative results on scene editing, but that's quite non-trivial). Overall, this is an interesting and well-executed paper and I am happy to recommend acceptance.
train
[ "QJ_9bJe_Jzj", "pHy6RHJgm9r", "WaXoIDcNdB-", "b_5taPMlLeHB", "JybWGWzUz6A", "yQMEyvQj45T", "U1I2Gh2brQ5", "bsTf83a8zk", "100f6-fmQ4_", "D96WSsH2zgR", "WixSHEi4EsX", "rVs6u0h7if-", "80yT57QNMbf" ]
[ "official_reviewer", "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for the response. I am happy to keep my original score and suggest acceptance.\n\n", " Thanks for the Authors' response, which I believe clarifies many concerns of mine and other reviewers. Without additional concerns from other reviewers, I would keep my original score and suggest acceptance.", " Dear reviewer 1TRr,\n\nThank you for your review! Do you have any remaining questions or concerns following our response? Please let us know. We’d be very happy to do anything we can that would be helpful in the time remaining!\n", " Dear reviewer cXkV,\n\nThank you for your review! Do you have any remaining questions or concerns following our response? Please let us know. We’d be very happy to do anything we can that would be helpful in the time remaining!", " Dear reviewer XqPY,\n\nThank you for your review! Do you have any remaining questions or concerns following our response? Please let us know. We’d be very happy to do anything we can that would be helpful in the time remaining!", " Dear reviewer 48WN,\n\nThank you for your review! Do you have any remaining questions or concerns following our response? Please let us know. We’d be very happy to do anything we can that would be helpful in the time remaining!\n", " ## Distilled Feature Fields for other NeRF Variants\nWe agree that demonstrating DFF on other NeRF variants would serve to highlight the generality of the proposed approach. We note that our method does not make any assumptions on the parameterization of the NeRF - the proposed approach is agnostic to the parameterization and thus in principle compatible with voxelgrid-, tri-plane-, ground-plan and other types of scene representations. We can also use an independent MLP for feature fields that uses a parameterization of the feature field **different** from that of the feature field, as demonstrated in line 240 and Appendix, Table 3 (see also \"Train Feature Branch Independently\"-section in this response). We did not succeed in running such an experiment in time for the rebuttal, but will include such a result in the camera-ready paper. We will add this discussion to the methods section, as well as to the discussion section.\n\n## Discussion of Limitations\nWhile we discuss several failure modes of our approach (lines 286, 288, 317), we will compile them in a dedicated section “Limitations”. We will further add a discussion of the point raised by reviewer 1TRr: It is indeed the case that if the teacher network can only output low-resolution feature maps, the resulting supervision might be noisy and harm training fine-grained feature fields, although the DFF could have some denoising effects thanks to multi-view fusion. This is also discussed in [SemanticNeRF's](https://arxiv.org/pdf/2103.15875.pdf): the lower the resolution, the lower the segmentation performance. We will further merge in the discussion in appendix line 812, where we point out that the feature field inherits certain weaknesses of the teacher network.", " ## Rigor of Claim\nReviewer XqPY points out that our claim of distilling self-supervised pre-trained image feature extractors is not rigorous, as we also distill the LSeg feature extractor. We will re-formulate this claim to instead state “feature encoders pre-trained in both self-supervised and supervised frameworks”. We note that fundamentally, our framework does not make any assumptions on how the image encoder is trained, and we note that much of our results are accomplished with the DINO encoder, which is indeed trained fully self-supervised. 
We argue that the fact that our method is agnostic to the training method is a _strength_ of the proposed approach, as it lays the groundwork to “lift” any 2D image feature extractor - supervised or not - into 3D to enable editing and potentially other downstream tasks. \n\n## Train Feature Branch Independently\nIt is indeed the case that the feature and radiance fields can be trained independently, with completely separate MLPs. In line 240 and Appendix, Table 3, we report quantitative results of this approach and note that it performs comparably to a version where feature field and radiance field share part or all of the MLP.\nIt is further indeed possible and an interesting direction for future work to not train the NeRF on RGB pictures at all, and use the proposed Feature Field by itself as a representation for downstream tasks in vision and robotics. While we don’t make any claims on 3D semantic segmentation performance, our analysis nevertheless indicates that the proposed approach could, in principle, be used to “lift” any predictions made by 2D image processing models into a consistent 3D representation, which is a promising direction for future work.\nMeanwhile, as feature maps may be 3D inconsistent and of lower frequency, this might come with additional challenges on the accuracy of reconstructed geometry - we leave such an investigation to future work.\nWe will add this discussion to the experiments section, as well as the discussion of future work.\n\n## Novelty\nWe propose to distill a pre-trained 2D feature extractor into a 3D-structured neural scene representation, which, to the best of our knowledge, is a novel approach. The proposed approach achieves a high quality, and convincingly enables a down-stream task of significant interest, NeRF editing. With this work, we believe we motivate exciting future work towards distilling *any* pre-trained 2D image model into Neural Radiance Fields, which we believe to be an impactful and fascinating area of future work. \n\n## Performance on Semantic Segmentation\nXqPY argues that the fact that we do not claim SOTA performance on 3D semantic segmentation weakens our contributions. They further argue that our paper “basically introduces a distillation method for NeRF semantic segmentation”. \n\nWe would like to highlight that semantic segmentation is *not* the scope of this paper. The scope of our paper is to demonstrate a novel approach towards selection of 3D regions and subsequent editing of the underlying 3D representation, by distilling a pre-trained 2D feature extractor such as DINO. Indeed, the proposed approach is significantly more general than distilling a pre-trained semantic segmentation model, which would allow selection _only of areas that were in the set of classes that the model was pre-trained for_. Instead, we demonstrate distillation of both a pre-trained _self-supervised_ feature extractor, DINO, and of LSeg, a more application-specific feature extractor, enabling edits of uncommon objects, such as the dinosaur skeleton.\n\nWe agree that 3D semantic segmentation is an interesting direction for future work, which is the reason for the inclusion of such a benchmark in the present paper. However, we believe that the set of experiments and results we show is absolutely necessary to properly evaluate our selection and editing claims. The analysis proposed by the reviewer, while indeed very valuable, would far exceed the scope of the present paper, which is already at the page limit. 
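For reference, the zero-shot labelling underlying our segmentation experiments can be sketched as follows. This is a simplified illustration, assuming LSeg-style features and CLIP-style text embeddings for the class names; the threshold-free argmax is our simplification:

```python
import torch
import torch.nn.functional as F

def segment_by_text(rendered_feats, text_embeddings):
    # Assign each pixel of a rendered feature map the query class whose
    # embedding has the highest cosine similarity with the pixel feature.
    f = F.normalize(rendered_feats, dim=-1)     # (H, W, D)
    q = F.normalize(text_embeddings, dim=-1)    # (num_classes, D)
    sim = torch.einsum("hwd,cd->hwc", f, q)     # cosine similarities
    return sim.argmax(dim=-1)                   # (H, W) label map
```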
\n\nThe reviewer further proposes to benchmark semantic segmentation approaches on other indoor scenes of the LLFF dataset. However, we note that LLFF neither features ground-truth semantic segmentation, nor are the LLFF images appropriately covered by the semantic classes of any existing dataset, such as SCANNET (please see the SCANNET classes [here](http://www.scan-net.org/ScanNet/Tasks/Benchmark/classes_SemVoxLabel-nyu40id.txt)). We thus believe a semantic segmentation benchmark on LLFF to be infeasible. \n\nFinally, we note that the [current state-of-the-art on SCANNET](https://arxiv.org/pdf/2110.02210v2.pdf) (see the leaderboard [here](https://paperswithcode.com/sota/semantic-segmentation-on-scannet)) is indeed based on the MinkowskiNet42 architecture that we used in our 3D segmentation benchmark, and thus, we believe that the paper in its present form serves as a motivation and starting point for future work in this direction.\n", " We thank the reviewers for their careful reading, and detailed and considerate feedback.\nWe are glad that reviewers think that our method is “both very relevant and novel” with a “principled approach\" (48WN), “studies an important problem” and is “well presented” (cXkV), and shows “solid experiments and promising results, quantitatively and qualitatively” (1TRr).\nThe reviewers also agree that additional experiment & training details, as well as a better discussion of failure cases and limitations, will serve to make the paper stronger, and better inspire future work. We agree. Below, we offer several clarifications and discuss how we will integrate reviewer feedback.\n\n## Details on Editing using Feature Fields\nReviewers remarked on a lack of detail, especially on how exactly the editing was performed (48WN). We were indeed constrained by the page limit, and thus opted to provide only a general formulation of decomposition-and-editing with Equations (5-7) in Section 4.2. However, we agree that more editing details will benefit the paper and will include them in a new Section “4.3 Editing Operations”. \n\nEditing for colorization, translation, and deletion proceeds as follows: \n(1) Sample points $[…, x_i, …]$ on a ray (as usual in NeRF). \n(2) For each point, we query the DFF. We can now use Eq. 5 to calculate the softmax probability of the coordinate being matched with a set of queries as $p \\in [0, 1]$. With a single query, we may use cosine similarity (see paper line 252 or L.131). We now define \"the coordinate is selected by the query\" if $p$ is above a user-defined threshold, otherwise \"not selected\". \n(3) If *not* selected, we calculate density $\\sigma(x)$ and color $c(x)$ at $x$ via the vanilla NeRF. \n(4) If selected, we may apply the following transforms: \n(4-A) Deletion (Figure 4): We set the density $\\sigma(x)$ of the point to zero. \n(4-B) Color editing: We query the NeRF for density $\\sigma(x)$ and color $c(x)$. The color is then edited by a colorization function $b$, i.e., it is transformed to $b(c(x))$. \n(4’-C) Translation / rescaling: A geometric transformation needs another step before performing (2). We first compute a deformed point coordinate $x’$: $x’$ is computed by applying the inverse of the editing transformation; that is, $x’ = g^{-1}(x)$. For translation, $g$ would be a simple addition with a vector. If $x’$ is selected by the query, $x’$ is used instead of $x$ for calculating color and density. 
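A minimal sketch of the per-point logic of steps (2)-(4) above is given below; the mixing rule for boundary points and the final rendering step (5) follow right after. The threshold `tau`, the helper signatures, and the query-then-mask structure are our simplifications for illustration, not the exact implementation:

```python
import torch

def edited_point(x, query_emb, nerf, feature_field, edit="delete",
                 tau=0.7, recolor=None):
    # x: (N, 3) sampled ray coordinates; query_emb: (1, D) query embedding.
    # For translation (step (4')-C), warp x' = g^{-1}(x) before this test.
    f = feature_field(x)                                      # (N, D)
    p = torch.cosine_similarity(f, query_emb, dim=-1)         # step (2)
    sel = p > tau                                             # selected?
    sigma, c = nerf(x)                                        # (N,), (N, 3)
    if edit == "delete":                                      # step (4-A)
        sigma = torch.where(sel, torch.zeros_like(sigma), sigma)
    elif edit == "recolor":                                   # step (4-B)
        c = torch.where(sel.unsqueeze(-1), recolor(c), c)
    return sigma, c  # composited by standard volume rendering in step (5)
```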
If both $x$ and $x’$ are selected and have non-zero density (e.g., the boundary between the deformed apple and others in Figure 7), we mix their colors $c(x)$ and $c(x')$ in the ratio of their alphas at the point for simplicity. \n(5) Finally, as usual, we perform volume rendering with the series of (density, color) tuples.\n\nWe may also use two different NeRF MLPs, and simple alpha-compositing to mix scenes as done in Figure 8.\n\nNote that these edits do *not* require re-training of the NeRF. An exception is the CLIPNeRF-based editing (Fig. 8, line 323 onwards). Here, we perform an edit using the method defined in CLIPNeRF, which, however, fails to accurately select relevant 3D regions. We thus use DFF for selecting the relevant region, and then compose the background taken from the un-edited NeRF with the CLIPNeRF-edited foreground NeRF.\n\n\n## Does each editing operation require re-training of the model?\ncXkV inquires whether each editing operation requires re-training the model. Of all the edits we show, only the CLIPNeRF edits require fine-tuning of a NeRF by minimizing the embedding loss between a rendered image and the CLIP text embedding. **All other edits, i.e. decomposition and selection and subsequent re-scaling, scene composition, color change, deletion, etc., do not require re-training, and incur negligible computational cost.**\n\n\n## Training Details\nWe agree that training details would benefit the clarity of the paper. We will integrate all training details (currently found in the supplement) in Section 5.\n\n\n## Comparisons with other NeRF editing methods\nWe agree that a quantitative comparison with a baseline NeRF editing method would make the paper stronger. Alas, we could not identify any dataset or method that would enable us to perform such a quantitative comparison, as this would require pairs of ground-truth edited and unedited scenes, and such a dataset is, to the best of our knowledge, not available. We are grateful for any proposals of how such a quantitative comparison could be performed, and what baselines we should consider.", " The authors propose a novel approach for getting a NeRF to output both (i) the standard color, and, novel for this work (ii) zero shot-trained features that can be used for semantic segmentation / decomposition / editing. Put differently, whereas previous work had parallel branches for generating, eg, closed-set semantic segmentation labels, this work generates features that can be used with standard zero-shot segmentation techniques. These features are trained via a distillation using LSeg / DINO-based teacher networks. Positive:\n- The paper is generally well written and easy to understand.\n- The topic is both very relevant and novel -- there is much work on the geometric side of NeRF, but much less on semantics and even less on feature generation and semantics / editing.\n- The approach is principled.\n\nNegative:\n- While the feature generation parts is fairly well explored, I found the actual editing part to not be explained very much / very well. Unless I'm missing something, most of the text focuses on the generation of the features and the decomposition, but the editing is captured only by \"We can combine this with more complex edits, including optimization-based methods like CLIPNeRF\". I would very want more detail to be added here.\n- While the overall structure of the proposed approach is novel (to my knowledge), the individual components are not. 
I would almost go as far as to say that I see this as a systems paper.\n\nOverall I am quite positive about the paper -- (i) the methodology, while simple, is clear and appears novel; (ii) the results are very good. If accepted, I think the work might end up being quite influential. Would the approach be able to train just the feature generation branch? While that would not be very useful in the current context, it would nevertheless be interesting to know if the image branch is absolutely necessary for training the feature branch. It would be useful to have a discussion on potential failures.", " To edit a scene more easily, the authors propose a semantic scene decomposition of NeRF by leveraging distillation techniques to distill the 2D feature extractor such as LSeg or DINO into a 3D feature field. Therefore, the distilled NeRF can segment the regions of interest with given texts. Strengths:\n1. The proposed method is easy to understand and follow.\n2. The edited results on scene \"Flower\" looks fancy.\n\nWeaknesses:\n1. The claim is not rigorous. The authors claim \"feature encoders pre-trained in a self-supervised framework on the image domain\". However, this is partly true as LSeg requires semantic segmentation annotation for training.\n2. As the author argues \"We note that the goal of this paper is not to achieve state-of-the-art performance on 3D semantic segmentation tasks.\". However, this will weaken your contribution as you already learned from a large-scale pre-trained model (LSeg), but do not plan to achieve SOTA performance even on NeRF-related datasets. Further, you compare only one 3D segmentation method in Table 1, which contradicts your claim.\n3. Mixed-use of Figure and Fig. 1. Overall, the paper introduces a distillation method for NeRF semantic segmentation. Although it works well on several scenes, it is highly suggested to test your method on other indoor scenes of LLFF dataset and compared with SOTA indoor 3D semantic segmentation dataset (e.g. Scannet), as you already adopted a highly complicated 2D segmentation model.\n2. The authors verify their method based on vanilla NeRF. How about the generalization ability on other NeRF variants? For example, MipNeRF, NeRF for unbounded scenes? You can try to integrate the distillation into an unbounded NeRF variant and show the segmentation quality/view consistency effect by quantitative metrics and visualized videos (e.g. on KITTI dataset with some SOTA segmentation models). See the weakness and questions.", " The paper studied an important problem, i.e., editing NeRF. It argued that previous works show difficult to selectively edit specific regions or objects. Thus, the paper decomposed scene semantics by learning a semantic feature descriptor distilled from the pretrained CLIP-LSeg or DINO models. The full method DFF (Distilled Feature Field) supports simple query-based decomposition and editing. **Strengths**\n1. It is smart to exploit the distilling avenue to allow query-based decomposition and editing for NeRF.\n\n2. The DFF framework is well presented in Figure 1 (left). And the method part clearly explains how to learn a semantic feature descriptor by taking the pretrained CLIP-LSeg or DINO models as teachers.\n\n3. The paper examines different editing operations and shows many qualitative results to demonstrate the effectiveness of DFF.\n\n\n**Weaknesses**\n\nWhile the DFF approach is really interesting, the reviewer has some concerns w.r.t. the implementation details and experiments. \n\n1. 
The reviewer might miss some details, but it seems the paper has not clearly presented how to perform scene editing after training a DFF. Do we need to individually train a target scene for each editing operation? \n\n2. As the paper has stated that the code of complete reproduction of all the results is not yet publicly available, it would be better to include the training details in the main paper.\n\n3. Is it possible to make some quantitate comparisons with other NeRF editing approaches?\n\n Overall, the presented method is interesting and novel. But there are some unclear points w.r.t. the implementation details and experiments. The reviewer hopes the authors can answer the questions in the \"Weaknesses\" part.\n\n\n************************\nThanks for answering my questions. I am happy to raise my rating to \"Weak Accept\", and hope the authors can carefully addresses the concerns in the revision. Especially, please explain more on the implementation details and clearly present the editing details. Possible limitations have not been discussed.", " This paper presents a simple yet novel solution to achieve dense volumetric semantic segmentation on NeRF, which supports semantics-guided NeRF decomposition and local editing. This decomposition is achieved by adding an additional renderer that renders the volumetric feature field similarly to radiance by distilling from a pre-trained 2D image semantic feature encoder like CLIP-LSeg. A rich set of experiments is presented to show the decomposition quality, generalization capability to different text and even image queries, and local editing (appearance and geometry) results based on the decomposition. ++ The major strength of this work is that it allows high-quality volumetric NeRF semantic segmentation by reusing the zero-shot image segmentation models.\n\n++ The method is simple and easy to implement, allowing for good results on various applications and works on various NeRF scenes without needing extra training data.\n\n++ Solid experiments and promising results, quantitatively and qualitatively.\n\n-- The method does not show any significant technical breakthrough and is heavily based on LSeg, which limits its technical depth. But I do not think it is a major issue. I think most of the technical questions and evaluations I would like to see are addressed in the paper. Some minor questions I have:\n1. The difference between image segmentation and NeRF segmentation is that image pixels are equally weighted but volumetric sample points have naturally-defined weights (density). Would this cause uncertain/noisy labels in less certain regions (empty space or inside the objects)? I know these regions might not be as important as object surfaces, but just being curious.\n2. Is the algorithm robust against artifacts that are commonly seen in real-world NeRF (like misalignment and cloudy artifacts) due to less-perfect camera or input image artifacts?\n3. The distillation supervision is applied on alpha-blended renderings, would this be a potential issue that the training tries to overfit the 2D feature supervision with bad volumetric features on the samples? Especially on NeRF trained on narrow baselines? The radiance may have a similar issue but the difference is that the ultimate results for radiance are still rendered views, but here we want to directly use the volumetric features.\n4. Is there any boundary issues with the segmentation? Like incomplete regions or holes/noisy points? 
There is not enough discussion on the limitations as far as I can find. I would suggest the authors add such discussions in the next version." ]
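The query-based editing procedure that the author responses above walk through (query a per-point feature, threshold its similarity to a text/image query, then apply a density or color edit before standard volume rendering) maps naturally onto a short routine. The sketch below is an illustrative reconstruction, not the authors' code: `nerf`, `feature_field`, `query`, and the threshold `tau` are all hypothetical stand-ins, and the uniform step size is a simplification of real NeRF sampling.

```python
import numpy as np

def cosine_similarity(a, b):
    # Similarity between per-point features (1, D) and a query embedding (D,).
    return (a @ b) / (np.linalg.norm(a, axis=-1) * np.linalg.norm(b) + 1e-8)

def render_edited_ray(xs, nerf, feature_field, query, tau=0.5,
                      edit="delete", recolor=None):
    """Volume-render one ray while applying a query-based edit.

    xs: (N, 3) sample points along the ray, ordered near to far.
    nerf(x) -> (sigma, rgb); feature_field(x) -> feature vector.
    Points whose feature matches `query` above threshold `tau` are edited.
    """
    sigmas, colors = [], []
    for x in xs:
        sigma, rgb = nerf(x)
        p = cosine_similarity(feature_field(x)[None, :], query)[0]
        if p > tau:                      # the point is "selected by the query"
            if edit == "delete":
                sigma = 0.0              # deletion: zero out density
            elif edit == "recolor" and recolor is not None:
                rgb = recolor(rgb)       # appearance edit on selected points
        sigmas.append(sigma)
        colors.append(rgb)
    # Standard alpha compositing, assuming a uniform step size for brevity.
    delta = 1.0 / len(xs)
    alphas = 1.0 - np.exp(-np.asarray(sigmas) * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * np.asarray(colors)).sum(axis=0)
```

Geometric edits such as translation would additionally remap each sample point through the inverse transform $g^{-1}$ before querying, as described in step (4'-C) of the responses above.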
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 5, 6, 7 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 5, 4, 3 ]
[ "yQMEyvQj45T", "WaXoIDcNdB-", "80yT57QNMbf", "rVs6u0h7if-", "WixSHEi4EsX", "D96WSsH2zgR", "bsTf83a8zk", "100f6-fmQ4_", "nips_2022_IJNDyqdRF0m", "nips_2022_IJNDyqdRF0m", "nips_2022_IJNDyqdRF0m", "nips_2022_IJNDyqdRF0m", "nips_2022_IJNDyqdRF0m" ]
nips_2022_AXDNM76T1nc
Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos
Pretraining on noisy, internet-scale datasets has been heavily studied as a technique for training models with broad, general capabilities for text, images, and other modalities. However, for many sequential decision domains such as robotics, video games, and computer use, publicly available data does not contain the labels required to train behavioral priors in the same way. We extend the internet-scale pretraining paradigm to sequential decision domains through semi-supervised imitation learning wherein agents learn to act by watching online unlabeled videos. Specifically, we show that with a small amount of labeled data we can train an inverse dynamics model accurate enough to label a huge unlabeled source of online data -- here, online videos of people playing Minecraft -- from which we can then train a general behavioral prior. Despite using the native human interface (mouse and keyboard at 20Hz), we show that this behavioral prior has nontrivial zero-shot capabilities and that it can be fine-tuned, with both imitation learning and reinforcement learning, to hard-exploration tasks that are impossible to learn from scratch via reinforcement learning. For many tasks our models exhibit human-level performance, and we are the first to report computer agents that can craft diamond tools, which can take proficient humans upwards of 20 minutes (24,000 environment actions) of gameplay to accomplish.
Accept
The authors have introduced Video Pre-Training (VPT), a semi-supervised learning approach that allows relatively small volumes of labeled data to train an inverse-dynamics model that is subsequently applied to predict the action labels associated with a far larger, unlabeled dataset. They then train an agent in a supervised regime with respect to these labels to achieve strong performance in Minecraft, which requires reasoning over very long time horizons. Overall there is clear consensus among the reviewers that this paper is novel, technically sound and of broad interest to the NeurIPS community. The authors have also proactively engaged with reviewer feedback to improve manuscript clarity. I am confident in recommending this paper for acceptance.
val
[ "O3fH2iikdrq", "ugmzTcQH_P", "1Igt61qqri9", "5yP34P1JEx", "t18CZqD-6qf", "sTrUMktaBh9", "155ufQFqd5L", "eEBH3m1uqgGH", "6t0iuUM7bQW", "1NpQoYJQ5TD", "EkPy2nZxVB0", "1xlO8Wd1BVe", "iB75rCF-ZsQ", "wSuC-co5iWu0" ]
[ "official_reviewer", "author", "official_reviewer", "author", "official_reviewer", "official_reviewer", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thanks for the answers to my questions, they are all satisfactory! \n\nOne minor suggestion: could you please discuss relations to a recent concurrent work, MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge (https://arxiv.org/abs/2206.08853)? It seems very complementary, and readers from the community may benefit from the discussion in the final camera-ready draft of this paper. ", " Thank you for responding quickly! We do not think that our inverse dynamics models are novel, and we did not claim as such in the paper or in our responses to your questions. We define our method to be the entirety of Figure 2, which includes collecting contractor data to train an IDM, using the IDM to label a large amount of unlabeled data, and then finally training a policy on it with behavioral cloning. While each independent part of the method is not novel, we think their combination is. The NeurIPS reviewer guidelines around originality are: _“Are the tasks or methods new? Is the work a novel combination of well-known techniques? (This can be valuable!)”_, and in our case we feel that we show that this novel combination is valuable and effective with unprecedented results in Minecraft.\n\n“Does the small amount of labelled data already see all the actions, observations, and objects? What if the actions, observations, and objects are out of the training distribution?”\n - We shared your concern at the start of the project; however, in general human contractors tend to reach most parts of the game state space, and so we did not find this to be an issue. This shouldn’t be surprising since the distribution of unlabeled data we collect from the internet is also human play. In our original submission we wrote, “Because human contractors reach most relevant parts of the state space, we can hold the IDM fixed throughout BC training.”\n\n“I feel this paper is solving a challenging problem and I give an Accept, but it also requires lots of manually designed data collection and data filtering, and the inverse dynamics model and policy learning part seem not very novel.”\n - Manually designed data collection and data filtering. We did not do anything out of the ordinary or arduous here, and we do not think that collecting data should count against a method, especially when the data collection was not complicated. Certainly we would also prefer it if one were able to obtain results as good as ours without any data collection; however, as far as we are aware it is nigh impossible to do so with RL or any other method. Finally, we are open sourcing all of the data we did collect for use in future projects.\n - Again we do not claim that our inverse dynamics model or policy learning are novel in isolation, but we do think that their combination with easy-to-collect contractor data is novel and effective.\n\nWe are also curious about your ratings for the soundness (2 out of 4). Could you elaborate why you think our experiments are unsound?\n", " I agree that this paper solves a challenging task. But I am not convinced that the inverse dynamics model is a novel technical contribution based on your reply `Our method aims to unlock this data for use in constructing behavioral priors with a simple method: labeling it with an inverse dynamics model. To the best of our knowledge, we believe that this method is novel.` Why do you think the inverse dynamics model is novel?\n\nDoes the `small amount of labelled data` already see all the actions, observations, and objects? 
What if the actions, observations, and objects are out of the training distribution (not seen during training)?\n\nI feel this paper is solving a challenging problem and I give an Accept, but it also requires lots of manually designed data collection and data filtering, and the inverse dynamics model and policy learning part seem not very novel. \n\n", " Thank you for increasing your score! You mention that we only address some of your concerns; which concerns do you still have? We would be happy to discuss them further with an eye towards understanding where the remaining gap lies between your score and the scores of other reviewers.\n\nIn our updated draft we posted Tuesday, at your suggestion we cite _GATO_ in the Introduction, and we cite _LID_ and _Can Wikipedia Help Offline Reinforcement Learning?_ in the Discussion section. We already had cited _CLIP_ in the original submission, and we felt that _DALL-E_ was not relevant enough to cite in this work because arguably we already cite stronger cases showing that “more labeled data is better” such as _GPT-3_. We would love to hear if you think we could improve our discussion of these works or think otherwise in the case of _DALL-E_.", " Thanks for the answers, there weren't major concerns anyways and I am happy that the authors have further elaborated on some of the design decisions. I believe this paper would be influential and should be highlighted at the conference. ", " The response addressed some of my concerns. It would be good if they included the discussion of the mentioned related works in their final version.\n", " Thank you for your review and for noting that our results are impressive and recognize that no prior work has been able to match our results in the difficult domain of Minecraft. We’ll first respond to your largest criticism around novelty and technical contribution and then address the remaining more minor points. Your overall review of our work is quite different from that of the other reviewers, so we are interested to better understand and address the limitations you feel the work has. We are glad to hear that you are open to changing your mind about your score for the paper and engaging with us and the other reviewers in the review process.\n\nThe point of this paper is not that more data = better performance, which as you point out has been validated many times over in multiple domains. The point of this paper is that for domains like Minecraft or e.g. computer usage, a huge amount of data exists but it does not have action labels, so it is impossible to train a behavioral prior using it without additional scientific tools. Our method aims to unlock this data for use in constructing behavioral priors with a simple method: labeling it with an inverse dynamics model. To the best of our knowledge, we believe that this method is novel, but if you could point us to a paper that uses the same method we would love to read it and cite it.\n\n“This paper did not cite works that are quite strongly related to this paper”\n - LID and Can Wikipedia Help Offline Reinforcement Learning? - In our opinion these papers are only somewhat related in that they use a pretrained model to solve reinforcement learning tasks. They do so by hypothesizing that language models are good priors for non-language sequence tasks and then embed observations from the RL environment in a way the language model can accept. 
While this does not solve the issue that VPT solves (which again is to unlock unlabeled data for use in pretraining behavioral priors), they could potentially complement our method by initializing the VPT foundation model or IDM with a reasonable representational prior. However, even if we did this we would still have to label the unlabeled internet data in order to train on it and create a behavioral prior, which is why we feel these papers are not strongly related. Thus we feel our paper is distinct and an essential independent contribution. We’ll add a citation for these works and describe how they are related and can be combined in the Discussion section of our work. Thank you for the suggestion!\nGATO. This paper was only released 7 days prior to the Neurips submission, and was not peer reviewed at the time. More importantly, we believe it is not directly related to the main contribution of our work. For all control tasks, GATO only trains on labeled data. The point of our work is not to prove that more labeled data is better, but rather can we train an effective behavioral prior from a large amount of unlabeled data. We will add a citation to this paper in the introduction.\n\n“The results highly depend on the policy trained on the small amount of labeled data. If this policy is not good enough, it cannot provide reliable predictions for the unlabelled data.”\n - We agree this is a central hypothesis of our paper (that a small amount of data is sufficient for training an IDM accurate enough to label a large unlabeled dataset sufficiently well such that BC-training on that labeled data produces good behaviors). We feel the zero-shot and fine-tuning results, including those with reinforcement learning that produced unprecedented new capabilities for computer agents in Minecraft, confirm that this hypothesis is true. Moreover, we provide experiments showing just how little labeled contractor data may be required in our domain (Section 4.6).\n\n“The unlabelled video data cannot be used directly. The authors still need carefully design the filtering procedure to ensure the data are clean enough for training a better policy.”\n - Good question! We’ve ablated the data cleaning process at request of another reviewer. That being said, we tried to limit ourselves to simple data cleaning steps and actually we did not iterate on the data cleaning process over the course of the project; it was the first process we tried. We will make this more clear in the paper text. Also please see Appendix A.2.3 of the updated text if you are interested in the effect of data cleaning. We found that it is indeed beneficial to clean the data in this way; however, again this was a fairly simple data cleaning process that we did not tune.\n\nThank you for your thoughtful commentary. We hope we have addressed your concerns and–as you said– you are open to changing your score!\n", " Thank you for your thoughtful review. We’re glad that you felt our work was well executed, well presented, and has potential for generalizability to other domains! We will respond to your criticisms and questions in turn.\n\n“The described method may currently be limited in its generality - prediction of inverse dynamics is a difficult problem in its own right, and is somewhat simplified by the world studied by the authors - though this is a compelling direction for further research.” and “The authors could certainly elaborate further on the generalisability of the methods presented - how important is it that e.g. 
screen resolution and settings match up so closely? To what extent could we hope to learn an inverse kinematics model on a more general or noisy distribution of data?”\n - We only experiment in a domain where there is a wealth of first person (and therefore aligned to how the agent would act) demonstrations. As we note, there are many other domains where it would be possible to get this type of data (e.g. computer usage). The videos of Minecraft in our dataset that were uploaded to the internet do not have consistent screen resolutions and graphic settings, so our work already provides some evidence that VPT can work in noisy settings. We agree it is an extremely interesting line of future research to try VPT or similar methods in even noisier, more general, or even unaligned domains. We will add more discussion of this into the Discussion section. Thank you for the suggestion!\n\n“It would be nice to see a little more depth in the results and conclusions section, and more discussion of future work.”\n- We will add more, and have received some great suggestions from you and the other reviewers!\n\n“A little more detail about the architectures and models in the main body of the paper could be welcome.”\n - These were not novel nor the driving force behind the success of the method, and unfortunately we’re already at the limit of the NeurIPS space constraints so we won’t be able to move these details to the main paper body.\n\nNits - thank you! We’ll fix as many of these as we can in the allotted time.\n", " Thank you for your thoughtful review. We’re glad that you agree that this framework could be paradigm-shifting for the RL community! We’ll respond to each of the weaknesses and questions you raise in turn.\n\n“Is IDM the way to learn from contractor videos, we discard a ton of info. Similarly, is predicting only actions in VPT optimal”\n - Great question. Behavioral prior’s must predict actions by definition, otherwise they wouldn’t be able to act in the environment (this is in contrast to what we call \"representational priors\", such as might be achieved by predicting future frames). In this work we wanted to focus on constructing behavioral priors during large-scale pretraining, and we found that action prediction was sufficient to obtain our results. Due to the large design space of auxiliary objectives, e.g. next frame prediction as you suggest, image auto-encoding, video auto-encoding, time contrastive losses, CLIP style losses versus closed captions, etc, we felt that investigating all of these were outside the scope of this already rather beefy paper. That being said, we agree it is a fascinating direction for future work, and we will add some discussion around this in the Discussion section.\n\n“PPO seems to be a legacy choice as well”\n - We actually use the PPG (Phasic Policy Gradient) algorithm, which is based on PPO. We decided to use an \"off the shelf\" RL algorithm like PPG because the specific choice of RL algorithm was not the focus of this paper. RL is just one way to fine-tune such a behavioral prior (we also discuss behavioral cloning fine-tuning in the paper). Using a common and well understood RL algorithm makes it more clear that our results are due to the VPT prior and not due to a more complex RL algorithm. For these reasons we felt that investigating off-policy, model-based, or other RL algorithms was outside the scope of this paper. 
That being said, we absolutely agree that this is an interesting direction for future research, and we will note as such in the Discussion section as an potentially fruitful avenue for future research.\n\n“Reward functions are very carefully human designed to get the right behaviour”\n - The purpose of our work was not to introduce a general reward function, but rather to introduce a method to generate a prior that can be then used to help solve downstream tasks. Our reward function, while hand-engineered, is still extremely sparse making it a good benchmark for VPT, and as we show in the paper it is such a difficult reward function to optimize that RL from scratch can make almost no progress on it. Furthermore, we actually spent very little time tuning the reward function as it is quite a natural reward function in Minecraft (rewarding items in the technology tree path on the way to the target item). That being said, we absolutely agree that works attempting to construct more general reward functions are extremely promising and we’ll note this in the Discussion section, e.g. there was work released after the neurips deadline called MineDojo that does exactly this and could be complementary to VPT.\n\n“The decreasing zero-shot performance with increasing model scale seems puzzling. Why do you think that happens to be the case? Are the model under-trained?”\n - Great question! We were also puzzled by this and already provide some discussion around this in Appendix Section H. We see there that larger models (and therefore models with lower loss) have worse zero-shot performance; however, they do perform better when fine-tuned to our contractor_house dataset. While that analysis is not definitive, it points to the possibility that our models start to overfit to some visual peculiarities in the internet dataset that are not present in our environment, but then when presented with data from our environment they can quickly update this high frequency features. We’ll make sure we point more directly to this discussion from the main body of the paper..\n\n“How were the optimal model/data/compute scales determined? Were these ad-hoc choices or did you refer to Kaplan or Chinchilla scaling laws?”\n - We were inspired by these works but due to compute limitations we were unable to run comprehensive studies to fit scaling laws for this domain. For model size, we noticed that the 248M parameter model at some point during training started to go into a less compute efficient regime (shallower slope on a loss versus compute log-log plot), so we decided to train the 0.5B parameter model. We tried to tune model size to the dataset size we had such that we could use all of it, and we show a variety of datascale ablations throughout the paper. We will update the text to make this more clear.\n \n“The limitations should point out that a VPT like recipe would only work on RL domains where large-scale unlabelled data is accessible.”\n - We’ve changed some language that hopefully makes this more clear. Thank you for the comment!\n", " Thank you for your thoughtful review. We’re glad you think that our method is novel, simple, and powerful, our results are impressive, and that the paper is well written! We will respond to each of your suggested ablations below, and we hope that after we include some of your suggested ablations along with additional clarification you will consider increasing your review score.\n\n“On the effect of pretraining. From figure 9 we know that IDM can be quite data efficient. 
So what if the IDM is only trained on the house building contractor data, and then train BC on the labeled early game data as well as the house building data? This may tell us whether we need the \"pretraining stage\" in the paper.”\n - Figure 5 (right) shows an ablation where we show fine-tuning performance as we vary the amount of pretraining data used. The x-axis ranges from 0 epochs to 30 epochs of pretraining. The leftmost point (0-epochs) is the same as training from scratch on the target dataset (i.e. on the Early Game or Contractor House Building datasets without pretraining). Performance is lower across the board when doing no pretraining (0-epochs leftmost point) compared to full pretraining (30-epochs rightmost point), which is evidence that pretraining is helping.\n\n“On the effect of data filtering. The paper states that \"With enough data, a large enough model, and enough training compute, a BC model trained on both unclean and clean videos would likely still perform well\" - is there any concrete evidence on how much data filtering is worth in terms of model capacity, data size, and training budget?”\n - Great question and we are interested as well. We’ve run an ablation comparing models trained on cleaned and uncleaned early game data (see Appendix section A.2.3 of the updated text). We ran this ablation in the regime with equivalent compute, so for the cleaned dataset which had about ~2000 hours of data we ran for 20 epochs, and for the uncleaned dataset which had about ~14000 hours of data we ran 2.8 epochs. In this regime we found it very beneficial to clean the data, which yielded a ~10x improvement in crafting of crafting tables and evidence of wooden item crafting versus none at all for the model trained on unclean data. We hope this provides some intuition as to how useful our data cleaning pipeline was.\n\n“On the sequence length of IDM. There is a 3D Conv to aggregate nearby frames (similar to frame-stacking). Why do we still need a long sequence (128) to train the IDM? Will the performance drop significantly with smaller sequence lengths?”\n - Great question. At your suggestion we ran an experiment where we mask the attention matrix in the transformer layers such that only bands of varying numbers of neighboring frames (from 0 to 64) are included in the attention. So for instance, 0 neighbors would be the case where each transformer layer cannot attend to any neighboring frames and they are equivalent to MLPs with residual connections. We found that only the case of 0 neighbors was far worse than the architecture we use in the paper, and that each other configuration we tested (2-64) were comparable to the full 128 unmasked IDM used in the paper. We’ll add these results to the appendix.\n\n“On the model scalability of IDM. Figure 19 provide some scalability analysis of IDM on the \"full\" dataset. However, according to Figure 9, we do not need such large amount of data. So is such a large IDM still necessary? Some scalability analysis on smaller contractor dataset may be illustrative.”\n - Figure 19 shows model scaling analysis of the VPT foundation model, not the IDM. We did not make this entirely clear from the caption and will update it to make it more clear.\n - As to IDM model scalability, IDM training was a very small portion of the overall compute used in this project compared to foundation model pretraining. For this reason we did not investigate IDM model size thoroughly and only did minor tuning for accuracy and loss. 
On the other hand, IDM data scaling was much more important to us, which is why we provide ablations for this in Figure 9. We’ll add a comment saying as much in the paper to make all of this more clear.", " This paper introduces Video Pre-Training (VPT), which is a new algorithm to learn foundation models for complex, long-horizon embodied agents in the popular Minecraft game. First, an inverse dynamics model is trained with human contractor data in Minecraft to predict keyboard and mouse actions from videos. Second, the inverse dynamics model is used to process large amounts of YouTube videos to obtain noisy action labels. Third, an agent imitates from the human players in YouTube by behavior cloning on the predicted actions. Finally, the policy can be further finetuned with RL on challenging tasks. # Strengths\n\n* Novelty: this paper proposes to use an inverse dynamics model learned from human contractor data to automatically label much larger amounts of YouTube data. The idea is simple, powerful, and novel. \n* Performance: the VPT foundation models exhibit impressive zero-shot behaviors. The VPT model fine-tuned through RL is able to craft diamond tools, which is an extremely long-horizon task that takes up to 24,000 actions. Minecraft is a very challenging domain, so the performance is impressive and solid. \n* It is surprising that keyboard and mouse action space can be learned effectively and works quite well. This is a promising direction for solving other challenging video games. \n* This paper is well-written and easy to follow. \n\n# Weaknesses\n\nWhile the final performance of VPT is impressive, I hope to see more ablations studies: \n\n1. On the effect of pretraining. From figure 9 we know that IDM can be quite data efficient. So what if the IDM is only trained on the house building contractor data, and then train BC on the labeled early game data as well as the house building data? This may tell us whether we need the \"pretraining stage\" in the paper. \n\n2. On the effect of data filtering. The paper states that \"With enough data, a large enough model, and enough training compute, a BC model trained on both unclean and clean videos would likely still perform well\" - is there any concrete evidence on how much data filtering is worth in terms of model capacity, data size, and training budget? \n\n3. On the sequence length of IDM. There is a 3D Conv to aggregate nearby frames (similar to frame-stacking). Why do we still need a long sequence (128) to train the IDM? Will the performance drop significantly with smaller sequence lengths? \n\n4. On the model scalability of IDM. Figure 19 provide some scalability analysis of IDM on the \"full\" dataset. However, according to Figure 9, we do not need such large amount of data. So is such a large IDM still necessary? Some scalability analysis on smaller contractor dataset may be illustrative. The questions are listed in the \"Weakness\" section above to ask for more ablations. The authors have addressed the limitations adequately. ", " This paper introduces Video Pre-training (VPT), a foundation model pre-trained on large-scale unlabelled videos in Minecraft along with a small amount of human demonstrations. The small amount of data is used to train an inverse dynamics model that can identify the action (M&K inputs directly which is quite neat) taken between frames, and can be used to relabel the larger unlabelled subset. 
VPT shows some promising zero-shot behaviour, but its performance is significantly improved by fine-tuning with Reinforcement Learning. \n\nThe experiments in this paper are of an unprecedented scale in RL, performed on a popular/relevant domain of Minecraft where the range of possible capabilities are numerous, and has large implications for the RL field in general. More generally, this paper points towards a unifying scale and fine-tune paradigm for RL. A large scale foundation model learns a \"common sense\" layer for the agents, and the agents can be later on made goal-directed by fine-tuning on a relevant reward function. The bitter lesson strikes again as we see that to get to agents with such general capabilities, all we needed was a clean large-scale dataset and simple/existing methods like IDM and PPO sufficed. Strengths: \n* The experiments in this paper are of an unprecedented scale in RL, performed on a popular/relevant domain of Minecraft where the range of possible capabilities are numerous, and has large implications for the way RL field in general. More generally, this paper points towards a unifying scale and fine-tune paradigm for RL. \n* It uses the native mouse and keyboard interface for its agents, thus the results in this paper should translate well to other domains as well. \n* The trained agents are able to demonstrate very sophisticated capabilities such as crafting diamonds which seemed quite out of reach for AI agents until now. \n\nWeaknesses: \n* Because of the scale of experiments, the design choices are not carefully ablated. Is IDM the way to learn from contractor videos, we discard a ton of info. Similarly, is predicting only actions in VPT optimal (how about tokenized frame predictions)? PPO seems to be a legacy choice as well, and we don't get to see how it performs other off-policy or model-based methods. \n* Reward functions are very carefully human designed to get the right behaviour. This seems antithetical to the scaling paradigm, at internet scale, it should be possible to infer reward functions directly. * The decreasing zero-shot performance with increasing model scale seems puzzling. Why do you think that happens to be the case? Are the model under-trained? \n* How were the optimal model/data/compute scales determined? Were these ad-hoc choices or did you refer to Kaplan or Chinchilla scaling laws?\n\n The limitations should point out that a VPT like recipe would only work on RL domains where large-scale unlabelled data is accessible. It should also point to the reliance of carefully designed reward functions. ", " In this paper, the authors use ~2,000 hours of hand-labeled data to train an Inverse Dynamics Model (IDM) to predict the mouse and keyboard actions from raw video of Minecraft games. This model benefits from using both the future and past video frames when computing per-frame predictions (i.e. “acausality”).\n\nThe authors then use this \"IDM\" to predict the mouse and keyboard action labels for 70,000 hours of raw unlabelled video of Minecraft games sampled from the internet. The authors call the labels predicted by the IDM on this data “pseudo-labels”.\n\nGiven the 70,000 hours of pseudo-labelled gameplay, the authors train a model to predict future actions from past actions using Behavioural Cloning (BC). 
This model is referred to throughout the work as a Video PreTraining, or VPT model.\n\nThe authors show that their VPT model already achieves “non-trivial zero shot performance”, and has thus successfully learnt a prior over the behaviour distribution. To achieve their strongest results, the authors then incorporated this VPT into two further downstream experiments: fine tuning using more specialised datasets covering a specific desired behaviour, and applying reinforcement learning.\n\nThe authors found that factoring the problem into two parts: an inverse dynamics model to predict pseudo labels, and a model for behavioural cloning, were key to achieving good performance. The VPT model trained by the authors using this technique is able to provide a very strong baseline behavioural prior, and further fine tuning allows the authors to obtain very compelling results in Minecraft gameplay that appear far beyond the current capabilities of RL to learn from scratch. Perhaps the most impressive demonstration of results is that the agent was able to create \"diamond tools\", which require tens of thousands of complex actions be taken by the agent.\n Strengths:\n* The work is very well-executed and demonstrates very impressive results on a challenging learning task, namely Minecraft with the full action space. The excellent performance through training a large behavioural prior is the paper's main novelty, and this largely results from the author's careful execution, and careful application of learning methods. This also clearly required resolving numerous technical challenges, which are mostly enumerated in depth in the supplemental material.\n* The outlined method could be generalised to other learning tasks, unlocking the potential of large amounts of unlabelled video data that is increasingly available. This would not be without challenges (inverse dynamics is hard), but the approach seems promising and worthy of further study.\n* The paper is clearly written, making it easy to follow and understand the work. The level of detail is appropriate across the sections, and the supplemental material appears to contain sufficient details required to reproduce the work.\n\nWeaknesses:\n* The described method may currently be limited in its generality - prediction of inverse dynamics is a difficult problem in its own right, and is somewhat simplified by the world studied by the authors - though this is a compelling direction for further research.\n* It would be nice to see a little more depth in the results and conclusions section, and more discussion of future work.\n* A little more detail about the architectures and models in the main body of the paper could be welcome.\n * Deep in the appendix it is mentioned that the BC model and the IDM are identical. This information would be useful in the main body of the paper. i.e. - “The behavioural cloning model architecture is the same as the IDM architecture described in Appendix 971 D.1 except that we modify the architecture so that it is causal”. Minor nit: in the appendix, it is written “residual transformer layers”, where I believe these are referred to as attention layers in the main body. 
(attention layers seemed clearer.\n\n* Minor nit: I didn’t really understand figure 3 (right) - what is being compared exactly here?\n\n* Minor nit: IDM Architecture [appendix] - there’s a long wordy description of the model, but maybe a picture or table could be nice?\n\n* Minor nit: Figure 13 (and others) in the appendix appear almost “washed out” :) The authors could certainly elaborate further on the generalisability of the methods presented - how important is it that e.g. screen resolution and settings match up so closely? To what extent could we hope to learn an inverse kinematics model on a more general or noisy distribution of data? \n\nFor example, it seems unlikely to me that this could currently work on real-world video (which would be an exciting prospect for e.g. autonomous-driving). Generally, the \"results and conclusions\" paragraph could benefit from more thought and further deeper discussion.", " In this paper, the authors use a small amount of labeled data and a large amount of unlabelled data for solving Minecraft tasks. The authors aim to build a general-purpose framework for sequential decision-making tasks. The model has zero-shot generalization ability. (+) The results are impressive. Minecraft tasks are challenging. This paper shows a promising way to solve such tasks.\n\n(-) The idea is not novel. The action is predicted based on a set of historical observations in Eqn 1. Even though the method part has three stages as shown in Fig2, the technical contribution is limited in the method part.\n\n(-) This paper did not cite works that are quite strongly related to this paper, such as [1], [2], and [3].\n[1] Pre-Trained Language Models for Interactive Decision-Making\n[2] A Generalist Agent\n[3] Can Wikipedia Help Offline Reinforcement Learning?\n\n Using large models and massive data always seems helpful for better results and zero-shot generalization. This has become an obvious \"fact\" based on recent papers, such as DALLE [4], CLIP[3], LID[1], and GATO[2]. So what are the main contribution or novelty left after excluding the large models and massive data part? \n\nThis is a good paper, trying to solve a challenging problem. Minecraft tasks are challenging, and there are no methods that can solve such challenging tasks. However, if the good performance mostly comes from the usage of large data and large models, this paper is not convincing enough to be accepted. If the authors could provide more evidence of what makes this paper different from other decision-making works in terms of the method part (except for the large data part), I would change my mind.\n\n[3] Learning Transferable Visual Models From Natural Language Supervision\n[4] Hierarchical Text-Conditional Image Generation with CLIP Latents\n (-) The results highly depend on the policy trained on the small amount of labeled data. If this policy is not good enough, it cannot provide reliable predictions for the unlabelled data. \n\n(-) The unlabelled video data cannot be used directly. The authors still need carefully design the filtering procedure to ensure the data are clean enough for training a better policy.\n\n(-) This is a good paper with impressive results, but it seems the novelty is limited and not a good match for NeurIPS. " ]
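The three-stage recipe debated throughout this thread — train an inverse dynamics model (IDM) on a small labeled set, pseudo-label a large unlabeled video corpus with it, then behavior-clone a causal policy on those labels — can be summarized schematically. This is a hedged sketch rather than the paper's implementation: `idm`, `policy`, and both loaders are hypothetical placeholders, and the real models are large transformers acting over native mouse-and-keyboard action spaces.

```python
import torch
import torch.nn.functional as F

def train_idm(idm, labeled_loader, steps, lr=1e-4):
    # Stage 1: supervised IDM training on the small contractor dataset.
    opt = torch.optim.Adam(idm.parameters(), lr=lr)
    for _, (frames, actions) in zip(range(steps), labeled_loader):
        # frames: (B, T, C, H, W); the IDM may attend to past AND future frames.
        logits = idm(frames)                       # (B, T, num_actions)
        loss = F.cross_entropy(logits.flatten(0, 1), actions.flatten())
        opt.zero_grad(); loss.backward(); opt.step()

def pseudo_label(idm, unlabeled_loader):
    # Stage 2: label the large unlabeled corpus with IDM predictions.
    for frames in unlabeled_loader:
        with torch.no_grad():
            actions = idm(frames).argmax(dim=-1)   # pseudo action labels
        yield frames, actions

def behavior_clone(policy, idm, unlabeled_loader, lr=1e-4):
    # Stage 3: behavioral cloning on the pseudo-labeled data.
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    for frames, actions in pseudo_label(idm, unlabeled_loader):
        logits = policy(frames)                    # policy is causal (past only)
        loss = F.cross_entropy(logits.flatten(0, 1), actions.flatten())
        opt.zero_grad(); loss.backward(); opt.step()
```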
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 8, 9, 9, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 4, 4, 4, 5 ]
[ "1NpQoYJQ5TD", "1Igt61qqri9", "5yP34P1JEx", "sTrUMktaBh9", "6t0iuUM7bQW", "155ufQFqd5L", "wSuC-co5iWu0", "iB75rCF-ZsQ", "1xlO8Wd1BVe", "EkPy2nZxVB0", "nips_2022_AXDNM76T1nc", "nips_2022_AXDNM76T1nc", "nips_2022_AXDNM76T1nc", "nips_2022_AXDNM76T1nc" ]
nips_2022_se2oxj-6Nz
Rethinking Image Restoration for Object Detection
Although image restoration has achieved significant progress, its potential to assist object detectors in adverse imaging conditions has received little attention. It is reported that existing image restoration methods cannot improve object detector performance and sometimes even reduce detection performance. To address the issue, we propose a targeted adversarial attack in the restoration procedure to boost object detection performance after restoration. Specifically, we present an ADAM-like adversarial attack to generate pseudo ground truth for restoration training. The resulting restored images are close to the original sharp images and, at the same time, lead to better object detection results. We conduct extensive experiments on image dehazing and low-light enhancement and show the superiority of our method over conventional training and other domain adaptation and multi-task methods. The proposed pipeline can be applied to all restoration methods and to both one- and two-stage detectors.
Accept
In this paper, the authors provide an interesting formulation of an adversarial attack that can directly help object detector training in the presence of various degradations. This is a departure from the usual formulation of restoration followed by detector training. I liked the initial derivation, which is elegant and logical, and their experimental results show that mAP is clearly improved over baseline training. 2 of the 3 reviewers supported acceptance (with one being a strong accept). The third reviewer felt that the experimental improvements were insufficient. While I agree that the mAP improvements are not in the "wow" category, I still think the method is solid (as also acknowledged by the third reviewer) and worthy of acceptance.
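The abstract and meta review describe a targeted adversarial attack that perturbs images so a frozen detector performs better on them, with the perturbation clipped to a small budget (the author responses below quote a setting of delta=2/255, lambda=1/255, T=2). The following is a minimal signed-gradient sketch of that idea under stated assumptions: `detection_loss` is a placeholder for the detector's loss against ground-truth boxes, and the paper's actual ADAM-like update is a refinement of this plain iteration, not reproduced here.

```python
import torch

def targeted_attack(image, detection_loss, delta=2/255, lam=1/255, T=2):
    """Iteratively perturb `image` to REDUCE the detector's loss, keeping the
    perturbation within an L-infinity budget `delta` (cf. the clipping the
    responses attribute to Eq. 10). `image` is a float tensor in [0, 1]."""
    x = image.clone()
    for _ in range(T):
        x.requires_grad_(True)
        loss = detection_loss(x)                       # placeholder detector loss
        grad, = torch.autograd.grad(loss, x)
        with torch.no_grad():
            x = x - lam * grad.sign()                  # descend the loss
            x = image + (x - image).clamp(-delta, delta)  # stay within budget
            x = x.clamp(0.0, 1.0)                      # stay a valid image
    return x.detach()
```

The attacked images then serve as pseudo ground truth when fine-tuning the restoration network with an ordinary reconstruction loss, as the abstract describes.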
train
[ "fuwsUiydg5-", "kpzunnZWqbj", "vdpV8Aq1Kws", "6wqFuXzs6PP", "LZti962MB2V", "3Yc2UowHJ-W", "jG30-yMoGOB", "QgGGjBgKvC", "ASxWbk6izDi", "QH5P1a8fRw1", "4uGrwtiFq-w" ]
[ "author", "official_reviewer", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " Thank you for your issue. But we still want to emphasize the universality of our method. Our algorithm can extend to most of the restoration networks and detection networks. For most restoration tasks, e.g., haze removal (Table 2), low-light enhancement (Table 3), and the case with multiple degradations (the table responded to Reviewer 6L2X), fine-tuning the restoration network can improve the performance of detection networks. Especially when multiple kinds of degradation exist, the improvement of mAP achieves better results by up to 1% as shown in the table below. We also admit that our method can be improved in terms of performance, therefore, we will focus on better learning the difference between the images before/after performing the attack from pseudo-ground truth as suggested. \n\n* The table below shows the experiment when two kinds of degradation exist, i.e. low light and haze. The restoration and detection methods are NDNet [Reference 2] and YOLOv3 respectively. \n\n\n||no restoration|Conventional Training|TOG|Ours|\n| :- | :- | :- | :- | :- |\n|PSNR|11.92|25.53|25.35|25.34|\n|SSIM|0.5375|0.8603|0.8564|0.8576|\n|mAP(%)|53.01|69.55|70.11|70.50|", " Thank you so much for the response.\n\nAlthough I've read the feedback from the authors, my main concern remains that the proposed method's performance gain is limited on the test set. For instance, Table 2 shows only 0.14 and 0.08 mAP improvements on VOC_fog_test and RTTS from TOG for YOLOv3+MSBDN. Moreover, there are 0.43 and 0.04 improvements on VOC_fog_test and RTTS for YOLOv3+GridDehaze. We can also say the same thing for the result in Table 3 and PSNR/SSIM values.\nThese facts do not indicate that the proposed method is sufficiently effective.\n\nThus, the authors mention in the response that \"we are the first to explore the problem and prove an effective way to address the problem. Our algorithm and pipeline are theoretically proven to work for all restoration methods for different types of degradation, and detectors of both single and two stages\", but this statement seems overclaimed.\n\nI agree that the proposed approach is interesting and promising, but its effectiveness is not supported well by the experiments and their results.", " Dear Reviewers, the revised manuscript is uploaded. If you have any other concern, please feel free to raise it. Thank you sincerely for your effort and attention.", " \n``Q-4:`` What happens if the images have multiple degradations will the proposed method still applicable\n\n``A-4:`` To evaluate the proposed algorithm on multiple degradations, we train and test a nighttime dehazing algorithm NDNet [Reference 2], which can simultaneously remove haze and brighten images. For training and testing, we simulate a dark foggy version of VOC, dubbed VOC\\_fog\\_dark, including the VOC\\_fog\\_dark\\_train set and VOC\\_fog\\_dark\\_test set. Then, we experiment on the new dataset and the results are shown below. Our method is applicable when two kinds of degradation exist.\n\n* The table below shows the experiment when two kinds of degradation exist, i.e. low light and haze. The restoration and detection methods are NDNet [Reference 2] and YOLOv3 respectively. 
\n\n\n||no restoration|Conventional Training|TOG|Ours|\n| :- | :- | :- | :- | :- |\n|PSNR|11.92|25.53|25.35|25.34|\n|SSIM|0.5375|0.8603|0.8564|0.8576|\n|mAP(%)|53.01|69.55|70.11|70.50|\n\nIn addition, if we consider noise as a natural degradation, most of the existing dehazing and low-light enhancement algorithms can be regarded as handling multiple degradations (e.g., haze & noise, low-light & noise) in one go. Therefore, if we consider image dehazing and low-light enhancement in this way, our proposed method could be seen as applicable for multiple degradations.\n\n[Reference 1] De Rosal Igantius Moses Setiadi. 2021. PSNR vs SSIM: imperceptibility quality assessment for image steganography. Multimedia Tools Appl. 80, 6 (Mar 2021), 8423–8444. https://doi.org/10.1007/s11042-020-10035-z\n\n[Reference 2] Jing Zhang, Yang Cao, Zheng-Jun Zha, and Dacheng Tao. 2020. Nighttime Dehazing with a Synthetic Benchmark. In Proceedings of the 28th ACM International Conference on Multimedia (MM '20). Association for Computing Machinery, New York, NY, USA, 2355–2363. https://doi.org/10.1145/3394171.3413763", " \n``Q-1:`` L129 uses the math symbol D(x), while L131 utilizes D[R(x)]. It would be better to use a unified symbol. Similar typos are also present in equations 2, 3, and 5.\n\n``A-1:`` Thank you for kindly remind us of the typos. We will revise them. \n\n``Q-2:`` In the sentence “where $\\hat{x}^{}=arg min_{x} ||\\hat{x}^{}-\\hat{x}||$” of equation 6, the symbol $\\hat{x}^{*}$ have two different meanings, i.e., a determined value on the left and a variable on the right.\n\n``A-2:`` Thank you for kindly remind us of the typos. We will revise them.\n\n``Q-3:`` In the captions of Tables 2 and 3, the word “ddetection” is a typo.\n\n``A-3:`` Thank you for kindly remind us of the typos. We will revise them.\n\n``Q-4:`` How to determine the initialization of the Number of attack iterations T, Update stepsize $\\lambda$, and the Magnitude tolerance of Perturbation $\\delta$?\n\n``A-4:`` Thanks for the valuable question. We conduct a group of experiments as shown below about determining the coefficients of adversarial attack. The setting is elaborated in the caption. \"PSNR\" and \"SSIM\" are used to reflect how attacked images look different from original ones in terms of visual quality. \"mAP\" reflects how successfully images are attacked. We can find that $\\delta$ heavily affect visual quality. A larger $\\delta$ yields worse visual quality, but better detection. For a better visual quality, we choose $\\delta=2/255$ where detection performance is sufficiently good. Regarding the pair of $\\lambda$ and $T$, $\\lambda=\\frac{\\delta}{T}$ is sufficient to generate good enough adversarial examples. A larger $T$ costs more time for adversarial attack. In our work, we thereby use $(\\delta,\\lambda,T)=(2/255,1/255,2)$.\n\n\n* The following table shows the experiment of selecting Number of attack iteration $T$, Update stepsize $\\lambda$, Magnitude tolerance of Perturbation $\\delta$ on a subset of VOC\\_fog\\_test with 100 images randomly selected. The object detector used for attack and subsequent detection is YOLOv3. The attack algorithm is ours. 
The adversarial examples are generated, on which \"mAP\" is computed, and \"PSNR\" and \"SSIM\" are computed between the examples and original images.\n\n|$\\delta$|($\\lambda$, $T$)|1/255,2|1/255,4|1/255,8|0.5/255,4|0.5/255,8|0.5/255,16|0.1/255,20|0.1/255,40|0.1/255,80|\n| :- | :- | :- | :- | :- | :- | :- | :- | :- | :- | :- |\n||PSNR|49.16|47.72|47.40|49.21|47.60|47.39|49.31|48.02|48.49|\n|2/255|SSIM|0.9907|0.9869|0.9859|0.9908|0.9866|0.9859|0.9910|0.9879|0.9893|\n||mAP(%)|94.50|94.62|94.59|94.04|94.46|95.54|95.92|96.78|96.87|\n||PSNR|46.57|44.64|42.54|46.58|44.47|42.57|46.67|44.96|44.24|\n|4/255|SSIM|0.9831|0.9739|0.9582|0.9831|0.9728|0.9586|0.9835|0.9758|0.9717|\n||mAP(%)|95.20|95.96|95.71|95.43|96.46|96.74|96.84|97.99|97.78|\n", " \n``Q-4:`` There are several unclear points in this paper. For instance, what is \"Y\" in Tables 2 and 3? Also, what do you mean by \"detection\" in the caption of Table2? Finally, the authors mention that the mAP of YOLOv3 decreases by nearly 15\\% from that of clean images in lines 227 and 229, but where can we see the result of the clean images?\n\n``A-4:`` Thank you for the request of clarification. \"Y\" in Tables 2 and 3 means YOLOv3. A single \"Y\" means to directly perform detection by YOLOv3. We will clarify this in the revised paper.\n\nThe mentioned mAPs of YOLOv3 are 81.91\\%-66.88\\%=15.03\\% for VOC\\_fog\\_test, which means YOLOv3's performance decreases by 15.03\\% before and after adding synthetic haze on the clean images of VOC\\_fog\\_test. Similar case for 14.11\\% on VOC\\_dark\\_test.\n\nWe will carefully proofread the manuscript and revise all the typos in the revised paper.\n\n``Q-5:`` It is somewhat unclear what is conducted and is the purpose of the experiment in Section 4.2.1. Please elaborate on it.\n\n``A-5:`` Section 4.2.1 evaluates whether adversarial perturbation can boost detection performance in common circumstances. Our experiment is based on the following hypothesis: only if the adversarial perturbation improves detection performance can it be used for fine-tuning restoration networks. \n\nTherefore, we conduct experiments to firstly show that TOG can make adversarial examples on which detection accuracy increases (e.g., from 42.77\\% to 71.98\\% on RTTS) only with a very small perturbation added. Compared to TOG, our ADAM-like algorithm further increases mAP (to 78.50\\%). So intuitively, dehazing model learning from pseudo ground truth generated by ours can recover images with better downstream detection performance, which is demonstrated in Sections 4.2.2 and 4.2.3 in the manuscript. ", " \n``Q-1:`` Although the proposed approach is promising and interesting, my main concern is that the proposed method does not show good performance when image restoration models are trained on the pseudo samples of the training set and are evaluated on the test set. This seems natural because the difference between the images before/after performing the attack is slight, and thus it is difficult to learn.\n\n``A-1:`` We agree that \"the difference between the images before/after the attack is slight\" and it's hard to learn. But we respectfully disagree that our method \"does not show good performance\" because it is challenging to improve detection performance and maintain good visual quality by only training a restoration network.\n\nIn previous works, the performance gain of detection mainly comes from training detectors to adapt to adverse imaging environments. 
For example, the restoration subnetwork is trained together with a detector in both DSNet [15] and IA-YOLO [23]. However, traditional restoration models generate many artifacts, as shown in Figure 6 of the manuscript. By removing the restoration subnetwork and simply training YOLOv3 on the hazy images of VOC\\_fog\\_train, the detection performance (we retrain a YOLOv3 that achieves 75.32\\%) can easily exceed IA-YOLO (67.40\\%) on VOC\\_fog\\_test.\n\nIn summary, we can conclude that the performance gain comes from making the detector adapt to adverse or processed images. Our work is under a stricter and harder setting without changing or retraining the detector. As far as we know, we are the first to explore this problem and provide an effective way to address it. Our algorithm and pipeline are theoretically proven to work for all restoration methods, for different types of degradation, and for detectors of both single and two stages, which is valuable in the field. All the datasets in the work, e.g., RTTS and ExDark, are relatively large-scale, complicated, and naturally captured. Consistent performance gains on such benchmarks likewise show great value.\n\n``Q-2:`` Another concern is that TOG achieves almost the same performance as the proposed method in Tables 2 and 3, indicating that the technical contribution of this work is limited.\n\n``A-2:`` Our main contribution lies in introducing adversarial attacks to generate pseudo ground truth for better downstream detection and restoration quality. Both TOG and ours can generate compelling pseudo ground truth, which shows the effect of our proposed fine-tuning pipeline. Furthermore, the proposed algorithm can theoretically and experimentally outperform TOG for most restoration methods and detectors of both single and two stages. This shows a valuable contribution.\n\n``Q-3:`` To show the effectiveness, it would be better to compare the proposed method with a simple baseline method that simultaneously optimizes the image restoration and object detector to minimize the loss of object detection.\n\n``A-3:`` To show the effectiveness of the proposed method, we compare the proposed method against baseline methods (DSNet [15] and IA-YOLO [23]) that simultaneously optimize the image restoration and object detector to minimize the loss of object detection. However, as shown in Tables 2 and 3 in the manuscript, DSNet [15] and IA-YOLO [23] yield worse visual quality and detection accuracy than our proposed methods. \n\nIn addition, we also conduct an experiment to optimize the restoration model by summing the restoration loss and the detection loss while keeping the detector unchanged. The results of MSBDN on VOC\\_fog\\_test with different loss weights are shown in the following table. We find that introducing the detection loss consistently decreases PSNR and SSIM by creating artifacts. However, the mAP cannot exceed our proposed pipelines (77.52\\% of TOG and 77.66\\% of ours). Therefore, the detection loss cannot propagate to the restoration model efficiently without changing the detector's parameters. \n\n* The table below shows the experiment on optimizing MSBDN on VOC\\_fog\\_test by a weighted summation of restoration loss and detection loss, i.e., $L_{restoration}+\\gamma L_{detection}$.
The detection model remains unchanged during fine-tuning.\n\n|$\gamma$|0.1|1|4|5|6|10|\n| - | - | - | - | - | - | - |\n|PSNR|28.64|26.92|26.58|26.55|26.56|24.68|\n|SSIM|0.8848|0.8679|0.8667|0.8654|0.8642|0.8424|\n|mAP(%)|77.09|77.12|77.26|77.37|77.22|77.14|\n\n
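\nAs a rough illustration of this baseline (a minimal sketch under our own naming, not the exact training code; `detection_loss` is an assumed helper for the detector's loss on the restored image), the weighted-sum objective with a frozen detector could be implemented as:\n\n```python\nimport torch\n\ndef joint_finetune_step(restorer, detector, degraded, clean, targets, gamma, optimizer):\n    # L_restoration + gamma * L_detection, updating only the restorer.\n    for p in detector.parameters():\n        p.requires_grad_(False)  # the detector stays frozen\n    restored = restorer(degraded)\n    l_rest = torch.nn.functional.l1_loss(restored, clean)\n    l_det = detection_loss(detector, restored, targets)  # assumed helper\n    loss = l_rest + gamma * l_det\n    optimizer.zero_grad()\n    loss.backward()  # gradients flow through the frozen detector into the restorer\n    optimizer.step()\n    return loss.item()\n```\n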
\n", " \n``Q-1:`` Why is the performance using the proposed method significantly lower in Tables 2 and 3?\n\n``A-1:`` In Tables 2-3, our method achieves better results than the conventional training strategy. For example, compared with the original training data, the detection performance improvement of YOLOv3 is significant (77.66\% vs. 77.06\% in terms of mAP) using the proposed pseudo ground truth to train the MSBDN model on the VOC\_fog benchmark, as shown in Table 2. Furthermore, our proposed attack algorithm also yields better pseudo ground truth than the recent attack model TOG, e.g., 77.66\% vs. 77.52\% in Table 2.\n\nCompared to Table 1, in Tables 2 and 3 the restoration method is trained or fine-tuned to learn the attacked pseudo ground truth. The learning process is hard since the perturbation is small. However, in Table 1, only the adversarial attack is conducted, to evaluate whether adversarial perturbation can boost detection performance in common circumstances. Therefore, the detection performance in Table 1 is much higher than in Tables 2 and 3, where the restoration method is trained or fine-tuned to learn the small perturbation.\n\n``Q-2:`` Can the authors show zoomed versions of the visualizations for easy understanding?\n\n``A-2:`` Thank you for the valuable suggestion. We will revise the paper with zoomed figures for better visualization.\n\n``Q-3:`` Can the authors explain how they maintain image properties while generating adversarial attack samples, i.e., how do the authors make sure the adversarial attack image is a naturally possible image or not?\n\n``A-3:`` The clipping operation in Eq. 10 regulates the perturbation to be small enough for human eyes. \"PSNR\" and \"SSIM\" are commonly used to represent the closeness between a pair of images. We compute the PSNR and SSIM on six datasets. When PSNR exceeds 45 and SSIM exceeds 0.99 between two images, the pixel-wise difference is very slight and negligible [Reference 1]. From the perspective of human vision, the difference can hardly be recognized. As shown in the following tables, all the average PSNR and SSIM values exceed 48 and 0.98, respectively. Such a small perturbation cannot change the natural properties of the attacked images. In summary, the clipping operation ensures the perturbation is sufficiently small so that natural image properties are maintained.\n\n* The following six tables show the detection performance gain by different targeted adversarial attack methods on YOLOv3. The setting is $\delta=2/255$, $\lambda=1/255$, and $T=2$. The average PSNR and SSIM between the natural original images and their corresponding adversarial examples are computed for six datasets.\n\n* RTTS\n\n||no attack|TOG|Ours|\n| :- | :- | :- | :- |\n|PSNR|Inf|48.54|48.97\n|SSIM|1|0.9901|0.9905\n|mAP(%)|42.77|71.98|78.50\n\n* Hazy images of VOC\_fog\_test\n\n||no attack|TOG|Ours|\n| :- | :- | :- | :- |\n|PSNR|Inf|49.21|49.12\n|SSIM|1|0.9821|0.9898\n|mAP(%)|66.88|92.15|94.12\n\n* Clean images of VOC\_fog\_test\n\n||no attack|TOG|Ours|\n| :- | :- | :- | :- |\n|PSNR|Inf|49.30|49.24\n|SSIM|1|0.9952|0.9938\n|mAP(%)|81.91|94.81|96.28\n\n* ExDark\n\n||no attack|TOG|Ours|\n| :- | :- | :- | :- |\n|PSNR|Inf|48.43|48.49\n|SSIM|1|0.9919|0.9916\n|mAP(%)|46.27|74.29|79.29\n\n* Dark images of VOC\_dark\_test\n\n||no attack|TOG|Ours|\n| :- | :- | :- | :- |\n|PSNR|Inf|48.35|48.47\n|SSIM|1|0.9937|0.9957\n|mAP(%)|56.88|75.69|81.99\n\n* Clean images of VOC\_dark\_test\n\n||no attack|TOG|Ours|\n| :- | :- | :- | :- |\n|PSNR|Inf|49.18|49.04\n|SSIM|1|0.9940|0.9933\n|mAP(%)|70.99|84.68|88.83\n\n
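\nThe \"PSNR\" and \"SSIM\" values above can be reproduced with standard implementations; a minimal sketch (assuming scikit-image is available and images are float arrays in [0, 1]; `channel_axis` requires a recent scikit-image version, older ones use `multichannel=True`) is:\n\n```python\nfrom skimage.metrics import peak_signal_noise_ratio, structural_similarity\n\ndef perceptual_closeness(original, adversarial):\n    # Higher PSNR/SSIM means the adversarial example is visually closer\n    # to the original; PSNR is infinite for identical images.\n    psnr = peak_signal_noise_ratio(original, adversarial, data_range=1.0)\n    ssim = structural_similarity(original, adversarial, channel_axis=-1, data_range=1.0)\n    return psnr, ssim\n```\n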
", " The paper proposes a fine-tuning approach to improve the object detection performance without reducing the visual quality of the restored image. To achieve this, the authors follow a targeted adversarial attack on the object detection task in order to improve the detection performance while, on the other hand, not affecting the restoration task optimization. Strengths:\n- The paper proposes a simple technique that improves the object detection performance and, on the other hand, doesn't decrease the image restoration performance.\n- They generate adversarial samples using second-order gradients and a cumulative momentum approach.\n\nWeaknesses:\n- Why is the performance using the proposed method significantly lower in Tables 2 and 3?\n- Can the authors show zoomed versions of the visualizations for easy understanding?\n- Can the authors explain how they maintain image properties while generating adversarial attack samples, i.e., how do the authors make sure the adversarial attack image is a naturally possible image or not?\n- What happens if the images have multiple degradations? Will the proposed method still be applicable? \n Please refer to the weaknesses. The authors discussed limitations that are relevant to the proposed method.", " Aiming at better object detection performance in bad environments such as hazy or low-light scenes, this paper proposes a new training protocol where an image restoration model is supervised by pseudo ground truths. The pseudo ground truths are generated using an adversarial attack method (an Adam-like targeted attack in this paper). By doing so, the trained model shows better object detection performance in hazy or low-light scenes than existing studies and plain models. Originality\n\nThe proposed approach is interesting and it looks different from existing studies.\n\nQuality\n\n- Although the proposed approach is promising and interesting, my main concern is that the proposed method does not show good performance when image restoration models are trained on the pseudo samples of the training set and are evaluated on the test set. This seems natural because the difference between the images before/after performing the attack is slight, and thus it is difficult to learn.\n\n- Another concern is that TOG achieves almost the same performance as the proposed method in Tables 2 and 3, indicating that the technical contribution of this work is limited.\n\n- To show the effectiveness, it would be better to compare the proposed method with a simple baseline method that simultaneously optimizes the image restoration and object detector to minimize the loss of object detection.\n\nClarity\n\n- There are several unclear points in this paper. For instance, what is \"Y\" in Tables 2 and 3? Also, what do you mean by \"detection\" in the caption of Table 2? Finally, the authors mention that the mAP of YOLOv3 decreases by nearly 15% from that of clean images in lines 227 and 229, but where can we see the result of the clean images?\n\n- It is somewhat unclear what is conducted and what the purpose of the experiment in Section 4.2.1 is. Please elaborate on it. Please see the Strengths And Weaknesses section. Yes, the authors mention the limitations of their work but do not mention potential negative societal impact.", " The paper rethinks image restoration for object detection from the perspective of adversarial examples and proposes a fine-tuning pipeline by formulating the tasks of image restoration and object detection into one. Unlike most existing methods that focus on modifying object detectors, the paper attempts to adapt restoration algorithms for generating high-quality visual perception and better detection results. To this end, a momentum-based ADAM-like iterative targeted adversarial example generation algorithm is designed to produce pseudo ground truth for fine-tuning restoration models. The proposed methods are evaluated on two image restoration tasks, i.e., image dehazing and low-light image enhancement, and extensive experimental results demonstrate their superiority to the state of the art both in quality and quantity. Strengths:\n1. Image restoration is usually viewed as a pre-processing step for high-level computer vision tasks such as detection, but related discussions are rare in existing publications. The paper makes valuable explorations by formulating the two tasks into one.\n2. Different from the conventional methods that modify object detectors, the proposed approach aims to adapt the restoration algorithms for improving the performances of both tasks.\n3. The proposed Adam-variant adversarial example generation framework is based on Theorem 1, which is proved in the supplement, and thus becomes solid in theory.\n4. Extensive experiments are conducted on various datasets, including a dehazing dataset and a low-light enhancement dataset, to demonstrate the effectiveness of the proposed method both in quality and quantity.\n5. The paper is written well and easy to follow. It is good to illustrate the idea in Figure 1 and provide the proof and more analysis in the supplementary material.\n\nWeaknesses:\n1. L129 uses the math symbol D(x), while L131 utilizes D[R(x)]. It would be better to use a unified symbol. Similar typos are also present in equations 2, 3, and 5.\n2. In the sentence “where $\hat{x}^{*}=\arg\min_{x} ||\hat{x}^{*}-\hat{x}||$” of equation 6, the symbol $\hat{x}^{*}$ has two different meanings, i.e., a determined value on the left and a variable on the right.\n3. In the captions of Tables 2 and 3, the word “ddetection” is a typo.\n4.
How to determine the initialization of the number of attack iterations $T$, the update stepsize $\lambda$, and the magnitude tolerance of perturbation $\delta$? See the weaknesses above. The authors have addressed the limitations of the proposed method in the paper. " ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 5, 4, 8 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, 4, 3, 5 ]
[ "kpzunnZWqbj", "jG30-yMoGOB", "nips_2022_se2oxj-6Nz", "ASxWbk6izDi", "4uGrwtiFq-w", "QH5P1a8fRw1", "QH5P1a8fRw1", "ASxWbk6izDi", "nips_2022_se2oxj-6Nz", "nips_2022_se2oxj-6Nz", "nips_2022_se2oxj-6Nz" ]
nips_2022_Oq2bdIQQOIZ
On Privacy and Personalization in Cross-Silo Federated Learning
While the application of differential privacy (DP) has been well-studied in cross-device federated learning (FL), there is a lack of work considering DP and its implications for cross-silo FL, a setting characterized by a limited number of clients each containing many data subjects. In cross-silo FL, usual notions of client-level DP are less suitable as real-world privacy regulations typically concern the in-silo data subjects rather than the silos themselves. In this work, we instead consider an alternative notion of silo-specific sample-level DP, where silos set their own privacy targets for their local examples. Under this setting, we reconsider the roles of personalization in federated learning. In particular, we show that mean-regularized multi-task learning (MR-MTL), a simple personalization framework, is a strong baseline for cross-silo FL: under stronger privacy requirements, silos are incentivized to federate more with each other to mitigate DP noise, resulting in consistent improvements relative to standard baseline methods. We provide an empirical study of competing methods as well as a theoretical characterization of MR-MTL for mean estimation, highlighting the interplay between privacy and cross-silo data heterogeneity. Our work serves to establish baselines for private cross-silo FL as well as identify key directions of future work in this area.
Accept
The paper presents an analysis of item-level or sample-level DP with personalization in cross-silo federated learning. The reviews are strongly divided, with two recommending acceptance and two recommending rejection. After reading all the reviews and the paper, I find the argument of the "accept" side stronger. While the paper does not propose new algorithms, it presents a systematic analysis of existing methods that helps explain their properties. I believe this can be a more valuable contribution to the community than yet another new algorithm. That said, there are also important weaknesses, as noted by the reviewers. It is not clear how to select the optimal $\lambda$, especially in the non-convex setting with no theory as a guide. It is therefore not clear how useful the method would be in an actual application, where a suitable $\lambda$ would have to be found somehow. This seems like an obvious topic of future work. A broader assessment with more data sets and an actual algorithm to select $\lambda$ would clearly strengthen the paper. Some reviewers were confused by the proposed definition of sample-level cross-silo DP. I would strongly encourage the authors to use part of the extra space in the camera-ready version to write this definition explicitly to avoid such confusion.
train
[ "LwzUOEemOtZ", "cVsqGO-RX0", "UhgUTt0t1c0", "uGnVZ4MNHl6", "ykcCtAZZID", "n7RKftXKEQ8", "20rH0GEI63z", "QkslmvkMirY", "rMtUPaS_fOR", "cOZla78PwcO", "U2m767jL7Ku", "G_dq5I2DaEX", "mPKqadLVyHC", "6D3pMlYY4H0", "r0B9lsjVU_k" ]
[ "official_reviewer", "author", "author", "author", "author", "author", "author", "author", "author", "author", "author", "official_reviewer", "official_reviewer", "official_reviewer", "official_reviewer" ]
[ " I tend to agree with the authors that the main weakness I mentioned (discussion and claims about DP) is due to writing clarity, and trust that the authors will fix these issues for the next version of the paper. I will therefore raise my score to recommend accepting the paper: I think the main point about the need to balance utility loss from DP noise against the utility loss from data heterogeneity is a solid and novel line of research.\n\nSome further minor comments on the rebuttal:\n\n> Q2 We believe that in Truex et al. 2019 [use] client-level DP\n\nThis is correct, as is clear from their Sec 3.2; my earlier claim was due to careless checking of their definitions.\n\n> Q5 We have two interpretations for the reviewer’s comment\n\nI ment the first one, so a client running DP-SGD on the local dataset and calculating sensitivity using client-level DP (so, e.g., a client uses full local dataset size as batch size, sums the local gradients, clips the sum to enforce bounded norm for the query, and adds noise. A gradient step should now provide client-level DP since changing the client arbitrarily will always have bounded effect due to clipping, although subsampling amplification etc needs more assumptions). But this issue is a minor sideline for the current paper.", " Hi reviewers,\n\nThanks again for your valuable feedback! Since the author-reviewer discussion period ends in 24hrs, we’d like to kindly follow up with you again to check if our responses/additional results have adequately resolved your questions. We'd love to know if there’s anything else we could discuss and/or help clarify.\n\nThank you,\n\nPaper3693 Authors", " Dear reviewers,\n\nThank you again for taking the time to provide valuable feedback! We’d like to follow up with you and check if our responses/additional results have adequately resolved your questions around the privacy notion/definition (Reviewer JfCt, 7hbo), algorithmic novelty (Reviewer JfCt, dzK1), empirical evaluation (Reviewer JfCt, dzK1), and writing/presentation clarity (Reviewer 7hbo). If not, we'd really appreciate it if you could let us know if there’s anything we could discuss and/or clarify. \n\nThank you,\n\nPaper3693 Authors", " We really appreciate the reviewer for their time, their detailed review, and for recognizing the strengths of this work. We understand that the reviewer is primarily concerned about our claims around client-level DP; **we believe that we are in general agreement with the reviewer, and that most issues stem from the clarity of our writing**. We hope to address your questions in the following.\n\nQ1\n- Thank you for the suggestion! We will add citations / update the main text accordingly.\n\nQ2\n- **Re Heikkilä et al. 2020**: Thank you for raising this issue! We noticed this mistake after the submission and have since corrected it.\n- **Re Truex et al. 2019**: We believe that in Truex et al. 2019, Algorithm 1 and the 2nd last paragraph of section 3.2 suggests that noise is **directly added to the client model update** $Q_s$ (with bounded sensitivity), which would provide a **client-level DP** instead of sample-level DP (please also see our response to Q5 below). At the same time, certain parts of Truex et al. 2019 (e.g. Def. 1, Alg. 4) instead suggest that **sample-level DP** is provided (as noise is now added onto clipped individual gradients). \n- Our goal is to abundantly credit early work that may be related to ours, and in doing so we may have had minor omissions – we will revise accordingly in the updated version. 
Thanks for pointing these out.\n\n\nQ3\n- We agree with the reviewer that “client-level” and “sample-level” inherently differ in the granularity of protection, rather than that the DP protection is customized for each “record in the dataset” (however this “dataset” may be defined). We believe this to be a minor misunderstanding arising from our writing clarity.\n- By “canonically … shared privacy guarantee”, we mean that when treating the clients as the “dataset”, a **central client-level $(\epsilon, \delta)$-DP guarantee typically** (e.g. as in [a, b, h, i]) protects all “records” (i.e. clients) equally (since one-shot noise is added on the server), as opposed to more recent notions of “per-instance” accounting (e.g. [c, f]). \n- Indeed, we agree with the reviewer that under **client-level *local*-DP** each client can perturb their updates with varying noise levels and achieve their own DP guarantees.\n- We will improve the writing accordingly.\n\n\nQ4\n- We again agree with the reviewer that client-level local-DP does not require trust; the reviewer might have overlooked L120, where we specifically mentioned “**non-local** DP guarantees”. We will revise and make this clearer.\n\nQ5\n- We have two interpretations for the reviewer’s comment (due to “e.g. using DP-SGD and client-level LDP…”) and address them separately.\n- **[Meaning #1: Using DP-SGD to achieve client-level LDP]**\n - In essence, client-level local-DP (LDP) treats each client itself as a single “record” in the “dataset” (which is the set of clients), meaning that under FL, clients can apply local perturbations (clip and noise) to their model update **right before** sending it to the server (the standard implementation of LDP in FL), and the **local (personalized) model can be kept noise-free**. In other words, by “local training is ‘free’”, we mean that **local training without noise and without ever communicating to the server** can be perfectly private under client-level DP; e.g. see also discussions in [d].\n - Moreover, we argue that **simply running DP-SGD (within each silo where gradients are clipped and noised) does _not_ give client-level LDP in general**, since the trained model (using private gradients from DP-SGD) can still have **unbounded sensitivity**. Moreover, the **interpretation** of the resulting $(\varepsilon, \delta)$ values would also be different – with large local datasets, it is easy to achieve a small $\varepsilon$ under sample-level DP (w/ DP-SGD) using amplification theorems, while under client-level LDP the $\varepsilon$ would not depend on the local dataset size.\n- **[Meaning #2: Using DP-SGD and a separate client-level LDP mechanism *simultaneously*]**\n - In this case, we definitely agree with the reviewer that local training is not free (due to the DP-SGD steps _before_ applying LDP perturbations and reporting back to the server).\n- Nevertheless, we value the reviewer’s feedback and will try to make our writing clearer.
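\nTo make the distinction concrete, a minimal sketch of the client-level LDP recipe described by the reviewer (our illustration only; `noise_std` must be calibrated to `clip_norm` and the target $(\varepsilon, \delta)$ via a Gaussian-mechanism analysis for the guarantee to hold) could be:\n\n```python\nimport torch\n\ndef client_level_ldp_release(model_update, clip_norm, noise_std):\n    # Clip the *whole* client update so the query has bounded sensitivity,\n    # then add noise locally before anything leaves the client. By contrast,\n    # DP-SGD clips and noises per-example gradients inside local training,\n    # which targets sample-level DP rather than client-level LDP.\n    norm = model_update.norm()\n    scale = torch.clamp(clip_norm / (norm + 1e-12), max=1.0)\n    return model_update * scale + noise_std * torch.randn_like(model_update)\n```\n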
\n\nQ6\n- The reviewer raised a great point. By “nullifying”, we meant it in the **semantic** sense that one may know all the entities inside the “dataset” (the set of silos), and thus the “plausible deniability” property provided by DP for any participant may be weakened; we **did not** mean it in the **statistical** sense that the output of the mechanism no longer satisfies the DP definition. We agree that this could be misleading and will revise accordingly.\n\n(continued)", " (continued from part 1)\n\nQ7\n- For all methods, we first pick a fixed number of FL rounds where **all methods can converge** (e.g. 400 for vehicle). \n- Then, starting from 1 local epoch per round for all methods (which is also a standard choice across many previous works; e.g. [a]), we add extra local epochs/iterations **only when necessary or when there is strong evidence that doing so would improve performance**. \n- This is because the privacy cost directly correlates with the overall total number of local gradient steps: adding extra local steps **will directly lead to more DP noise** under the same total privacy budget and can significantly hurt convergence, while the benefit of extra local iterations may be insignificant.\n- Indeed, we tuned the local computation for Mocha and Ditto and observed that extra iterations are often not worth the extra DP noise and should in general be avoided. \n\nQ8\n- We’d like to clarify that hyperparameters are indeed tuned properly for all methods and for different datasets. For example, on top of important hyperparameters like the client LR and $\varepsilon$ values, we also tuned values like $\lambda$ (for Ditto, Mocha, MR-MTL), the number of local iterations (Q7 above), the number of clusters (for IFCA), the percentage of clustering rounds (e.g. 5%, 10%), the privacy budget for private cluster selection (e.g. Appendix D.3), the percentage of FedAvg rounds for finetuning (e.g. Appendix H), etc.\n- The intent of L175-176 was **simply to indicate that we avoid introducing too many variables** by over-tuning less important parameters like batch size, server LR, momentum, and server/client optimizers, for which there are reasonable defaults (as done in previous works; e.g. [a, e, g]). \n- All experiments are also repeated over multiple random seeds.\n- We’ll clarify the above in the updated version. \n\nQ9 / Q10\n- Thanks for pointing out these issues! We will fix them accordingly.\n\nQ11\n- Thanks for pointing out this issue! In terms of implementation, we use the “replacement” notion – we provided some discussions in Appendix E.1 under “weighted vs unweighted model updates”. We’ll clarify and ensure consistency in the updated version. \n\nWe hope that the above addresses your concerns!\n\nRefs:\n- [a] https://arxiv.org/abs/1710.06963 \n- [b] https://arxiv.org/abs/2009.10031 \n- [c] https://arxiv.org/pdf/2206.02617.pdf \n- [d] https://proceedings.mlr.press/v162/bietti22a.html \n- [e] https://arxiv.org/abs/2204.13650 \n- [f] http://dimacs.rutgers.edu/~graham/pubs/papers/pdp.pdf \n- [g] https://arxiv.org/pdf/2003.00295.pdf \n- [h] https://arxiv.org/abs/1712.07557 \n- [i] https://arxiv.org/abs/2102.06387 \n", " (continued from part 1)\n\n### Limitations / Weakness #3: lack of convergence results\n\n> I think it will be great if the authors prove the lower bounds of convergence in a nonconvex setting.\n\nThank you for your suggestion! We’d like to emphasize that **convergence properties are not the focus of this work**. Nevertheless, this is a great question and, to our knowledge, lower bounds of convergence for _nonconvex private training_ are a generally open and active area of research. We will explore this in future work.
\n\nRefs:\n- [a] https://arxiv.org/pdf/2108.12978.pdf \n- [b] https://proceedings.mlr.press/v162/bietti22a/bietti22a.pdf \n- [26] https://proceedings.neurips.cc/paper/2020/file/24389bfe4fe2eba8bf9aa9203a44cdad-Paper.pdf \n- [32] https://arxiv.org/pdf/2010.02372.pdf\n- [33] https://arxiv.org/pdf/2002.05516.pdf \n- [35] https://arxiv.org/abs/2108.12978 \n- [71] https://proceedings.neurips.cc/paper/2020/file/f4f1f13c8289ac1b1ee0ff176b56fc60-Paper.pdf \n\n\n", " We thank the reviewer for their time and feedback and for recognizing the strengths of this work. We understand the reviewer is concerned about (1) the lack of non-convex analysis (or, alternatively, the need for more empirical results), (2) that MR-MTL has been analyzed in previous work, and (3) the lack of convergence results. We hope to address these concerns in the following. Please also see [our shared response](https://openreview.net/forum?id=Oq2bdIQQOIZ&noteId=rMtUPaS_fOR) to all reviewers.\n\n### Weakness 1: analysis of MR-MTL for non-convex settings, or additional results on CIFAR\n\nPer the reviewer’s request, we added results on heterogeneous CIFAR-10 with both privacy-utility tradeoff curves (analogous to Fig. 3) and the $\lambda$ interpolation curves (analogous to Fig. 5) at the top of the Appendix. Please see our shared response for more details. \n\nFor general nonconvex problems, we believe that deriving the optimal $\lambda^*$ (extension of Thm 6.3) may entail deriving the closed-form error of a nonconvex optimization problem as a function of $\lambda$ (extension of Lem 6.7), which could be nontrivial, and we acknowledge that this is a limitation of our work (Appendix C.1). We plan to strengthen our evaluation with more empirical results in the updated version.\n\n\n### Weakness 2: MR-MTL has been proposed and analyzed before / limited novelty\n\n> The reference [32] in this paper has already analyzed personalized federated learning with a mean-regularization penalty … For example, the references [26], [32] in this paper proposed personalized FL algorithms with theoretical guarantees in a nonconvex setting\n\nWe’d like to point out that:\n- While previous work has indeed studied MR-MTL (or the mean-regularized objective) in general, there is a lack of work that considers **its intersection with DP**, and most existing analyses focus on specific aspects such as communication (e.g. [32, 33]) or convergence (e.g. [26, 32, 71]) **that are orthogonal to the focus of this work**; for example, the analysis of [26] focuses on convergence and [32] focuses on providing complexity bounds for communication and local oracle calls.\n- Moreover, **constants matter when it comes to deploying differential privacy**. For example, complexity bounds on local computation such as those provided in [32], while valuable for theoretical understanding, would not be as useful for providing practical recommendations under our DP notion; indeed, Fig. 5 illustrates the DP utility cost of a personalization method (Ditto) when there is a **constant factor (2x)** in the number of local iterations compared to MR-MTL.
\n\nOverall, we’d like to emphasize that the goal of our work is to holistically study the application of DP in cross-silo FL and provide useful insights using both empirical and theoretical analyses; our work is not geared toward proving or improving a specific theoretical guarantee.\n\n> This paper simply extends this to the case with the concept of differential privacy.\n\nWe respectfully disagree with the reviewer that the extension to DP is trivial.\n- We believe extending existing algorithms to DP, particularly involving **private model personalization**, is an active research area (e.g., see discussions in [a, b] and our response to Weakness #1 from Reviewer JfCt), and in our case, the notion of silo-specific item-level DP is a meaningful relaxation of the commonly studied client-level DP for private cross-silo FL, motivated by real-world applications (see also our response to Limitations from Reviewer JfCt).\n- Moreover, the emerging empirical phenomena under this particular DP notion (e.g. Fig. 2 and Fig. 5) have not been adequately explored in earlier work, and establishing strong baselines requires carefully accounting for the privacy overhead of local computation from multiple aspects (e.g. L189-195, Fig. 4, Appendix D.3 “private cluster selection”, Section 7, Appendix G.3).\n- In line with reviewers TNbE and 7hbo, we argue that picking a suitable privacy notion, making important observations, and providing relevant analyses – even with existing methods – is a valuable contribution to the community.\n\n> Therefore, the novelty is not enough to satisfy the standard of this venue.\n\nWhile we value the reviewer’s feedback, we respectfully disagree with this conclusion. With our responses above and our shared response addressing limited algorithmic novelty, we hope that the reviewer will reconsider the merits of our work in **comprehensively examining private cross-silo FL** – from considering a more suitable notion of DP (sec. 3) and studying various emerging phenomena, to providing extensive empirical evaluation (sec. 4), analyzing why MR-MTL works as a strong baseline under this particular DP notion (sec. 5, 6), and examining the practical challenges of deploying MR-MTL (and other potentially more sophisticated personalization methods) (sec. 7, Appendix G).\n\n(continued)\n", " \nThank you for taking the time to provide feedback, and we’re glad that the reviewer finds our work valuable! We provide our response below; please also see [our shared response](https://openreview.net/forum?id=Oq2bdIQQOIZ&noteId=rMtUPaS_fOR) for our replies to concerns from other reviewers. \n\n> Explain why Ditto underperforms MR-MTL in Fig 5 \n\nBy construction, each client in Ditto requires at least two local training iterations (gradient steps) per round for updating its global and local models respectively (Lines 5-6 of Algorithm 1 of [a]), whereas clients in MR-MTL require one iteration at a minimum (simply add a mean-regularization term to the batch gradient). Since the privacy cost under silo-specific item-level DP is directly correlated with the number of steps over the client’s local dataset, more iterations mean higher noise per iteration under the same total privacy budget. Fig. 5 aims to illustrate the effect of such privacy overhead.
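\nAs a minimal illustration of why a single iteration suffices (our sketch under assumed naming, not the actual implementation), one MR-MTL local step only adds the mean-regularization term $\lambda(w_k - \bar{w})$ to the batch gradient; under silo-specific item-level DP, `grads` below would come from a clipped-and-noised DP-SGD step, and adding the regularization term is pure post-processing since it depends only on model parameters:\n\n```python\nimport torch\n\ndef mr_mtl_local_step(local_params, mean_params, grads, lam, lr):\n    # local_params: silo k's personalized weights w_k (list of tensors)\n    # mean_params:  detached average of all silos' weights, w_bar\n    # grads:        batch (or DP-SGD privatized) gradients of the local loss\n    with torch.no_grad():\n        for w, w_bar, g in zip(local_params, mean_params, grads):\n            w -= lr * (g + lam * (w - w_bar))\n```\n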
\n\nAlternatively, one could also allow MR-MTL to run 2x as many iterations (so as to match Ditto in terms of privacy cost), in which case the performance difference would come from the faster/better convergence of MR-MTL.\n\n[a] https://arxiv.org/pdf/2012.04221.pdf \n", " We thank all reviewers for taking the time to provide helpful feedback for improving our paper! In addition to specific responses to each reviewer, we’d like to provide a summary and address some shared comments. \n\nFirst, we are glad that reviewers find our problem of interest relevant (TNbE, JfCt, 7hbo), the privacy notion for cross-silo FL and our observations interesting (TNbE, 7hbo, dzK1), our empirical/theoretical analysis useful for understanding the behavior of MR-MTL (TNbE, 7hbo), that our work provides a novel perspective on private cross-silo learning (TNbE, 7hbo), and that the paper is generally well written and presented (TNbE, dzK1, 7hbo).\n\nWe now address some shared comments from the reviewers below.\n\n**[Adding more experimental evaluation (JfCt, dzK1)]**\n\nFollowing the reviewers’ suggestions, **we added results on heterogeneous CIFAR-10** with a setup following previous work (e.g. [a, b]) **at the beginning of the updated Appendix in green in the ZIP file** (we will move these to the main text in the updated version). We provide both: \n- The privacy-utility tradeoff curves (similar to Fig. 3), where MR-MTL is consistently competitive, and\n- Interpolation curves with varying $\lambda$ (similar to Fig. 5), where we observe behaviors of MR-MTL similar to those seen in Fig. 5, such as that the best $\lambda^*$ becomes larger under stronger privacy (Thm 6.3) and the utility curve as a function of $\lambda$ has a (roughly) quasi-concave (“bump”) shape (Lemma 6.7). \n\nWe’d also like to point out that this is the 6th dataset on top of the 3 datasets in the main text and 2 datasets in the Appendix. We may also expand to more datasets in the updated version to further strengthen the evaluation.\n\n**[Concerns about limited algorithmic novelty (JfCt, dzK1)]**\n\nWe’d like to clarify that our goal is not to propose a new algorithm but to comprehensively examine private cross-silo federated learning from different perspectives:\n1. What DP notions may be more suitable/practical than the commonly used client-level DP?\n2. For a candidate DP notion, what are some emerging phenomena that may go against intuitions developed from client-level DP?\n3. What are some good baselines under this DP notion, and why might a particular baseline work better than others?\n4. Having identified strong baselines, what are then some limitations or challenges of deploying them?\n\nTo study these questions, our work: \n1. explores silo-specific item (sample)-level DP as a realistic alternative to client-level DP for cross-silo FL;\n2. studies the emerging phenomena related to privacy, heterogeneity, and personalization;\n3. dissects the desiderata of a strong baseline and how existing methods may fall short in terms of privacy overhead (Fig. 4, L229, Appendix D.3, etc.);\n4. characterizes how MR-MTL, as one of the simplest and strongest baselines, interacts with privacy and heterogeneity; and\n5. examines the complications of actually leveraging personalization methods to reconcile with the emerging phenomena, through the lens of private hyperparameter tuning (sec. 7). \n\nOverall, we hope that our work serves to shed more light on private cross-silo FL with both positive and negative results and to spur future efforts in this area.
We also provide additional results and discussions in the appendix that could be useful for future work.\n\nMoreover, we believe that model personalization under privacy is in general an important and active research area and that, as reviewers TNbE and 7hbo pointed out, providing novel perspectives – even with existing algorithms – could be valuable to the community.\n\nRefs:\n- [a] http://proceedings.mlr.press/v139/shamsian21a/shamsian21a.pdf \n- [b] https://proceedings.neurips.cc/paper/2020/file/f4f1f13c8289ac1b1ee0ff176b56fc60-Paper.pdf \n\n### Note on the updated appendix, and not updating the main paper \nDue to the page limit, we keep our responses to the reviewers’ feedback on OpenReview and do not update the main paper during the rebuttal; however, **we updated the top of the Appendix (in the ZIP file) to provide additional results**. We will incorporate all feedback in the updated version.\n", " (continued from part 1)\n\n### Weakness #1 (cont.)\n\n> “item” vs “sample” in “silo-specific item-level DP”\n\n“Item” refers to a sample (e.g. an image) in a client’s local dataset, and we agree that changing to “sample” may be clearer. Thanks for this suggestion.\n\n> provide a formal DP definition of the silo-specific item-level privacy like the definition 2.1\n\nThank you for the suggestion! Def 2.1 aims to provide a generic DP definition in terms of generic “records” and “datasets” and is thus not specific to a particular DP notion; if a “dataset” refers to a single client’s local samples, then each client has its own DP guarantees that are independent of all other clients’ DP guarantees (thus “silo-specific sample-level DP”). We agree that providing a definition can help the reader and will update accordingly.\n\n\n### Weakness #2: “There is no novelty in the methodology, the paper only uses existing methods”\n\nWe agree with the reviewer that we do not propose new algorithms (perhaps apart from the MR-MTL extension to handle cluster-structured heterogeneity at L270 and Appendix E.3). Please see our shared response above; we hope that the reviewer will consider the merits of this work in holistically studying and providing novel perspectives on the use of existing approaches for private cross-silo FL.\n\n### Weakness #3: “The experiments only include two datasets, and the performance difference is not very significant. It is unclear how solid the conclusion that MR-MTL is the best can hold in general”\n\nWe thank the reviewer for raising these issues. Regarding datasets, we point the reviewer to our response to shared concerns above.\n\n> “performance difference is not very significant”\n\n- We emphasize that we intended to highlight MR-MTL as **a strong baseline under this particular DP notion** due to its simplicity, extensibility (L270), minimal hyperparameters ($\lambda$), and minimal privacy overhead (L245) compared to existing personalization methods; **these properties make MR-MTL a tough-to-beat baseline, but do not necessarily mean that it always has significantly superior performance**, since mean-regularization may not be optimal under arbitrary heterogeneity (e.g. Fig. 6). In fact, without adding DP, MR-MTL was shown in prior work (e.g. [h, i]) to be inferior to many existing methods that, e.g., leverage additional computation.
\n- We also argue that the existence of a region of $\lambda$ values where it may outperform both endpoints of the personalization spectrum **under the same privacy budget** is an interesting phenomenon alone worth highlighting, particularly as **such a utility advantage is flexible (it depends on the privacy setting)** and can be made larger under private settings (Fig. 5, Propositions 6.5/6.6, L338-L347) compared to non-private settings.\n- See also our additional results on CIFAR-10, where the utility advantage of MR-MTL over local training / FedAvg can be large.\n- Note that all experiments are also repeated over multiple random seeds.\n\n### Minor concerns\n\n> Define “local training”\n\n“Local training” refers to each silo **training and keeping its own model on its own local dataset, without any federated learning**; it is a very simple baseline that works surprisingly well in many cases; see, e.g., [f].\n\n> Fig.2, the right figure, explain what is meant by the semi-transparent plot line\n\nThe semi-transparent lines refer to the local training / FedAvg baselines (reusing the legend from Fig. 2 left) with the same $\varepsilon$ value as finetuning (so 0.5 in the right figure).\n\n### Limitations\n\nThank you for your thoughtful feedback; these are great questions. We have addressed Q1 and Q2 in Appendix C.1 (along with our response to Weakness #1 above) and in Section 3, respectively, and we hope to add more discussions in the updated version.\n\nFor Q3 (see also our response to Weakness #1 above), we believe that the silo-specific sample-level DP notion would be directly applicable to settings where we have prior knowledge that each data subject (person) corresponds to at most one sample in a silo; real-world examples include\n- surgery records or single-shot vaccinations for a particular medical condition across hospitals (duplicates are unlikely);\n- voting records across counties/states for a particular election (a person can legally vote once); and\n- attendance records for a particular exam (e.g. entrance exams).\n\nWe will add more discussion on this in the updated version.\n\nRefs:\n- [a] https://eprint.iacr.org/2017/299.pdf \n- [b] https://eprint.iacr.org/2017/799.pdf \n- [c] https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9343209 \n- [d] https://arxiv.org/abs/2102.11845\n- [e] https://proceedings.neurips.cc/paper/2021/file/a89cf525e1d9f04d16ce31165e139a4b-Paper.pdf \n- [f] https://openreview.net/forum?id=GgM5DiAb6A2\n- [g] http://dimacs.rutgers.edu/~graham/pubs/papers/pdp.pdf \n- [h] https://arxiv.org/pdf/2003.13461.pdf\n- [i] https://arxiv.org/pdf/2012.08565.pdf\n", " We thank the reviewer for their time and feedback! We’re glad that the reviewer finds the problem of interest important. We hope to address your concerns in detail below.\n\n### Weakness #1. “The proposed silo-specific item-level privacy is not clearly defined”\n\n> shouldn't the privacy guarantee be data-subject-level rather than sample (or so-called item)-level?\n\nWe agree with the reviewer that “data-subject-level DP” would be ideal; however, as we will detail below, this may be challenging to enforce in practice (an active research area) and is **an orthogonal direction to our work**.\n\nWe want to first clarify that “silo-specific sample-level DP” is an important step away from the common “client-level DP” used in FL. This notion would directly apply to many real-world settings where we have prior knowledge that each data subject (person) corresponds to **at most one sample in a silo** (such as voting records across voting centers). I.e., **it is a special case** of the more general data-subject-level DP.
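\nFor concreteness, one plausible way to write this down formally (our illustrative phrasing here, not the exact statement planned for the revision) is the following math sketch:\n\n```latex\n% Hedged sketch; silo k holds local dataset D_k, and \mathcal{M} denotes the\n% randomized FL protocol (its output includes everything released by silo k).\nFor each silo $k$ with privacy target $(\varepsilon_k, \delta_k)$: for all pairs of\nlocal datasets $D_k, D_k'$ differing in a single sample (all other silos' data\nfixed) and all measurable sets $S$,\n\[\n  \Pr[\mathcal{M}(D_1, \dots, D_k, \dots, D_K) \in S]\n  \le e^{\varepsilon_k} \, \Pr[\mathcal{M}(D_1, \dots, D_k', \dots, D_K) \in S] + \delta_k .\n\]\n```\n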
\n\nMore specifically, we believe that enforcing **general data-subject-level DP is still an active research area**; it may require **custom strategies for different scenarios**. For example,\n1. if a subject has multiple records **within a silo**, then one intuitively expects that the protection for that subject degrades gracefully under DP-SGD / sample-level DP (since we learn about that subject through different samples), and each silo can account for such cases independently (by group privacy; see also Appendix C.1). More broadly, one may leverage results described in, e.g., recent works at NeurIPS’21 [d, e].\n2. if a subject has records **across multiple silos** (a harder case), then in general one has to **detect the data subjects first** in order to provide data-subject-level DP. Techniques such as private set intersection (e.g. [a, b, c]) may be used, but when combined with DP, they may lead to privacy overhead (e.g. the need for additional queries on top of training) and may require crypto protocols (e.g. [a, b]), which are orthogonal to our focus. \n\nPlease also see footnote 2 (page 3) and Appendix C.1.\n\nWe emphasize that our findings under our DP notion already raise important questions that also extend to data-subject-level DP and are worth further research. Developing techniques for enforcing general data-subject-level DP would be an interesting but orthogonal direction to this work.\n\n> …why letting each silo add its own noise, instead of protecting the combined datasets (from all silos)\n\nThe reviewer raised a very good point. In addition to our response above (on the challenges of enforcing general data-subject-level DP across silos), we point out that we are motivated by **how silos may realistically implement FL in practice (particularly with model personalization)**.\n- First, the nature of cross-silo FL often necessitates silos having their own **local, personalized models**; for example, if local datasets are large, simple local training may even be the ideal strategy (see, e.g., [f]), in which case silos do not communicate with others at all and thus should (and can only) keep their own DP guarantees for their models.\n- Second, even under the simplifying assumption that each data subject only contributes one sample in each silo (so sample-level DP suffices), computing an **accurate/tight** ($\varepsilon, \delta$) for sample-level DP over the **combined** dataset could be challenging **without trusted global statistics** (e.g. the sampling patterns across silos for privacy amplification theorems, the noise levels of each silo, or a centrally coordinated noise level); these statistics can be hard to curate if we want to maintain a trustless setting (L117). That is, while one *could* show that a global model (or the set of personalized models) satisfies some ($\varepsilon, \delta$)-DP using minimal information (e.g. with worst-case assumptions that data subjects appear in every silo, doing privacy accounting using the minimum noise level across silos), the resulting guarantee could be quite loose and less meaningful.\n- Third, the computed ($\varepsilon, \delta$) over the combined dataset can also be less meaningful since silos can be **dynamic** (e.g., silos may exit the FL protocol at any time); silo-specific sample-level DP is directly compatible with **arbitrary client participation patterns** (as well as varying silo DP budgets).
\n\nWe believe that enforcing general data-subject-level DP, while ideal, is generally an open area, as it also touches on other aspects of FL such as **client incentives** (e.g. whether a hospital would _want_ to spend resources to learn about / account for their patients’ appearances across _all other hospitals_). Moreover, we focus on private model personalization where **each client can keep its own model**, and we believe that the reviewer’s suggestion would be more applicable if we only studied the case where we have a **single shared model**.\n\n(continued)", " This paper considers the roles of privacy and personalization in the cross-silo federated learning (FL) setting, which has been less explored than the cross-device setting. This paper first shows why existing privacy solutions (employing differential privacy) are less appropriate for cross-silo FL, via three key properties of silo-specific privacy. Then, it empirically shows two notable phenomena under silo-specific item-level DP with two extreme baselines, local fine-tuning and federated averaging, which reveal a fundamental trade-off among privacy, utility, and heterogeneity. It further shows that mean-regularized multi-task learning (MR-MTL) is a good interpolation between the two extreme baselines while providing a better trade-off between privacy and cross-silo data heterogeneity. Theoretical analysis of MR-MTL is also included.\n Overall, even though this paper does not provide a new algorithm for cross-silo privacy protection, it raises a new notion of privacy suitable for cross-silo FL and reveals an interesting trade-off between privacy and data heterogeneity by exploring state-of-the-art personalized FL techniques. This paper is organized and written well, and hence it is easy to understand the difference between the notions of privacy in the cross-device and cross-silo FL settings; the characteristics of cross-silo privacy; and the fundamental trade-off between privacy and heterogeneity in cross-silo FL with the strong baseline (MR-MTL). The role of the important hyper-parameter in MR-MTL is well explored with empirical and theoretical analysis. Please explain in more detail the reason that MR-MTL outperforms Ditto at a certain level of privacy budget ($\epsilon=0.5$ in Figure 5). This paper has no negative societal impact as it studies privacy protection in collaborative ML training.", " The paper studies differential privacy (DP) in cross-silo (where participants share different attributes of the same data subjects) federated learning (FL). The authors propose a silo-specific item-level privacy (although it is unclear to me what it means and how it is connected to the methods used to achieve this privacy, see the Questions section), which is more suitable in cross-silo FL mainly because it can allow different silos (organizations/participants) to apply their individual privacy policies (i.e., DP budgets). The paper uses FL personalization methods to achieve this privacy, and empirically finds that MR-MTL (encouraging each local model's weights to be close to the mean of all local models' weights) leads to the best privacy-utility tradeoff. Then the paper justifies this observation with some theoretical analysis of MR-MTL. Strengths\n- Topic and problem are important. Privacy concerns in cross-silo FL are indeed different from those in cross-device FL.
The motivation that different silos might want to enforce different privacy policies/budgets is, in my opinion, reasonable, and it is a practical problem in real-world cross-silo FL deployments.\n\nWeaknesses\n- The proposed silo-specific item-level privacy is not clearly defined (see the Questions section)\n- There is no novelty in the methodology; the paper only uses existing methods\n- The experiments only include two datasets, and the performance difference is not very significant. It is unclear how solid the conclusion that MR-MTL is the best can hold in general My major concern that needs clarification is about the definition of the proposed silo-specific item-level privacy. First, what is meant by \"item\"? From Definition 2.1, it seems to mean a sample. If so, I strongly recommend that the authors call it \"sample\" instead, because the meaning of \"item\" is quite unclear. Furthermore, it would be helpful if the authors could provide a formal DP definition of the silo-specific item-level privacy like Definition 2.1.\n\nMore importantly, if I understand it correctly, the proposed privacy aims to protect data subjects. If that is the goal, shouldn't the privacy guarantee be data-subject-level rather than sample (or so-called item)-level? I find it hard to understand the proposed privacy definition given the motivation of protecting data subjects, because what the paper actually protects is samples in each silo rather than the data subjects. Please let me know if I misunderstand anything.\n\nIn addition, if the goal is to protect data subjects, I do not understand why letting each silo add its own noise, instead of protecting the combined datasets (from all silos), can mitigate the privacy risk, since each data subject's full information can only be found in the combined datasets. I would imagine that if you want to protect data-subject-level privacy, you would need to operate on the combined dataset. Therefore, it seems to me the proposed solution and privacy definition are disconnected from the motivation of \"concern in-silo data subjects\". Please let me know if I have a high-level misunderstanding.\n\nMinor concerns\n- Define \"local training\"\n- Fig.2, the right figure: explain what is meant by the semi-transparent plot line In addition to the limitations shown in Section 7, I would suggest that the authors include a discussion on the proposed privacy definition. What might go wrong if practitioners go with this less stringent definition? What are the trade-offs between this definition and other DP definitions in cross-silo FL? Under which real-world scenarios would practitioners be most likely to benefit from the proposed definition? ", " This paper proposes a mean-regularized multi-task learning based cross-silo federated learning method for privacy and model personalization.\nIn cross-silo federated learning, each silo has a different item-level privacy budget for its local dataset.\nThe paper empirically shows that local finetuning fails to achieve high average silo test accuracy (model personalization) with differential privacy noise after FedAvg steps. \nTo achieve model personalization under data heterogeneity with differential privacy, a network is trained with a mean-regularization penalty.\nThis penalty is controlled by the hyperparameter $\lambda$, and the authors propose an optimal value of $\lambda$ in a $\mu$-strongly convex scenario with theoretical analysis.
\n Strengths\n* The proposed approach for personalized FL with differential privacy guarantees is simple\n* The empirical analysis on several simple benchmark datasets is well demonstrated to help readers understand.\n\nWeaknesses\n* The paper only provides a theoretical analysis for the convex setting, which may not be directly applicable to deep learning scenarios. If it is hard to find the optimal $\lambda$ for a nonconvex setting, the authors may need to give an empirical evaluation at least on CIFAR-10/100.\n* The reference [32] in this paper has already analyzed personalized federated learning with a mean-regularization penalty. This paper simply extends this to the case with the concept of differential privacy. For example, the references [26], [32] in this paper proposed personalized FL algorithms with theoretical guarantees in a nonconvex setting, and demonstrated empirical evaluations on harder cases such as CIFAR-10/100. Therefore, the novelty is not enough to satisfy the standard of this venue. \n I wonder whether the same regularization effect will hold for CIFAR-10/100. The paper shows an interesting observation in personalized federated learning with guaranteed differential privacy. The mean-regularized multi-task learning method has already been used widely in personalized FL, so I think it will be great if the authors prove the lower bounds of convergence in a nonconvex setting.", " The paper discusses personalization under differential privacy (DP) in federated learning (FL). The key observation in the paper is that with DP, there is a potential trade-off between the amount of federation, which can mitigate the DP noise effect, and personalization, which can mitigate problems due to data heterogeneity. The authors then discuss the desiderata for a good personalization approach under DP, and explore the performance of mean-regularized multi-task learning, both theoretically and experimentally. ### Update after the discussions\n\nI raise my score to recommend acceptance. In case the paper is still not accepted, I would give it a higher score in the future if the following are addressed: 1) make further improvements in the main content (e.g. how to choose lambda under DP; run still more experiments to check how robust the current results are, especially whether there are cases which go against the current understanding), 2) improve writing clarity (see the discussions; give less weight to the sample-level DP discussion and more to the utility trade-off results)\n\n### Strengths\n\n+ The observation about a possible trade-off between mitigating utility loss due to DP noise by federation and mitigating utility loss due to data heterogeneity by personalization is a nice one and, to my knowledge, a novel perspective on the problem.\n+ The theoretical analysis, although under simplifying assumptions, seems to support the argument nicely.\n+ The discussion about the price of additional hyperparameters under DP points to an important problem.\n+ The paper is mostly clearly written and easy to read.\n\n\n### Weaknesses\n\n- There seem to be some possible misunderstandings about (client-level) DP in FL, which lead to a number of strange claims in various parts of the paper (see questions for authors for details). 1) Reading the current paper, especially lines 24-28, 35-37, 50-51, one could be led to think that this is the first paper to consider sample- or item-level DP in cross-silo FL.
Please add cites to relevant papers when introducing the problem to remedy this.\n\n2) Related to the previous point, some existing work is presented in a manner which seems a bit misleading, especially w.r.t. sample-level vs. client-level privacy: both Truex et al. 2019 and Heikkilä et al. 2020 define the neighbourhood on a single-sample level, not on the client level (see Def. 1, and the first paragraph in Sec. 3, respectively), as claimed. The secure primitives in these papers help to reduce the effect of DP noise; they do not change the basic DP neighbourhood definition used. I think this points to a misunderstanding of the client-level DP definition (see the next questions as well).\n\n3) Line 105: \"Client-level DP canonically defines a shared privacy guarantee for all participants.\" Client-level vs. sample-level privacy typically relates to the granularity of the DP guarantee, i.e., to the neighbourhood definition used, not to having a joint vs. individual DP budget for each client (and incidentally, by tuning the granularity one can then also easily define individual-level privacy, where a single individual may have one or several samples in one silo, which sits between silo-level and sample-level). There is no problem in enforcing client-level DP with varying privacy budgets for different clients in FL (e.g., just use different amounts of noise for different clients with parameter perturbation).\n\n4) Related to the previous comment, on lines 117-120 you claim that client-level DP requires some trust in the central server. This is simply not true: there is no problem in each client enforcing client-level DP individually (although the model utility could be awful); this is local DP (LDP) on the client level, and by post-processing immunity and parallel composition, the global model will have DP guarantees as well, including against the central server.\n\n5) Again a related point, lines 122-123: \"local training is 'free' under client-level DP.\" I would claim that this depends on the DP mechanism, not on the neighbourhood definition: e.g., using DP-SGD and client-level LDP is a counter-example where local training is not free with client-level DP.\n\n6) Lines 33-34: \"...participation in FL [is] disclosed publicly, nullifying any client-level DP guarantees.\" Also lines 121-123. I do not understand why knowledge of participation in FL training would nullify DP guarantees.\n\n7) Lines 166-169: how do you choose the total number of iterations for fair comparisons?\n\n8) Lines 175-176: if you do not even try to tune the hyperparameters properly for all methods, why should I believe that the results are not simply due to chance?\n\n### Minor comments:\n\n9) Figure 6: due to the scale, it is next to impossible to see from the figure what happens around eps = 1-2, which is usually the most important region. Please fix this (e.g., change the scale or add an additional plot to the appendix).\n\n10) Lines 86, 171: DP-SGD was developed and analysed by Song et al. 2013 and Bassily et al. 2014, while the main contribution of Abadi et al. 2016 was the privacy accounting based on cumulants (moments accountant).\n\n11) DP Def. 2.1: the definition of neighbourhood seems a bit strange, since $x, x' \in \mathcal{X}^n$ explicitly declares the same dataset size for both, but then you mention addition and removal as well. Which one do you actually use? (See the sketch after the references below.)\n\nReferences:\nAbadi et al. 2016: Deep learning with DP.\nBassily et al. 2014: Private empirical risk minimization.\nHeikkilä et al. 2020: DP cross-silo FL.\nSong et al. 2013: Stochastic gradient descent with DP updates.\nTruex et al. 2019: A hybrid approach to privacy-preserving FL.
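As a concrete illustration of point 11, here is a sketch of the two neighbourhood conventions, together with one plausible formalization of the silo-specific sample-level guarantee requested in the first review; the notation ($M_k$, $x_k$, $\varepsilon_k$, $\delta_k$) is assumed for illustration and is not quoted from the paper. Under the replacement convention, $x, x' \in \mathcal{X}^n$ are neighbours if they differ in exactly one entry (dataset size fixed at $n$); under the add/remove convention, $x' = x \cup \{z\}$ or $x = x' \cup \{z\}$ for some sample $z$ (sizes differ by one). A silo-specific sample-level guarantee could then read: for each silo $k$ with budget $(\varepsilon_k, \delta_k)$, the silo's mechanism $M_k$ (its full sequence of released updates) satisfies, for all neighbouring local datasets $x_k \sim x_k'$ and all measurable sets $S$,

$$\Pr[M_k(x_k) \in S] \le e^{\varepsilon_k} \, \Pr[M_k(x_k') \in S] + \delta_k.$$

Making the chosen convention and the per-silo budgets explicit in this form would address both point 11 above and the first reviewer's request for a formal definition.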
The paper has a good discussion of some important limitations of the proposed method. There is a separate discussion of the potential negative societal impact in the appendix.\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 7, 4, 3, 6 ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, 3, 3, 4, 4 ]
[ "uGnVZ4MNHl6", "nips_2022_Oq2bdIQQOIZ", "nips_2022_Oq2bdIQQOIZ", "r0B9lsjVU_k", "r0B9lsjVU_k", "6D3pMlYY4H0", "6D3pMlYY4H0", "G_dq5I2DaEX", "nips_2022_Oq2bdIQQOIZ", "mPKqadLVyHC", "mPKqadLVyHC", "nips_2022_Oq2bdIQQOIZ", "nips_2022_Oq2bdIQQOIZ", "nips_2022_Oq2bdIQQOIZ", "nips_2022_Oq2bdIQQOIZ" ]